diff --git a/.github/ISSUE_TEMPLATE/story.md b/.github/ISSUE_TEMPLATE/story.md index 3c6d4cbae..a67793a47 100644 --- a/.github/ISSUE_TEMPLATE/story.md +++ b/.github/ISSUE_TEMPLATE/story.md @@ -19,11 +19,21 @@ It is [planned and ready](https://fleetdm.com/handbook/company/development-group | I want to _________________________________________ | so that I can _________________________________________. +## Context +- Requestor(s): _________________________ +- Product designer: _________________________ + + + ## Changes ### Product - [ ] UI changes: TODO -- [ ] CLI usage changes: TODO +- [ ] CLI usage changes: TODO - [ ] REST API changes: TODO - [ ] Permissions changes: TODO - [ ] Outdated documentation changes: TODO @@ -35,14 +45,6 @@ It is [planned and ready](https://fleetdm.com/handbook/company/development-group > ℹ️  Please read this issue carefully and understand it. Pay [special attention](https://fleetdm.com/handbook/company/development-groups#developing-from-wireframes) to UI wireframes, especially "dev notes". -## Context -- Requestor(s): _________________________ - - ## QA ### Risk assessment diff --git a/.github/workflows/codeql-analysis.yml b/.github/workflows/codeql-analysis.yml index 5d97bc6d9..37e1fd985 100644 --- a/.github/workflows/codeql-analysis.yml +++ b/.github/workflows/codeql-analysis.yml @@ -47,6 +47,11 @@ jobs: steps: - name: Checkout repository uses: actions/checkout@c85c95e3d7251135ab7dc9ce3241c5835cc595a9 # v3.5.3 + + - name: Set up Go + uses: actions/setup-go@93397bea11091df50f3d7e59dc26a7711a8bcfbe # v4.1.0 + with: + go-version: ${{ vars.GO_VERSION }} # Initializes the CodeQL tools for scanning. 
- name: Initialize CodeQL diff --git a/.github/workflows/goreleaser-orbit.yaml b/.github/workflows/goreleaser-orbit.yaml index 613b5fea0..1efd3faec 100644 --- a/.github/workflows/goreleaser-orbit.yaml +++ b/.github/workflows/goreleaser-orbit.yaml @@ -66,7 +66,7 @@ jobs: uses: actions/upload-artifact@0b7f8abb1508181956e8e162db84b466c27e18ce # v2 with: name: orbit-macos - path: dist + path: dist/orbit-macos_darwin_all/orbit goreleaser-linux: runs-on: ubuntu-20.04 @@ -94,7 +94,7 @@ jobs: uses: actions/upload-artifact@0b7f8abb1508181956e8e162db84b466c27e18ce # v2 with: name: orbit-linux - path: dist + path: dist/orbit_linux_amd64_v1/orbit goreleaser-windows: runs-on: windows-2022 @@ -122,4 +122,4 @@ jobs: uses: actions/upload-artifact@0b7f8abb1508181956e8e162db84b466c27e18ce # v2 with: name: orbit-windows - path: dist + path: dist/orbit_windows_amd64_v1/orbit.exe diff --git a/CHANGELOG.md b/CHANGELOG.md index 1a50c0c80..019abf32b 100644 --- a/CHANGELOG.md +++ b/CHANGELOG.md @@ -1,3 +1,61 @@ +## Fleet 4.45.0 (Feb 20, 2024) + +### Changes + +* **Endpoint operations**: + - Added two new API endpoints for running provided live query SQL on a single host. + - Added `fleetctl gitops` command for GitOps workflow synchronization. + - Added capabilities to the `gitops` role to support reading queries/policies and writing scripts. + - Updated policy names to be unique per team. + - Updated fleetd-chrome to use the latest wa-sqlite v0.9.11. + - Updated "Add hosts" modal UI to dynamically include the `--enable-scripts` flag. + - Added count of upcoming activities to host vitals UI. + - Updated UI to include upcoming activity counts in host vitals. + - Updated 405 response for `POST` requests on the root path to highlight misconfigured osquery instances. + +* **Device management (MDM)**: + - Added MDM command payloads to the response of `GET /api/_version_/fleet/mdm/commandresults`. + - Changed several MDM-related endpoints to be platform-agnostic. 
+ - Added script capabilities to UI for Linux hosts.
+ - Added UI for locking and unlocking hosts managed by Fleet MDM.
+ - Added `fleetctl mdm lock` and `fleetctl mdm unlock` commands.
+ - Added validation to reject script enqueue requests for hosts without fleetd.
+ - Added the `host_mdm_actions` DB table for MDM lock and wipe functionality.
+ - Updated backend MDM migration flow and added logging.
+ - Updated UI text for disk encryption to reflect cross-platform functionality.
+ - Fixed issues with checkboxes in hidden modals and long enroll secrets overlapping action buttons.
+ - Fixed a bug with built-in platform labels.
+ - Fixed enroll secret error messaging showing the secret in cleartext.
+ - Fixed various UI bugs including disk encryption key input icons, alignment issues, and dropdown menus.
+ - Fixed dropdown behavior in administrative settings and software title/version tables.
+ - Fixed various UI and style bugs, including issues with long OS names causing table render issues.
+ - Fixed the vulnerable software dropdown switching back to all teams.
+ - Fixed `wall_time` to report in milliseconds for consistency with other query performance stats.
+ - Fixed generating duplicate activities when locking or unlocking a host with scripts disabled.
+ - Fixed how errors are reported to APM to avoid duplicates and improve stack trace accuracy.
+
 ## Fleet 4.44.1 (Feb 13, 2024)
 
 ### Bug fixes
diff --git a/Dockerfile-desktop-linux b/Dockerfile-desktop-linux
index 63fd08aeb..eb3828939 100644
--- a/Dockerfile-desktop-linux
+++ b/Dockerfile-desktop-linux
@@ -1,4 +1,4 @@
-FROM --platform=linux/amd64 golang:1.21.6-bullseye@sha256:fa52abd182d334cfcdffdcc934e21fcfbc71c3cde568e606193ae7db045b1b8d
+FROM --platform=linux/amd64 golang:1.21.7-bullseye@sha256:447afe790df28e0bc19d782a9f776a105ce3b8417cdd21f33affc4ed6d38f9d5
 LABEL maintainer="Fleet Developers"
 
 RUN apt-get update && apt-get install -y \
diff --git a/articles/fleet-4.26.0.md b/articles/fleet-4.26.0.md
index 92cf01190..20d7522c4 100644
--- a/articles/fleet-4.26.0.md
+++ b/articles/fleet-4.26.0.md
@@ -48,8 +48,6 @@ You already have a lot of raw data to sift through in your data lake, especially
 Fleet 4.26.0 reduces the number of calls you have to make to pull software data with the REST API. Each time a host has software added, updated, or deleted, a `host_software_updated_at` timestamp gets updated for that host.
The `host_software_updated_at` timestamp is exposed through the API. This lets you send the latest software data to your data lake, so you can avoid drowning in outdated information. - - ## Fleet MDM **MDM features are not ready for production and are currently in development. These features are disabled by default.** diff --git a/articles/fleet-4.27.0.md b/articles/fleet-4.27.0.md index 6638c2cf0..8ecccd7ff 100644 --- a/articles/fleet-4.27.0.md +++ b/articles/fleet-4.27.0.md @@ -21,8 +21,6 @@ In the UI an account administrator will see the following information: If you pair this new login activity with the audit improvements from [release 4.26](https://fleetdm.com/releases/fleet-4.26.0) you can now set up an alert if multiple failed login attempts occur. - - ## Better search filters on the ‘Select Targets’ screen in Fleet **Available in Fleet Free and Fleet Premium** diff --git a/articles/fleet-4.28.0.md b/articles/fleet-4.28.0.md index 3392e0526..26fd4ea61 100644 --- a/articles/fleet-4.28.0.md +++ b/articles/fleet-4.28.0.md @@ -32,8 +32,6 @@ Premium and Ultimate Fleet plans have the ability to import the CIS benchmarks i For more information on adding CIS Benchmarks, check out the [documentation here](https://fleetdm.com/docs/using-fleet/cis-benchmarks#how-to-add-cis-benchmarks). - - ## Reduced false negatives from MS Office products related to vulnerabilities reported in the NVD A false negative occurs when a policy reports there is not a vulnerability, but there actually is a vulnerability. Even if a policy reports zero vulnerabilities, that does not imply there are no vulnerabilities present. Both of these types of errors can cause problems when trying to identify vulnerabilities that need attention. @@ -69,8 +67,6 @@ For more information on enabling this functionality, check out the [documentati * Enabled installation and auto-updates of Nudge via Orbit. 
* Added support for providing macos\_settings.custom\_settings profiles for team (with Fleet Premium) and no-team levels via fleetctl apply. - - #### List of other features * Added --policies-team flag to fleetctl apply to easily import a group of policies into a team. diff --git a/articles/fleet-4.29.0.md b/articles/fleet-4.29.0.md index 39b9d7be6..9c13562ac 100644 --- a/articles/fleet-4.29.0.md +++ b/articles/fleet-4.29.0.md @@ -27,8 +27,6 @@ Users created via JIT provisioning can be assigned Fleet roles using SAML custom Learn more about [JIT user role setting](https://fleetdm.com/docs/deploying/configuration#just-in-time-jit-user-provisioning). - - ## CIS benchmarks manual intervention _Available in Fleet Premium and Fleet Ultimate_ @@ -65,8 +63,6 @@ Fleet updated translation rules to provide better 🟢 Results and avoid false p * Added MDM profiles status filter to hosts endpoints. * Added indicators of aggregate host count for each possible status of MDM-enforced mac settings (hidden until 4.30.0). - - #### List of other features * As part of JIT provisioning, read user roles from SAML custom attributes. diff --git a/articles/fleet-4.45.0.md b/articles/fleet-4.45.0.md new file mode 100644 index 000000000..6574a26a2 --- /dev/null +++ b/articles/fleet-4.45.0.md @@ -0,0 +1,120 @@ +# Fleet 4.45.0 | Remote lock, Linux script library, osquery storage location. + +![Fleet 4.45.0](../website/assets/images/articles/fleet-4.45.0-1600x900@2x.png) + +Fleet 4.45.0 is live. Check out the full [changelog](https://github.com/fleetdm/fleet/releases/tag/fleet-v4.45.0) or continue reading to get the highlights. +For upgrade instructions, see our [upgrade guide](https://fleetdm.com/docs/deploying/upgrading-fleet) in the Fleet docs. 
+
+## Highlights
+
+* Remote lock for macOS, Windows, and Linux
+* Linux script library
+* Customizable osquery data storage location
+
+
+### Remote lock for macOS, Windows, and Linux
+
+Fleet expands its device management capabilities with remote lock functionality for macOS, Windows, and Linux systems. Administrators can now respond swiftly to potential security breaches by locking a device remotely. This is particularly crucial when a device is lost, stolen, or suspected to be compromised. By integrating these remote actions, Fleet gives IT and security teams robust tools to protect organizational data and maintain device security. This update aligns with Fleet's values of ownership and results: it offers users more control over their device fleet while ensuring effective response measures are in place for critical security incidents.
+
+
+### Linux script library
+
+A script library designed for Linux hosts has been added, complementing Fleet's existing script execution functionality and script libraries for macOS and Windows. The Linux script library lets administrators store, manage, and execute scripts from the Fleet UI or API, streamlining operations and maintenance tasks on Linux-based systems. This addition ensures users can leverage the platform's full potential regardless of their operating system environment.
+
+
+### Customizable osquery data storage location
+
+Fleet introduces a new `--osquery-db` flag to the `fleetctl package` command, catering to a requirement unique to virtual machine (VM) environments.
This feature allows users to specify or update the osquery database directory for `fleetd` at packaging time or through an environment variable. By customizing the osquery data storage location, users can direct `fleetd` to use directories with more available space, optimizing resource use in VM setups. This enhancement reflects Fleet's commitment to ownership and results, giving users greater control over their Fleet configuration and enabling more efficient data management in resource-constrained environments.
+
+
+
+## Changes
+
+* **Endpoint operations**:
+ - Added two new API endpoints for running provided live query SQL on a single host.
+ - Added `fleetctl gitops` command for GitOps workflow synchronization.
+ - Added capabilities to the `gitops` role to support reading queries/policies and writing scripts.
+ - Updated policy names to be unique per team.
+ - Updated fleetd-chrome to use the latest wa-sqlite v0.9.11.
+ - Updated "Add hosts" modal UI to dynamically include the `--enable-scripts` flag.
+ - Added count of upcoming activities to host vitals UI.
+ - Updated 405 response for `POST` requests on the root path to highlight misconfigured osquery instances.
+
+* **Device management (MDM)**:
+ - Added MDM command payloads to the response of `GET /api/_version_/fleet/mdm/commandresults`.
+ - Changed several MDM-related endpoints to be platform-agnostic.
+ - Added script capabilities to UI for Linux hosts.
+ - Added UI for locking and unlocking hosts managed by Fleet MDM.
+ - Added `fleetctl mdm lock` and `fleetctl mdm unlock` commands.
+ - Added validation to reject script enqueue requests for hosts without fleetd.
+ - Added the `host_mdm_actions` DB table for MDM lock and wipe functionality.
+ - Updated backend MDM migration flow and added logging.
+ - Updated UI text for disk encryption to reflect cross-platform functionality.
+ - Renamed and updated fields in MDM configuration profiles for clarity.
+ - Improved validation of Windows profiles to prevent delivery errors.
+ - Improved Windows MDM profile error tooltip messages.
+ - Fixed MDM unlock flow and updated lock/unlock functionality for Windows and Linux.
+ - Fixed a bug that would cause OS Settings verification to fail with MySQL's `only_full_group_by` mode enabled.
+
+* **Vulnerability management**:
+ - Windows OS vulnerabilities now include a `resolved_in_version` in the `/os_versions` API response.
+ - Fixed an issue where software from a Parallels VM would incorrectly appear as the host's software.
+ - Implemented permission checks for software and software titles.
+ - Fixed software title aggregation when triggering vulnerability scans.
+
+### Bug fixes and improvements
+ - Updated text and style across the app for consistency and clarity.
+ - Improved UI for the view disk encryption key, host details activity card, and "Add hosts" modal.
+ - Addressed a bug where updating the search field caused unwanted loss of focus.
+ - Corrected alignment bugs on empty table states for software details.
+ - Updated URL query parameters to reset when switching tabs.
+ - Fixed the device page showing an invalid date for the last restart.
+ - Fixed visual display issues with chevron right icons on Chrome.
+ - Fixed a software page crash caused by Windows vulnerabilities without exploit/severity data.
+ - Fixed issues with checkboxes in hidden modals and long enroll secrets overlapping action buttons.
+ - Fixed a bug with built-in platform labels.
+ - Fixed enroll secret error messaging showing the secret in cleartext.
+ - Fixed various UI bugs including disk encryption key input icons, alignment issues, and dropdown menus.
+ - Fixed dropdown behavior in administrative settings and software title/version tables.
+ - Fixed various UI and style bugs, including issues with long OS names causing table render issues.
+ - Fixed the vulnerable software dropdown switching back to all teams.
+ - Fixed `wall_time` to report in milliseconds for consistency with other query performance stats.
+ - Fixed generating duplicate activities when locking or unlocking a host with scripts disabled.
+ - Fixed how errors are reported to APM to avoid duplicates and improve stack trace accuracy.
+
+## Fleet 4.44.1 (Feb 13, 2024)
+
+### Bug fixes
+
+* Fixed a bug where long enrollment secrets would overlap with the action buttons on top of them.
+* Fixed a bug that caused OS Settings to never be verified if the MySQL config of Fleet's database had 'only_full_group_by' mode enabled (enabled by default).
+* Ensured policy names are now unique per team, allowing different teams to have policies with the same name.
+* Fixed the visual display of chevron right icons on Chrome.
+* Renamed the 'mdm_windows_configuration_profiles' and 'mdm_apple_configuration_profiles' 'updated_at' field to 'uploaded_at' and removed the automatic setting of the value, setting it explicitly instead.
+* Fixed a small alignment bug in the setup flow.
+* Improved the validation of Windows profiles to prevent errors when delivering the profiles to the hosts. If you need to embed a nested XML structure (for example, for Wi-Fi profiles), you can either:
+ - Escape the XML.
+ - Use a wrapping `` element.
+* Fixed an issue where an inaccurate message was returned after running an asynchronous (queued) script.
+* Fixed URL query parameters to reset when switching tabs.
+* Fixed the vulnerable software dropdown switching back to all teams.
+* Added `fleetctl gitops` command:
+ - Synchronize Fleet configuration with the provided file. This command is intended to be used in a GitOps workflow.
+* Updated the response for 'GET /api/v1/fleet/hosts/:id/activities/upcoming' to include the count of all upcoming activities for the host.
+* Fixed an issue where software from a Parallels VM on a MacOS host would show up in Fleet as if it were the host's software. +* Removed unnecessary nested database transactions in batch-setting of MDM profiles. +* Added count of upcoming activities to host vitals UI. + + +## Ready to upgrade? + +Visit our [Upgrade guide](https://fleetdm.com/docs/deploying/upgrading-fleet) in the Fleet docs for instructions on updating to Fleet 4.45.0. + + + + + + + diff --git a/articles/using-fleet-and-tines-together.md b/articles/using-fleet-and-tines-together.md index 11169e431..5c2353168 100644 --- a/articles/using-fleet-and-tines-together.md +++ b/articles/using-fleet-and-tines-together.md @@ -74,8 +74,6 @@ The final email with the above definition looks like this: The Fleet API is very flexible, but with the addition of Tines, the options for data transformation are endless. In the above example, we easily connected to the Fleet API and transformed the data response with a single Tines Transform function, and allowed the end user to receive a customized report of vulnerable software on an individual host. - - diff --git a/changes/10476-lock-unlock-api-changes b/changes/10476-lock-unlock-api-changes deleted file mode 100644 index a2f897f8c..000000000 --- a/changes/10476-lock-unlock-api-changes +++ /dev/null @@ -1 +0,0 @@ -* Added tracking of Windows and Linux' scripts to lock or unlock the host, report the proper current and pending states. diff --git a/changes/13643-fleetctl-gitops b/changes/13643-fleetctl-gitops deleted file mode 100644 index be7855dec..000000000 --- a/changes/13643-fleetctl-gitops +++ /dev/null @@ -1,2 +0,0 @@ -Added fleetctl gitops command: -- Synchronize Fleet configuration with provided file. This command is intended to be used in a GitOps workflow. 
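Editor's note on the `fleetctl gitops` entry deleted above (now folded into the 4.45.0 changelog): a hedged usage sketch, not taken from the PR itself. The `-f` flag and the repository file path below are illustrative assumptions.

```sh
# Sketch: synchronize Fleet's configuration with a file kept in git,
# e.g. from a CI job. Flag name and path are illustrative assumptions.
fleetctl gitops -f fleet/default.yml
```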
diff --git a/changes/13643-gitops-role b/changes/13643-gitops-role deleted file mode 100644 index d7b4676b6..000000000 --- a/changes/13643-gitops-role +++ /dev/null @@ -1 +0,0 @@ -gitops role can now read queries/policies and write (but not execute) scripts diff --git a/changes/13643-policy-name-uniqueness b/changes/13643-policy-name-uniqueness deleted file mode 100644 index ad220cb29..000000000 --- a/changes/13643-policy-name-uniqueness +++ /dev/null @@ -1 +0,0 @@ -Policy names are now unique per team -- different teams can have policies with the same name. diff --git a/changes/14444-mdm-migration-debug b/changes/14444-mdm-migration-debug deleted file mode 100644 index e030e3d20..000000000 --- a/changes/14444-mdm-migration-debug +++ /dev/null @@ -1 +0,0 @@ -- Updated backend MDM migration flow and added logging to aid in debugging migration errors. \ No newline at end of file diff --git a/changes/14713-fix-apm-stacktrace-and-duplicates b/changes/14713-fix-apm-stacktrace-and-duplicates deleted file mode 100644 index 464589267..000000000 --- a/changes/14713-fix-apm-stacktrace-and-duplicates +++ /dev/null @@ -1 +0,0 @@ -* Fixed how errors are sent to APM (Elastic) to avoid duplicates, cover more errors in background tasks (cron and worker jobs) and fix the reported stack trace. diff --git a/changes/14850-fix-ui-settings-action-dropdowns b/changes/14850-fix-ui-settings-action-dropdowns deleted file mode 100644 index 22b61a4e2..000000000 --- a/changes/14850-fix-ui-settings-action-dropdowns +++ /dev/null @@ -1 +0,0 @@ -- Fixed UI issues where dropdown menus were not displaying correctly in the administrative settings page. diff --git a/changes/15082-make-endpoints-consistent b/changes/15082-make-endpoints-consistent deleted file mode 100644 index c72bd7b56..000000000 --- a/changes/15082-make-endpoints-consistent +++ /dev/null @@ -1,9 +0,0 @@ -- Changed the following endpoints to be platform-agnostic. The old routes still work but are deprecated. 
- - POST /mdm/apple/setup/eula was replaced by POST /mdm/setup/eula - - GET /mdm/apple/setup/eula/metadata was replaced by GET /mdm/setup/eula/metadata - - DELETE /mdm/apple/setup/eula/:token was replaced by DELETE /mdm/setup/eula/:token - - GET /mdm/apple/setup/eula/:token was replaced by GET /mdm/setup/eula/:token - - POST /mdm/apple/bootstrap was replaced by POST /mdm/bootstrap - - GET /mdm/apple/bootstrap/:team_id/metadata was replaced by GET /mdm/bootstrap/:team_id/metadata - - DELETE /mdm/apple/bootstrap/:team_id was replaced by DELETE /mdm/bootstrap/:team_id - - GET /mdm/apple/bootstrap/summary was replaced by GET /mdm/bootstrap/summary diff --git a/changes/15283-linux-scripts b/changes/15283-linux-scripts deleted file mode 100644 index cfeb8d85f..000000000 --- a/changes/15283-linux-scripts +++ /dev/null @@ -1 +0,0 @@ -- Added script capabilities to UI for Linux hosts. \ No newline at end of file diff --git a/changes/15332-scep-renew b/changes/15332-scep-renew new file mode 100644 index 000000000..c66e9b726 --- /dev/null +++ b/changes/15332-scep-renew @@ -0,0 +1 @@ +* Automatically renew macOS identity certificates for devices 30 days prior to their expiration. diff --git a/changes/15703-wall_time b/changes/15703-wall_time deleted file mode 100644 index 43eb3a6b2..000000000 --- a/changes/15703-wall_time +++ /dev/null @@ -1 +0,0 @@ -wall_time is now reported in milliseconds (as opposed to seconds), consistent with other query performance stats. diff --git a/changes/15855-vm-software b/changes/15855-vm-software deleted file mode 100644 index 7cb935cfe..000000000 --- a/changes/15855-vm-software +++ /dev/null @@ -1,2 +0,0 @@ -- Fixes issue where software from a Parallels VM on a MacOS host would show up in Fleet as if it - were the host's software. 
\ No newline at end of file diff --git a/changes/15893-team-users b/changes/15893-team-users deleted file mode 100644 index dc8fcd801..000000000 --- a/changes/15893-team-users +++ /dev/null @@ -1 +0,0 @@ -- Change verbiage around team members to users \ No newline at end of file diff --git a/changes/15923-page-descriptions-part-2 b/changes/15923-page-descriptions-part-2 new file mode 100644 index 000000000..ee2daab27 --- /dev/null +++ b/changes/15923-page-descriptions-part-2 @@ -0,0 +1 @@ +- Update page descriptions diff --git a/changes/15968-rename-team b/changes/15968-rename-team new file mode 100644 index 000000000..4d8f29f7d --- /dev/null +++ b/changes/15968-rename-team @@ -0,0 +1 @@ +- UI Edit team more properly labeled as rename team diff --git a/changes/16014-add-osquery-db-flag-to-fleetd b/changes/16014-add-osquery-db-flag-to-fleetd deleted file mode 100644 index 0a38db7b0..000000000 --- a/changes/16014-add-osquery-db-flag-to-fleetd +++ /dev/null @@ -1 +0,0 @@ -* Add `--osquery-db` flag to `fleetctl package` command to configure a custom directory for osquery's database (`fleetctl package --osquery-db=/path/to/osquery.db`). 
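Editor's note on the `--osquery-db` entry deleted above: the flag and its `fleetctl package --osquery-db=/path/to/osquery.db` form come from that entry; the package type, server URL, and paths in this sketch are hypothetical.

```sh
# Sketch: build a fleetd package whose osquery database lives on a
# roomier volume. Only --osquery-db is documented in the change above;
# the companion flags and every path/URL here are illustrative.
fleetctl package --type=deb \
  --fleet-url=https://fleet.example.com \
  --enroll-secret="$ENROLL_SECRET" \
  --osquery-db=/mnt/data/osquery.db
```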
diff --git a/changes/16025-empty-policy-state b/changes/16025-empty-policy-state new file mode 100644 index 000000000..f72256971 --- /dev/null +++ b/changes/16025-empty-policy-state @@ -0,0 +1 @@ +- Update UI's empty policy states diff --git a/changes/16029-account-page b/changes/16029-account-page new file mode 100644 index 000000000..ee343e7ed --- /dev/null +++ b/changes/16029-account-page @@ -0,0 +1 @@ +- User settings/profile page officially renamed to account page diff --git a/changes/16051-rename-update-timestamp-mdm-profiles b/changes/16051-rename-update-timestamp-mdm-profiles deleted file mode 100644 index 43aa0d53d..000000000 --- a/changes/16051-rename-update-timestamp-mdm-profiles +++ /dev/null @@ -1 +0,0 @@ -* Renamed the `mdm_windows_configuration_profiles` and `mdm_apple_configuration_profiles` `updated_at` field to `uploaded_at` and removed the automatic setting of the value, set explicity instead. diff --git a/changes/16133-icons b/changes/16133-icons deleted file mode 100644 index 27202b9e1..000000000 --- a/changes/16133-icons +++ /dev/null @@ -1 +0,0 @@ -* Fix visual display of chevron right icons on Chrome diff --git a/changes/16155-enroll-secret-bug b/changes/16155-enroll-secret-bug deleted file mode 100644 index fc66b23e3..000000000 --- a/changes/16155-enroll-secret-bug +++ /dev/null @@ -1 +0,0 @@ -- Fix a bug where long enroll enroll secrets would overlap with the action buttons on top of them. diff --git a/changes/16182-fail-post-to-root b/changes/16182-fail-post-to-root deleted file mode 100644 index caa704a31..000000000 --- a/changes/16182-fail-post-to-root +++ /dev/null @@ -1,5 +0,0 @@ -* Return 405 when receiving `POST` requests on the root path. -WARNING: -We found that misconfigured (empty `logger_tls_endpoint`) osquery instances were sending log results (`POST` requests) to the root path and Fleet was incorrectly returning HTTP 200 responses on such root path. 
-This version will now return HTTP 405 (Method Not Allowed) when receiving `POST` requests on the root path so that this misconfiguration can be detected by administrators. -If you deploy this version of Fleet and there's log traffic on the root path it could cause increased network usage on your infrastructure because osquery will retry sending the logs and these will accumulate (up to a limit configured by logger flags). Thus, before upgrading, make sure there's no osquery traffic (`POST` requests) to Fleet's root path. diff --git a/changes/16232-resolved-in-version-windows b/changes/16232-resolved-in-version-windows deleted file mode 100644 index 25b24841b..000000000 --- a/changes/16232-resolved-in-version-windows +++ /dev/null @@ -1 +0,0 @@ -- Windows OS Vulnerabilities now include a `resolved_in_version` in the `/os_versions` API response \ No newline at end of file diff --git a/changes/16273-remove-nested-transactions b/changes/16273-remove-nested-transactions deleted file mode 100644 index e7fa04430..000000000 --- a/changes/16273-remove-nested-transactions +++ /dev/null @@ -1 +0,0 @@ -* Removed unnecessary nested database transactions in batch-setting of MDM profiles. diff --git a/changes/16316-windows-xml-validation b/changes/16316-windows-xml-validation deleted file mode 100644 index def14d8ae..000000000 --- a/changes/16316-windows-xml-validation +++ /dev/null @@ -1,5 +0,0 @@ -* Improved the validation of Windows profiles to prevent errors when the - profiles are delivered to the hosts. 
If you need to embed a nested XML - structure (for example for Wi-Fi profiles) you can either: - - Escape the XML - - Use a wrapping `` element diff --git a/changes/16381-add-hosts-modal-enable-scripts b/changes/16381-add-hosts-modal-enable-scripts deleted file mode 100644 index b3e40d4d0..000000000 --- a/changes/16381-add-hosts-modal-enable-scripts +++ /dev/null @@ -1,2 +0,0 @@ -- Updated "Add hosts" modal UI to dynamically include the `--enable-scripts` flag unless scripts are - disabled in the server settings. diff --git a/changes/16382-fleetctl-copy b/changes/16382-fleetctl-copy deleted file mode 100644 index 6b0317202..000000000 --- a/changes/16382-fleetctl-copy +++ /dev/null @@ -1 +0,0 @@ -- Updates the copy in `fleetctl`'s output to reference `fleetd`. \ No newline at end of file diff --git a/changes/16383-lock-cli b/changes/16383-lock-cli deleted file mode 100644 index e78fb887c..000000000 --- a/changes/16383-lock-cli +++ /dev/null @@ -1,2 +0,0 @@ -- Adds the `fleetctl mdm` commands `lock` and `unlock` -- Adds missing functionality for lock/unlock flows for Windows and Linux \ No newline at end of file diff --git a/changes/16386-host-lock-schema b/changes/16386-host-lock-schema deleted file mode 100644 index a36317345..000000000 --- a/changes/16386-host-lock-schema +++ /dev/null @@ -1 +0,0 @@ -- Adds the `host_mdm_actions` DB table to support MDM lock and wipe functionality. 
\ No newline at end of file diff --git a/changes/16394-fleetd-chrome-runtime-error b/changes/16394-fleetd-chrome-runtime-error deleted file mode 100644 index d6c03976d..000000000 --- a/changes/16394-fleetd-chrome-runtime-error +++ /dev/null @@ -1 +0,0 @@ -Updated fleetd-chrome to use the latest wa-sqlite v0.9.11 diff --git a/changes/16394-fleetd-chrome-runtime-error-fix b/changes/16394-fleetd-chrome-runtime-error-fix new file mode 100644 index 000000000..9c9003a68 --- /dev/null +++ b/changes/16394-fleetd-chrome-runtime-error-fix @@ -0,0 +1 @@ +In fleetd-chrome, fixed RuntimeError seen by some hosts. diff --git a/changes/16416-cmd-debugging b/changes/16416-cmd-debugging deleted file mode 100644 index 9fbfabad1..000000000 --- a/changes/16416-cmd-debugging +++ /dev/null @@ -1,4 +0,0 @@ -* Added MDM command payloads to the response of `GET /api/_version_/fleet/mdm/commandresults`. -* Added a new column named "PAYLOAD" to the output of `fleetctl get mdm-command-results` with the request payload. -* Replaced CmdID values in favor of the LocURI for messages for failed profiles. -* Added a new comment over CmdID elements generated by Fleet in Windows profiles and commands to make evident that Fleet is in control of those values. diff --git a/changes/16426-add-upcoming-activity-count b/changes/16426-add-upcoming-activity-count deleted file mode 100644 index 782518c46..000000000 --- a/changes/16426-add-upcoming-activity-count +++ /dev/null @@ -1,2 +0,0 @@ -- Updated `GET /api/v1/fleet/hosts/:id/activities/upcoming` response to include the count of all - upcoming activities for the host. diff --git a/changes/16426-host-upcoming-activities-count-ui b/changes/16426-host-upcoming-activities-count-ui deleted file mode 100644 index c82070f23..000000000 --- a/changes/16426-host-upcoming-activities-count-ui +++ /dev/null @@ -1 +0,0 @@ -- Added count of upcoming activities to host vitals UI. 
\ No newline at end of file diff --git a/changes/16431-scripts-result-message b/changes/16431-scripts-result-message deleted file mode 100644 index 3d29c82fd..000000000 --- a/changes/16431-scripts-result-message +++ /dev/null @@ -1 +0,0 @@ -- Fixes issue where an inaccurate message was returned after running an async (queued) script. \ No newline at end of file diff --git a/changes/16466-transfer-hosts-to-No-team b/changes/16466-transfer-hosts-to-No-team deleted file mode 100644 index d39f7e283..000000000 --- a/changes/16466-transfer-hosts-to-No-team +++ /dev/null @@ -1 +0,0 @@ -fleetctl can now transfer hosts to No team like: fleetctl hosts transfer --team '' --hosts yourHost diff --git a/changes/16480-fix-capturing-errors-in-sentry b/changes/16480-fix-capturing-errors-in-sentry new file mode 100644 index 000000000..0638ba660 --- /dev/null +++ b/changes/16480-fix-capturing-errors-in-sentry @@ -0,0 +1,4 @@ +* Fixed issues with how errors were captured in Sentry: + - The stack trace is now more precise. + - More error paths will now get captured in Sentry. + - **NOTE: Many more entries could be generated in Sentry compared to earlier Fleet versions.** Sentry capacity should be planned accordingly. diff --git a/changes/16506-page-descriptions b/changes/16506-page-descriptions new file mode 100644 index 000000000..5bdbd499f --- /dev/null +++ b/changes/16506-page-descriptions @@ -0,0 +1 @@ +- Update page description styling diff --git a/changes/16541-create-user-with-bad-team b/changes/16541-create-user-with-bad-team deleted file mode 100644 index 2c92ab6a6..000000000 --- a/changes/16541-create-user-with-bad-team +++ /dev/null @@ -1 +0,0 @@ -Improved error message when creating a new user (via API or fleetctl) with a team that does not exist. 
diff --git a/changes/16569-setup-flow-alignment b/changes/16569-setup-flow-alignment deleted file mode 100644 index 3b8b317bf..000000000 --- a/changes/16569-setup-flow-alignment +++ /dev/null @@ -1 +0,0 @@ -* Fix a small alignment bug in the setup flow diff --git a/changes/16621-obfuscate-enroll-secret b/changes/16621-obfuscate-enroll-secret deleted file mode 100644 index accfff76f..000000000 --- a/changes/16621-obfuscate-enroll-secret +++ /dev/null @@ -1 +0,0 @@ -When attempting to set an enroll secret which already exists in DB, error message no longer contains the secret in cleartext. diff --git a/changes/16648-windows-mdm-cmd-type b/changes/16648-windows-mdm-cmd-type new file mode 100644 index 000000000..a4ff45382 --- /dev/null +++ b/changes/16648-windows-mdm-cmd-type @@ -0,0 +1,2 @@ +- Fixes issue where the "Type" column was empty for Windows MDM profile commands when running + `fleetctl get mdm-commands` and `fleetctl get mdm-command-results`. \ No newline at end of file diff --git a/changes/16649-ui-activity-disk-encryption b/changes/16649-ui-activity-disk-encryption deleted file mode 100644 index 5ca2a0e06..000000000 --- a/changes/16649-ui-activity-disk-encryption +++ /dev/null @@ -1 +0,0 @@ -- Updated UI text for disk encryption activities to reflect cross-platform functionality. 
\ No newline at end of file diff --git a/changes/16669-fix-hardcoded-label-bug b/changes/16669-fix-hardcoded-label-bug deleted file mode 100644 index 396dea3ef..000000000 --- a/changes/16669-fix-hardcoded-label-bug +++ /dev/null @@ -1 +0,0 @@ -- Fixed built in platform labels bug diff --git a/changes/16672-software-url-states-bug b/changes/16672-software-url-states-bug deleted file mode 100644 index 7fcd985da..000000000 --- a/changes/16672-software-url-states-bug +++ /dev/null @@ -1,2 +0,0 @@ -- Fix URL query params to reset when switching tabs -- Fix vulnerable software dropdown from switching back to all teams diff --git a/changes/16681-device-last-restarted-bug b/changes/16681-device-last-restarted-bug deleted file mode 100644 index 912291b03..000000000 --- a/changes/16681-device-last-restarted-bug +++ /dev/null @@ -1 +0,0 @@ -- Fix device page showing invalid date for last restarted diff --git a/changes/16700-scripts-disabled-osquery-only b/changes/16700-scripts-disabled-osquery-only deleted file mode 100644 index 3e01f1788..000000000 --- a/changes/16700-scripts-disabled-osquery-only +++ /dev/null @@ -1,2 +0,0 @@ -- Added validation to reject requests to enqueue scripts for hosts that do not have fleetd installed - (i.e. plain osquery hosts). 
diff --git a/changes/16701-move-show-query-button b/changes/16701-move-show-query-button new file mode 100644 index 000000000..ec0d5868e --- /dev/null +++ b/changes/16701-move-show-query-button @@ -0,0 +1 @@ +- Move show query button so it shows in report page even with no results diff --git a/changes/16724-capitalization-fixes b/changes/16724-capitalization-fixes deleted file mode 100644 index 90d4c6d63..000000000 --- a/changes/16724-capitalization-fixes +++ /dev/null @@ -1 +0,0 @@ -- Fix title case to sentence case and a few other headers diff --git a/changes/16752-blur-on-software-search b/changes/16752-blur-on-software-search deleted file mode 100644 index ef80c99a3..000000000 --- a/changes/16752-blur-on-software-search +++ /dev/null @@ -1,2 +0,0 @@ -- Fix a bug where updating the search field for the Software titles page caused an unwanted loss of - focus from the search field on rerender. diff --git a/changes/16765-windows-software-vuln-crash b/changes/16765-windows-software-vuln-crash deleted file mode 100644 index 5ecbff968..000000000 --- a/changes/16765-windows-software-vuln-crash +++ /dev/null @@ -1 +0,0 @@ -- Fix windows vulnerabilities without exploit/severity from crashing the page when rendered diff --git a/changes/16805-new-live-query-on-host-endpoint b/changes/16805-new-live-query-on-host-endpoint deleted file mode 100644 index 84918569b..000000000 --- a/changes/16805-new-live-query-on-host-endpoint +++ /dev/null @@ -1 +0,0 @@ -* Add two new API endpoints to run a live query SQL on one host: `POST /api/latest/fleet/hosts/identifier/{identifier}/query` and `POST /api/_version_/fleet/hosts/{id}/query`. diff --git a/changes/16820-loading-state-auto-enroll-ui b/changes/16820-loading-state-auto-enroll-ui new file mode 100644 index 000000000..4eee73b5f --- /dev/null +++ b/changes/16820-loading-state-auto-enroll-ui @@ -0,0 +1 @@ +- Fixed UI styling of loading state for automatic enrollment settings page. 
diff --git a/changes/16856-fix-duplicate-activities-lock-unlock-scripts b/changes/16856-fix-duplicate-activities-lock-unlock-scripts deleted file mode 100644 index 0600cc82e..000000000 --- a/changes/16856-fix-duplicate-activities-lock-unlock-scripts +++ /dev/null @@ -1 +0,0 @@ -* Fixed generating duplicate activities when locking or unlocking a host with scripts disabled. diff --git a/changes/16910-sw-table-breakpoint b/changes/16910-sw-table-breakpoint deleted file mode 100644 index 9bc478cc6..000000000 --- a/changes/16910-sw-table-breakpoint +++ /dev/null @@ -1,2 +0,0 @@ -- Fix a style bug where the controls on the software title and versions table would wrap and bump into - each other. diff --git a/changes/16912-hide–modal-checkboxes b/changes/16912-hide–modal-checkboxes deleted file mode 100644 index 100d9ddd8..000000000 --- a/changes/16912-hide–modal-checkboxes +++ /dev/null @@ -1 +0,0 @@ -- Fix a bug where checkboxes within a hidden modal would not be hidden with the rest of the modal content. diff --git a/changes/16941-sw-os-table-overflows b/changes/16941-sw-os-table-overflows deleted file mode 100644 index 502ce6c44..000000000 --- a/changes/16941-sw-os-table-overflows +++ /dev/null @@ -1 +0,0 @@ -- Fix a bug where long OS names caused the table to render outside its bounds with smaller viewports diff --git a/changes/16942-empty-swversion-swos-details-tables b/changes/16942-empty-swversion-swos-details-tables deleted file mode 100644 index 7b8391660..000000000 --- a/changes/16942-empty-swversion-swos-details-tables +++ /dev/null @@ -1,2 +0,0 @@ -* Fix alignment bugs on the Software > OS > details and Software > Versions > details empty table -states. 
diff --git a/changes/17029-update-policy-count b/changes/17029-update-policy-count new file mode 100644 index 000000000..f7a4dce85 --- /dev/null +++ b/changes/17029-update-policy-count @@ -0,0 +1 @@ +- Deleting a policy updates the policy count diff --git a/changes/17048-updating-policy-name b/changes/17048-updating-policy-name new file mode 100644 index 000000000..1e250c992 --- /dev/null +++ b/changes/17048-updating-policy-name @@ -0,0 +1,2 @@ +Fixed a bug where updating a policy name could result in multiple policies with the same name in a team. +- This bug was introduced in Fleet v4.44.1. Any duplicate policy names in the same team will be renamed by adding a number to the end of the policy name. diff --git a/changes/issue-10477-ui-for-locking-unlocking b/changes/issue-10477-ui-for-locking-unlocking deleted file mode 100644 index 86599a3dd..000000000 --- a/changes/issue-10477-ui-for-locking-unlocking +++ /dev/null @@ -1 +0,0 @@ -- add UI for locking and unlocking hosts managed by fleet mdm. diff --git a/changes/issue-16052-add-permission-checks-to-software-titles b/changes/issue-16052-add-permission-checks-to-software-titles deleted file mode 100644 index 98c672e5c..000000000 --- a/changes/issue-16052-add-permission-checks-to-software-titles +++ /dev/null @@ -1 +0,0 @@ -- Implemented permission checks for endpoints and UI routes related to software and software titles, restricting visibility to team-specific hosts. diff --git a/changes/issue-16417-improve-windows-profile-error-tooltip b/changes/issue-16417-improve-windows-profile-error-tooltip deleted file mode 100644 index 390aae231..000000000 --- a/changes/issue-16417-improve-windows-profile-error-tooltip +++ /dev/null @@ -1 +0,0 @@ -- improve windows mdm profile error tooltip messages.
diff --git a/changes/issue-16747-fix-disk-encryption-key-input b/changes/issue-16747-fix-disk-encryption-key-input deleted file mode 100644 index 32b20e04a..000000000 --- a/changes/issue-16747-fix-disk-encryption-key-input +++ /dev/null @@ -1 +0,0 @@ -- fix UI bug for the view disk encryption key input icons diff --git a/changes/issue-16794-update-go-to-1.21.7 b/changes/issue-16794-update-go-to-1.21.7 new file mode 100644 index 000000000..7eeccbb9c --- /dev/null +++ b/changes/issue-16794-update-go-to-1.21.7 @@ -0,0 +1 @@ +- upgrade golang version to 1.21.7 diff --git a/changes/issue-16854-fix-software-version-and-os-loading b/changes/issue-16854-fix-software-version-and-os-loading new file mode 100644 index 000000000..924833d0e --- /dev/null +++ b/changes/issue-16854-fix-software-version-and-os-loading @@ -0,0 +1 @@ +- Fix UI loading state for software versions and OS for the initial request. diff --git a/changes/jve-lock-host-auth b/changes/jve-lock-host-auth deleted file mode 100644 index 9d842be43..000000000 --- a/changes/jve-lock-host-auth +++ /dev/null @@ -1 +0,0 @@ -- Adds authorization tests for the MDM lock and unlock features. \ No newline at end of file diff --git a/changes/jve-macos-special-case b/changes/jve-macos-special-case deleted file mode 100644 index 33be35ffb..000000000 --- a/changes/jve-macos-special-case +++ /dev/null @@ -1,2 +0,0 @@ -- Updates the MDM unlock flow to allow the PIN to unlock MacOS machines to be viewed as many times -as needed. \ No newline at end of file diff --git a/changes/lock-perms-docs b/changes/lock-perms-docs deleted file mode 100644 index b326b1843..000000000 --- a/changes/lock-perms-docs +++ /dev/null @@ -1 +0,0 @@ -- Updates the permissions docs to include permissions for lock/unlock/wipe actions on a host.
\ No newline at end of file diff --git a/changes/profiles-fix b/changes/profiles-fix deleted file mode 100644 index dfd4ed028..000000000 --- a/changes/profiles-fix +++ /dev/null @@ -1 +0,0 @@ -* Fixed a bug that would cause OS Settings to never get verified if the MySQL config of Fleet's database has `only_full_group_by` mode enabled (enabled by default). diff --git a/charts/fleet/Chart.yaml b/charts/fleet/Chart.yaml index a4f768464..973a09604 100644 --- a/charts/fleet/Chart.yaml +++ b/charts/fleet/Chart.yaml @@ -8,7 +8,7 @@ version: v6.0.2 home: https://github.com/fleetdm/fleet sources: - https://github.com/fleetdm/fleet.git -appVersion: v4.44.1 +appVersion: v4.45.0 dependencies: - name: mysql condition: mysql.enabled diff --git a/charts/fleet/values.yaml b/charts/fleet/values.yaml index 89dd2fad4..d4e7f8ec2 100644 --- a/charts/fleet/values.yaml +++ b/charts/fleet/values.yaml @@ -2,7 +2,7 @@ # All settings related to how Fleet is deployed in Kubernetes hostName: fleet.localhost replicas: 3 # The number of Fleet instances to deploy -imageTag: v4.44.1 # Version of Fleet to deploy +imageTag: v4.45.0 # Version of Fleet to deploy podAnnotations: {} # Additional annotations to add to the Fleet pod serviceAccountAnnotations: {} # Additional annotations to add to the Fleet service account resources: diff --git a/cmd/fleet/cron.go b/cmd/fleet/cron.go index 05ecdefd8..4a17dc8b1 100644 --- a/cmd/fleet/cron.go +++ b/cmd/fleet/cron.go @@ -32,7 +32,6 @@ import ( "github.com/fleetdm/fleet/v4/server/vulnerabilities/utils" "github.com/fleetdm/fleet/v4/server/webhooks" "github.com/fleetdm/fleet/v4/server/worker" - "github.com/getsentry/sentry-go" kitlog "github.com/go-kit/log" "github.com/go-kit/log/level" "github.com/hashicorp/go-multierror" @@ -41,7 +40,6 @@ import ( func errHandler(ctx context.Context, logger kitlog.Logger, msg string, err error) { level.Error(logger).Log("msg", msg, "err", err) - sentry.CaptureException(err) ctxerr.Handle(ctx, err) } @@ -710,6 +708,7 @@ func 
newCleanupsAndAggregationSchedule( logger kitlog.Logger, enrollHostLimiter fleet.EnrollHostLimiter, config *config.FleetConfig, + commander *apple_mdm.MDMAppleCommander, ) (*schedule.Schedule, error) { const ( name = string(fleet.CronCleanupsThenAggregation) @@ -810,6 +809,12 @@ func newCleanupsAndAggregationSchedule( return verifyDiskEncryptionKeys(ctx, logger, ds, config) }, ), + schedule.WithJob( + "renew_scep_certificates", + func(ctx context.Context) error { + return service.RenewSCEPCertificates(ctx, logger, ds, config, commander) + }, + ), schedule.WithJob("query_results_cleanup", func(ctx context.Context) error { config, err := ds.AppConfig(ctx) if err != nil { diff --git a/cmd/fleet/serve.go b/cmd/fleet/serve.go index 5bbe21337..9755b97bd 100644 --- a/cmd/fleet/serve.go +++ b/cmd/fleet/serve.go @@ -46,6 +46,7 @@ import ( "github.com/fleetdm/fleet/v4/server/mdm/nanomdm/push" "github.com/fleetdm/fleet/v4/server/mdm/nanomdm/push/buford" nanomdm_pushsvc "github.com/fleetdm/fleet/v4/server/mdm/nanomdm/push/service" + scep_depot "github.com/fleetdm/fleet/v4/server/mdm/scep/depot" "github.com/fleetdm/fleet/v4/server/pubsub" "github.com/fleetdm/fleet/v4/server/service" "github.com/fleetdm/fleet/v4/server/service/async" @@ -57,7 +58,6 @@ import ( "github.com/go-kit/kit/log/level" kitprometheus "github.com/go-kit/kit/metrics/prometheus" "github.com/go-kit/log" - scep_depot "github.com/micromdm/scep/v2/depot" "github.com/ngrok/sqlmw" "github.com/prometheus/client_golang/prometheus" "github.com/prometheus/client_golang/prometheus/promhttp" @@ -681,7 +681,11 @@ the way that the Fleet server works. 
}() if err := cronSchedules.StartCronSchedule(func() (fleet.CronSchedule, error) { - return newCleanupsAndAggregationSchedule(ctx, instanceID, ds, logger, redisWrapperDS, &config) + var commander *apple_mdm.MDMAppleCommander + if appCfg.MDM.EnabledAndConfigured { + commander = apple_mdm.NewMDMAppleCommander(mdmStorage, mdmPushService) + } + return newCleanupsAndAggregationSchedule(ctx, instanceID, ds, logger, redisWrapperDS, &config, commander) }); err != nil { initFatal(err, "failed to register cleanups_then_aggregations schedule") } diff --git a/cmd/fleetctl/get.go b/cmd/fleetctl/get.go index 438549d22..3fbc6274a 100644 --- a/cmd/fleetctl/get.go +++ b/cmd/fleetctl/get.go @@ -1510,10 +1510,14 @@ func getMDMCommandResultsCommand() *cli.Command { } formattedPayload = r.Payload } + reqType := r.RequestType + if len(reqType) == 0 { + reqType = "InstallProfile" + } data = append(data, []string{ r.CommandUUID, r.UpdatedAt.Format(time.RFC3339), - r.RequestType, + reqType, r.Status, r.Hostname, string(formattedPayload), @@ -1561,10 +1565,14 @@ func getMDMCommandsCommand() *cli.Command { // print the results as a table data := [][]string{} for _, r := range results { + reqType := r.RequestType + if len(reqType) == 0 { + reqType = "InstallProfile" + } data = append(data, []string{ r.CommandUUID, r.UpdatedAt.Format(time.RFC3339), - r.RequestType, + reqType, r.Status, r.Hostname, }) diff --git a/cmd/fleetctl/get_test.go b/cmd/fleetctl/get_test.go index 995ab3646..f1808cbdf 100644 --- a/cmd/fleetctl/get_test.go +++ b/cmd/fleetctl/get_test.go @@ -2365,7 +2365,6 @@ func TestGetMDMCommandResults(t *testing.T) { CommandUUID: commandUUID, Status: "200", UpdatedAt: time.Date(2023, 4, 4, 15, 29, 0, 0, time.UTC), - RequestType: "test", Payload: []byte(winPayloadXML), Result: []byte(winResultXML), }, @@ -2374,7 +2373,6 @@ func TestGetMDMCommandResults(t *testing.T) { CommandUUID: commandUUID, Status: "500", UpdatedAt: time.Date(2023, 4, 4, 15, 29, 0, 0, time.UTC), - RequestType: 
"test", Payload: []byte(winPayloadXML), Result: []byte(winResultXML), }, @@ -2518,89 +2516,89 @@ func TestGetMDMCommandResults(t *testing.T) { }) t.Run("windows command results", func(t *testing.T) { - expectedOutput := strings.TrimSpace(`+-----------+----------------------+------+--------+----------+---------------------------------------------------------------------------------------------+------------------------------------------------------------------------------------+ -| ID | TIME | TYPE | STATUS | HOSTNAME | PAYLOAD | RESULTS | -+-----------+----------------------+------+--------+----------+---------------------------------------------------------------------------------------------+------------------------------------------------------------------------------------+ -| valid-cmd | 2023-04-04T15:29:00Z | test | 200 | host1 | | | -| | | | | | | | -| | | | | | 90dbfca8-d4ac-40c9-bf57-ba5b8cbf1ce0 | 1.2 | -| | | | | | | DM/1.2 | -| | | | | | | 48 | -| | | | | | 81a141b2-5064-4dc3-a51a-128b8caa5438 | 2 | -| | | | | | | | -| | | | | | | https://roperzh-fleet.ngrok.io/api/mdm/microsoft/management | -| | | | | | ./Device/Vendor/MSFT/Policy/Config/Bluetooth/AllowDiscoverableMode | | -| | | | | | | | -| | | | | | | 1F28CCBDCE02AE44BD2AAC3C0B9AD4DE | -| | | | | | int | | -| | | | | | | | -| | | | | | 1 | | -| | | | | | | | -| | | | | | | 1 | -| | | | | | | 1 | -| | | | | | | 0 | -| | | | | | | SyncHdr | -| | | | | | | 200 | -| | | | | | | | -| | | | | | | | -| | | | | | | 2 | -| | | | | | | 1 | -| | | | | | | 90dbfca8-d4ac-40c9-bf57-ba5b8cbf1ce0 | -| | | | | | | Atomic | -| | | | | | | 200 | -| | | | | | | | -| | | | | | | | -| | | | | | | 3 | -| | | | | | | 1 | -| | | | | | | 81a141b2-5064-4dc3-a51a-128b8caa5438 | -| | | | | | | Replace | -| | | | | | | 200 | -| | | | | | | | -| | | | | | | | -| | | | | | | | -| | | | | | | | -| | | | | | | | 
-+-----------+----------------------+------+--------+----------+---------------------------------------------------------------------------------------------+------------------------------------------------------------------------------------+ -| valid-cmd | 2023-04-04T15:29:00Z | test | 500 | host2 | | | -| | | | | | | | -| | | | | | 90dbfca8-d4ac-40c9-bf57-ba5b8cbf1ce0 | 1.2 | -| | | | | | | DM/1.2 | -| | | | | | | 48 | -| | | | | | 81a141b2-5064-4dc3-a51a-128b8caa5438 | 2 | -| | | | | | | | -| | | | | | | https://roperzh-fleet.ngrok.io/api/mdm/microsoft/management | -| | | | | | ./Device/Vendor/MSFT/Policy/Config/Bluetooth/AllowDiscoverableMode | | -| | | | | | | | -| | | | | | | 1F28CCBDCE02AE44BD2AAC3C0B9AD4DE | -| | | | | | int | | -| | | | | | | | -| | | | | | 1 | | -| | | | | | | | -| | | | | | | 1 | -| | | | | | | 1 | -| | | | | | | 0 | -| | | | | | | SyncHdr | -| | | | | | | 200 | -| | | | | | | | -| | | | | | | | -| | | | | | | 2 | -| | | | | | | 1 | -| | | | | | | 90dbfca8-d4ac-40c9-bf57-ba5b8cbf1ce0 | -| | | | | | | Atomic | -| | | | | | | 200 | -| | | | | | | | -| | | | | | | | -| | | | | | | 3 | -| | | | | | | 1 | -| | | | | | | 81a141b2-5064-4dc3-a51a-128b8caa5438 | -| | | | | | | Replace | -| | | | | | | 200 | -| | | | | | | | -| | | | | | | | -| | | | | | | | -| | | | | | | | -| | | | | | | | -+-----------+----------------------+------+--------+----------+---------------------------------------------------------------------------------------------+------------------------------------------------------------------------------------+ + expectedOutput := strings.TrimSpace(`+-----------+----------------------+----------------+--------+----------+---------------------------------------------------------------------------------------------+------------------------------------------------------------------------------------+ +| ID | TIME | TYPE | STATUS | HOSTNAME | PAYLOAD | RESULTS | 
++-----------+----------------------+----------------+--------+----------+---------------------------------------------------------------------------------------------+------------------------------------------------------------------------------------+ +| valid-cmd | 2023-04-04T15:29:00Z | InstallProfile | 200 | host1 | | | +| | | | | | | | +| | | | | | 90dbfca8-d4ac-40c9-bf57-ba5b8cbf1ce0 | 1.2 | +| | | | | | | DM/1.2 | +| | | | | | | 48 | +| | | | | | 81a141b2-5064-4dc3-a51a-128b8caa5438 | 2 | +| | | | | | | | +| | | | | | | https://roperzh-fleet.ngrok.io/api/mdm/microsoft/management | +| | | | | | ./Device/Vendor/MSFT/Policy/Config/Bluetooth/AllowDiscoverableMode | | +| | | | | | | | +| | | | | | | 1F28CCBDCE02AE44BD2AAC3C0B9AD4DE | +| | | | | | int | | +| | | | | | | | +| | | | | | 1 | | +| | | | | | | | +| | | | | | | 1 | +| | | | | | | 1 | +| | | | | | | 0 | +| | | | | | | SyncHdr | +| | | | | | | 200 | +| | | | | | | | +| | | | | | | | +| | | | | | | 2 | +| | | | | | | 1 | +| | | | | | | 90dbfca8-d4ac-40c9-bf57-ba5b8cbf1ce0 | +| | | | | | | Atomic | +| | | | | | | 200 | +| | | | | | | | +| | | | | | | | +| | | | | | | 3 | +| | | | | | | 1 | +| | | | | | | 81a141b2-5064-4dc3-a51a-128b8caa5438 | +| | | | | | | Replace | +| | | | | | | 200 | +| | | | | | | | +| | | | | | | | +| | | | | | | | +| | | | | | | | +| | | | | | | | ++-----------+----------------------+----------------+--------+----------+---------------------------------------------------------------------------------------------+------------------------------------------------------------------------------------+ +| valid-cmd | 2023-04-04T15:29:00Z | InstallProfile | 500 | host2 | | | +| | | | | | | | +| | | | | | 90dbfca8-d4ac-40c9-bf57-ba5b8cbf1ce0 | 1.2 | +| | | | | | | DM/1.2 | +| | | | | | | 48 | +| | | | | | 81a141b2-5064-4dc3-a51a-128b8caa5438 | 2 | +| | | | | | | | +| | | | | | | https://roperzh-fleet.ngrok.io/api/mdm/microsoft/management | +| | | | | | 
./Device/Vendor/MSFT/Policy/Config/Bluetooth/AllowDiscoverableMode | | +| | | | | | | | +| | | | | | | 1F28CCBDCE02AE44BD2AAC3C0B9AD4DE | +| | | | | | int | | +| | | | | | | | +| | | | | | 1 | | +| | | | | | | | +| | | | | | | 1 | +| | | | | | | 1 | +| | | | | | | 0 | +| | | | | | | SyncHdr | +| | | | | | | 200 | +| | | | | | | | +| | | | | | | | +| | | | | | | 2 | +| | | | | | | 1 | +| | | | | | | 90dbfca8-d4ac-40c9-bf57-ba5b8cbf1ce0 | +| | | | | | | Atomic | +| | | | | | | 200 | +| | | | | | | | +| | | | | | | | +| | | | | | | 3 | +| | | | | | | 1 | +| | | | | | | 81a141b2-5064-4dc3-a51a-128b8caa5438 | +| | | | | | | Replace | +| | | | | | | 200 | +| | | | | | | | +| | | | | | | | +| | | | | | | | +| | | | | | | | +| | | | | | | | ++-----------+----------------------+----------------+--------+----------+---------------------------------------------------------------------------------------------+------------------------------------------------------------------------------------+ `) platform = "windows" @@ -2644,6 +2642,14 @@ func TestGetMDMCommands(t *testing.T) { Status: "200", Hostname: "host2", }, + // This represents a command generated by fleet as part of a Windows profile + { + HostUUID: "h2", + CommandUUID: "u3", + UpdatedAt: time.Date(2023, 4, 11, 9, 5, 0, 0, time.UTC), + Status: "200", + Hostname: "host2", + }, }, nil } @@ -2669,6 +2675,8 @@ func TestGetMDMCommands(t *testing.T) { +----+----------------------+---------------------------------------+--------------+----------+ | u2 | 2023-04-11T09:05:00Z | ./Device/Vendor/MSFT/Reboot/RebootNow | 200 | host2 | +----+----------------------+---------------------------------------+--------------+----------+ +| u3 | 2023-04-11T09:05:00Z | InstallProfile | 200 | host2 | ++----+----------------------+---------------------------------------+--------------+----------+ `)) } diff --git a/cmd/osquery-perf/agent.go b/cmd/osquery-perf/agent.go index 3321fc6ba..ac0a58ca9 100644 --- a/cmd/osquery-perf/agent.go +++ 
b/cmd/osquery-perf/agent.go @@ -629,6 +629,7 @@ func (a *agent) runOrbitLoop() { HardwareSerial: a.SerialNumber, Hostname: a.CachedString("hostname"), }, + nil, ) if err != nil { log.Println("creating orbit client: ", err) diff --git a/docs/Configuration/configuration-files/kubernetes/fleet-deployment.yml b/docs/Configuration/configuration-files/kubernetes/fleet-deployment.yml index da6fc2b67..e7badbf04 100644 --- a/docs/Configuration/configuration-files/kubernetes/fleet-deployment.yml +++ b/docs/Configuration/configuration-files/kubernetes/fleet-deployment.yml @@ -1,4 +1,4 @@ -apiVersion: apps/v1beta2 +apiVersion: apps/v1 kind: Deployment metadata: name: fleet-webserver @@ -20,10 +20,10 @@ spec: secretName: fleet-tls containers: - name: fleet-webserver - image: fleetdm/fleet:4.0.1 + image: fleetdm/fleet:v4.43.3 command: ["fleet", "serve"] ports: - - containerPort: 443 + - containerPort: 8443 volumeMounts: - name: fleet-tls mountPath: "/secrets/fleet-tls" @@ -37,14 +37,14 @@ spec: name: fleet-database-mysql key: mysql-password - name: FLEET_REDIS_ADDRESS - value: fleet-cache-redis:6379 + value: fleet-cache-redis-master:6379 - name: FLEET_REDIS_PASSWORD valueFrom: secretKeyRef: name: fleet-cache-redis key: redis-password - name: FLEET_SERVER_ADDRESS - value: "0.0.0.0:443" + value: "0.0.0.0:8443" - name: FLEET_SERVER_CERT value: "/secrets/fleet-tls/tls.crt" - name: FLEET_SERVER_KEY diff --git a/docs/Configuration/configuration-files/kubernetes/fleet-migrations.yml b/docs/Configuration/configuration-files/kubernetes/fleet-migrations.yml index f6dc7ebfc..8e432b189 100644 --- a/docs/Configuration/configuration-files/kubernetes/fleet-migrations.yml +++ b/docs/Configuration/configuration-files/kubernetes/fleet-migrations.yml @@ -9,7 +9,7 @@ spec: spec: containers: - name: fleet - image: fleetdm/fleet:4.0.1 + image: fleetdm/fleet:v4.43.3 command: ["fleet", "prepare", "db"] env: - name: FLEET_MYSQL_ADDRESS diff --git 
a/docs/Configuration/configuration-files/kubernetes/fleet-service.yml b/docs/Configuration/configuration-files/kubernetes/fleet-service.yml index 098270f02..621199dba 100644 --- a/docs/Configuration/configuration-files/kubernetes/fleet-service.yml +++ b/docs/Configuration/configuration-files/kubernetes/fleet-service.yml @@ -9,7 +9,7 @@ spec: ports: - name: proxy-tls port: 443 - targetPort: 443 + targetPort: 8443 protocol: TCP - name: proxy-http port: 80 diff --git a/docs/REST API/rest-api.md b/docs/REST API/rest-api.md index d87cb8c7b..380a7c1e7 100644 --- a/docs/REST API/rest-api.md +++ b/docs/REST API/rest-api.md @@ -1825,6 +1825,8 @@ None. - [Get host's scripts](#get-hosts-scripts) - [Get hosts report in CSV](#get-hosts-report-in-csv) - [Get host's disk encryption key](#get-hosts-disk-encryption-key) +- [Lock host](#lock-host) +- [Unlock host](#unlock-host) - [Get host's past activity](#get-hosts-past-activity) - [Get host's upcoming activity](#get-hosts-upcoming-activity) - [Live query one host (ad-hoc)](#live-query-one-host-ad-hoc) @@ -2019,7 +2021,9 @@ If `after` is being used with `created_at` or `updated_at`, the table must be sp "encryption_key_available": false, "enrollment_status": null, "name": "", - "server_url": null + "server_url": null, + "device_status": "unlocked", + "pending_action": "" }, "software": [ { @@ -2451,6 +2455,8 @@ Returns the information of the specified host. "enrollment_status": null, "name": "", "server_url": null, + "device_status": "unlocked", + "pending_action": "", "macos_settings": { "disk_encryption": null, "action_required": null @@ -2660,6 +2666,8 @@ Returns the information of the host specified using the `uuid`, `hardware_serial "enrollment_status": null, "name": "", "server_url": null, + "device_status": "unlocked", + "pending_action": "lock", "macos_settings": { "disk_encryption": null, "action_required": null @@ -3758,6 +3766,67 @@ Retrieves a list of the configuration profiles assigned to a host. 
} ``` +### Lock host + +_Available in Fleet Premium_ + +Sends a command to lock the specified macOS, Linux, or Windows host. The host is locked once it comes online. + +To lock a macOS host, the host must have MDM turned on. To lock a Windows or Linux host, the host must have [scripts enabled](https://fleetdm.com/docs/using-fleet/scripts). + + +`POST /api/v1/fleet/hosts/:id/lock` + +#### Parameters + +| Name | Type | In | Description | +| ---------- | ----------------- | ---- | ----------------------------------------------------------------------------- | +| id | integer | path | **Required**. ID of the host to be locked. | + +#### Example + +`POST /api/v1/fleet/hosts/123/lock` + +##### Default response + +`Status: 204` + +### Unlock host + +_Available in Fleet Premium_ + +Sends a command to unlock the specified Windows or Linux host, or retrieves the unlock PIN for a macOS host. + +To unlock a Windows or Linux host, the host must have [scripts enabled](https://fleetdm.com/docs/using-fleet/scripts). + +`POST /api/v1/fleet/hosts/:id/unlock` + +#### Parameters + +| Name | Type | In | Description | +| ---------- | ----------------- | ---- | ----------------------------------------------------------------------------- | +| id | integer | path | **Required**. ID of the host to be unlocked. | + +#### Example + +`POST /api/v1/fleet/hosts/:id/unlock` + +##### Default response (Windows or Linux hosts) + +`Status: 204` + +##### Default response (macOS hosts) + +`Status: 200` + +```json +{ + "host_id": 8, + "unlock_pin": "123456" +} +``` + + ### Get host's past activity `GET /api/v1/fleet/hosts/:id/activites/past` @@ -4874,7 +4943,8 @@ This endpoint returns the results for a specific custom MDM command. 
"updated_at": "2023-04-04:00:00Z", "request_type": "ProfileList", "hostname": "mycomputer", - "result": "PD94bWwgdmVyc2lvbj0iMS4wIiBlbmNvZGluZz0iVVRGLTgiPz4KPCFET0NUWVBFIHBsaXN0IFBVQkxJQyAiLS8vQXBwbGUvL0RURCBQTElTVCAxLjAvL0VOIiAiaHR0cDovL3d3dy5hcHBsZS5jb20vRFREcy9Qcm9wZXJ0eUxpc3QtMS4wLmR0ZCI-CjxwbGlzdCB2ZXJzaW9uPSIxLjAiPgo8ZGljdD4KICAgIDxrZXk-Q29tbWFuZDwva2V5PgogICAgPGRpY3Q-CiAgICAgICAgPGtleT5NYW5hZ2VkT25seTwva2V5PgogICAgICAgIDxmYWxzZS8-CiAgICAgICAgPGtleT5SZXF1ZXN0VHlwZTwva2V5PgogICAgICAgIDxzdHJpbmc-UHJvZmlsZUxpc3Q8L3N0cmluZz4KICAgIDwvZGljdD4KICAgIDxrZXk-Q29tbWFuZFVVSUQ8L2tleT4KICAgIDxzdHJpbmc-MDAwMV9Qcm9maWxlTGlzdDwvc3RyaW5nPgo8L2RpY3Q-CjwvcGxpc3Q-" + "payload": "PD94bWwgdmVyc2lvbj0iMS4wIiBlbmNvZGluZz0iVVRGLTgiPz4NCjwhRE9DVFlQRSBwbGlzdCBQVUJMSUMgIi0vL0FwcGxlLy9EVEQgUExJU1QgMS4wLy9FTiIgImh0dHA6Ly93d3cuYXBwbGUuY29tL0RURHMvUHJvcGVydHlMaXN0LTEuMC5kdGQiPg0KPHBsaXN0IHZlcnNpb249IjEuMCI+DQo8ZGljdD4NCg0KCTxrZXk+UGF5bG9hZERlc2NyaXB0aW9uPC9rZXk+DQoJPHN0cmluZz5UaGlzIHByb2ZpbGUgY29uZmlndXJhdGlvbiBpcyBkZXNpZ25lZCB0byBhcHBseSB0aGUgQ0lTIEJlbmNobWFyayBmb3IgbWFjT1MgMTAuMTQgKHYyLjAuMCksIDEwLjE1ICh2Mi4wLjApLCAxMS4wICh2Mi4wLjApLCBhbmQgMTIuMCAodjEuMC4wKTwvc3RyaW5nPg0KCTxrZXk+UGF5bG9hZERpc3BsYXlOYW1lPC9rZXk+DQoJPHN0cmluZz5EaXNhYmxlIEJsdWV0b290aCBzaGFyaW5nPC9zdHJpbmc+DQoJPGtleT5QYXlsb2FkRW5hYmxlZDwva2V5Pg0KCTx0cnVlLz4NCgk8a2V5PlBheWxvYWRJZGVudGlmaWVyPC9rZXk+DQoJPHN0cmluZz5jaXMubWFjT1NCZW5jaG1hcmsuc2VjdGlvbjIuQmx1ZXRvb3RoU2hhcmluZzwvc3RyaW5nPg0KCTxrZXk+UGF5bG9hZFNjb3BlPC9rZXk+DQoJPHN0cmluZz5TeXN0ZW08L3N0cmluZz4NCgk8a2V5PlBheWxvYWRUeXBlPC9rZXk+DQoJPHN0cmluZz5Db25maWd1cmF0aW9uPC9zdHJpbmc+DQoJPGtleT5QYXlsb2FkVVVJRDwva2V5Pg0KCTxzdHJpbmc+NUNFQkQ3MTItMjhFQi00MzJCLTg0QzctQUEyOEE1QTM4M0Q4PC9zdHJpbmc+DQoJPGtleT5QYXlsb2FkVmVyc2lvbjwva2V5Pg0KCTxpbnRlZ2VyPjE8L2ludGVnZXI+DQogICAgPGtleT5QYXlsb2FkUmVtb3ZhbERpc2FsbG93ZWQ8L2tleT4NCiAgICA8dHJ1ZS8+DQoJPGtleT5QYXlsb2FkQ29udGVudDwva2V5Pg0KCTxhcnJheT4NCgkJPGRpY3Q+DQoJCQk8a2V5PlBheWxvYWRDb250ZW50PC9rZXk+DQoJCQk8ZGljdD4NCgkJCQk8a2V5PmNvbS5hcHBsZS5CbHVldG9vdGg8L2t
leT4NCgkJCQk8ZGljdD4NCgkJCQkJPGtleT5Gb3JjZWQ8L2tleT4NCgkJCQkJPGFycmF5Pg0KCQkJCQkJPGRpY3Q+DQoJCQkJCQkJPGtleT5tY3hfcHJlZmVyZW5jZV9zZXR0aW5nczwva2V5Pg0KCQkJCQkJCTxkaWN0Pg0KCQkJCQkJCQk8a2V5PlByZWZLZXlTZXJ2aWNlc0VuYWJsZWQ8L2tleT4NCgkJCQkJCQkJPGZhbHNlLz4NCgkJCQkJCQk8L2RpY3Q+DQoJCQkJCQk8L2RpY3Q+DQoJCQkJCTwvYXJyYXk+DQoJCQkJPC9kaWN0Pg0KCQkJPC9kaWN0Pg0KCQkJPGtleT5QYXlsb2FkRGVzY3JpcHRpb248L2tleT4NCgkJCTxzdHJpbmc+RGlzYWJsZXMgQmx1ZXRvb3RoIFNoYXJpbmc8L3N0cmluZz4NCgkJCTxrZXk+UGF5bG9hZERpc3BsYXlOYW1lPC9rZXk+DQoJCQk8c3RyaW5nPkN1c3RvbTwvc3RyaW5nPg0KCQkJPGtleT5QYXlsb2FkRW5hYmxlZDwva2V5Pg0KCQkJPHRydWUvPg0KCQkJPGtleT5QYXlsb2FkSWRlbnRpZmllcjwva2V5Pg0KCQkJPHN0cmluZz4wMjQwREQxQy03MERDLTQ3NjYtOTAxOC0wNDMyMkJGRUVBRDE8L3N0cmluZz4NCgkJCTxrZXk+UGF5bG9hZFR5cGU8L2tleT4NCgkJCTxzdHJpbmc+Y29tLmFwcGxlLk1hbmFnZWRDbGllbnQucHJlZmVyZW5jZXM8L3N0cmluZz4NCgkJCTxrZXk+UGF5bG9hZFVVSUQ8L2tleT4NCgkJCTxzdHJpbmc+MDI0MEREMUMtNzBEQy00NzY2LTkwMTgtMDQzMjJCRkVFQUQxPC9zdHJpbmc+DQoJCQk8a2V5PlBheWxvYWRWZXJzaW9uPC9rZXk+DQoJCQk8aW50ZWdlcj4xPC9pbnRlZ2VyPg0KCQk8L2RpY3Q+DQoJPC9hcnJheT4NCjwvZGljdD4NCjwvcGxpc3Q+", + "result": "PD94bWwgdmVyc2lvbj0iMS4wIiBlbmNvZGluZz0iVVRGLTgiPz4NCjwhRE9DVFlQRSBwbGlzdCBQVUJMSUMgIi0vL0FwcGxlLy9EVEQgUExJU1QgMS4wLy9FTiIgImh0dHA6Ly93d3cuYXBwbGUuY29tL0RURHMvUHJvcGVydHlMaXN0LTEuMC5kdGQiPg0KPHBsaXN0IHZlcnNpb249IjEuMCI+DQo8ZGljdD4NCiAgICA8a2V5PkNvbW1hbmRVVUlEPC9rZXk+DQogICAgPHN0cmluZz4wMDAxX0luc3RhbGxQcm9maWxlPC9zdHJpbmc+DQogICAgPGtleT5TdGF0dXM8L2tleT4NCiAgICA8c3RyaW5nPkFja25vd2xlZGdlZDwvc3RyaW5nPg0KICAgIDxrZXk+VURJRDwva2V5Pg0KICAgIDxzdHJpbmc+MDAwMDgwMjAtMDAwOTE1MDgzQzgwMDEyRTwvc3RyaW5nPg0KPC9kaWN0Pg0KPC9wbGlzdD4=" } ] } diff --git a/docs/Using Fleet/Scripts.md b/docs/Using Fleet/Scripts.md index c80fa4f48..dfd8a5429 100644 --- a/docs/Using Fleet/Scripts.md +++ b/docs/Using Fleet/Scripts.md @@ -1,7 +1,5 @@ # Scripts -_Available in Fleet Premium_ - In Fleet you can execute a custom script to remediate an issue on your macOS, Windows, and Linux hosts. Shell scripts are supported on macOS and Linux. 
All scripts will run in the host's (root) default shell (`/bin/sh`). Other interpreters are not supported yet. @@ -34,9 +32,7 @@ Fleet UI: 3. On your target host's host details page, select the **Scripts** tab and select **Actions** to run the script. -> Currently, you can only run scripts on macOS and Windows hosts in the Fleet UI. To run a script on a Linux host, use the Fleet API or fleetctl CLI. - -Fleet API: API documentation is [here](https://fleetdm.com/docs/rest-api/rest-api#run-script) +Fleet API: API documentation is [here](https://fleetdm.com/docs/rest-api/rest-api#run-script) fleetctl CLI: diff --git a/ee/fleetd-chrome/package-lock.json b/ee/fleetd-chrome/package-lock.json index 32acbb513..dfdbb4e32 100644 --- a/ee/fleetd-chrome/package-lock.json +++ b/ee/fleetd-chrome/package-lock.json @@ -1,12 +1,12 @@ { "name": "fleetd-for-chrome", - "version": "1.1.3", + "version": "1.2.0", "lockfileVersion": 3, "requires": true, "packages": { "": { "name": "fleetd-for-chrome", - "version": "1.1.3", + "version": "1.2.0", "dependencies": { "dotenv": "^16.0.3", "wa-sqlite": "github:rhashimoto/wa-sqlite#v0.9.11" diff --git a/ee/fleetd-chrome/package.json b/ee/fleetd-chrome/package.json index e1a0d61b7..ad2d9a08f 100644 --- a/ee/fleetd-chrome/package.json +++ b/ee/fleetd-chrome/package.json @@ -1,7 +1,7 @@ { "name": "fleetd-for-chrome", "description": "Extension for Fleetd on ChromeOS", - "version": "1.1.3", + "version": "1.2.0", "dependencies": { "dotenv": "^16.0.3", "wa-sqlite": "github:rhashimoto/wa-sqlite#v0.9.11" diff --git a/ee/fleetd-chrome/src/tables/Table.ts b/ee/fleetd-chrome/src/tables/Table.ts index 93deadb6e..42b163051 100644 --- a/ee/fleetd-chrome/src/tables/Table.ts +++ b/ee/fleetd-chrome/src/tables/Table.ts @@ -14,7 +14,6 @@ const CONCAT_CHROME_WARNINGS = (warnings: ChromeWarning[]): string => { class cursorState { rowIndex: number; rows: Record[]; - error: any; } interface ChromeWarning { @@ -121,10 +120,10 @@ export default abstract class Table
implements SQLiteModule { } cursorState.rows = tableDataReturned.data; } catch (err) { - // Throwing here doesn't seem to work as expected in testing (the error doesn't seem to be - // thrown in a way that it can be caught appropriately), so instead we save the error and - // throw in xEof. - cursorState.error = err; + // We cannot throw inside a SQLite module callback because it may cause the wasm stack to run out of memory. + // See: https://github.com/rhashimoto/wa-sqlite/issues/156#issuecomment-1942477704 + console.warn("Error generating table data: %s", err); + return SQLite.SQLITE_ERROR; } return SQLite.SQLITE_OK; }); @@ -133,6 +132,9 @@ export default abstract class Table implements SQLiteModule { xNext(pCursor: number): number { // Advance the row index for the cursor. const cursorState = this.cursorStates.get(pCursor); + if (!cursorState || !cursorState.rows) { + return SQLite.SQLITE_ERROR; + } cursorState.rowIndex += 1; return SQLite.SQLITE_OK; } @@ -140,10 +142,8 @@ export default abstract class Table implements SQLiteModule { xEof(pCursor: number): number { // Check whether we've returned all rows (cursor index is beyond number of rows). const cursorState = this.cursorStates.get(pCursor); - // Throw any error saved in the cursor state (because throwing in xFilter doesn't seem to work - // correctly with async code).
- if (cursorState.error) { - throw cursorState.error; + if (!cursorState || !cursorState.rows) { + return 1; } return Number(cursorState.rowIndex >= cursorState.rows.length); } diff --git a/ee/fleetd-chrome/src/tables/network_interfaces.ts b/ee/fleetd-chrome/src/tables/network_interfaces.ts index 8e57d575f..2da372dfa 100644 --- a/ee/fleetd-chrome/src/tables/network_interfaces.ts +++ b/ee/fleetd-chrome/src/tables/network_interfaces.ts @@ -5,6 +5,18 @@ export default class TableNetworkInterfaces extends Table { columns = ["mac", "ipv4", "ipv6"]; async generate() { + if (!chrome.enterprise) { + return { + data: [], + warnings: [ + { + column: "mac", + error_message: "chrome.enterprise API is not available for network details", + }, + ], + }; + } + // @ts-expect-error @types/chrome doesn't yet have the getNetworkDetails Promise API. const networkDetails = (await chrome.enterprise.networkingAttributes.getNetworkDetails()) as chrome.enterprise.networkingAttributes.NetworkDetails; const ipv4 = networkDetails.ipv4; diff --git a/ee/fleetd-chrome/updates-beta.xml b/ee/fleetd-chrome/updates-beta.xml index 668c0f64d..03bddca04 100644 --- a/ee/fleetd-chrome/updates-beta.xml +++ b/ee/fleetd-chrome/updates-beta.xml @@ -1,6 +1,6 @@ - + \ No newline at end of file diff --git a/ee/fleetd-chrome/updates.xml b/ee/fleetd-chrome/updates.xml index 9881e9203..7bf2b24b2 100644 --- a/ee/fleetd-chrome/updates.xml +++ b/ee/fleetd-chrome/updates.xml @@ -1,6 +1,6 @@ - + \ No newline at end of file diff --git a/frontend/components/EmailTokenRedirect/EmailTokenRedirect.tsx b/frontend/components/EmailTokenRedirect/EmailTokenRedirect.tsx index 358278fbb..acb242ede 100644 --- a/frontend/components/EmailTokenRedirect/EmailTokenRedirect.tsx +++ b/frontend/components/EmailTokenRedirect/EmailTokenRedirect.tsx @@ -25,7 +25,7 @@ const EmailTokenRedirect = ({ if (currentUser && token) { try { await usersAPI.confirmEmailChange(currentUser, token); - router.push(PATHS.USER_SETTINGS); + 
router.push(PATHS.ACCOUNT); renderFlash("success", "Email updated successfully!"); } catch (error) { console.log(error); diff --git a/frontend/components/forms/RegistrationForm/RegistrationForm.jsx b/frontend/components/forms/RegistrationForm/RegistrationForm.jsx index ec8479919..14fba0679 100644 --- a/frontend/components/forms/RegistrationForm/RegistrationForm.jsx +++ b/frontend/components/forms/RegistrationForm/RegistrationForm.jsx @@ -120,7 +120,7 @@ class RegistrationForm extends Component {
-

Setup user

+

Set up user

{ expect( container.querySelectorAll(".user-registration__container--admin").length ).toEqual(1); - expect(screen.getByText("Setup user")).toBeInTheDocument(); + expect(screen.getByText("Set up user")).toBeInTheDocument(); }); it("renders OrgDetails on the second page", () => { diff --git a/frontend/components/forms/RegistrationForm/_styles.scss b/frontend/components/forms/RegistrationForm/_styles.scss index cbe28f73c..e4dfb02a9 100644 --- a/frontend/components/forms/RegistrationForm/_styles.scss +++ b/frontend/components/forms/RegistrationForm/_styles.scss @@ -181,7 +181,7 @@ .button { width: 160px; - margin-top: $pad-xxlarge; + margin-top: $pad-medium; // 40px total (24px gap + 16px more) } } } diff --git a/frontend/components/forms/fields/InputFieldWithIcon/InputFieldWithIcon.stories.tsx b/frontend/components/forms/fields/InputFieldWithIcon/InputFieldWithIcon.stories.tsx index 97a2e074b..1fd893fb8 100644 --- a/frontend/components/forms/fields/InputFieldWithIcon/InputFieldWithIcon.stories.tsx +++ b/frontend/components/forms/fields/InputFieldWithIcon/InputFieldWithIcon.stories.tsx @@ -63,7 +63,7 @@ export default { "all-hosts", "alerts", "logout", - "user-settings", + "account", "clipboard", "list-select", "grid-select", diff --git a/frontend/components/graphics/EmptyPolicies.tsx b/frontend/components/graphics/EmptyPolicies.tsx index da2d725d8..f170a3830 100644 --- a/frontend/components/graphics/EmptyPolicies.tsx +++ b/frontend/components/graphics/EmptyPolicies.tsx @@ -3,128 +3,113 @@ import React from "react"; const EmptyPolicies = () => { return ( - + + + + - - - - - - - - - - - - - - - - - - - - - - + + + + + + + + + + + + + + + + - - + + - + - - + + + diff --git a/frontend/components/queries/queryResults/QueryResultsHeading/QueryResultsHeading.tsx b/frontend/components/queries/queryResults/QueryResultsHeading/QueryResultsHeading.tsx index d8668dc6e..8e5567e7a 100644 --- a/frontend/components/queries/queryResults/QueryResultsHeading/QueryResultsHeading.tsx +++ 
b/frontend/components/queries/queryResults/QueryResultsHeading/QueryResultsHeading.tsx @@ -1,11 +1,13 @@ import React from "react"; +import strUtils from "utilities/strings"; + import Spinner from "components/Spinner"; import Button from "components/buttons/Button"; import TooltipWrapper from "components/TooltipWrapper"; const pluralizeHost = (count: number) => { - return count > 1 ? "hosts" : "host"; + return strUtils.pluralize(count, "host"); }; const baseClass = "query-results-heading"; diff --git a/frontend/components/top_nav/UserMenu/UserMenu.tsx b/frontend/components/top_nav/UserMenu/UserMenu.tsx index edb83318a..4e0846412 100644 --- a/frontend/components/top_nav/UserMenu/UserMenu.tsx +++ b/frontend/components/top_nav/UserMenu/UserMenu.tsx @@ -28,7 +28,7 @@ const UserMenu = ({ currentUser, isSandboxMode = false, }: IUserMenuProps): JSX.Element => { - const accountNavigate = onNavItemClick(PATHS.USER_SETTINGS); + const accountNavigate = onNavItemClick(PATHS.ACCOUNT); const dropdownItems = [ { label: "My account", diff --git a/frontend/pages/UserSettingsPage/APITokenModal/TokenSecretField/SecretField.stories.tsx b/frontend/pages/AccountPage/APITokenModal/TokenSecretField/SecretField.stories.tsx similarity index 100% rename from frontend/pages/UserSettingsPage/APITokenModal/TokenSecretField/SecretField.stories.tsx rename to frontend/pages/AccountPage/APITokenModal/TokenSecretField/SecretField.stories.tsx diff --git a/frontend/pages/UserSettingsPage/APITokenModal/TokenSecretField/SecretField.tsx b/frontend/pages/AccountPage/APITokenModal/TokenSecretField/SecretField.tsx similarity index 100% rename from frontend/pages/UserSettingsPage/APITokenModal/TokenSecretField/SecretField.tsx rename to frontend/pages/AccountPage/APITokenModal/TokenSecretField/SecretField.tsx diff --git a/frontend/pages/UserSettingsPage/APITokenModal/TokenSecretField/_styles.scss b/frontend/pages/AccountPage/APITokenModal/TokenSecretField/_styles.scss similarity index 100% rename from 
frontend/pages/UserSettingsPage/APITokenModal/TokenSecretField/_styles.scss rename to frontend/pages/AccountPage/APITokenModal/TokenSecretField/_styles.scss diff --git a/frontend/pages/UserSettingsPage/APITokenModal/TokenSecretField/index.ts b/frontend/pages/AccountPage/APITokenModal/TokenSecretField/index.ts similarity index 100% rename from frontend/pages/UserSettingsPage/APITokenModal/TokenSecretField/index.ts rename to frontend/pages/AccountPage/APITokenModal/TokenSecretField/index.ts diff --git a/frontend/pages/UserSettingsPage/UserSettingsPage.tsx b/frontend/pages/AccountPage/AccountPage.tsx similarity index 96% rename from frontend/pages/UserSettingsPage/UserSettingsPage.tsx rename to frontend/pages/AccountPage/AccountPage.tsx index 19c389c86..02cfcb360 100644 --- a/frontend/pages/UserSettingsPage/UserSettingsPage.tsx +++ b/frontend/pages/AccountPage/AccountPage.tsx @@ -25,17 +25,15 @@ import SidePanelContent from "components/SidePanelContent"; import CustomLink from "components/CustomLink"; import SecretField from "./APITokenModal/TokenSecretField/SecretField"; -import UserSidePanel from "./UserSidePanel"; +import AccountSidePanel from "./AccountSidePanel"; -const baseClass = "user-settings"; +const baseClass = "account-page"; -interface IUserSettingsPageProps { +interface IAccountPageProps { router: InjectedRouter; } -const UserSettingsPage = ({ - router, -}: IUserSettingsPageProps): JSX.Element | null => { +const AccountPage = ({ router }: IAccountPageProps): JSX.Element | null => { const { config, currentUser } = useContext(AppContext); const { renderFlash } = useContext(NotificationContext); @@ -237,7 +235,7 @@ const UserSettingsPage = ({ - void; onGetApiToken: () => void; } -const baseClass = "user-side-panel"; +const baseClass = "account-side-panel"; -const UserSidePanel = ({ +const AccountSidePanel = ({ currentUser, onChangePassword, onGetApiToken, -}: IUserSidePanelProps): JSX.Element => { +}: IAccountSidePanelProps): JSX.Element => { const { 
isPremiumTier, config } = useContext(AppContext); const [versionData, setVersionData] = useState(); @@ -143,4 +143,4 @@ const UserSidePanel = ({ ); }; -export default UserSidePanel; +export default AccountSidePanel; diff --git a/frontend/pages/UserSettingsPage/UserSidePanel/_styles.scss b/frontend/pages/AccountPage/AccountSidePanel/_styles.scss similarity index 98% rename from frontend/pages/UserSettingsPage/UserSidePanel/_styles.scss rename to frontend/pages/AccountPage/AccountSidePanel/_styles.scss index 9246d30f0..a810bb8c2 100644 --- a/frontend/pages/UserSettingsPage/UserSidePanel/_styles.scss +++ b/frontend/pages/AccountPage/AccountSidePanel/_styles.scss @@ -1,4 +1,4 @@ -.user-side-panel { +.account-side-panel { &__change-avatar { position: relative; padding: 0 0 20px; diff --git a/frontend/pages/AccountPage/AccountSidePanel/index.ts b/frontend/pages/AccountPage/AccountSidePanel/index.ts new file mode 100644 index 000000000..9170144a4 --- /dev/null +++ b/frontend/pages/AccountPage/AccountSidePanel/index.ts @@ -0,0 +1 @@ +export { default } from "./AccountSidePanel"; diff --git a/frontend/pages/UserSettingsPage/_styles.scss b/frontend/pages/AccountPage/_styles.scss similarity index 97% rename from frontend/pages/UserSettingsPage/_styles.scss rename to frontend/pages/AccountPage/_styles.scss index c9c4c6ce9..5d7f969f4 100644 --- a/frontend/pages/UserSettingsPage/_styles.scss +++ b/frontend/pages/AccountPage/_styles.scss @@ -1,4 +1,4 @@ -.user-settings { +.account-page { &__sandboxMode { margin-top: 70px; } diff --git a/frontend/pages/AccountPage/index.ts b/frontend/pages/AccountPage/index.ts new file mode 100644 index 000000000..cc310215a --- /dev/null +++ b/frontend/pages/AccountPage/index.ts @@ -0,0 +1 @@ +export { default } from "./AccountPage"; diff --git a/frontend/pages/ManageControlsPage/OSSettings/cards/CustomSettings/components/ProfileListItem/ProfileListItem.tsx 
b/frontend/pages/ManageControlsPage/OSSettings/cards/CustomSettings/components/ProfileListItem/ProfileListItem.tsx index 8fe3ce132..15b767670 100644 --- a/frontend/pages/ManageControlsPage/OSSettings/cards/CustomSettings/components/ProfileListItem/ProfileListItem.tsx +++ b/frontend/pages/ManageControlsPage/OSSettings/cards/CustomSettings/components/ProfileListItem/ProfileListItem.tsx @@ -10,7 +10,7 @@ import Button from "components/buttons/Button"; import Graphic from "components/Graphic"; import Icon from "components/Icon"; -import { pluralize } from "utilities/helpers"; +import strUtils from "utilities/strings"; const baseClass = "profile-list-item"; @@ -22,7 +22,7 @@ const LabelCount = ({ count: number; }) => (
- {`${count} ${pluralize(count, "label", "s", "")}`} + {`${count} ${strUtils.pluralize(count, "label")}`}
); diff --git a/frontend/pages/ManageControlsPage/OSUpdates/_styles.scss b/frontend/pages/ManageControlsPage/OSUpdates/_styles.scss index 93393f26a..cf1b866d1 100644 --- a/frontend/pages/ManageControlsPage/OSUpdates/_styles.scss +++ b/frontend/pages/ManageControlsPage/OSUpdates/_styles.scss @@ -7,7 +7,6 @@ &__content { display: grid; - max-width: $break-xxl; gap: $pad-xxlarge; margin: 0 auto; diff --git a/frontend/pages/RegistrationPage/Breadcrumbs/Breadcrumbs.jsx b/frontend/pages/RegistrationPage/Breadcrumbs/Breadcrumbs.jsx deleted file mode 100644 index 549c00d6d..000000000 --- a/frontend/pages/RegistrationPage/Breadcrumbs/Breadcrumbs.jsx +++ /dev/null @@ -1,84 +0,0 @@ -import React, { Component } from "react"; -import PropTypes from "prop-types"; -import classnames from "classnames"; - -class Breadcrumbs extends Component { - static propTypes = { - onClick: PropTypes.func, - pageProgress: PropTypes.number, - }; - - static defaultProps = { - pageProgress: 1, - }; - - onClick = (page) => { - return (evt) => { - evt.preventDefault(); - - const { onClick: handleClick } = this.props; - - return handleClick(page); - }; - }; - - render() { - const { onClick } = this; - const { pageProgress } = this.props; - const baseClass = "registration-breadcrumbs"; - const pageBaseClass = `${baseClass}__page`; - const page1ClassName = classnames( - pageBaseClass, - `${pageBaseClass}--1`, - "button--unstyled", - { - [`${pageBaseClass}--active`]: pageProgress === 1, - [`${pageBaseClass}--complete`]: pageProgress > 1, - } - ); - const page2TabIndex = pageProgress >= 2 ? 0 : -1; - const page2ClassName = classnames( - pageBaseClass, - `${pageBaseClass}--2`, - "button--unstyled", - { - [`${pageBaseClass}--active`]: pageProgress === 2, - [`${pageBaseClass}--complete`]: pageProgress > 2, - } - ); - const page3TabIndex = pageProgress >= 3 ? 
0 : -1; - const page3ClassName = classnames( - pageBaseClass, - `${pageBaseClass}--3`, - "button--unstyled", - { - [`${pageBaseClass}--active`]: pageProgress === 3, - [`${pageBaseClass}--complete`]: pageProgress > 3, - } - ); - - return ( -
- - - -
- ); - } -} - -export default Breadcrumbs; diff --git a/frontend/pages/RegistrationPage/Breadcrumbs/Breadcrumbs.tests.jsx b/frontend/pages/RegistrationPage/Breadcrumbs/Breadcrumbs.tests.jsx deleted file mode 100644 index c94ea3252..000000000 --- a/frontend/pages/RegistrationPage/Breadcrumbs/Breadcrumbs.tests.jsx +++ /dev/null @@ -1,60 +0,0 @@ -import React from "react"; -import { fireEvent, render, screen } from "@testing-library/react"; - -import Breadcrumbs from "pages/RegistrationPage/Breadcrumbs"; - -describe("Breadcrumbs - component", () => { - it("renders 3 Button components", () => { - render(); - expect(screen.getAllByRole("button").length).toEqual(3); - }); - - it("renders page 1 Button as active when the page prop is 1", () => { - const { container } = render(); - const page1Btn = container.querySelector( - "button.registration-breadcrumbs__page--1" - ); - const page2Btn = container.querySelector( - "button.registration-breadcrumbs__page--2" - ); - const page3Btn = container.querySelector( - "button.registration-breadcrumbs__page--3" - ); - - expect(page1Btn.className).toContain( - "registration-breadcrumbs__page--active" - ); - expect(page2Btn.className).not.toContain( - "registration-breadcrumbs__page--active" - ); - expect(page3Btn.className).not.toContain( - "registration-breadcrumbs__page--active" - ); - }); - - it("calls the onClick prop with the page number when clicked", () => { - const onClickSpy = jest.fn(); - const { container } = render(); - const page1Btn = container.querySelector( - "button.registration-breadcrumbs__page--1" - ); - const page2Btn = container.querySelector( - "button.registration-breadcrumbs__page--2" - ); - const page3Btn = container.querySelector( - "button.registration-breadcrumbs__page--3" - ); - - fireEvent.click(page1Btn); - - expect(onClickSpy).toHaveBeenCalledWith(1); - - fireEvent.click(page2Btn); - - expect(onClickSpy).toHaveBeenCalledWith(2); - - fireEvent.click(page3Btn); - - 
expect(onClickSpy).toHaveBeenCalledWith(3); - }); -}); diff --git a/frontend/pages/RegistrationPage/Breadcrumbs/Breadcrumbs.tests.tsx b/frontend/pages/RegistrationPage/Breadcrumbs/Breadcrumbs.tests.tsx new file mode 100644 index 000000000..74714ba4c --- /dev/null +++ b/frontend/pages/RegistrationPage/Breadcrumbs/Breadcrumbs.tests.tsx @@ -0,0 +1,37 @@ +import React from "react"; +import { fireEvent, render, screen } from "@testing-library/react"; +import { noop } from "lodash"; + +import Breadcrumbs from "pages/RegistrationPage/Breadcrumbs"; + +describe("Breadcrumbs - component", () => { + it("renders 3 Button components", () => { + render(); + expect(screen.getAllByRole("button").length).toEqual(3); + }); + + it("renders page 1 Button as active when the current page prop is 1", () => { + const { container } = render( + + ); + const page1Btn = container.querySelector( + "button.registration-breadcrumbs__page--1" + ); + const page2Btn = container.querySelector( + "button.registration-breadcrumbs__page--2" + ); + const page3Btn = container.querySelector( + "button.registration-breadcrumbs__page--3" + ); + + expect(page1Btn?.className).toContain( + "registration-breadcrumbs__page--active" + ); + expect(page2Btn?.className).not.toContain( + "registration-breadcrumbs__page--active" + ); + expect(page3Btn?.className).not.toContain( + "registration-breadcrumbs__page--active" + ); + }); +}); diff --git a/frontend/pages/RegistrationPage/Breadcrumbs/Breadcrumbs.tsx b/frontend/pages/RegistrationPage/Breadcrumbs/Breadcrumbs.tsx new file mode 100644 index 000000000..edeeef39b --- /dev/null +++ b/frontend/pages/RegistrationPage/Breadcrumbs/Breadcrumbs.tsx @@ -0,0 +1,64 @@ +import React, { MouseEventHandler } from "react"; +import classnames from "classnames"; + +import Button from "components/buttons/Button"; + +interface IBreadcrumbs { + onSetPage: (page: number) => void; + currentPage: number; + pageProgress: number; +} +const baseClass = "registration-breadcrumbs"; + +const 
Breadcrumbs = ({ + onSetPage, + currentPage = 1, + pageProgress = 1, +}: IBreadcrumbs): JSX.Element => { + const pageBaseClass = `${baseClass}__page`; + const page1ClassName = classnames(pageBaseClass, `${pageBaseClass}--1`, { + [`${pageBaseClass}--active`]: currentPage === 1, + [`${pageBaseClass}--complete`]: pageProgress > 1, + }); + + const page2TabIndex = pageProgress >= 2 ? 0 : -1; + const page2ClassName = classnames(pageBaseClass, `${pageBaseClass}--2`, { + [`${pageBaseClass}--active`]: currentPage === 2, + [`${pageBaseClass}--complete`]: pageProgress > 2, + }); + const page3TabIndex = pageProgress >= 3 ? 0 : -1; + const page3ClassName = classnames(pageBaseClass, `${pageBaseClass}--3`, { + [`${pageBaseClass}--active`]: currentPage === 3, + [`${pageBaseClass}--complete`]: pageProgress > 3, + }); + + return ( +
+ + + +
+ ); +}; + +export default Breadcrumbs; diff --git a/frontend/pages/RegistrationPage/Breadcrumbs/_styles.scss b/frontend/pages/RegistrationPage/Breadcrumbs/_styles.scss index 97b54005d..d8fe1f169 100644 --- a/frontend/pages/RegistrationPage/Breadcrumbs/_styles.scss +++ b/frontend/pages/RegistrationPage/Breadcrumbs/_styles.scss @@ -71,7 +71,6 @@ &--active { font-weight: $bold; - color: $core-white; } &--1 { @@ -97,7 +96,7 @@ &.registration-breadcrumbs__page--complete { &::before { - background-color: $core-white; + background: $core-white; background-size: auto; z-index: 2; } diff --git a/frontend/pages/RegistrationPage/Breadcrumbs/index.js b/frontend/pages/RegistrationPage/Breadcrumbs/index.ts similarity index 100% rename from frontend/pages/RegistrationPage/Breadcrumbs/index.js rename to frontend/pages/RegistrationPage/Breadcrumbs/index.ts diff --git a/frontend/pages/RegistrationPage/RegistrationPage.tsx b/frontend/pages/RegistrationPage/RegistrationPage.tsx index 5d0fbd644..84323d68e 100644 --- a/frontend/pages/RegistrationPage/RegistrationPage.tsx +++ b/frontend/pages/RegistrationPage/RegistrationPage.tsx @@ -86,8 +86,8 @@ const RegistrationPage = ({ router }: IRegistrationPageProps) => { className={`${baseClass}__logo`} /> ; + } + if (isError) { return ; } diff --git a/frontend/pages/SoftwarePage/SoftwarePage.tsx b/frontend/pages/SoftwarePage/SoftwarePage.tsx index 9a2559567..e82eb67fc 100644 --- a/frontend/pages/SoftwarePage/SoftwarePage.tsx +++ b/frontend/pages/SoftwarePage/SoftwarePage.tsx @@ -301,11 +301,9 @@ const SoftwarePage = ({ children, router, location }: ISoftwarePageProps) => { (!isPremiumTier || !isAnyTeamSelected) && "and manage automations for detected vulnerabilities (CVEs)"}{" "} on{" "} - - {isPremiumTier && isAnyTeamSelected - ? "all hosts assigned to this team" - : "all of your hosts"} - + {isPremiumTier && isAnyTeamSelected + ? "all hosts assigned to this team" + : "all of your hosts"} .

); diff --git a/frontend/pages/SoftwarePage/SoftwareTitles/SoftwareTitles.tsx b/frontend/pages/SoftwarePage/SoftwareTitles/SoftwareTitles.tsx index 1a5c76c0d..18b13e555 100644 --- a/frontend/pages/SoftwarePage/SoftwareTitles/SoftwareTitles.tsx +++ b/frontend/pages/SoftwarePage/SoftwareTitles/SoftwareTitles.tsx @@ -63,6 +63,7 @@ const SoftwareTitles = ({ const { data: titlesData, isFetching: isTitlesFetching, + isLoading: isTitlesLoading, isError: isTitlesError, } = useQuery< ISoftwareTitlesResponse, @@ -93,6 +94,7 @@ const SoftwareTitles = ({ const { data: versionsData, isFetching: isVersionsFetching, + isLoading: isVersionsLoading, isError: isVersionsError, } = useQuery< ISoftwareVersionsResponse, @@ -119,6 +121,10 @@ const SoftwareTitles = ({ } ); + if (isTitlesLoading || isVersionsLoading) { + return ; + } + if (isTitlesError || isVersionsError) { return ; } diff --git a/frontend/pages/UserSettingsPage/UserSidePanel/index.ts b/frontend/pages/UserSettingsPage/UserSidePanel/index.ts deleted file mode 100644 index 9ed263795..000000000 --- a/frontend/pages/UserSettingsPage/UserSidePanel/index.ts +++ /dev/null @@ -1 +0,0 @@ -export { default } from "./UserSidePanel"; diff --git a/frontend/pages/UserSettingsPage/index.ts b/frontend/pages/UserSettingsPage/index.ts deleted file mode 100644 index 2da89e3b6..000000000 --- a/frontend/pages/UserSettingsPage/index.ts +++ /dev/null @@ -1 +0,0 @@ -export { default } from "./UserSettingsPage"; diff --git a/frontend/pages/admin/IntegrationsPage/cards/AutomaticEnrollment/AutomaticEnrollment.tsx b/frontend/pages/admin/IntegrationsPage/cards/AutomaticEnrollment/AutomaticEnrollment.tsx index cf6151881..3a6bc79db 100644 --- a/frontend/pages/admin/IntegrationsPage/cards/AutomaticEnrollment/AutomaticEnrollment.tsx +++ b/frontend/pages/admin/IntegrationsPage/cards/AutomaticEnrollment/AutomaticEnrollment.tsx @@ -43,7 +43,11 @@ const AutomaticEnrollment = ({ router }: IAutomaticEnrollment) => { if (!isPremiumTier) return ; if 
(isLoadingMdmApple) { - return ; + return ( +
+ +
+ ); } if (errorMdmApple?.status === 404) { diff --git a/frontend/pages/admin/IntegrationsPage/cards/AutomaticEnrollment/components/RenameTeamModal/RenameTeamModal.tsx b/frontend/pages/admin/IntegrationsPage/cards/AutomaticEnrollment/components/RenameTeamModal/RenameTeamModal.tsx new file mode 100644 index 000000000..40d3bd6ce --- /dev/null +++ b/frontend/pages/admin/IntegrationsPage/cards/AutomaticEnrollment/components/RenameTeamModal/RenameTeamModal.tsx @@ -0,0 +1,86 @@ +import React, { useState, useContext, FormEvent } from "react"; + +import { AppContext } from "context/app"; +import { + APP_CONTEXT_NO_TEAM_ID, + APP_CONTEX_NO_TEAM_SUMMARY, +} from "interfaces/team"; +import configAPI from "services/entities/config"; + +// @ts-ignore +import Dropdown from "components/forms/fields/Dropdown"; +import Modal from "components/Modal"; +import Button from "components/buttons/Button"; + +interface IRenameTeamModal { + onCancel: () => void; + defaultTeamName: string; + onUpdateSuccess: (newName: string) => void; +} + +const baseClass = "edit-team-modal"; + +const RenameTeamModal = ({ + onCancel, + defaultTeamName, + onUpdateSuccess, +}: IRenameTeamModal): JSX.Element => { + const { availableTeams } = useContext(AppContext); + + const [selectedTeam, setSelectedTeam] = useState(defaultTeamName); + + const teamNameOptions = availableTeams + ?.filter((t) => t.id >= APP_CONTEXT_NO_TEAM_ID) + .map((teamSummary) => { + return { + value: + teamSummary.name === APP_CONTEX_NO_TEAM_SUMMARY.name + ? "" + : teamSummary.name, + label: teamSummary.name, + }; + }); + + const [isLoading, setIsLoading] = useState(false); + + const onFormSubmit = async (event: FormEvent) => { + event.preventDefault(); + try { + setIsLoading(true); + const configData = await configAPI.update({ + mdm: { apple_bm_default_team: selectedTeam }, + }); + setIsLoading(false); + onUpdateSuccess(configData.mdm.apple_bm_default_team); + } finally { + onCancel(); + } + }; + + return ( + +
+
+ +
+
+ + +
+
+
+ ); +}; + +export default RenameTeamModal; diff --git a/frontend/pages/admin/IntegrationsPage/cards/AutomaticEnrollment/components/RenameTeamModal/index.ts b/frontend/pages/admin/IntegrationsPage/cards/AutomaticEnrollment/components/RenameTeamModal/index.ts new file mode 100644 index 000000000..89752243c --- /dev/null +++ b/frontend/pages/admin/IntegrationsPage/cards/AutomaticEnrollment/components/RenameTeamModal/index.ts @@ -0,0 +1 @@ +export { default } from "./RenameTeamModal"; diff --git a/frontend/pages/admin/TeamManagementPage/TeamDetailsWrapper/TeamDetailsWrapper.tsx b/frontend/pages/admin/TeamManagementPage/TeamDetailsWrapper/TeamDetailsWrapper.tsx index af7097ca4..367c76e11 100644 --- a/frontend/pages/admin/TeamManagementPage/TeamDetailsWrapper/TeamDetailsWrapper.tsx +++ b/frontend/pages/admin/TeamManagementPage/TeamDetailsWrapper/TeamDetailsWrapper.tsx @@ -29,7 +29,7 @@ import BackLink from "components/BackLink"; import TeamsDropdown from "components/TeamsDropdown"; import MainContent from "components/MainContent"; import DeleteTeamModal from "../components/DeleteTeamModal"; -import EditTeamModal from "../components/EditTeamModal"; +import RenameTeamModal from "../components/RenameTeamModal"; import DeleteSecretModal from "../../../../components/EnrollSecrets/DeleteSecretModal"; import SecretEditorModal from "../../../../components/EnrollSecrets/SecretEditorModal"; import AddHostsModal from "../../../../components/AddHostsModal"; @@ -131,7 +131,7 @@ const TeamDetailsWrapper = ({ const [showEnrollSecretModal, setShowEnrollSecretModal] = useState(false); const [showSecretEditorModal, setShowSecretEditorModal] = useState(false); const [showDeleteTeamModal, setShowDeleteTeamModal] = useState(false); - const [showEditTeamModal, setShowEditTeamModal] = useState(false); + const [showRenameTeamModal, setShowRenameTeamModal] = useState(false); const [backendValidators, setBackendValidators] = useState<{ [key: string]: string; }>({}); @@ -224,10 +224,10 @@ const 
TeamDetailsWrapper = ({ setShowDeleteTeamModal(!showDeleteTeamModal); }, [showDeleteTeamModal, setShowDeleteTeamModal]); - const toggleEditTeamModal = useCallback(() => { - setShowEditTeamModal(!showEditTeamModal); + const toggleRenameTeamModal = useCallback(() => { + setShowRenameTeamModal(!showRenameTeamModal); setBackendValidators({}); - }, [showEditTeamModal, setShowEditTeamModal, setBackendValidators]); + }, [showRenameTeamModal, setShowRenameTeamModal, setBackendValidators]); const onSaveSecret = async (enrollSecretString: string) => { // Creates new list of secrets removing selected secret and adding new secret @@ -316,7 +316,7 @@ const TeamDetailsWrapper = ({ const updatedAttrs = generateUpdateData(currentTeamDetails, formData); // no updates, so no need for a request. if (!updatedAttrs) { - toggleEditTeamModal(); + toggleRenameTeamModal(); return; } @@ -341,13 +341,13 @@ const TeamDetailsWrapper = ({ renderFlash("error", "Could not create team. Please try again."); } } finally { - toggleEditTeamModal(); + toggleRenameTeamModal(); setIsUpdatingTeams(false); } }, [ currentTeamDetails, - toggleEditTeamModal, + toggleRenameTeamModal, teamIdForApi, renderFlash, refetchTeams, @@ -423,10 +423,10 @@ const TeamDetailsWrapper = ({ }, { type: "secondary", - label: "Edit team", + label: "Rename team", buttonVariant: "text-icon", iconSvg: "pencil", - onClick: toggleEditTeamModal, + onClick: toggleRenameTeamModal, }, { type: "secondary", @@ -511,9 +511,9 @@ const TeamDetailsWrapper = ({ isUpdatingTeams={isUpdatingTeams} /> )} - {showEditTeamModal && ( - { const [isUpdatingTeams, setIsUpdatingTeams] = useState(false); const [showCreateTeamModal, setShowCreateTeamModal] = useState(false); const [showDeleteTeamModal, setShowDeleteTeamModal] = useState(false); - const [showEditTeamModal, setShowEditTeamModal] = useState(false); + const [showRenameTeamModal, setShowRenameTeamModal] = useState(false); const [teamEditing, setTeamEditing] = useState(); const [backendValidators, 
setBackendValidators] = useState<{ [key: string]: string; @@ -84,15 +84,15 @@ const TeamManagementPage = (): JSX.Element => { [showDeleteTeamModal, setShowDeleteTeamModal, setTeamEditing] ); - const toggleEditTeamModal = useCallback( + const toggleRenameTeamModal = useCallback( (team?: ITeam) => { - setShowEditTeamModal(!showEditTeamModal); + setShowRenameTeamModal(!showRenameTeamModal); setBackendValidators({}); team ? setTeamEditing(team) : setTeamEditing(undefined); }, [ - showEditTeamModal, - setShowEditTeamModal, + showRenameTeamModal, + setShowRenameTeamModal, setTeamEditing, setBackendValidators, ] @@ -161,10 +161,10 @@ const TeamManagementPage = (): JSX.Element => { toggleDeleteTeamModal, ]); - const onEditSubmit = useCallback( + const onRenameSubmit = useCallback( (formData: ITeamFormData) => { if (formData.name === teamEditing?.name) { - toggleEditTeamModal(); + toggleRenameTeamModal(); } else if (teamEditing) { setIsUpdatingTeams(true); teamsAPI @@ -175,7 +175,7 @@ const TeamManagementPage = (): JSX.Element => { `Successfully updated team name to ${formData.name}.` ); setBackendValidators({}); - toggleEditTeamModal(); + toggleRenameTeamModal(); refetchTeams(); }) .catch((updateError: { data: IApiError }) => { @@ -187,7 +187,7 @@ const TeamManagementPage = (): JSX.Element => { } else { renderFlash( "error", - `Could not edit ${teamEditing.name}. Please try again.` + `Could not rename ${teamEditing.name}. 
Please try again.` ); } }) @@ -196,14 +196,14 @@ const TeamManagementPage = (): JSX.Element => { }); } }, - [teamEditing, toggleEditTeamModal, refetchTeams, renderFlash] + [teamEditing, toggleRenameTeamModal, refetchTeams, renderFlash] ); const onActionSelection = useCallback( (action: string, team: ITeam): void => { switch (action) { - case "edit": - toggleEditTeamModal(team); + case "rename": + toggleRenameTeamModal(team); break; case "delete": toggleDeleteTeamModal(team); @@ -211,7 +211,7 @@ const TeamManagementPage = (): JSX.Element => { default: } }, - [toggleEditTeamModal, toggleDeleteTeamModal] + [toggleRenameTeamModal, toggleDeleteTeamModal] ); const tableHeaders = useMemo(() => generateTableHeaders(onActionSelection), [ @@ -280,10 +280,10 @@ const TeamManagementPage = (): JSX.Element => { isUpdatingTeams={isUpdatingTeams} /> )} - {showEditTeamModal && ( - { return [ { - label: "Edit", + label: "Rename", disabled: false, - value: "edit", + value: "rename", }, { label: "Delete", diff --git a/frontend/pages/admin/TeamManagementPage/components/EditTeamModal/index.ts b/frontend/pages/admin/TeamManagementPage/components/EditTeamModal/index.ts deleted file mode 100644 index 4e5065e76..000000000 --- a/frontend/pages/admin/TeamManagementPage/components/EditTeamModal/index.ts +++ /dev/null @@ -1 +0,0 @@ -export { default } from "./EditTeamModal"; diff --git a/frontend/pages/admin/TeamManagementPage/components/EditTeamModal/EditTeamModal.tsx b/frontend/pages/admin/TeamManagementPage/components/RenameTeamModal/RenameTeamModal.tsx similarity index 89% rename from frontend/pages/admin/TeamManagementPage/components/EditTeamModal/EditTeamModal.tsx rename to frontend/pages/admin/TeamManagementPage/components/RenameTeamModal/RenameTeamModal.tsx index 257be0c55..cf59621b9 100644 --- a/frontend/pages/admin/TeamManagementPage/components/EditTeamModal/EditTeamModal.tsx +++ b/frontend/pages/admin/TeamManagementPage/components/RenameTeamModal/RenameTeamModal.tsx @@ -9,7 +9,7 @@ 
import Button from "components/buttons/Button"; const baseClass = "edit-team-modal"; -interface IEditTeamModalProps { +interface IRenameTeamModalProps { onCancel: () => void; onSubmit: (formData: ITeamFormData) => void; defaultName: string; @@ -17,13 +17,13 @@ interface IEditTeamModalProps { isUpdatingTeams: boolean; } -const EditTeamModal = ({ +const RenameTeamModal = ({ onCancel, onSubmit, defaultName, backendValidators, isUpdatingTeams, -}: IEditTeamModalProps): JSX.Element => { +}: IRenameTeamModalProps): JSX.Element => { const [name, setName] = useState(defaultName); const [errors, setErrors] = useState<{ [key: string]: string }>( backendValidators @@ -47,7 +47,7 @@ const EditTeamModal = ({ }; return ( - +
{ // FUNCTIONS - const goToUserSettingsPage = useCallback(() => { - const { USER_SETTINGS } = paths; - router.push(USER_SETTINGS); + const goToAccountPage = useCallback(() => { + const { ACCOUNT } = paths; + router.push(ACCOUNT); }, [router]); const onActionSelect = useCallback( @@ -172,7 +172,7 @@ const UsersTable = ({ router }: IUsersTableProps): JSX.Element => { toggleResetSessionsUserModal(user); break; case "editMyAccount": - goToUserSettingsPage(); + goToAccountPage(); break; default: return null; @@ -184,7 +184,7 @@ const UsersTable = ({ router }: IUsersTableProps): JSX.Element => { toggleDeleteUserModal, toggleResetPasswordUserModal, toggleResetSessionsUserModal, - goToUserSettingsPage, + goToAccountPage, ] ); diff --git a/frontend/pages/hosts/components/DeleteHostModal/DeleteHostModal.tsx b/frontend/pages/hosts/components/DeleteHostModal/DeleteHostModal.tsx index 8524dee20..4b27ff330 100644 --- a/frontend/pages/hosts/components/DeleteHostModal/DeleteHostModal.tsx +++ b/frontend/pages/hosts/components/DeleteHostModal/DeleteHostModal.tsx @@ -1,5 +1,7 @@ import React from "react"; +import strUtils from "utilities/strings"; + import Modal from "components/Modal"; import Button from "components/buttons/Button"; import CustomLink from "components/CustomLink"; @@ -29,11 +31,18 @@ const DeleteHostModal = ({ hostName, isUpdating, }: IDeleteHostModalProps): JSX.Element => { + const pluralizeHost = () => { + if (!selectedHostIds) { + return "host"; + } + return strUtils.pluralize(selectedHostIds.length, "host"); + }; + const hostText = () => { if (selectedHostIds) { return `${selectedHostIds.length}${ isAllMatchingHostsSelected ? "+" : "" - } ${selectedHostIds.length === 1 ? "host" : "hosts"}`; + } ${pluralizeHost()}`; } return hostName; }; @@ -58,17 +67,18 @@ const DeleteHostModal = ({ > <>

- This action will delete {hostText()} from your Fleet instance. - {largeVolumeText()} + This will remove the record of {hostText()}.{largeVolumeText()} +

+

+ The {pluralizeHost()} will re-appear unless Fleet's agent is + uninstalled.

-

If the hosts come back online, they will automatically re-enroll.

- To prevent re-enrollment,{" "}

diff --git a/frontend/pages/policies/ManagePoliciesPage/ManagePoliciesPage.tsx b/frontend/pages/policies/ManagePoliciesPage/ManagePoliciesPage.tsx index b0c3e9dc8..e68a96de4 100644 --- a/frontend/pages/policies/ManagePoliciesPage/ManagePoliciesPage.tsx +++ b/frontend/pages/policies/ManagePoliciesPage/ManagePoliciesPage.tsx @@ -249,6 +249,7 @@ const ManagePolicyPage = ({ data: globalPoliciesCount, isFetching: isFetchingGlobalCount, + refetch: refetchGlobalPoliciesCount, } = useQuery( [ { @@ -303,7 +304,11 @@ const ManagePolicyPage = ({ } ); - const { data: teamPoliciesCount, isFetching: isFetchingTeamCount } = useQuery< + const { + data: teamPoliciesCount, + isFetching: isFetchingTeamCount, + refetch: refetchTeamPoliciesCount, + } = useQuery< IPoliciesCountResponse, Error, number, @@ -364,8 +369,10 @@ const ManagePolicyPage = ({ const refetchPolicies = (teamId?: number) => { if (teamId) { refetchTeamPolicies(); + refetchTeamPoliciesCount(); } else { refetchGlobalPolicies(); // Only call on global policies as this is expensive + refetchGlobalPoliciesCount(); } }; @@ -567,8 +574,6 @@ const ManagePolicyPage = ({ }`; }; - const showTeamDescription = isPremiumTier && isAnyTeamSelected; - const showInheritedPoliciesButton = isAnyTeamSelected && !isFetchingTeamPolicies && @@ -737,17 +742,11 @@ const ManagePolicyPage = ({ )}
- {showTeamDescription ? ( -

- Add additional policies for all hosts assigned to this team - . -

- ) : ( -

- Add policies for all of your hosts to see which pass your - organization’s standards. -

- )} +

+ {isAnyTeamSelected + ? "Detect device health issues for all hosts assigned to this team." + : "Detect device health issues for all hosts."} +

{renderMainTable()} {showInheritedPoliciesButton && globalPoliciesCount && ( diff --git a/frontend/pages/policies/ManagePoliciesPage/_styles.scss b/frontend/pages/policies/ManagePoliciesPage/_styles.scss index 6ce8cc0fe..62c29c1d5 100644 --- a/frontend/pages/policies/ManagePoliciesPage/_styles.scss +++ b/frontend/pages/policies/ManagePoliciesPage/_styles.scss @@ -51,13 +51,6 @@ margin: 0; margin-bottom: $pad-xxlarge; - h2 { - text-transform: uppercase; - color: $core-fleet-black; - font-weight: $regular; - font-size: $small; - } - p { color: $ui-fleet-black-75; margin: 0; diff --git a/frontend/pages/policies/ManagePoliciesPage/components/PoliciesTable/PoliciesTable.tsx b/frontend/pages/policies/ManagePoliciesPage/components/PoliciesTable/PoliciesTable.tsx index 9c089a936..9494e4980 100644 --- a/frontend/pages/policies/ManagePoliciesPage/components/PoliciesTable/PoliciesTable.tsx +++ b/frontend/pages/policies/ManagePoliciesPage/components/PoliciesTable/PoliciesTable.tsx @@ -75,37 +75,13 @@ const PoliciesTable = ({ const emptyState = () => { const emptyPolicies: IEmptyTableProps = { graphicName: "empty-policies", - header: ( - <> - Ask yes or no questions about{" "} - all your hosts - - ), + header: <>You don't have any policies, info: ( <> - - Verify whether or not your hosts have security features turned on. -
- Track your efforts to keep installed software up to date on - your hosts. -
- Provide owners with a list of hosts that still need changes. + Add policies to detect device health issues and trigger automations. ), }; - - if (currentTeam) { - emptyPolicies.header = ( - <> - Ask yes or no questions about hosts assigned to{" "} - - {currentTeam.name} - - - ); - } if (canAddOrDeletePolicy) { emptyPolicies.primaryButton = ( ); } diff --git a/frontend/pages/queries/ManageQueriesPage/ManageQueriesPage.tsx b/frontend/pages/queries/ManageQueriesPage/ManageQueriesPage.tsx index 09385bb58..8733826e8 100644 --- a/frontend/pages/queries/ManageQueriesPage/ManageQueriesPage.tsx +++ b/frontend/pages/queries/ManageQueriesPage/ManageQueriesPage.tsx @@ -461,8 +461,9 @@ const ManageQueriesPage = ({

- Manage and schedule queries to ask questions and collect telemetry - for all hosts{isAnyTeamSelected && " assigned to this team"}. + {isAnyTeamSelected + ? "Gather data about all hosts assigned to this team." + : "Gather data about all hosts."}

{renderCurrentScopeQueriesTable()} diff --git a/frontend/pages/queries/ManageQueriesPage/_styles.scss b/frontend/pages/queries/ManageQueriesPage/_styles.scss index c6a8a0086..8db063b32 100644 --- a/frontend/pages/queries/ManageQueriesPage/_styles.scss +++ b/frontend/pages/queries/ManageQueriesPage/_styles.scss @@ -39,13 +39,6 @@ &__description { margin: 0 0 $pad-xxlarge; - h2 { - text-transform: uppercase; - color: $core-fleet-black; - font-weight: $regular; - font-size: $small; - } - p { color: $ui-fleet-black-75; margin: 0; diff --git a/frontend/pages/queries/details/QueryDetailsPage/QueryDetailsPage.tsx b/frontend/pages/queries/details/QueryDetailsPage/QueryDetailsPage.tsx index 82a55c004..440e59ca4 100644 --- a/frontend/pages/queries/details/QueryDetailsPage/QueryDetailsPage.tsx +++ b/frontend/pages/queries/details/QueryDetailsPage/QueryDetailsPage.tsx @@ -33,6 +33,7 @@ import DataError from "components/DataError/DataError"; import LogDestinationIndicator from "components/LogDestinationIndicator/LogDestinationIndicator"; import CustomLink from "components/CustomLink"; import InfoBanner from "components/InfoBanner"; +import ShowQueryModal from "components/modals/ShowQueryModal"; import QueryReport from "../components/QueryReport/QueryReport"; import NoResults from "../components/NoResults/NoResults"; @@ -94,6 +95,7 @@ const QueryDetailsPage = ({ const { lastEditedQueryName, lastEditedQueryDescription, + lastEditedQueryBody, lastEditedQueryObserverCanRun, lastEditedQueryDiscardData, lastEditedQueryLoggingType, @@ -109,6 +111,7 @@ const QueryDetailsPage = ({ setLastEditedQueryDiscardData, } = useContext(QueryContext); + const [showQueryModal, setShowQueryModal] = useState(false); const [disabledCachingGlobally, setDisabledCachingGlobally] = useState(true); useEffect(() => { @@ -184,6 +187,10 @@ const QueryDetailsPage = ({ } }, [location.pathname, storedQuery?.name]); + const onShowQueryModal = () => { + setShowQueryModal(!showQueryModal); + }; + const isLoading = 
isStoredQueryLoading || isQueryReportLoading; const isApiError = storedQueryError || queryReportError; const isClipped = @@ -216,6 +223,13 @@ const QueryDetailsPage = ({

+ {canEditQuery && (
); diff --git a/frontend/pages/queries/details/components/QueryReport/QueryReport.tsx b/frontend/pages/queries/details/components/QueryReport/QueryReport.tsx index e7e7c1807..80de9b62e 100644 --- a/frontend/pages/queries/details/components/QueryReport/QueryReport.tsx +++ b/frontend/pages/queries/details/components/QueryReport/QueryReport.tsx @@ -13,7 +13,6 @@ import { IQueryReport, IQueryReportResultRow } from "interfaces/query_report"; import Button from "components/buttons/Button"; import Icon from "components/Icon/Icon"; import TableContainer from "components/TableContainer"; -import ShowQueryModal from "components/modals/ShowQueryModal"; import TooltipWrapper from "components/TooltipWrapper"; import EmptyTable from "components/EmptyTable"; @@ -46,7 +45,6 @@ const QueryReport = ({ }: IQueryReportProps): JSX.Element => { const { lastEditedQueryName, lastEditedQueryBody } = useContext(QueryContext); - const [showQueryModal, setShowQueryModal] = useState(false); const [filteredResults, setFilteredResults] = useState( flattenResults(queryReport?.results || []) ); @@ -77,22 +75,9 @@ const QueryReport = ({ ); }; - const onShowQueryModal = () => { - setShowQueryModal(!showQueryModal); - }; - const renderTableButtons = () => { return (
-
- ); + return
{renderTable()}
; }; export default QueryReport; diff --git a/frontend/router/index.tsx b/frontend/router/index.tsx index 2f473c35a..ea74585b8 100644 --- a/frontend/router/index.tsx +++ b/frontend/router/index.tsx @@ -45,7 +45,7 @@ import MDMAppleSSOCallbackPage from "pages/MDMAppleSSOCallbackPage"; import ApiOnlyUser from "pages/ApiOnlyUser"; import Fleet403 from "pages/errors/Fleet403"; import Fleet404 from "pages/errors/Fleet404"; -import UserSettingsPage from "pages/UserSettingsPage"; +import AccountPage from "pages/AccountPage"; import SettingsWrapper from "pages/admin/AdminWrapper"; import ManageControlsPage from "pages/ManageControlsPage/ManageControlsPage"; import UsersPage from "pages/admin/TeamManagementPage/TeamDetailsWrapper/UsersPage/UsersPage"; @@ -201,7 +201,6 @@ const routes = ( component={HostQueryReport} /> - @@ -218,7 +217,6 @@ const routes = ( - @@ -274,10 +272,8 @@ const routes = ( - + {/* deprecated URL */} + diff --git a/frontend/router/page_titles.ts b/frontend/router/page_titles.ts index 72b94a3ab..0b53cb8f4 100644 --- a/frontend/router/page_titles.ts +++ b/frontend/router/page_titles.ts @@ -30,7 +30,7 @@ export default [ title: `Settings | ${DOCUMENT_TITLE_SUFFIX}`, }, { - path: PATHS.USER_SETTINGS, + path: PATHS.ACCOUNT, title: `Settings | My account | ${DOCUMENT_TITLE_SUFFIX}`, }, ]; diff --git a/frontend/router/paths.ts b/frontend/router/paths.ts index 7fbea6274..114e42dd8 100644 --- a/frontend/router/paths.ts +++ b/frontend/router/paths.ts @@ -167,6 +167,6 @@ export default { `${URL_PREFIX}/queries/new${teamId ? 
`?team_id=${teamId}` : ""}`, RESET_PASSWORD: `${URL_PREFIX}/login/reset`, SETUP: `${URL_PREFIX}/setup`, - USER_SETTINGS: `${URL_PREFIX}/profile`, + ACCOUNT: `${URL_PREFIX}/account`, URL_PREFIX, }; diff --git a/frontend/styles/global/_icons.scss b/frontend/styles/global/_icons.scss index 10a2cdc43..2c9e46028 100644 --- a/frontend/styles/global/_icons.scss +++ b/frontend/styles/global/_icons.scss @@ -160,7 +160,7 @@ content: "\f03f"; } -.fleeticon-user-settings:before { +.fleeticon-account:before { content: "\f040"; } diff --git a/frontend/utilities/helpers.tsx b/frontend/utilities/helpers.tsx index 37478865b..5781a9a85 100644 --- a/frontend/utilities/helpers.tsx +++ b/frontend/utilities/helpers.tsx @@ -53,31 +53,6 @@ import { IScheduledQueryStats } from "interfaces/scheduled_query_stats"; const ORG_INFO_ATTRS = ["org_name", "org_logo_url"]; const ADMIN_ATTRS = ["email", "name", "password", "password_confirmation"]; -/** - * - * @param count The number of items. - * @param root The root of the word, omitting any suffixs. - * @param pluralSuffix The suffix to add to the root if the count is not 1. - * @param singularSuffix The suffix to add to the root if the count is 1. - * @returns A string with the root and the appropriate suffix. - * - * @example - * pluralize(1, "hero", "es", "") // "hero" - * pluralize(0, "hero", "es", "") // "heroes" - * pluralize(1, "fair", "ies", "y") // "fairy" - * pluralize(2, "fair", "ies", "y") // "fairies" - * pluralize(1, "dragon") // "dragon" - * pluralize(2, "dragon") // "dragons" - */ -export const pluralize = ( - count: number, - root: string, - pluralSuffix: string, - singularSuffix: string -) => { - return `${root}${count !== 1 ? 
pluralSuffix : singularSuffix}`; -}; - export const addGravatarUrlToResource = (resource: any): any => { const { email } = resource; const gravatarAvailable = @@ -906,7 +881,6 @@ export const getUniqueColumnNamesFromRows = (rows: any[]) => ); export default { - pluralize, addGravatarUrlToResource, formatConfigDataForServer, formatLabelResponse, diff --git a/frontend/utilities/strings/stringUtils.tests.ts b/frontend/utilities/strings/stringUtils.tests.ts index 9757456d8..25755873f 100644 --- a/frontend/utilities/strings/stringUtils.tests.ts +++ b/frontend/utilities/strings/stringUtils.tests.ts @@ -1,23 +1,47 @@ -import { enforceFleetSentenceCasing } from "./stringUtils"; +import { enforceFleetSentenceCasing, pluralize } from "./stringUtils"; -describe("enforceFleetSentenceCasing utility", () => { - it("fixes a Title Cased String with no ignore words", () => { - expect(enforceFleetSentenceCasing("All Hosts")).toEqual("All hosts"); - expect(enforceFleetSentenceCasing("all Hosts")).toEqual("All hosts"); - expect(enforceFleetSentenceCasing("all hosts")).toEqual("All hosts"); - expect(enforceFleetSentenceCasing("All HosTs ")).toEqual("All hosts"); - }); +describe("string utilities", () => { + describe("enforceFleetSentenceCasing utility", () => { + it("fixes a Title Cased String with no ignore words", () => { + expect(enforceFleetSentenceCasing("All Hosts")).toEqual("All hosts"); + expect(enforceFleetSentenceCasing("all Hosts")).toEqual("All hosts"); + expect(enforceFleetSentenceCasing("all hosts")).toEqual("All hosts"); + expect(enforceFleetSentenceCasing("All HosTs ")).toEqual("All hosts"); + }); - it("fixes a title cased string while ignoring special words in various places ", () => { - expect(enforceFleetSentenceCasing("macOS")).toEqual("macOS"); - expect(enforceFleetSentenceCasing("macOS Settings")).toEqual( - "macOS settings" + it("fixes a title cased string while ignoring special words in various places ", () => { + 
expect(enforceFleetSentenceCasing("macOS")).toEqual("macOS"); + expect(enforceFleetSentenceCasing("macOS Settings")).toEqual( + "macOS settings" + ); + expect( + enforceFleetSentenceCasing("osquery shouldn't be Capitalized") + ).toEqual("osquery shouldn't be capitalized"); + }); + expect(enforceFleetSentenceCasing("fleet uses MySQL")).toEqual( + "Fleet uses MySQL" ); - expect( - enforceFleetSentenceCasing("osquery shouldn't be Capitalized") - ).toEqual("osquery shouldn't be capitalized"); }); - expect(enforceFleetSentenceCasing("fleet uses MySQL")).toEqual( - "Fleet uses MySQL" - ); + + describe("pluralize utility", () => { + it("returns the singular form of a word when count is 1", () => { + expect(pluralize(1, "hero", "es", "")).toEqual("hero"); + }); + + it("returns the plural form of a word when count is not 1", () => { + expect(pluralize(0, "hero", "es", "")).toEqual("heroes"); + expect(pluralize(2, "hero", "es", "")).toEqual("heroes"); + expect(pluralize(100, "hero", "es", "")).toEqual("heroes"); + }); + + it("returns the singular form of a word when count is 1 and no custom suffixes are provided", () => { + expect(pluralize(1, "hero")).toEqual("hero"); + }); + + it("returns the pluralized form of a word with 's' suffix when count is not 1 and no custom suffixes are provided", () => { + expect(pluralize(0, "hero")).toEqual("heros"); + expect(pluralize(2, "hero")).toEqual("heros"); + expect(pluralize(100, "hero")).toEqual("heros"); + }); + }); }); diff --git a/frontend/utilities/strings/stringUtils.ts b/frontend/utilities/strings/stringUtils.ts index edc0aea69..e7e205d98 100644 --- a/frontend/utilities/strings/stringUtils.ts +++ b/frontend/utilities/strings/stringUtils.ts @@ -45,7 +45,36 @@ export const enforceFleetSentenceCasing = (s: string) => { return resArr.join(" ").trim(); }; + +/** + * Pluralizes a word based on the entity count and the desired suffixes. If no + * suffixes are provided, the default suffix "s" is used.
+ * + * @param count The number of items. + * @param root The root of the word, omitting any suffixes. + * @param pluralSuffix The suffix to add to the root if the count is not 1. + * @param singularSuffix The suffix to add to the root if the count is 1. + * @returns A string with the root and the appropriate suffix. + * + * @example + * pluralize(1, "hero", "es", "") // "hero" + * pluralize(0, "hero", "es", "") // "heroes" + * pluralize(1, "fair", "ies", "y") // "fairy" + * pluralize(2, "fair", "ies", "y") // "fairies" + * pluralize(1, "dragon") // "dragon" + * pluralize(2, "dragon") // "dragons" + */ +export const pluralize = ( + count: number, + root: string, + pluralSuffix = "s", + singularSuffix = "" +) => { + return `${root}${count !== 1 ? pluralSuffix : singularSuffix}`; +}; + export default { capitalize, capitalizeRole, + pluralize, }; diff --git a/go.mod b/go.mod index 4ab0b66e5..03cb8a446 100644 --- a/go.mod +++ b/go.mod @@ -17,6 +17,7 @@ require ( github.com/aws/aws-sdk-go v1.44.288 github.com/beevik/etree v1.1.0 github.com/beevik/ntp v0.3.0 + github.com/boltdb/bolt v1.3.1 github.com/briandowns/spinner v1.13.0 github.com/cenkalti/backoff v2.2.1+incompatible github.com/cenkalti/backoff/v4 v4.2.1 @@ -51,6 +52,7 @@ require ( github.com/gorilla/mux v1.8.0 github.com/gorilla/websocket v1.4.2 github.com/gosuri/uilive v0.0.4 + github.com/groob/finalizer v0.0.0-20170707115354-4c2ed49aabda github.com/groob/plist v0.0.0-20220217120414-63fa881b19a5 github.com/hashicorp/go-multierror v1.1.1 github.com/hectane/go-acl v0.0.0-20190604041725-da78bae5fc95 @@ -66,7 +68,6 @@ require ( github.com/mattn/go-sqlite3 v1.14.13 github.com/micromdm/micromdm v1.9.0 github.com/micromdm/nanodep v0.1.0 - github.com/micromdm/scep/v2 v2.1.0 github.com/mitchellh/go-ps v1.0.0 github.com/mitchellh/gon v0.2.6-0.20231031204852-2d4f161ccecd github.com/mna/redisc v1.3.2 @@ -231,7 +232,6 @@ require ( github.com/goreleaser/chglog v0.1.2 // indirect github.com/goreleaser/fileglob v1.2.0 //
indirect github.com/gorilla/schema v1.2.0 // indirect - github.com/groob/finalizer v0.0.0-20170707115354-4c2ed49aabda // indirect github.com/grpc-ecosystem/grpc-gateway/v2 v2.18.0 // indirect github.com/hashicorp/errwrap v1.1.0 // indirect github.com/hashicorp/go-cleanhttp v0.5.2 // indirect @@ -322,5 +322,3 @@ require ( ) replace github.com/micromdm/nanodep => github.com/fleetdm/nanodep v0.1.1-0.20221221202251-71b67ab1da24 - -replace github.com/micromdm/scep/v2 => github.com/fleetdm/scep/v2 v2.1.1-0.20240111143358-4df608a81afd diff --git a/go.sum b/go.sum index a9cba664c..c8492beb3 100644 --- a/go.sum +++ b/go.sum @@ -448,8 +448,6 @@ github.com/felixge/httpsnoop v1.0.3 h1:s/nj+GCswXYzN5v2DpNMuMQYe+0DDwt5WVCU6CWBd github.com/felixge/httpsnoop v1.0.3/go.mod h1:m8KPJKqk1gH5J9DgRY2ASl2lWCfGKXixSwevea8zH2U= github.com/fleetdm/nanodep v0.1.1-0.20221221202251-71b67ab1da24 h1:XhczaxKV3J4NjztroidSnYKyq5xtxF+amBYdBWeik58= github.com/fleetdm/nanodep v0.1.1-0.20221221202251-71b67ab1da24/go.mod h1:QzQrCUTmSr9HotzKZAcfmy+czbEGK8Mq26hA+0DN4ag= -github.com/fleetdm/scep/v2 v2.1.1-0.20240111143358-4df608a81afd h1:JBmApt3HCm2nzQ9Fjj7WR6QpPjtU4nlPRXHrO8o0Bgw= -github.com/fleetdm/scep/v2 v2.1.1-0.20240111143358-4df608a81afd/go.mod h1:PajjVSF3LaELUh847MlOtanfqrF8R2DOO4oS3NSPemI= github.com/flynn/go-shlex v0.0.0-20150515145356-3f9db97f8568/go.mod h1:xEzjJPgXI435gkrCt3MPfRiAkVrwSbHsst4LCFVfpJc= github.com/form3tech-oss/jwt-go v3.2.2+incompatible/go.mod h1:pbq4aXjuKjdthFRnoDwaVPLA+WlJuPGy+QneDUgJi2k= github.com/fortytw2/leaktest v1.3.0 h1:u8491cBMTQ8ft8aeV+adlcytMZylmA5nnwwkRZjI8vw= @@ -490,7 +488,6 @@ github.com/go-gl/glfw/v3.3/glfw v0.0.0-20200222043503-6f7a984d4dc4/go.mod h1:tQ2 github.com/go-ini/ini v1.25.4/go.mod h1:ByCAeIL28uOIIG0E3PJtZPDL8WnHpFKFOtgjp+3Ies8= github.com/go-ini/ini v1.67.0 h1:z6ZrTEZqSWOTyH2FlglNbNgARyHG8oLW9gMELqKr06A= github.com/go-ini/ini v1.67.0/go.mod h1:ByCAeIL28uOIIG0E3PJtZPDL8WnHpFKFOtgjp+3Ies8= -github.com/go-kit/kit v0.4.0/go.mod 
h1:xBxKIO96dXMWWy0MnWVtmwkA9/13aqxPnvrjFYMA2as= github.com/go-kit/kit v0.7.0/go.mod h1:xBxKIO96dXMWWy0MnWVtmwkA9/13aqxPnvrjFYMA2as= github.com/go-kit/kit v0.8.0/go.mod h1:xBxKIO96dXMWWy0MnWVtmwkA9/13aqxPnvrjFYMA2as= github.com/go-kit/kit v0.9.0/go.mod h1:xBxKIO96dXMWWy0MnWVtmwkA9/13aqxPnvrjFYMA2as= @@ -525,7 +522,6 @@ github.com/go-sql-driver/mysql v1.4.1/go.mod h1:zAC/RDZ24gD3HViQzih4MyKcchzm+sOG github.com/go-sql-driver/mysql v1.6.0/go.mod h1:DCzpHaOWr8IXmIStZouvnhqoel9Qv2LBy8hT2VhHyBg= github.com/go-sql-driver/mysql v1.7.1 h1:lUIinVbN1DY0xBg0eMOzmmtGoHwWBbvnWubQUrtU8EI= github.com/go-sql-driver/mysql v1.7.1/go.mod h1:OXbVy3sEdcQ2Doequ6Z5BW6fXNQTmx+9S1MCJN5yJMI= -github.com/go-stack/stack v1.6.0/go.mod h1:v0f6uXyyMGvRgIKkXu+yp6POWl0qKG85gN/melR3HDY= github.com/go-stack/stack v1.7.0/go.mod h1:v0f6uXyyMGvRgIKkXu+yp6POWl0qKG85gN/melR3HDY= github.com/go-stack/stack v1.8.0/go.mod h1:v0f6uXyyMGvRgIKkXu+yp6POWl0qKG85gN/melR3HDY= github.com/go-telegram-bot-api/telegram-bot-api v4.6.4+incompatible h1:2cauKuaELYAEARXRkq2LrJ0yDDv1rW7+wrTEdVL3uaU= @@ -690,9 +686,7 @@ github.com/goreleaser/goreleaser v1.1.0 h1:YySqqYTX9kxRU0e/fGQrhivXk8/zD4iUOlL7h github.com/goreleaser/goreleaser v1.1.0/go.mod h1:Xi4DvX/N7e2hXC5tJlXsKEb+XEo83tkSqcinWunNtjs= github.com/goreleaser/nfpm/v2 v2.10.0 h1:SshT2D1MTzCifmjaagQA+5XW9Iq+qvXUavrgP0HvmWg= github.com/goreleaser/nfpm/v2 v2.10.0/go.mod h1:Bj/ztLvdnBnEgMae0fl/bLF6By1+yFFKeL97WiS6ZJg= -github.com/gorilla/context v0.0.0-20160226214623-1ea25387ff6f/go.mod h1:kBGZzfjB9CEq2AlWe17Uuf7NDRt0dE0s8S51q0aT7Yg= github.com/gorilla/context v1.1.1/go.mod h1:kBGZzfjB9CEq2AlWe17Uuf7NDRt0dE0s8S51q0aT7Yg= -github.com/gorilla/mux v1.4.0/go.mod h1:1lud6UwP+6orDFRuTfBEV8e9/aOM/c4fVVCaMa2zaAs= github.com/gorilla/mux v1.6.2/go.mod h1:1lud6UwP+6orDFRuTfBEV8e9/aOM/c4fVVCaMa2zaAs= github.com/gorilla/mux v1.7.3/go.mod h1:1lud6UwP+6orDFRuTfBEV8e9/aOM/c4fVVCaMa2zaAs= github.com/gorilla/mux v1.8.0 h1:i40aqfkR1h2SlN9hojwV5ZA91wcXFOvkdNIeFDP5koI= @@ -1368,7 +1362,6 @@ 
golang.org/x/mod v0.6.0-dev.0.20220419223038-86c51ed26bb4/go.mod h1:jJ57K6gSWd91 golang.org/x/mod v0.8.0/go.mod h1:iBbtSCu2XBx23ZKBPSOrRkjjQPZFPuis4dIYUhu/chs= golang.org/x/mod v0.12.0 h1:rmsUpXtvNzj340zd98LZ4KntptpfRHwpFOHG188oHXc= golang.org/x/mod v0.12.0/go.mod h1:iBbtSCu2XBx23ZKBPSOrRkjjQPZFPuis4dIYUhu/chs= -golang.org/x/net v0.0.0-20170726083632-f5079bd7f6f7/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4= golang.org/x/net v0.0.0-20180218175443-cbe0f9307d01/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4= golang.org/x/net v0.0.0-20180724234803-3673e40ba225/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4= golang.org/x/net v0.0.0-20180811021610-c39426892332/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4= @@ -1474,7 +1467,6 @@ golang.org/x/sync v0.0.0-20220722155255-886fb9371eb4/go.mod h1:RxMgew5VJxzue5/jJ golang.org/x/sync v0.1.0/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM= golang.org/x/sync v0.3.0 h1:ftCYgMx6zT/asHUrPw8BLLscYtGznsLAnjq5RH9P66E= golang.org/x/sync v0.3.0/go.mod h1:FU7BRWz2tNW+3quACPkgCx/L+uEAv1htQ0V83Z9Rj+Y= -golang.org/x/sys v0.0.0-20170728174421-0f826bdd13b5/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY= golang.org/x/sys v0.0.0-20180823144017-11551d06cbcc/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY= golang.org/x/sys v0.0.0-20180830151530-49385e6e1522/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY= golang.org/x/sys v0.0.0-20180905080454-ebe1bf3edb33/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY= diff --git a/handbook/business-operations/business-operations.rituals.yml b/handbook/business-operations/business-operations.rituals.yml index 48787356d..1726a429d 100644 --- a/handbook/business-operations/business-operations.rituals.yml +++ b/handbook/business-operations/business-operations.rituals.yml @@ -76,8 +76,8 @@ task: "The numbers" # TODO tie this to a responsibility startedOn: "2024-02-28" frequency: "Monthly" - description: "Each month, update the inputs in [The 
numbers](https://docs.google.com/spreadsheets/d/1X-brkmUK7_Rgp7aq42drNcUg8ZipzEiS153uKZSabWc/edit#gid=2112277278) spreadsheet to reflect the actuals for recurring non-personnel spend, and identify any unexpected increase or decrease in spend" - moreInfoUrl: + description: "Each month, update the inputs in “The numbers” spreadsheet to reflect the actuals for recurring non-personnel spend, and identify any unexpected increase or decrease in spend" + moreInfoUrl: "https://docs.google.com/spreadsheets/d/1X-brkmUK7_Rgp7aq42drNcUg8ZipzEiS153uKZSabWc/edit#gid=2112277278" dri: "joStableford" autoIssue: labels: [ "#g-business-operations" ] @@ -86,8 +86,8 @@ task: "Monthly accounting" # TODO tie this to a responsibility startedOn: "2024-02-28" frequency: "Monthly" - description: "Create [the monthly close GitHub issue](https://fleetdm.com/handbook/business-operations#intake) and walk through the steps. (This process includes fulfill the monthly reporting requirement for SVB)" - moreInfoUrl: + description: "Create the monthly close GitHub issue and walk through the steps. 
(This process includes fulfilling the monthly reporting requirement for SVB)" + moreInfoUrl: "https://fleetdm.com/handbook/business-operations#intake" dri: "hollidayn" autoIssue: labels: [ "#g-business-operations" ] @@ -140,14 +140,14 @@ task: "Quartlery finance check" startedOn: "2024-03-31" frequency: "Quarterly" - description: "Create [the quarterly close GitHub issue](https://fleetdm.com/handbook/business-operations#intake) and walk through the steps" + description: "Create the quarterly close GitHub issue and walk through the steps" moreInfoUrl: "https://fleetdm.com/handbook/business-operations#check-finances-for-quirks" dri: "joStableford" - task: "Quarterly grants" startedOn: "2024-03-31" frequency: "Quarterly" - description: "Create [the quarterly close GitHub issue](https://fleetdm.com/handbook/business-operations#intake) and walk through the steps" + description: "Create the quarterly close GitHub issue and walk through the steps" moreInfoUrl: "https://fleetdm.com/handbook/business-operations#grant-equity" dri: "hollidayn" - diff --git a/handbook/company/open-positions.yml b/handbook/company/open-positions.yml index 76578f10b..e8cc6b061 100644 --- a/handbook/company/open-positions.yml +++ b/handbook/company/open-positions.yml @@ -16,7 +16,7 @@ # hiringManagerLinkedInUrl: https://www.linkedin.com/in/lukeheath/ # responsibilities: | # - 🧑‍🔬 Design, develop, test, and maintain a state-of-the-art Golang application that includes robust APIs to support mobile and desktop clients. -# - 🛠️ Write code and tests, build prototypes, resolve issues, and profile and analyze bottlenecks. +# - 🛠 Write code and tests, build prototypes, resolve issues, and profile and analyze bottlenecks. # - 💭 Manage and optimize scalable distributed systems in the cloud. # - 🤝 Collaborate closely with product managers to understand requirements and translate them into actionable specifications. 
# - 🚀 Actively participate in all engineering scrum meetings, including sprint planning, daily standups, sprint demos, sprint retrospectives, and estimation sessions. @@ -36,3 +36,32 @@ # - 🛠️ Technical: You understand the software development processes. You understand that software quality matters. # - 🟣 Openness: You are flexible and open to new ideas and ways of working. # - ➕ Bonus: Cybersecurity or IT background. + +- jobTitle: 🌐 Apprentice + department: 🌐 Digital Experience + hiringManagerName: Sam Pfluger + hiringManagerGithubUsername: sampfluger88 + hiringManagerLinkedInUrl: https://www.linkedin.com/in/sampfluger88/ + responsibilities: | + - 👥 Manage multiple calendars and schedules using Google Calendar and various forms of communication simultaneously. + - 🧑‍🔬 Perform executive assistance processes as described in [https://fleetdm.com/handbook/digital-experience](https://fleetdm.com/handbook/digital-experience). + - 📖 Maintain and update the structure and content of the company handbook. + - 🗣️ Act as secondary/backup point of contact for other departments for Digital Experience initiatives. + - 🗓️ Schedule travel arrangements for the CEO and other executives as needed. + - ✍️ Help implement and drive change management for any new or modified processes and tools across the team and/or the organization. + - 📣 Record and communicate relevant information and decisions to the Digital Experience team and other departments. + - 📈 Collect and report Digital experience KPIs. + experience: | + - 🏃‍♂️ Strong desire to build a technical and operational-based skill set. + - 🚀 Detail-oriented, highly organized, and able to move quickly to solve complex problems using boring solutions. + - 🦉 Deep understanding of Google Suite (Gmail, Google Calendar, Google Sheets, Google Docs, etc.) + - 🫀 Experience dealing with sensitive personal information of team members and customers. + - 🛠️ Strong written and oral communication skills for general and technical topics. 
+ - 💭 Capable of understanding and translating technical concepts and personas. + - 🤝 Ability to work in a process-driven team-based environment. + - 🟣 Openness: You are flexible and open to new ideas and ways of working. + - ➕ Bonus: Customer service/support background. + + + + diff --git a/handbook/company/pricing-features-table.yml b/handbook/company/pricing-features-table.yml index a8c747c8f..e618ee79e 100644 --- a/handbook/company/pricing-features-table.yml +++ b/handbook/company/pricing-features-table.yml @@ -39,7 +39,7 @@ friendlyName: Safely execute custom scripts (macOS, Windows, and Linux) description: Deploy and execute custom scripts using a REST API, and manage your library of scripts in the UI or a git repo. documentationUrl: https://fleetdm.com/docs/using-fleet/scripts - tier: Premium + tier: Free dri: mikermcneil usualDepartment: IT productCategories: [Endpoint operations,Device management] diff --git a/handbook/customer-success/README.md b/handbook/customer-success/README.md index 56b168064..245cab5a9 100644 --- a/handbook/customer-success/README.md +++ b/handbook/customer-success/README.md @@ -5,8 +5,7 @@ This handbook page details processes specific to working [with](#contact-us) and | Role | Contributor(s) | |:--------------------------------------|:------------------------------------------------------------------------------------------------------------------------| | VP of Customer Success | [Zay Hanlon](https://www.linkedin.com/in/zayhanlon/) _([@zayhanlon](https://github.com/zayhanlon))_ -| Customer Success Manager (CSM) | [Jason Lewis](https://www.linkedin.com/in/jlewis0451/) _([@patagonia121](https://github.com/patagonia121))_ -| Customer Success Manager (CSM) | [Michael Pinto](https://www.linkedin.com/in/michael-pinto-a06b4515a/) _([@pintomi1989](https://github.com/pintomi1989))_ +| Customer Success Managers (CSM) | [Jason Lewis](https://www.linkedin.com/in/jlewis0451/) _([@patagonia121](https://github.com/patagonia121))_, [Michael 
Pinto](https://www.linkedin.com/in/michael-pinto-a06b4515a/) _([@pintomi1989](https://github.com/pintomi1989))_ | Customer Solutions Architect (CSA) | [Brock Walters](https://www.linkedin.com/in/brock-walters-247a2990/) _([@nonpunctual](https://github.com/nonpunctual))_ | Customer Support Engineer (CSE) | [Kathy Satterlee](https://www.linkedin.com/in/ksatter/) _([@ksatter](https://github.com/ksatter))_, [Grant Bilstad](https://www.linkedin.com/in/grantbilstad/) _([@Pacamaster](https://github.com/Pacamaster))_, Ben Edwards _([@edwardsb](https://github.com/edwardsb))_ | Infrastructure Engineer | [Robert Fairburn](https://www.linkedin.com/in/robert-fairburn/) _([@rfairburn](https://github.com/rfairburn))_ diff --git a/handbook/customer-success/customer-success.rituals.yml b/handbook/customer-success/customer-success.rituals.yml index cccea45fc..27046f38f 100644 --- a/handbook/customer-success/customer-success.rituals.yml +++ b/handbook/customer-success/customer-success.rituals.yml @@ -9,15 +9,12 @@ labels: [ "#g-customer-success" ] # label to be applied to issue repo: "confidential" - - task: "Upgrade Managed Cloud" # Title that will actually show in rituals table - startedOn: "2024-02-08" # Needs to align with frequency e.g. 
if frequency is every thrid Thursday startedOn === any third thursday - frequency: "Weekly" # must be supported by - description: "Upgrade each Managed Cloud instance to the latest version of Fleet" # example of a longer thing: description: "[Prioritizing next sprint](https://fleetdm.com/handbook/company/communication)" - moreInfoUrl: "https://github.com/fleetdm/fleet/releases" #URL used to highlight "description:" test in table - dri: "rfairburn" # DRI for ritual (assignee if autoIssue) (TODO display GitHub proflie pic instead of name or title) - autoIssue: # Enables automation of GitHub issues - labels: [ "#g-customer-success" ] # label to be applied to issue - repo: "confidential" + task: "Process new requests" + startedOn: "2023-09-04" + frequency: "Daily" + description: "Prioritize all new requests including issues and PRs within one business day." + moreInfoUrl: "https://fleetdm.com/handbook/company/communications#process-new-requests" + dri: "zayhanlon" - task: "Overnight customer feedback" startedOn: "2024-02-08" @@ -25,4 +22,52 @@ description: "Respond to messages and alerts" moreInfoUrl: "https://fleetdm.com/handbook/customer-success#respond-to-messages-and-alerts" dri: "ksatter" - \ No newline at end of file +- + task: "Monitor customer Slack channels" + startedOn: "2024-02-08" + frequency: "Daily" + description: "Continuously monitor Slack for customer feedback, feature requests, reported bugs, etc., and respond in less than an hour." + moreInfoUrl: "https://fleetdm.com/handbook/company/communications#customer-support-service-level-agreements-slas" # TODO: add responsibility on customer-success readme + dri: "ksatter" +- + task: "Follow up on unresolved customer questions and concerns" + startedOn: "2024-02-08" + frequency: "Daily" + description: "Follow up with and tag appropriate personnel on customer issues and bugs in progress and items that remain unresolved."
+ moreInfoUrl: "https://fleetdm.com/handbook/company/communications#customer-support-service-level-agreements-slas" # TODO: add responsibility on customer-success readme + dri: "ksatter" +- + task: "Prepare for customer voice" + startedOn: "2024-02-23" + frequency: "Weekly" + description: "Prepare and review the health and latest updates from Fleet's key customers and active proof of concepts (POCs)." + moreInfoUrl: "" # TODO: add responsibility on customer-success readme starting point == "Prepare and review the health and latest updates from Fleet's key customers and active proof of concepts (POCs), plus other active support items related to community support, community engagement efforts, contact form or chat requests, self-service customers, outages, and more." + dri: "patagonia121" +- + task: "Prepare customer requests for feature fest" + startedOn: "2024-02-12" + frequency: "Triweekly" + description: "Check in before the 🗣️ Product Feature Requests meeting to make sure that all necessary information has been gathered before presenting customer requests and feedback to the Product team." + moreInfoUrl: "" # TODO: add responsibility on customer-success readme starting point == "Prepare and review the health and latest updates from Fleet's key customers and active proof of concepts (POCs), plus other active support items related to community support, community engagement efforts, contact form or chat requests, self-service customers, outages, and more." + dri: "patagonia121" +- + task: "Present customer requests at feature fest" + startedOn: "2024-02-15" + frequency: "Triweekly" + description: "Present and advocate for requests and ideas brought to Fleet's attention by customers that are interesting from a product perspective."
+ moreInfoUrl: "" # TODO: add responsibility on customer-success readme starting point == "Prepare and review the health and latest updates from Fleet's key customers and active proof of concepts (POCs), plus other active support items related to community support, community engagement efforts, contact form or chat requests, self-service customers, outages, and more." + dri: "patagonia121" +- + task: "Communicate release notes to stakeholders" + startedOn: "2024-02-21" + frequency: "Triweekly" + description: "Update customers on new features and resolved bugs in an upcoming release." + moreInfoUrl: "" # TODO: add responsibility on customer-success readme starting point == "Prepare and review the health and latest updates from Fleet's key customers and active proof of concepts (POCs), plus other active support items related to community support, community engagement efforts, contact form or chat requests, self-service customers, outages, and more." + dri: "patagonia121" +- + task: "Upgrade Managed Cloud" + startedOn: "2024-02-08" + frequency: "Weekly" + description: "Upgrade each Managed Cloud instance to the latest version of Fleet" + moreInfoUrl: "https://github.com/fleetdm/fleet/releases" + dri: "rfairburn" diff --git a/handbook/digital-experience/digital-experience.rituals.yml b/handbook/digital-experience/digital-experience.rituals.yml index f37660d36..d0c1a2312 100644 --- a/handbook/digital-experience/digital-experience.rituals.yml +++ b/handbook/digital-experience/digital-experience.rituals.yml @@ -24,9 +24,9 @@ - task: "Generate latest schema" startedOn: "2024-02-19" - frequency: "Triweekly" + frequency: "Weekly" description: "After each sprint, generate the latest tables json file to incorporate any new schema documentation." 
- moreInfoUrl: # TODO tie to a responsibility + moreInfoUrl: https://fleetdm.com/handbook/company/product-groups#changes-to-tables-schema dri: "eashaw" autoIssue: # Enables automation of GitHub issues labels: [ "#g-digital-experience" ] # label to be applied to issue diff --git a/handbook/engineering/README.md b/handbook/engineering/README.md index 4dbbe929f..41c24079f 100644 --- a/handbook/engineering/README.md +++ b/handbook/engineering/README.md @@ -85,10 +85,10 @@ Our goal is to keep these dependencies up-to-date with each release of Fleet. If If an announcement is found for either data source that may impact data feed availability, notify the current [on-call engineer](https://fleetdm.com/handbook/engineering#how-to-reach-the-oncall-engineer). Notify them that it is their responsibility to investigate and file a bug or take further action as necessary. 5. [Fleetd](https://fleetdm.com/docs/get-started/anatomy#fleetd) components -- Check for code changes to [Orbit](https://github.com/fleetdm/fleet/blob/main/orbit/) or [Desktop](https://github.com/fleetdm/fleet/blob/main/orbit/desktop/) since the last `orbit-*` tag was published. +- Check for code changes to [Orbit](https://github.com/fleetdm/fleet/blob/main/orbit/) or [Desktop](https://github.com/fleetdm/fleet/tree/main/orbit/cmd/desktop) since the last `orbit-*` tag was published. - Check for code changes to the [fleetd-chrome extension](https://github.com/fleetdm/fleet/tree/main/ee/fleetd-chrome) since the last `fleetd-chrome-*` tag was published. -If code changes are found for any `fleetd` components, create a new release QA issue to update `fleetd`. Create and assign the release QA issue to a corresponding [GitHub milestone](https://github.com/fleetdm/fleet/milestones) for each tag that will be issued (`fleet-`, `orbit-`, `fleetd-chrome-`). +If code changes are found for any `fleetd` components, create a new release QA issue to update `fleetd`. 
Delete the top section for Fleet core, and retain the bottom section for `fleetd`. Populate the necessary version changes for each `fleetd` component. ### Create release QA issue Next, create a new GitHub issue using the [Release QA template](https://github.com/fleetdm/fleet/issues/new?assignees=&labels=&projects=&template=release-qa.md). Add the release version to the title, and assign the quality assurance members of the [MDM](https://fleetdm.com/handbook/company/development-groups#mdm-group) and [Endpoint ops](https://fleetdm.com/handbook/company/product-groups#endpoint-ops-group) product groups. diff --git a/handbook/engineering/engineering.rituals.yml b/handbook/engineering/engineering.rituals.yml index 36328e915..b56231457 100644 --- a/handbook/engineering/engineering.rituals.yml +++ b/handbook/engineering/engineering.rituals.yml @@ -25,21 +25,21 @@ task: "Vulnerability alerts (fleetdm.com)" startedOn: "2023-08-09" frequency: "Weekly" - description: "Review and remediate or dismiss [vulnerability alerts](https://github.com/fleetdm/fleet/security) for the fleetdm.com codebase on GitHub." + description: "Review and remediate or dismiss vulnerability alerts for the fleetdm.com codebase on GitHub." moreInfoUrl: "https://github.com/fleetdm/fleet/security" dri: "eashaw" - task: "Vulnerability alerts (frontend)" startedOn: "2023-08-09" frequency: "Weekly" - description: "Review and remediate or dismiss [vulnerability alerts](https://github.com/fleetdm/fleet/security) for the Fleet frontend codebase (and related JS) on GitHub." + description: "Review and remediate or dismiss vulnerability alerts for the Fleet frontend codebase (and related JS) on GitHub." 
moreInfoUrl: "https://github.com/fleetdm/fleet/security" dri: "lukeheath" - task: "Vulnerability alerts (backend)" startedOn: "2023-08-09" frequency: "Weekly" - description: "Review and remediate or dismiss [vulnerability alerts](https://github.com/fleetdm/fleet/security) for the Fleet backend codebase (and all Go code) on GitHub." + description: "Review and remediate or dismiss vulnerability alerts for the Fleet backend codebase (and all Go code) on GitHub." moreInfoUrl: "https://github.com/fleetdm/fleet/security" dri: "lukeheath" - diff --git a/handbook/sales/README.md b/handbook/sales/README.md index 534915b4b..cd971a7b8 100644 --- a/handbook/sales/README.md +++ b/handbook/sales/README.md @@ -18,24 +18,10 @@ This handbook page details processes specific to working [with](#contact-us) and - Please **use issue comments and GitHub mentions** to communicate follow-ups or answer questions related to your request. @@ -160,7 +146,7 @@ You can help a Premium license dispenser customers change their credit card by d ## Rituals - + #### Stubs diff --git a/infrastructure/dogfood/terraform/aws/variables.tf b/infrastructure/dogfood/terraform/aws/variables.tf index 18c0d6541..dc5e1f6af 100644 --- a/infrastructure/dogfood/terraform/aws/variables.tf +++ b/infrastructure/dogfood/terraform/aws/variables.tf @@ -56,7 +56,7 @@ variable "database_name" { variable "fleet_image" { description = "the name of the container image to run" - default = "fleetdm/fleet:v4.44.1" + default = "fleetdm/fleet:v4.45.0" } variable "software_inventory" { diff --git a/infrastructure/dogfood/terraform/gcp/readme.md b/infrastructure/dogfood/terraform/gcp/readme.md index 105c59be2..b4872b181 100644 --- a/infrastructure/dogfood/terraform/gcp/readme.md +++ b/infrastructure/dogfood/terraform/gcp/readme.md @@ -11,8 +11,7 @@ dns_name = "" // eg. 
myfleet.fleetdm.com #### Fleet server The fleet webserver is running as [Google Cloud Run](https://cloud.google.com/run) containers, this is very similar to how the existing terraform for AWS runs fleet as Fargate compute. -_NOTE: Cloud Run has [limitations](https://cloud.google.com/run/docs/deploying#images) on what container images it will run_. In our deployment we create -and Artifact Registry and deploy the public fleet container image into Artifact Registry. +_NOTE: Cloud Run has [limitations](https://cloud.google.com/run/docs/deploying#images) on what container images it will run_. In our deployment we create and deploy the public fleet container image into Artifact Registry. #### MySQL We are running MySQL using [Google Cloud SQL](https://cloud.google.com/sql/docs/mysql/introduction) only reachable via [CloudSQLProxy](https://cloud.google.com/sql/docs/mysql/connect-admin-proxy) and from Cloud Run @@ -51,4 +50,4 @@ Push to Google Artifact registry: In this example we are using [GCP Managed Certificates](https://cloud.google.com/load-balancing/docs/ssl-certificates/google-managed-certs) to handle TLS and TLS termination at the LoadBalancer. In order for the certificate to be properly issued, you'll need to update your domain registrar with the nameserver values generated -by the new Zone created in GCP DNS. \ No newline at end of file +by the new Zone created in GCP DNS. 
diff --git a/infrastructure/dogfood/terraform/gcp/variables.tf b/infrastructure/dogfood/terraform/gcp/variables.tf index 2148d56d5..c859c1a39 100644 --- a/infrastructure/dogfood/terraform/gcp/variables.tf +++ b/infrastructure/dogfood/terraform/gcp/variables.tf @@ -68,5 +68,5 @@ variable "redis_mem" { } variable "image" { - default = "fleet:v4.44.1" + default = "fleet:v4.45.0" } diff --git a/infrastructure/loadtesting/terraform/docker/loadtest.Dockerfile b/infrastructure/loadtesting/terraform/docker/loadtest.Dockerfile index eb2414ac6..ff8957e69 100644 --- a/infrastructure/loadtesting/terraform/docker/loadtest.Dockerfile +++ b/infrastructure/loadtesting/terraform/docker/loadtest.Dockerfile @@ -1,7 +1,7 @@ -FROM golang:1.21.6@sha256:5c7c2c9f1a930f937a539ff66587b6947890079470921d62ef1a6ed24395b4b3 +FROM golang:1.21.7@sha256:549dd88a1a53715f177b41ab5fee25f7a376a6bb5322ac7abe263480d9554021 ARG TAG RUN git clone -b $TAG --depth=1 --no-tags --progress --no-recurse-submodules https://github.com/fleetdm/fleet.git && cd /go/fleet/cmd/osquery-perf/ && go build . -FROM golang:1.21.6@sha256:5c7c2c9f1a930f937a539ff66587b6947890079470921d62ef1a6ed24395b4b3 +FROM golang:1.21.7@sha256:549dd88a1a53715f177b41ab5fee25f7a376a6bb5322ac7abe263480d9554021 COPY --from=0 /go/fleet/cmd/osquery-perf/osquery-perf /go/osquery-perf diff --git a/infrastructure/sandbox/.gitignore b/infrastructure/sandbox/.gitignore deleted file mode 100644 index a71a684ec..000000000 --- a/infrastructure/sandbox/.gitignore +++ /dev/null @@ -1,2 +0,0 @@ -.external_modules -.tfsec diff --git a/infrastructure/sandbox/.terraform.lock.hcl b/infrastructure/sandbox/.terraform.lock.hcl deleted file mode 100644 index 306a29cea..000000000 --- a/infrastructure/sandbox/.terraform.lock.hcl +++ /dev/null @@ -1,342 +0,0 @@ -# This file is maintained automatically by "terraform init". -# Manual edits may be lost in future updates. 
- -provider "registry.terraform.io/cloudflare/cloudflare" { - version = "4.11.0" - constraints = "4.11.0, ~> 4.11.0" - hashes = [ - "h1:IumoPgFcYKiFQjEMU8IHAELBu9DVmFUHPFDOzralbJ4=", - "h1:vPD1Yuk9AgqRHRWC+x8mznnSaInoTwaVS5dgnseMz88=", - "zh:09d620903d0f191ab7dee88ce75833307a03c7a9f88dfb2c2a58025283b80ff4", - "zh:0fb59cccc066c867750d633d6dfea8b99e75f5545ae4e7c090be465c6858eb73", - "zh:16b35bf2b88a629c05aefc6ebdbcc039447ee23a5b32594d844ca83f92ac8507", - "zh:5cc3f5df54891bb9efab51cca3266c59a82fd7dcc5667aa3451562325002235a", - "zh:6f384c9ba3e844b41c3de8455a3b91e3e3b32c1fa34b8b1ece4eae36d347c67e", - "zh:8000b3567ba7a43837bb8ccf7fdbcd03cc30103ec6abed84a40ee1c5b99f933f", - "zh:8687603e979a5fe82f2a65bc0cfb2a20acce4d871b01f04ffeabb9aa17c079ca", - "zh:88ed3e07913ad564ae3ae3280c868054d85e37b16db250b9cbdfca0c58f75dce", - "zh:890df766e9b839623b1f0437355032a3c006226a6c200cd911e15ee1a9014e9f", - "zh:a1faa7112d35aee74eb2b90543570ea56209112c0e2c1c06ad503a9c2464676d", - "zh:a433640c433f1815ca3cf92927a3764669095b8c668a73363ca9017a0b1d0349", - "zh:a63b6cf55baaa37cd4bf98bce94b7624bb54efe5abf8b86f24384df7996229f0", - "zh:a6696b0bdadb17d6f2ef7702b922c4006b21b4125530b0a8ac3bcfce1aafe2d8", - "zh:b2b3e16aa9c9d10409132fa7f181598bb67a1e5684c54535745ce0e3dcbd5d23", - "zh:d8c65b2e8a18141bb3ee53c7bf37422ff3679a67733702a631696586666ca885", - ] -} - -provider "registry.terraform.io/gavinbunney/kubectl" { - version = "1.14.0" - constraints = ">= 1.14.0, 1.14.0" - hashes = [ - "h1:ItrWfCZMzM2JmvDncihBMalNLutsAk7kyyxVRaipftY=", - "h1:gLFn+RvP37sVzp9qnFCwngRjjFV649r6apjxvJ1E/SE=", - "zh:0350f3122ff711984bbc36f6093c1fe19043173fad5a904bce27f86afe3cc858", - "zh:07ca36c7aa7533e8325b38232c77c04d6ef1081cb0bac9d56e8ccd51f12f2030", - "zh:0c351afd91d9e994a71fe64bbd1662d0024006b3493bb61d46c23ea3e42a7cf5", - "zh:39f1a0aa1d589a7e815b62b5aa11041040903b061672c4cfc7de38622866cbc4", - "zh:428d3a321043b78e23c91a8d641f2d08d6b97f74c195c654f04d2c455e017de5", - 
"zh:4baf5b1de2dfe9968cc0f57fd4be5a741deb5b34ee0989519267697af5f3eee5", - "zh:6131a927f9dffa014ab5ca5364ac965fe9b19830d2bbf916a5b2865b956fdfcf", - "zh:c62e0c9fd052cbf68c5c2612af4f6408c61c7e37b615dc347918d2442dd05e93", - "zh:f0beffd7ce78f49ead612e4b1aefb7cb6a461d040428f514f4f9cc4e5698ac65", - ] -} - -provider "registry.terraform.io/hashicorp/archive" { - version = "2.4.0" - hashes = [ - "h1:EtN1lnoHoov3rASpgGmh6zZ/W6aRCTgKC7iMwvFY1yc=", - "h1:cJokkjeH1jfpG4QEHdRx0t2j8rr52H33A7C/oX73Ok4=", - "zh:18e408596dd53048f7fc8229098d0e3ad940b92036a24287eff63e2caec72594", - "zh:392d4216ecd1a1fd933d23f4486b642a8480f934c13e2cae3c13b6b6a7e34a7b", - "zh:655dd1fa5ca753a4ace21d0de3792d96fff429445717f2ce31c125d19c38f3ff", - "zh:70dae36c176aa2b258331ad366a471176417a94dd3b4985a911b8be9ff842b00", - "zh:78d5eefdd9e494defcb3c68d282b8f96630502cac21d1ea161f53cfe9bb483b3", - "zh:7d8c8e3925f1e21daf73f85983894fbe8868e326910e6df3720265bc657b9c9c", - "zh:a032ec0f0aee27a789726e348e8ad20778c3a1c9190ef25e7cff602c8d175f44", - "zh:b8e50de62ba185745b0fe9713755079ad0e9f7ac8638d204de6762cc36870410", - "zh:c8ad0c7697a3d444df21ff97f3473a8604c8639be64afe3f31b8ec7ad7571e18", - "zh:df736c5a2a7c3a82c5493665f659437a22f0baf8c2d157e45f4dd7ca40e739fc", - "zh:e8ffbf578a0977074f6d08aa8734e36c726e53dc79894cfc4f25fadc4f45f1df", - "zh:efea57ff23b141551f92b2699024d356c7ffd1a4ad62931da7ed7a386aef7f1f", - ] -} - -provider "registry.terraform.io/hashicorp/aws" { - version = "5.10.0" - constraints = ">= 3.72.0, >= 4.3.0, >= 4.8.0, >= 4.9.0, >= 4.10.0, >= 4.13.0, >= 4.30.0, >= 4.47.0, >= 4.67.0, >= 5.0.0, ~> 5.10.0" - hashes = [ - "h1:AgF54/79Nb/oQjbAMMewENSIa1PEScMn20Xa91hZR2g=", - "h1:ll2mC5mMF+Tm/+tmDQ6p6h3oAFpMSbZsA54STMZegwI=", - "zh:24f8b40ba25521ec809906623ce1387542f3da848952167bc960663583a7b2c7", - "zh:3c12afbda4e8ed44ab8315d16bbba4329ef3f18ffe3c0d5ea456dd05472fa610", - "zh:4da2de97535c7fb51ede8ef9b6bd45c790005aec36daac4317a6175d2ff632fd", - "zh:5631fd3c02c5abe5e51a73bd77ddeaaf97b2d508845ea03bc1e5955b52d94706", - 
"zh:5bdef27b4e5b2dcd0661125fcc1e70826d545903b1e19bb8d28d2a0c812468d5", - "zh:7b7f6b3e00ad4b7bfaa9872388f7b8014d8c9a1fe5c3f9f57865535865727633", - "zh:935f7a599a3f55f69052b096491262d59787625ce5d52f729080328e5088e823", - "zh:9b12af85486a96aedd8d7984b0ff811a4b42e3d88dad1a3fb4c0b580d04fa425", - "zh:a451a24f6675f8ad643a9b218cdb54c2af75a53d6a712daff46f64b81ec61032", - "zh:a5bcf820baefdc9f455222878f276a7f406a1092ac7b4c0cdbd6e588bff84847", - "zh:c9ab7b838a75bbcacc298658c1a04d1f0ee5935a928d821afcbe08c98cca7c5f", - "zh:d83855b6d66aaa03b1e66e03b7d0a4d1c9f992fce06f00011edde2a6ad6d91d6", - "zh:f1793e9a1e3ced98ca301ef1a294f46c06f77f6eb10f4d67ffef87ea60835421", - "zh:f366c99ddb16d75e07a687a60c015e8e2e0cdb593dea902385629571bd604859", - "zh:fb3ec60ea72144f480f495634c6d3e7a7638d7061a77c228a30768c1ae0b91f6", - ] -} - -provider "registry.terraform.io/hashicorp/cloudinit" { - version = "2.3.2" - constraints = ">= 2.0.0" - hashes = [ - "h1:Vl0aixAYTV/bjathX7VArC5TVNkxBCsi3Vq7R4z1uvc=", - "h1:ocyv0lvfyvzW4krenxV5CL4Jq5DiA3EUfoy8DR6zFMw=", - "zh:2487e498736ed90f53de8f66fe2b8c05665b9f8ff1506f751c5ee227c7f457d1", - "zh:3d8627d142942336cf65eea6eb6403692f47e9072ff3fa11c3f774a3b93130b3", - "zh:434b643054aeafb5df28d5529b72acc20c6f5ded24decad73b98657af2b53f4f", - "zh:436aa6c2b07d82aa6a9dd746a3e3a627f72787c27c80552ceda6dc52d01f4b6f", - "zh:458274c5aabe65ef4dbd61d43ce759287788e35a2da004e796373f88edcaa422", - "zh:54bc70fa6fb7da33292ae4d9ceef5398d637c7373e729ed4fce59bd7b8d67372", - "zh:78d5eefdd9e494defcb3c68d282b8f96630502cac21d1ea161f53cfe9bb483b3", - "zh:893ba267e18749c1a956b69be569f0d7bc043a49c3a0eb4d0d09a8e8b2ca3136", - "zh:95493b7517bce116f75cdd4c63b7c82a9d0d48ec2ef2f5eb836d262ef96d0aa7", - "zh:9ae21ab393be52e3e84e5cce0ef20e690d21f6c10ade7d9d9d22b39851bfeddc", - "zh:cc3b01ac2472e6d59358d54d5e4945032efbc8008739a6d4946ca1b621a16040", - "zh:f23bfe9758f06a1ec10ea3a81c9deedf3a7b42963568997d84a5153f35c5839a", - ] -} - -provider "registry.terraform.io/hashicorp/external" { - version = "2.3.1" - 
constraints = ">= 1.0.0" - hashes = [ - "h1:bROCw6g5D/3fFnWeJ01L4IrdnJl1ILU8DGDgXCtYzaY=", - "h1:gznGscVJ0USxy4CdihpjRKPsKvyGr/zqPvBoFLJTQDc=", - "zh:001e2886dc81fc98cf17cf34c0d53cb2dae1e869464792576e11b0f34ee92f54", - "zh:2eeac58dd75b1abdf91945ac4284c9ccb2bfb17fa9bdb5f5d408148ff553b3ee", - "zh:2fc39079ba61411a737df2908942e6970cb67ed2f4fb19090cd44ce2082903dd", - "zh:472a71c624952cff7aa98a7b967f6c7bb53153dbd2b8f356ceb286e6743bb4e2", - "zh:4cff06d31272aac8bc35e9b7faec42cf4554cbcbae1092eaab6ab7f643c215d9", - "zh:78d5eefdd9e494defcb3c68d282b8f96630502cac21d1ea161f53cfe9bb483b3", - "zh:7ed16ccd2049fa089616b98c0bd57219f407958f318f3c697843e2397ddf70df", - "zh:842696362c92bf2645eb85c739410fd51376be6c488733efae44f4ce688da50e", - "zh:8985129f2eccfd7f1841ce06f3bf2bbede6352ec9e9f926fbaa6b1a05313b326", - "zh:a5f0602d8ec991a5411ef42f872aa90f6347e93886ce67905c53cfea37278e05", - "zh:bf4ab82cbe5256dcef16949973bf6aa1a98c2c73a98d6a44ee7bc40809d002b8", - "zh:e70770be62aa70198fa899526d671643ff99eecf265bf1a50e798fc3480bd417", - ] -} - -provider "registry.terraform.io/hashicorp/helm" { - version = "2.10.1" - constraints = ">= 2.4.1, >= 2.5.1" - hashes = [ - "h1:ctDhNJU4tEcyoUgPzwKuJmbDIqUl25mCY+s/lVHP6Sg=", - "h1:rssAXPIBWhumMtToGhh63w1euKOgVOi7+9LK6qZtDUQ=", - "zh:0717312baed39fb0a00576297241b69b419880cad8771bf72dec97ebdc96b200", - "zh:0e0e287b4e8429a0700143c8159764502eba0b33b1d094bf0d4ef4d93c7802cb", - "zh:4f74605377dab4065aaad35a2c5fa6186558c6e2e57b9058bdc8a62cf91857b9", - "zh:505f4af4dedb7a4f8f45b4201900b8e16216bdc2a01cc84fe13cdbf937570e7e", - "zh:83f37fe692513c0ce307d487248765383e00f9a84ed95f993ce0d3efdf4204d3", - "zh:840e5a84e1b5744f0211f611a2c6890da58016a40aafd5971f12285164d4e29b", - "zh:8c03d8dee292fa0367b0511cf3e95b706e034f78025f5dff0388116e1798bf47", - "zh:937800d1860f6b3adbb20e65f11e5fcd940b21ce8bdb48198630426244691325", - "zh:c1853aa5cbbdd1d46f4b169e84c3482103f0e8575a9bb044dbde908e27348c5d", - "zh:c9b0f640590da20931c30818b0b0587aa517d5606cb6e8052e4e4bf38f97b54d", - 
"zh:f569b65999264a9416862bca5cd2a6177d94ccb0424f3a4ef424428912b9cb3c", - "zh:fe8bd4dd09dc7ca218959eda1ced9115408c2cdc9b4a76964bfa455f3bcadfd3", - ] -} - -provider "registry.terraform.io/hashicorp/kubernetes" { - version = "2.22.0" - constraints = ">= 2.6.1, >= 2.10.0" - hashes = [ - "h1:DJr88+52tPK4Ft9xltF6YL+sRz8HWLP2ZOfFiKSB5Dc=", - "h1:b6Wj111/wsMNg8FrHFXrf4mCZFtSXKHx4JvbZh3YTCY=", - "zh:1eac662b1f238042b2068401e510f0624efaf51fd6a4dd9c49d710a49d383b61", - "zh:4c35651603493437b0b13e070148a330c034ac62c8967c2de9da6620b26adca4", - "zh:50c0e8654efb46e3a3666c638ca2e0c8aec07f985fbc80f9205bed960386dc9b", - "zh:5f65194ddd6ea7e89b378297d882083a4b84962edb35dd35752f0c7e9d6282a0", - "zh:6fc0c2d65864324edde4db84f528268065df58229fc3ee321626687b0e603637", - "zh:73c58d007aba7f67c0aa9029794e10c2517bec565b7cb57d0f5948ea3f30e407", - "zh:7d6fc9d3c1843baccd2e1fc56317925a2f9df372427d30fcb5052d123adc887a", - "zh:a0ad9eb863b51586ea306c5f2beef74476c96684aed41a3ee99eb4b6d8898d01", - "zh:e218fcfbf4994ff741408a023a9d9eb6c697ce9f63ce5540d3b35226d86c963e", - "zh:f569b65999264a9416862bca5cd2a6177d94ccb0424f3a4ef424428912b9cb3c", - "zh:f95625f317795f0e38cc6293dd31c85863f4e225209d07d1e233c50d9295083c", - "zh:f96e0923a632bc430267fe915794972be873887f5e761ed11451d67202e256c8", - ] -} - -provider "registry.terraform.io/hashicorp/local" { - version = "2.4.0" - constraints = ">= 1.0.0, >= 2.1.0" - hashes = [ - "h1:R97FTYETo88sT2VHfMgkPU3lzCsZLunPftjSI5vfKe8=", - "h1:ZUEYUmm2t4vxwzxy1BvN1wL6SDWrDxfH7pxtzX8c6d0=", - "zh:53604cd29cb92538668fe09565c739358dc53ca56f9f11312b9d7de81e48fab9", - "zh:66a46e9c508716a1c98efbf793092f03d50049fa4a83cd6b2251e9a06aca2acf", - "zh:70a6f6a852dd83768d0778ce9817d81d4b3f073fab8fa570bff92dcb0824f732", - "zh:78d5eefdd9e494defcb3c68d282b8f96630502cac21d1ea161f53cfe9bb483b3", - "zh:82a803f2f484c8b766e2e9c32343e9c89b91997b9f8d2697f9f3837f62926b35", - "zh:9708a4e40d6cc4b8afd1352e5186e6e1502f6ae599867c120967aebe9d90ed04", - 
"zh:973f65ce0d67c585f4ec250c1e634c9b22d9c4288b484ee2a871d7fa1e317406", - "zh:c8fa0f98f9316e4cfef082aa9b785ba16e36ff754d6aba8b456dab9500e671c6", - "zh:cfa5342a5f5188b20db246c73ac823918c189468e1382cb3c48a9c0c08fc5bf7", - "zh:e0e2b477c7e899c63b06b38cd8684a893d834d6d0b5e9b033cedc06dd7ffe9e2", - "zh:f62d7d05ea1ee566f732505200ab38d94315a4add27947a60afa29860822d3fc", - "zh:fa7ce69dde358e172bd719014ad637634bbdabc49363104f4fca759b4b73f2ce", - ] -} - -provider "registry.terraform.io/hashicorp/null" { - version = "3.2.1" - constraints = ">= 2.0.0, >= 3.0.0, >= 3.1.0" - hashes = [ - "h1:FbGfc+muBsC17Ohy5g806iuI1hQc4SIexpYCrQHQd8w=", - "h1:ydA0/SNRVB1o95btfshvYsmxA+jZFRZcvKzZSB+4S1M=", - "zh:58ed64389620cc7b82f01332e27723856422820cfd302e304b5f6c3436fb9840", - "zh:62a5cc82c3b2ddef7ef3a6f2fedb7b9b3deff4ab7b414938b08e51d6e8be87cb", - "zh:63cff4de03af983175a7e37e52d4bd89d990be256b16b5c7f919aff5ad485aa5", - "zh:74cb22c6700e48486b7cabefa10b33b801dfcab56f1a6ac9b6624531f3d36ea3", - "zh:78d5eefdd9e494defcb3c68d282b8f96630502cac21d1ea161f53cfe9bb483b3", - "zh:79e553aff77f1cfa9012a2218b8238dd672ea5e1b2924775ac9ac24d2a75c238", - "zh:a1e06ddda0b5ac48f7e7c7d59e1ab5a4073bbcf876c73c0299e4610ed53859dc", - "zh:c37a97090f1a82222925d45d84483b2aa702ef7ab66532af6cbcfb567818b970", - "zh:e4453fbebf90c53ca3323a92e7ca0f9961427d2f0ce0d2b65523cc04d5d999c2", - "zh:e80a746921946d8b6761e77305b752ad188da60688cfd2059322875d363be5f5", - "zh:fbdb892d9822ed0e4cb60f2fedbdbb556e4da0d88d3b942ae963ed6ff091e48f", - "zh:fca01a623d90d0cad0843102f9b8b9fe0d3ff8244593bd817f126582b52dd694", - ] -} - -provider "registry.terraform.io/hashicorp/random" { - version = "3.5.1" - constraints = ">= 2.2.0, >= 3.0.0, ~> 3.5.1" - hashes = [ - "h1:IL9mSatmwov+e0+++YX2V6uel+dV6bn+fC/cnGDK3Ck=", - "h1:VSnd9ZIPyfKHOObuQCaKfnjIHRtR7qTw19Rz8tJxm+k=", - "zh:04e3fbd610cb52c1017d282531364b9c53ef72b6bc533acb2a90671957324a64", - "zh:119197103301ebaf7efb91df8f0b6e0dd31e6ff943d231af35ee1831c599188d", - 
"zh:4d2b219d09abf3b1bb4df93d399ed156cadd61f44ad3baf5cf2954df2fba0831", - "zh:6130bdde527587bbe2dcaa7150363e96dbc5250ea20154176d82bc69df5d4ce3", - "zh:6cc326cd4000f724d3086ee05587e7710f032f94fc9af35e96a386a1c6f2214f", - "zh:78d5eefdd9e494defcb3c68d282b8f96630502cac21d1ea161f53cfe9bb483b3", - "zh:b6d88e1d28cf2dfa24e9fdcc3efc77adcdc1c3c3b5c7ce503a423efbdd6de57b", - "zh:ba74c592622ecbcef9dc2a4d81ed321c4e44cddf7da799faa324da9bf52a22b2", - "zh:c7c5cde98fe4ef1143bd1b3ec5dc04baf0d4cc3ca2c5c7d40d17c0e9b2076865", - "zh:dac4bad52c940cd0dfc27893507c1e92393846b024c5a9db159a93c534a3da03", - "zh:de8febe2a2acd9ac454b844a4106ed295ae9520ef54dc8ed2faf29f12716b602", - "zh:eab0d0495e7e711cca367f7d4df6e322e6c562fc52151ec931176115b83ed014", - ] -} - -provider "registry.terraform.io/hashicorp/time" { - version = "0.9.1" - constraints = ">= 0.7.0, >= 0.8.0" - hashes = [ - "h1:NUv/YtEytDQncBQ2mTxnUZEy/rmDlPYmE9h2iokR0vk=", - "h1:VxyoYYOCaJGDmLz4TruZQTSfQhvwEcMxvcKclWdnpbs=", - "zh:00a1476ecf18c735cc08e27bfa835c33f8ac8fa6fa746b01cd3bcbad8ca84f7f", - "zh:3007f8fc4a4f8614c43e8ef1d4b0c773a5de1dcac50e701d8abc9fdc8fcb6bf5", - "zh:5f79d0730fdec8cb148b277de3f00485eff3e9cf1ff47fb715b1c969e5bbd9d4", - "zh:78d5eefdd9e494defcb3c68d282b8f96630502cac21d1ea161f53cfe9bb483b3", - "zh:8c8094689a2bed4bb597d24a418bbbf846e15507f08be447d0a5acea67c2265a", - "zh:a6d9206e95d5681229429b406bc7a9ba4b2d9b67470bda7df88fa161508ace57", - "zh:aa299ec058f23ebe68976c7581017de50da6204883950de228ed9246f309e7f1", - "zh:b129f00f45fba1991db0aa954a6ba48d90f64a738629119bfb8e9a844b66e80b", - "zh:ef6cecf5f50cda971c1b215847938ced4cb4a30a18095509c068643b14030b00", - "zh:f1f46a4f6c65886d2dd27b66d92632232adc64f92145bf8403fe64d5ffa5caea", - "zh:f79d6155cda7d559c60d74883a24879a01c4d5f6fd7e8d1e3250f3cd215fb904", - "zh:fd59fa73074805c3575f08cd627eef7acda14ab6dac2c135a66e7a38d262201c", - ] -} - -provider "registry.terraform.io/hashicorp/tls" { - version = "4.0.4" - constraints = ">= 3.0.0" - hashes = [ - 
"h1:GZcFizg5ZT2VrpwvxGBHQ/hO9r6g0vYdQqx3bFD3anY=", - "h1:pe9vq86dZZKCm+8k1RhzARwENslF3SXb9ErHbQfgjXU=", - "zh:23671ed83e1fcf79745534841e10291bbf34046b27d6e68a5d0aab77206f4a55", - "zh:45292421211ffd9e8e3eb3655677700e3c5047f71d8f7650d2ce30242335f848", - "zh:59fedb519f4433c0fdb1d58b27c210b27415fddd0cd73c5312530b4309c088be", - "zh:5a8eec2409a9ff7cd0758a9d818c74bcba92a240e6c5e54b99df68fff312bbd5", - "zh:5e6a4b39f3171f53292ab88058a59e64825f2b842760a4869e64dc1dc093d1fe", - "zh:810547d0bf9311d21c81cc306126d3547e7bd3f194fc295836acf164b9f8424e", - "zh:824a5f3617624243bed0259d7dd37d76017097dc3193dac669be342b90b2ab48", - "zh:9361ccc7048be5dcbc2fafe2d8216939765b3160bd52734f7a9fd917a39ecbd8", - "zh:aa02ea625aaf672e649296bce7580f62d724268189fe9ad7c1b36bb0fa12fa60", - "zh:c71b4cd40d6ec7815dfeefd57d88bc592c0c42f5e5858dcc88245d371b4b8b1e", - "zh:dabcd52f36b43d250a3d71ad7abfa07b5622c69068d989e60b79b2bb4f220316", - "zh:f569b65999264a9416862bca5cd2a6177d94ccb0424f3a4ef424428912b9cb3c", - ] -} - -provider "registry.terraform.io/kreuzwerker/docker" { - version = "2.16.0" - constraints = "~> 2.16.0" - hashes = [ - "h1:OcTn2QyCQNjDiJYy1vqQFmz2dxJdOF/2/HBXBvGxU2E=", - "h1:aslxshC6HTeDoZuygVzqDmyFCbCizZs7AWHDWk1p/6c=", - "zh:0ff8aa7884c6dae90e6f245bb9d37898735f89e095ba53413f2f364db4d11a77", - "zh:4101f4c909477f3a8225829b7063e5c5a2e2986a6163e0f113af040b5feab61f", - "zh:59db110d2b6c620cc12a1741d81ed8d1dd7fb0540024428fefbb57e8bebe5b60", - "zh:6e134983f195ea0273ac042f0a2df14158d676a24e8dd140ca0357f3efc3fd61", - "zh:7de1de3cc1eacb2ef2693207f5c5f54fa4814ae8c024b8b3c2a0923c82fd6f14", - "zh:a6659fbc7c45fbb60c7c9bf06724eb6084711f1b79c720ef8512a4367e63cbe5", - "zh:ae97c721431517d8c71f8cede91d734d2f2372a1bfef0c3bba43b54c0f8b1cee", - "zh:b3cbd47d5f0cb522b6dd3561ccd2f491fb6afb577372718e0663d12cfeef30e9", - "zh:b64af7c6ad8870c11677874f6cd13322aa03d2190391a120be17304ca324ea1c", - "zh:c363747bae968af997eaf22193168451523e92b59aee8aee135d3b27db132366", - 
"zh:c40721250642157b2a72d8db44fa09de0f7635ba4b0e2ebf5527570f3988e62f", - "zh:e97707609e346bf463d539099faa8790f2f453cfbd0b880327b6eae16ca4f213", - "zh:f4a23ce27cb430f91895466b3e2d132c534fa2b58808f6771235d76e696f4972", - "zh:fd634e973eb2b6483a1ce9251801a393d04cb496f8e83ffcf3f0c4cad8c18f4c", - ] -} - -provider "registry.terraform.io/paultyng/git" { - version = "0.1.0" - constraints = "~> 0.1.0" - hashes = [ - "h1:2BsazHD/QR4AvjGB4laTWU8VC9IFnMrSZ+2gCunQ1JI=", - "h1:nz3VfU3LHDUQFdILoXq8O0FWbQZfCmXhpQOTKRRzEaY=", - "zh:0d593ac990f711171875ba5fc838f0087df84ddb1c69154ee630def5984931ea", - "zh:3895c2719f42e93fc993474859b34de87d90e2c47dfb757d435b9b57945195e4", - "zh:3a90ce559a3589628a2d6820a9d76a354763c268b0c173982ff773e022032856", - "zh:42339a6084095e37d0c843907dcabe66989949ea3f0025f6f1f9d8583d7da779", - "zh:435522beccaedf89bc39eed495393194b43156d1730ef45c29faa584552dc355", - "zh:87b4ee4f521283daaa0d63dd7949dc59f700b92e246e4aeb06510c01842a3c8b", - "zh:997aca77ddc1411dd601ea1fa2e455be9531c3e3c0f0917e8f2423ffd4ffb9ba", - "zh:a70e98ce6ef7a8256286ab791bc231777b76c8f038da4b9eccf399d2b22051fb", - "zh:af9301520e8befe3ec6d1125e10cc0724b318590f5680f12032c8bdc3b0c827d", - "zh:d995a3b8eaa5ac61744d49127fbf68b4c32e16d3c67d570edda2af26113b92a5", - "zh:e8b5c7354a02c54efc026d8289ce9d3784f58abd673a78e80bd4fb073dd75101", - ] -} - -provider "registry.terraform.io/terraform-aws-modules/http" { - version = "2.4.1" - constraints = "2.4.1" - hashes = [ - "h1:ZnkXcawrIr611RvZpoDzbtPU7SVFyHym+7p1t+PQh20=", - "h1:fHqAXle/P/fT2k+HEyTqYVE+/RvpQAaBr6xXZgM66es=", - "zh:0111f54de2a9815ded291f23136d41f3d2731c58ea663a2e8f0fef02d377d697", - "zh:0740152d76f0ccf54f4d0e8e0753739a5233b022acd60b5d2353d248c4c17204", - "zh:569518f46809ec9cdc082b4dfd4e828236eee2b50f87b301d624cfd83b8f5b0d", - "zh:7669f7691de91eec9f381e9a4be81aa4560f050348a86c6ea7804925752a01bb", - "zh:81cd53e796ec806aca2d8e92a2aed9135661e170eeff6cf0418e54f98816cd05", - "zh:82f01abd905090f978b169ac85d7a5952322a5f0f460269dd981b3596652d304", - 
"zh:9a235610066e0f7e567e69c23a53327271a6fc568b06bf152d8fe6594749ed2b", - "zh:aeabdd8e633d143feb67c52248c85358951321e35b43943aeab577c005abd30a", - "zh:c20d22dba5c79731918e7192bc3d0b364d47e98a74f47d287e6cc66236bc0ed0", - "zh:c4fea2cb18c31ed7723deec5ebaff85d6795bb6b6ed3b954794af064d17a7f9f", - "zh:e21e88b6e7e55b9f29b046730d9928c65a4f181fd5f60a42f1cd41b46a0a938d", - "zh:eddb888a74dea348a0acdfee13a08875bacddde384bd9c28342a534269665568", - "zh:f46d5f1403b8d8dfafab9bdd7129d3080bb62a91ea726f477fd43560887b8c4a", - ] -} diff --git a/infrastructure/sandbox/Data/.gitignore b/infrastructure/sandbox/Data/.gitignore deleted file mode 100644 index fb7595fad..000000000 --- a/infrastructure/sandbox/Data/.gitignore +++ /dev/null @@ -1 +0,0 @@ -lambda.zip diff --git a/infrastructure/sandbox/Data/README.md b/infrastructure/sandbox/Data/README.md deleted file mode 100644 index 5a36cec69..000000000 --- a/infrastructure/sandbox/Data/README.md +++ /dev/null @@ -1,5 +0,0 @@ -# The data pipeline -The data pipeline takes data from S3 using S3 notifications, -filters for only the successful requests, then enriches the data with geoip data, -then pipes it to kinesis. From kinesis, we stream the data to an Elasticsearch cluster for now, -but this design allows for expansion into Salesforce and Mixpanel later on. 
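The deleted README above describes the removed pipeline: S3 access logs arrive via S3 notifications, only successful requests are kept, records are enriched with geoip data, and the result is piped to Kinesis. As a rough illustration of the filter-and-enrich step that lambda performed, here is a minimal self-contained sketch. The record field names (`status`, `client_ip`) and the dict-based geo lookup are assumptions for illustration only; the real lambda consumed S3 notification payloads and resolved IPs against the (also deleted) GeoLite2-City.mmdb database via a MaxMind reader.

```python
import json


def enrich_with_geoip(record, geo_lookup):
    """Attach coarse location data keyed by client IP.

    `geo_lookup` stands in for a real GeoIP database reader; a plain dict
    keeps this sketch dependency-free.
    """
    geo = geo_lookup.get(record.get("client_ip"), {})
    return {**record, "geo": geo}


def filter_and_enrich(records, geo_lookup):
    """Keep only successful (2xx) requests and enrich each with geo data."""
    out = []
    for rec in records:
        try:
            status = int(rec.get("status", 0))
        except (TypeError, ValueError):
            continue  # malformed record: skip rather than fail the batch
        if 200 <= status < 300:
            out.append(enrich_with_geoip(rec, geo_lookup))
    return out


if __name__ == "__main__":
    logs = [
        {"client_ip": "203.0.113.7", "status": 200, "path": "/healthz"},
        {"client_ip": "198.51.100.2", "status": 500, "path": "/healthz"},
    ]
    geo_table = {"203.0.113.7": {"country": "US"}}
    # Only the 200 response survives, now carrying its geo annotation.
    print(json.dumps(filter_and_enrich(logs, geo_table)))
```

In the deleted design, the output of this step was written to a Kinesis stream and consumed by Elasticsearch, which is why the diff below also removes a vendored copy of the `elasticsearch` 6.8.2 client package.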
diff --git a/infrastructure/sandbox/Data/lambda/__pycache__/geohash.cpython-310.pyc b/infrastructure/sandbox/Data/lambda/__pycache__/geohash.cpython-310.pyc deleted file mode 100644 index 9f1c014c1..000000000 Binary files a/infrastructure/sandbox/Data/lambda/__pycache__/geohash.cpython-310.pyc and /dev/null differ diff --git a/infrastructure/sandbox/Data/lambda/__pycache__/jpgrid.cpython-310.pyc b/infrastructure/sandbox/Data/lambda/__pycache__/jpgrid.cpython-310.pyc deleted file mode 100644 index 2607c8ec9..000000000 Binary files a/infrastructure/sandbox/Data/lambda/__pycache__/jpgrid.cpython-310.pyc and /dev/null differ diff --git a/infrastructure/sandbox/Data/lambda/__pycache__/jpiarea.cpython-310.pyc b/infrastructure/sandbox/Data/lambda/__pycache__/jpiarea.cpython-310.pyc deleted file mode 100644 index 6a86a2a71..000000000 Binary files a/infrastructure/sandbox/Data/lambda/__pycache__/jpiarea.cpython-310.pyc and /dev/null differ diff --git a/infrastructure/sandbox/Data/lambda/__pycache__/quadtree.cpython-310.pyc b/infrastructure/sandbox/Data/lambda/__pycache__/quadtree.cpython-310.pyc deleted file mode 100644 index 125d5e07b..000000000 Binary files a/infrastructure/sandbox/Data/lambda/__pycache__/quadtree.cpython-310.pyc and /dev/null differ diff --git a/infrastructure/sandbox/Data/lambda/_geohash.cpython-310-x86_64-linux-gnu.so b/infrastructure/sandbox/Data/lambda/_geohash.cpython-310-x86_64-linux-gnu.so deleted file mode 100755 index fbefa54f1..000000000 Binary files a/infrastructure/sandbox/Data/lambda/_geohash.cpython-310-x86_64-linux-gnu.so and /dev/null differ diff --git a/infrastructure/sandbox/Data/lambda/_geoip_geolite2/GeoLite2-City.mmdb b/infrastructure/sandbox/Data/lambda/_geoip_geolite2/GeoLite2-City.mmdb deleted file mode 100644 index 45e8cb0cb..000000000 Binary files a/infrastructure/sandbox/Data/lambda/_geoip_geolite2/GeoLite2-City.mmdb and /dev/null differ diff --git a/infrastructure/sandbox/Data/lambda/_geoip_geolite2/__init__.py 
b/infrastructure/sandbox/Data/lambda/_geoip_geolite2/__init__.py deleted file mode 100644 index df130cbd7..000000000 --- a/infrastructure/sandbox/Data/lambda/_geoip_geolite2/__init__.py +++ /dev/null @@ -1,9 +0,0 @@ -import os - - -database_name = 'GeoLite2-City.mmdb' - - -def loader(database, mod): - filename = os.path.join(os.path.dirname(__file__), database_name) - return mod.open_database(filename) diff --git a/infrastructure/sandbox/Data/lambda/_geoip_geolite2/__pycache__/__init__.cpython-310.pyc b/infrastructure/sandbox/Data/lambda/_geoip_geolite2/__pycache__/__init__.cpython-310.pyc deleted file mode 100644 index 72b9cfb2b..000000000 Binary files a/infrastructure/sandbox/Data/lambda/_geoip_geolite2/__pycache__/__init__.cpython-310.pyc and /dev/null differ diff --git a/infrastructure/sandbox/Data/lambda/elasticsearch-6.8.2.dist-info/AUTHORS b/infrastructure/sandbox/Data/lambda/elasticsearch-6.8.2.dist-info/AUTHORS deleted file mode 100644 index 72dd48b74..000000000 --- a/infrastructure/sandbox/Data/lambda/elasticsearch-6.8.2.dist-info/AUTHORS +++ /dev/null @@ -1,2 +0,0 @@ -For a list of all our amazing authors please see the contributors page: -https://github.com/elastic/elasticsearch-py/graphs/contributors diff --git a/infrastructure/sandbox/Data/lambda/elasticsearch-6.8.2.dist-info/INSTALLER b/infrastructure/sandbox/Data/lambda/elasticsearch-6.8.2.dist-info/INSTALLER deleted file mode 100644 index a1b589e38..000000000 --- a/infrastructure/sandbox/Data/lambda/elasticsearch-6.8.2.dist-info/INSTALLER +++ /dev/null @@ -1 +0,0 @@ -pip diff --git a/infrastructure/sandbox/Data/lambda/elasticsearch-6.8.2.dist-info/LICENSE b/infrastructure/sandbox/Data/lambda/elasticsearch-6.8.2.dist-info/LICENSE deleted file mode 100644 index 68c771a09..000000000 --- a/infrastructure/sandbox/Data/lambda/elasticsearch-6.8.2.dist-info/LICENSE +++ /dev/null @@ -1,176 +0,0 @@ - - Apache License - Version 2.0, January 2004 - http://www.apache.org/licenses/ - - TERMS AND CONDITIONS FOR 
USE, REPRODUCTION, AND DISTRIBUTION - - 1. Definitions. - - "License" shall mean the terms and conditions for use, reproduction, - and distribution as defined by Sections 1 through 9 of this document. - - "Licensor" shall mean the copyright owner or entity authorized by - the copyright owner that is granting the License. - - "Legal Entity" shall mean the union of the acting entity and all - other entities that control, are controlled by, or are under common - control with that entity. For the purposes of this definition, - "control" means (i) the power, direct or indirect, to cause the - direction or management of such entity, whether by contract or - otherwise, or (ii) ownership of fifty percent (50%) or more of the - outstanding shares, or (iii) beneficial ownership of such entity. - - "You" (or "Your") shall mean an individual or Legal Entity - exercising permissions granted by this License. - - "Source" form shall mean the preferred form for making modifications, - including but not limited to software source code, documentation - source, and configuration files. - - "Object" form shall mean any form resulting from mechanical - transformation or translation of a Source form, including but - not limited to compiled object code, generated documentation, - and conversions to other media types. - - "Work" shall mean the work of authorship, whether in Source or - Object form, made available under the License, as indicated by a - copyright notice that is included in or attached to the work - (an example is provided in the Appendix below). - - "Derivative Works" shall mean any work, whether in Source or Object - form, that is based on (or derived from) the Work and for which the - editorial revisions, annotations, elaborations, or other modifications - represent, as a whole, an original work of authorship. 
For the purposes - of this License, Derivative Works shall not include works that remain - separable from, or merely link (or bind by name) to the interfaces of, - the Work and Derivative Works thereof. - - "Contribution" shall mean any work of authorship, including - the original version of the Work and any modifications or additions - to that Work or Derivative Works thereof, that is intentionally - submitted to Licensor for inclusion in the Work by the copyright owner - or by an individual or Legal Entity authorized to submit on behalf of - the copyright owner. For the purposes of this definition, "submitted" - means any form of electronic, verbal, or written communication sent - to the Licensor or its representatives, including but not limited to - communication on electronic mailing lists, source code control systems, - and issue tracking systems that are managed by, or on behalf of, the - Licensor for the purpose of discussing and improving the Work, but - excluding communication that is conspicuously marked or otherwise - designated in writing by the copyright owner as "Not a Contribution." - - "Contributor" shall mean Licensor and any individual or Legal Entity - on behalf of whom a Contribution has been received by Licensor and - subsequently incorporated within the Work. - - 2. Grant of Copyright License. Subject to the terms and conditions of - this License, each Contributor hereby grants to You a perpetual, - worldwide, non-exclusive, no-charge, royalty-free, irrevocable - copyright license to reproduce, prepare Derivative Works of, - publicly display, publicly perform, sublicense, and distribute the - Work and such Derivative Works in Source or Object form. - - 3. Grant of Patent License. 
Subject to the terms and conditions of - this License, each Contributor hereby grants to You a perpetual, - worldwide, non-exclusive, no-charge, royalty-free, irrevocable - (except as stated in this section) patent license to make, have made, - use, offer to sell, sell, import, and otherwise transfer the Work, - where such license applies only to those patent claims licensable - by such Contributor that are necessarily infringed by their - Contribution(s) alone or by combination of their Contribution(s) - with the Work to which such Contribution(s) was submitted. If You - institute patent litigation against any entity (including a - cross-claim or counterclaim in a lawsuit) alleging that the Work - or a Contribution incorporated within the Work constitutes direct - or contributory patent infringement, then any patent licenses - granted to You under this License for that Work shall terminate - as of the date such litigation is filed. - - 4. Redistribution. You may reproduce and distribute copies of the - Work or Derivative Works thereof in any medium, with or without - modifications, and in Source or Object form, provided that You - meet the following conditions: - - (a) You must give any other recipients of the Work or - Derivative Works a copy of this License; and - - (b) You must cause any modified files to carry prominent notices - stating that You changed the files; and - - (c) You must retain, in the Source form of any Derivative Works - that You distribute, all copyright, patent, trademark, and - attribution notices from the Source form of the Work, - excluding those notices that do not pertain to any part of - the Derivative Works; and - - (d) If the Work includes a "NOTICE" text file as part of its - distribution, then any Derivative Works that You distribute must - include a readable copy of the attribution notices contained - within such NOTICE file, excluding those notices that do not - pertain to any part of the Derivative Works, in at least one - of 
the following places: within a NOTICE text file distributed - as part of the Derivative Works; within the Source form or - documentation, if provided along with the Derivative Works; or, - within a display generated by the Derivative Works, if and - wherever such third-party notices normally appear. The contents - of the NOTICE file are for informational purposes only and - do not modify the License. You may add Your own attribution - notices within Derivative Works that You distribute, alongside - or as an addendum to the NOTICE text from the Work, provided - that such additional attribution notices cannot be construed - as modifying the License. - - You may add Your own copyright statement to Your modifications and - may provide additional or different license terms and conditions - for use, reproduction, or distribution of Your modifications, or - for any such Derivative Works as a whole, provided Your use, - reproduction, and distribution of the Work otherwise complies with - the conditions stated in this License. - - 5. Submission of Contributions. Unless You explicitly state otherwise, - any Contribution intentionally submitted for inclusion in the Work - by You to the Licensor shall be under the terms and conditions of - this License, without any additional terms or conditions. - Notwithstanding the above, nothing herein shall supersede or modify - the terms of any separate license agreement you may have executed - with Licensor regarding such Contributions. - - 6. Trademarks. This License does not grant permission to use the trade - names, trademarks, service marks, or product names of the Licensor, - except as required for reasonable and customary use in describing the - origin of the Work and reproducing the content of the NOTICE file. - - 7. Disclaimer of Warranty. 
Unless required by applicable law or - agreed to in writing, Licensor provides the Work (and each - Contributor provides its Contributions) on an "AS IS" BASIS, - WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or - implied, including, without limitation, any warranties or conditions - of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A - PARTICULAR PURPOSE. You are solely responsible for determining the - appropriateness of using or redistributing the Work and assume any - risks associated with Your exercise of permissions under this License. - - 8. Limitation of Liability. In no event and under no legal theory, - whether in tort (including negligence), contract, or otherwise, - unless required by applicable law (such as deliberate and grossly - negligent acts) or agreed to in writing, shall any Contributor be - liable to You for damages, including any direct, indirect, special, - incidental, or consequential damages of any character arising as a - result of this License or out of the use or inability to use the - Work (including but not limited to damages for loss of goodwill, - work stoppage, computer failure or malfunction, or any and all - other commercial damages or losses), even if such Contributor - has been advised of the possibility of such damages. - - 9. Accepting Warranty or Additional Liability. While redistributing - the Work or Derivative Works thereof, You may choose to offer, - and charge a fee for, acceptance of support, warranty, indemnity, - or other liability obligations and/or rights consistent with this - License. However, in accepting such obligations, You may act only - on Your own behalf and on Your sole responsibility, not on behalf - of any other Contributor, and only if You agree to indemnify, - defend, and hold each Contributor harmless for any liability - incurred by, or claims asserted against, such Contributor by reason - of your accepting any such warranty or additional liability. 
- diff --git a/infrastructure/sandbox/Data/lambda/elasticsearch-6.8.2.dist-info/METADATA b/infrastructure/sandbox/Data/lambda/elasticsearch-6.8.2.dist-info/METADATA deleted file mode 100644 index 3effddc05..000000000 --- a/infrastructure/sandbox/Data/lambda/elasticsearch-6.8.2.dist-info/METADATA +++ /dev/null @@ -1,193 +0,0 @@ -Metadata-Version: 2.1 -Name: elasticsearch -Version: 6.8.2 -Summary: Python client for Elasticsearch -Home-page: https://github.com/elastic/elasticsearch-py -Author: Honza Král, Nick Lang -Author-email: honza.kral@gmail.com, nick@nicklang.com -Maintainer: Seth Michael Larson -Maintainer-email: seth.larson@elastic.co -License: Apache-2.0 -Platform: UNKNOWN -Classifier: Development Status :: 5 - Production/Stable -Classifier: License :: OSI Approved :: Apache Software License -Classifier: Intended Audience :: Developers -Classifier: Operating System :: OS Independent -Classifier: Programming Language :: Python -Classifier: Programming Language :: Python :: 2 -Classifier: Programming Language :: Python :: 2.6 -Classifier: Programming Language :: Python :: 2.7 -Classifier: Programming Language :: Python :: 3 -Classifier: Programming Language :: Python :: 3.2 -Classifier: Programming Language :: Python :: 3.3 -Classifier: Programming Language :: Python :: 3.4 -Classifier: Programming Language :: Python :: 3.5 -Classifier: Programming Language :: Python :: 3.6 -Classifier: Programming Language :: Python :: Implementation :: CPython -Classifier: Programming Language :: Python :: Implementation :: PyPy -Requires-Python: >=2.6, !=3.0.*, !=3.1.*, !=3.2.*, <4 -Description-Content-Type: text/x-rst -Requires-Dist: urllib3 (>=1.21.1) -Provides-Extra: develop -Requires-Dist: requests (<3.0.0,>=2.0.0) ; extra == 'develop' -Requires-Dist: nose ; extra == 'develop' -Requires-Dist: coverage ; extra == 'develop' -Requires-Dist: mock ; extra == 'develop' -Requires-Dist: pyyaml ; extra == 'develop' -Requires-Dist: nosexcover ; extra == 'develop' -Requires-Dist: 
numpy ; extra == 'develop' -Requires-Dist: pandas ; extra == 'develop' -Requires-Dist: sphinx (<1.7) ; extra == 'develop' -Requires-Dist: sphinx-rtd-theme ; extra == 'develop' -Provides-Extra: requests -Requires-Dist: requests (<3.0.0,>=2.4.0) ; extra == 'requests' - -Python Elasticsearch Client -=========================== - -Official low-level client for Elasticsearch. Its goal is to provide common -ground for all Elasticsearch-related code in Python; because of this it tries -to be opinion-free and very extendable. - -For a more high level client library with more limited scope, have a look at -`elasticsearch-dsl`_ - a more pythonic library sitting on top of -``elasticsearch-py``. - -It provides a more convenient and idiomatic way to write and manipulate -`queries`_. It stays close to the Elasticsearch JSON DSL, mirroring its -terminology and structure while exposing the whole range of the DSL from Python -either directly using defined classes or a queryset-like expressions. - -It also provides an optional `persistence layer`_ for working with documents as -Python objects in an ORM-like fashion: defining mappings, retrieving and saving -documents, wrapping the document data in user-defined classes. - -.. _elasticsearch-dsl: https://elasticsearch-dsl.readthedocs.io/ -.. _queries: https://elasticsearch-dsl.readthedocs.io/en/latest/search_dsl.html -.. _persistence layer: https://elasticsearch-dsl.readthedocs.io/en/latest/persistence.html#doctype - -Compatibility -------------- - -The library is compatible with all Elasticsearch versions since ``0.90.x`` but you -**have to use a matching major version**: - -For **Elasticsearch 6.0** and later, use the major version 6 (``6.x.y``) of the -library. - -For **Elasticsearch 5.0** and later, use the major version 5 (``5.x.y``) of the -library. - -For **Elasticsearch 2.0** and later, use the major version 2 (``2.x.y``) of the -library, and so on. 
- -The recommended way to set your requirements in your `setup.py` or -`requirements.txt` is:: - - # Elasticsearch 6.x - elasticsearch>=6.0.0,<7.0.0 - - # Elasticsearch 5.x - elasticsearch>=5.0.0,<6.0.0 - - # Elasticsearch 2.x - elasticsearch>=2.0.0,<3.0.0 - -If you have a need to have multiple versions installed at the same time older -versions are also released as ``elasticsearch2``, ``elasticsearch5`` and ``elasticsearch6``. - -Installation ------------- - -Install the ``elasticsearch`` package for Elasticsearch 6.x with `pip -`_:: - - pip install "elasticsearch>=6,<7" - - -Example use ------------ - -Simple use-case:: - - >>> from datetime import datetime - >>> from elasticsearch import Elasticsearch - - # by default we connect to localhost:9200 - >>> es = Elasticsearch() - - # create an index in elasticsearch, ignore status code 400 (index already exists) - >>> es.indices.create(index='my-index', ignore=400) - {u'acknowledged': True} - - # datetimes will be serialized - >>> es.index(index="my-index", doc_type="test-type", id=42, body={"any": "data", "timestamp": datetime.now()}) - {u'_id': u'42', u'_index': u'my-index', u'_type': u'test-type', u'_version': 1, u'ok': True} - - # but not deserialized - >>> es.get(index="my-index", doc_type="test-type", id=42)['_source'] - {u'any': u'data', u'timestamp': u'2013-05-12T19:45:31.804229'} - -`Full documentation`_. - -.. 
_Full documentation: https://elasticsearch-py.readthedocs.io/ - -Elastic Cloud (and SSL) use-case:: - - >>> from elasticsearch import Elasticsearch - >>> es = Elasticsearch("https://elasticsearch.url:port", http_auth=('elastic','yourpassword')) - >>> es.info() - -Using SSL Context with a self-signed cert use-case:: - - >>> from elasticsearch import Elasticsearch - >>> from ssl import create_default_context - - >>> context = create_default_context(cafile="path/to/cafile.pem") - >>> es = Elasticsearch("https://elasticsearch.url:port", ssl_context=context, http_auth=('elastic','yourpassword')) - >>> es.info() - - - -Features --------- - -The client's features include: - - * translating basic Python data types to and from json (datetimes are not - decoded for performance reasons) - * configurable automatic discovery of cluster nodes - * persistent connections - * load balancing (with pluggable selection strategy) across all available nodes - * failed connection penalization (time based - failed connections won't be - retried until a timeout is reached) - * support for ssl and http authentication - * thread safety - * pluggable architecture - - -License -------- - -Copyright 2017 Elasticsearch - -Licensed under the Apache License, Version 2.0 (the "License"); -you may not use this file except in compliance with the License. -You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - -Unless required by applicable law or agreed to in writing, software -distributed under the License is distributed on an "AS IS" BASIS, -WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -See the License for the specific language governing permissions and -limitations under the License. - -Build status ------------- -.. image:: https://readthedocs.org/projects/elasticsearch-py/badge/?version=latest&style=flat - :target: https://elasticsearch-py.readthedocs.io/en/master/ - -.. 
image:: https://clients-ci.elastic.co/job/elastic+elasticsearch-py+master/badge/icon - :target: https://clients-ci.elastic.co/job/elastic+elasticsearch-py+master/ - diff --git a/infrastructure/sandbox/Data/lambda/elasticsearch-6.8.2.dist-info/RECORD b/infrastructure/sandbox/Data/lambda/elasticsearch-6.8.2.dist-info/RECORD deleted file mode 100644 index 47515a207..000000000 --- a/infrastructure/sandbox/Data/lambda/elasticsearch-6.8.2.dist-info/RECORD +++ /dev/null @@ -1,78 +0,0 @@ -elasticsearch-6.8.2.dist-info/AUTHORS,sha256=lzWXD7E6TlSJkJAHfegykZOG8PQ3rEsm-xRT6uysmDg,136 -elasticsearch-6.8.2.dist-info/INSTALLER,sha256=zuuue4knoyJ-UwPPXg8fezS7VCrXJQrAP7zeNuwvFQg,4 -elasticsearch-6.8.2.dist-info/LICENSE,sha256=XfKg2H1sVi8OoRxoisUlMqoo10TKvHmU_wU39ks7MyA,10143 -elasticsearch-6.8.2.dist-info/METADATA,sha256=5qN-GB11HL43f83oiTMP6tMvJByEQITBSO8Z1nPUX1U,7028 -elasticsearch-6.8.2.dist-info/RECORD,, -elasticsearch-6.8.2.dist-info/REQUESTED,sha256=47DEQpj8HBSa-_TImW-5JCeuQeRkm5NMpJWZG3hSuFU,0 -elasticsearch-6.8.2.dist-info/WHEEL,sha256=Z-nyYpwrcSqxfdux5Mbn_DQ525iP7J2DG3JgGvOYyTQ,110 -elasticsearch-6.8.2.dist-info/top_level.txt,sha256=Jp2bLWq49skvCN4YCZsg1Hfn_NDLgleC-x-Bn01_HgM,14 -elasticsearch/__init__.py,sha256=T0k9G4d0hHW7tzffNGXG6uFtH2cQUW8_1sTQh0fyAp8,1511 -elasticsearch/__pycache__/__init__.cpython-310.pyc,, -elasticsearch/__pycache__/compat.cpython-310.pyc,, -elasticsearch/__pycache__/connection_pool.cpython-310.pyc,, -elasticsearch/__pycache__/exceptions.cpython-310.pyc,, -elasticsearch/__pycache__/serializer.cpython-310.pyc,, -elasticsearch/__pycache__/transport.cpython-310.pyc,, -elasticsearch/__pycache__/utils.cpython-310.pyc,, -elasticsearch/client/__init__.py,sha256=zR2WSLCA7T4Rx3asB4xmKzkwRzzxnmwiOlLN-EN2aIs,86963 -elasticsearch/client/__pycache__/__init__.cpython-310.pyc,, -elasticsearch/client/__pycache__/cat.cpython-310.pyc,, -elasticsearch/client/__pycache__/cluster.cpython-310.pyc,, -elasticsearch/client/__pycache__/indices.cpython-310.pyc,, 
-elasticsearch/client/__pycache__/ingest.cpython-310.pyc,, -elasticsearch/client/__pycache__/nodes.cpython-310.pyc,, -elasticsearch/client/__pycache__/remote.cpython-310.pyc,, -elasticsearch/client/__pycache__/snapshot.cpython-310.pyc,, -elasticsearch/client/__pycache__/tasks.cpython-310.pyc,, -elasticsearch/client/__pycache__/utils.cpython-310.pyc,, -elasticsearch/client/cat.py,sha256=sptUQCjDN9gdrHXYYxgbyKFkYOdJAyUGx7t2L_QjwSc,22396 -elasticsearch/client/cluster.py,sha256=6GRgj-4I4UJjtzHdUN9iuckD-orxb_nJ0aAsfWn6sKo,10187 -elasticsearch/client/indices.py,sha256=f4EsC3TjSpK-BF24Zfk-Q3Vt5NeHyqrFfumYqXb22rg,53453 -elasticsearch/client/ingest.py,sha256=R-LLnE3bJ5HUKwsrALxKuQBzbU8G9TmiAiPcIKZu5zg,3771 -elasticsearch/client/nodes.py,sha256=2BxjaLxizaqj-zwwVP_LM4KbV89pDKcfZucnP4QZIlQ,7257 -elasticsearch/client/remote.py,sha256=jnV51VIZx8O6m9FWLiPB9jaDtjhQlU4_qdq_xr7HDbI,1141 -elasticsearch/client/snapshot.py,sha256=IBJEmn7pZ6bISP-ZeaTakr6YO9H15EQn_jBqS_Bdv7A,8911 -elasticsearch/client/tasks.py,sha256=TP9gN-IQrp4F6--W2-Gg9y7EyP84koDr_UCPuUni3tA,3767 -elasticsearch/client/utils.py,sha256=iunit4TkACv5Z4NYQI29O--DE4F2nPfe8HfY0mUVS7M,3731 -elasticsearch/client/xpack/__init__.py,sha256=vXun8SJFQTIBZu2Kc44IUg8bMvJgY1O5myaoSfZUKTY,2596 -elasticsearch/client/xpack/__pycache__/__init__.cpython-310.pyc,, -elasticsearch/client/xpack/__pycache__/deprecation.cpython-310.pyc,, -elasticsearch/client/xpack/__pycache__/graph.cpython-310.pyc,, -elasticsearch/client/xpack/__pycache__/license.cpython-310.pyc,, -elasticsearch/client/xpack/__pycache__/migration.cpython-310.pyc,, -elasticsearch/client/xpack/__pycache__/ml.cpython-310.pyc,, -elasticsearch/client/xpack/__pycache__/monitoring.cpython-310.pyc,, -elasticsearch/client/xpack/__pycache__/security.cpython-310.pyc,, -elasticsearch/client/xpack/__pycache__/watcher.cpython-310.pyc,, -elasticsearch/client/xpack/deprecation.py,sha256=au0BCNQ0UcuT9nSMMeMMhunUFYkd9NU0QTgXhaBTDBo,1286 
-elasticsearch/client/xpack/graph.py,sha256=fNsL7IaQNr6IptWvnImTmXmmltlCMN7YUNXR18f-jPc,1741 -elasticsearch/client/xpack/license.py,sha256=T9NNuTtM8oRu9UesHsO9AfhkBUqO4NXQxRl4EQoUbIY,2006 -elasticsearch/client/xpack/migration.py,sha256=AFEuUf3lbyVJuvNdPUMA0Po8F_OK55DLeG2A9xviEMM,2682 -elasticsearch/client/xpack/ml.py,sha256=I-WJsVUE6zsBmPKxOVx0poWo1AZAzcM55UUUR1SlMUk,24716 -elasticsearch/client/xpack/monitoring.py,sha256=8S9F8sjFIvSPYWfMbO7HbTFOsi3XhecWx_q3QXyvNkU,1906 -elasticsearch/client/xpack/security.py,sha256=02s9qNH6Qd8OXZ1xBtAduzH4nuw2pcwO77TlCmr23xc,13016 -elasticsearch/client/xpack/watcher.py,sha256=5XugO86Lr7FfeNBGLhhge-O0JXLXounRYE69yDkFlSg,6820 -elasticsearch/compat.py,sha256=LYXKmjFhytJ50Lv2LYawuPT3QbBV85PpBzHKo1JXJIM,1342 -elasticsearch/connection/__init__.py,sha256=niEqyjlEb5K6LYjppaLaT31ZkbjifvgTRc7nRfsA_jU,1053 -elasticsearch/connection/__pycache__/__init__.cpython-310.pyc,, -elasticsearch/connection/__pycache__/base.cpython-310.pyc,, -elasticsearch/connection/__pycache__/http_requests.cpython-310.pyc,, -elasticsearch/connection/__pycache__/http_urllib3.cpython-310.pyc,, -elasticsearch/connection/__pycache__/pooling.cpython-310.pyc,, -elasticsearch/connection/base.py,sha256=y1SwwrgHikTUJ3wBgDTkgiF2dM-idJmQ6T90TU6Fx04,8942 -elasticsearch/connection/http_requests.py,sha256=MML8JblofLw2QJa-S8VNwxyo-8r0k1SZWu69g_yCvd0,7089 -elasticsearch/connection/http_urllib3.py,sha256=8yfIZyYVptMcmBpGkSGyHVUv_ZfRzUoa_eHp5eMlX9M,9583 -elasticsearch/connection/pooling.py,sha256=i8f6R3nl8sC2CsYhsJzqc6EBQBN4msxM9Rdn8wKbCh8,1682 -elasticsearch/connection_pool.py,sha256=HC_awZt0NUaII592PhgEvK53PYaxKJaJnYL8MCCAYgw,10847 -elasticsearch/exceptions.py,sha256=fTDkJIEy_xfIpu4bMLdoi_KCWnlpkSUjmYOq92AfPcM,4357 -elasticsearch/helpers/__init__.py,sha256=rRSQsrrCYJbDJXrvki8tSyiICAEyZwhwmTbGuu1XsUc,1204 -elasticsearch/helpers/__pycache__/__init__.cpython-310.pyc,, -elasticsearch/helpers/__pycache__/actions.cpython-310.pyc,, -elasticsearch/helpers/__pycache__/errors.cpython-310.pyc,, 
-elasticsearch/helpers/__pycache__/test.cpython-310.pyc,, -elasticsearch/helpers/actions.py,sha256=MD4mvtpPKHywFA7EKEcC7zZ1JsjuO8qmARPLOy2UwTI,20700 -elasticsearch/helpers/errors.py,sha256=rhPLN2qM8RS-HjGy5INss0zXUXFnKItKrY7BzTDnbCs,1200 -elasticsearch/helpers/test.py,sha256=LSdTlhPS3naUH_wxMUG_AFl5VTR10ze7EjxgJmCsVaE,2641 -elasticsearch/serializer.py,sha256=DqMwHr-7MJrpu9Lbq01RQ58CJvuR_hC-yYeFkAT6vc0,4372 -elasticsearch/transport.py,sha256=DJ2INj9kSkX_MTCc-qrQ5Mule9CkDQ34KXwSINmjUCg,18089 -elasticsearch/utils.py,sha256=6rY_mTQpfUfaixSi1QxPsSi-uKP_0z98ZE1kZlekL8g,1176 diff --git a/infrastructure/sandbox/Data/lambda/elasticsearch-6.8.2.dist-info/REQUESTED b/infrastructure/sandbox/Data/lambda/elasticsearch-6.8.2.dist-info/REQUESTED deleted file mode 100644 index e69de29bb..000000000 diff --git a/infrastructure/sandbox/Data/lambda/elasticsearch-6.8.2.dist-info/WHEEL b/infrastructure/sandbox/Data/lambda/elasticsearch-6.8.2.dist-info/WHEEL deleted file mode 100644 index 01b8fc7d4..000000000 --- a/infrastructure/sandbox/Data/lambda/elasticsearch-6.8.2.dist-info/WHEEL +++ /dev/null @@ -1,6 +0,0 @@ -Wheel-Version: 1.0 -Generator: bdist_wheel (0.36.2) -Root-Is-Purelib: true -Tag: py2-none-any -Tag: py3-none-any - diff --git a/infrastructure/sandbox/Data/lambda/elasticsearch-6.8.2.dist-info/top_level.txt b/infrastructure/sandbox/Data/lambda/elasticsearch-6.8.2.dist-info/top_level.txt deleted file mode 100644 index 174c3f8b3..000000000 --- a/infrastructure/sandbox/Data/lambda/elasticsearch-6.8.2.dist-info/top_level.txt +++ /dev/null @@ -1 +0,0 @@ -elasticsearch diff --git a/infrastructure/sandbox/Data/lambda/elasticsearch/__init__.py b/infrastructure/sandbox/Data/lambda/elasticsearch/__init__.py deleted file mode 100644 index 4ed0ef0a1..000000000 --- a/infrastructure/sandbox/Data/lambda/elasticsearch/__init__.py +++ /dev/null @@ -1,46 +0,0 @@ -# Licensed to Elasticsearch B.V. under one or more contributor -# license agreements. 
See the NOTICE file distributed with -# this work for additional information regarding copyright -# ownership. Elasticsearch B.V. licenses this file to you under -# the Apache License, Version 2.0 (the "License"); you may -# not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, -# software distributed under the License is distributed on an -# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY -# KIND, either express or implied. See the License for the -# specific language governing permissions and limitations -# under the License. - -# flake8: noqa -from __future__ import absolute_import - -VERSION = (6, 8, 2) -__version__ = VERSION -__versionstr__ = ".".join(map(str, VERSION)) - -import logging - -try: # Python 2.7+ - from logging import NullHandler -except ImportError: - - class NullHandler(logging.Handler): - def emit(self, record): - pass - - -import sys - -logger = logging.getLogger("elasticsearch") -logger.addHandler(logging.NullHandler()) - -from .client import Elasticsearch -from .transport import Transport -from .connection_pool import ConnectionPool, ConnectionSelector, RoundRobinSelector -from .serializer import JSONSerializer -from .connection import Connection, RequestsHttpConnection, Urllib3HttpConnection -from .exceptions import * diff --git a/infrastructure/sandbox/Data/lambda/elasticsearch/__pycache__/__init__.cpython-310.pyc b/infrastructure/sandbox/Data/lambda/elasticsearch/__pycache__/__init__.cpython-310.pyc deleted file mode 100644 index 6e1d71251..000000000 Binary files a/infrastructure/sandbox/Data/lambda/elasticsearch/__pycache__/__init__.cpython-310.pyc and /dev/null differ diff --git a/infrastructure/sandbox/Data/lambda/elasticsearch/__pycache__/compat.cpython-310.pyc b/infrastructure/sandbox/Data/lambda/elasticsearch/__pycache__/compat.cpython-310.pyc deleted file mode 100644 
index e81257929..000000000 Binary files a/infrastructure/sandbox/Data/lambda/elasticsearch/__pycache__/compat.cpython-310.pyc and /dev/null differ diff --git a/infrastructure/sandbox/Data/lambda/elasticsearch/__pycache__/connection_pool.cpython-310.pyc b/infrastructure/sandbox/Data/lambda/elasticsearch/__pycache__/connection_pool.cpython-310.pyc deleted file mode 100644 index 2cc08fdd8..000000000 Binary files a/infrastructure/sandbox/Data/lambda/elasticsearch/__pycache__/connection_pool.cpython-310.pyc and /dev/null differ diff --git a/infrastructure/sandbox/Data/lambda/elasticsearch/__pycache__/exceptions.cpython-310.pyc b/infrastructure/sandbox/Data/lambda/elasticsearch/__pycache__/exceptions.cpython-310.pyc deleted file mode 100644 index 9b4514a06..000000000 Binary files a/infrastructure/sandbox/Data/lambda/elasticsearch/__pycache__/exceptions.cpython-310.pyc and /dev/null differ diff --git a/infrastructure/sandbox/Data/lambda/elasticsearch/__pycache__/serializer.cpython-310.pyc b/infrastructure/sandbox/Data/lambda/elasticsearch/__pycache__/serializer.cpython-310.pyc deleted file mode 100644 index b54fef668..000000000 Binary files a/infrastructure/sandbox/Data/lambda/elasticsearch/__pycache__/serializer.cpython-310.pyc and /dev/null differ diff --git a/infrastructure/sandbox/Data/lambda/elasticsearch/__pycache__/transport.cpython-310.pyc b/infrastructure/sandbox/Data/lambda/elasticsearch/__pycache__/transport.cpython-310.pyc deleted file mode 100644 index 89fc42ef8..000000000 Binary files a/infrastructure/sandbox/Data/lambda/elasticsearch/__pycache__/transport.cpython-310.pyc and /dev/null differ diff --git a/infrastructure/sandbox/Data/lambda/elasticsearch/__pycache__/utils.cpython-310.pyc b/infrastructure/sandbox/Data/lambda/elasticsearch/__pycache__/utils.cpython-310.pyc deleted file mode 100644 index 4234661f7..000000000 Binary files a/infrastructure/sandbox/Data/lambda/elasticsearch/__pycache__/utils.cpython-310.pyc and /dev/null differ diff --git 
a/infrastructure/sandbox/Data/lambda/elasticsearch/client/__init__.py b/infrastructure/sandbox/Data/lambda/elasticsearch/client/__init__.py deleted file mode 100644 index efb011aaf..000000000 --- a/infrastructure/sandbox/Data/lambda/elasticsearch/client/__init__.py +++ /dev/null @@ -1,1916 +0,0 @@ -# Licensed to Elasticsearch B.V. under one or more contributor -# license agreements. See the NOTICE file distributed with -# this work for additional information regarding copyright -# ownership. Elasticsearch B.V. licenses this file to you under -# the Apache License, Version 2.0 (the "License"); you may -# not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, -# software distributed under the License is distributed on an -# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY -# KIND, either express or implied. See the License for the -# specific language governing permissions and limitations -# under the License. - -from __future__ import unicode_literals -import logging - -from ..transport import Transport -from ..exceptions import TransportError -from ..compat import string_types, urlparse, unquote -from .indices import IndicesClient -from .ingest import IngestClient -from .cluster import ClusterClient -from .cat import CatClient -from .nodes import NodesClient -from .remote import RemoteClient -from .snapshot import SnapshotClient -from .tasks import TasksClient -from .xpack import XPackClient -from .utils import query_params, _make_path, SKIP_IN_PATH - -logger = logging.getLogger("elasticsearch") - - -def _normalize_hosts(hosts): - """ - Helper function to transform hosts argument to - :class:`~elasticsearch.Elasticsearch` to a list of dicts. 
- """ - # if hosts are empty, just defer to defaults down the line - if hosts is None: - return [{}] - - # passed in just one string - if isinstance(hosts, string_types): - hosts = [hosts] - - out = [] - # normalize hosts to dicts - for host in hosts: - if isinstance(host, string_types): - if "://" not in host: - host = "//%s" % host - - parsed_url = urlparse(host) - h = {"host": parsed_url.hostname} - - if parsed_url.port: - h["port"] = parsed_url.port - - if parsed_url.scheme == "https": - h["port"] = parsed_url.port or 443 - h["use_ssl"] = True - - if parsed_url.username or parsed_url.password: - h["http_auth"] = "%s:%s" % ( - unquote(parsed_url.username), - unquote(parsed_url.password), - ) - - if parsed_url.path and parsed_url.path != "/": - h["url_prefix"] = parsed_url.path - - out.append(h) - else: - out.append(host) - return out - - -class Elasticsearch(object): - """ - Elasticsearch low-level client. Provides a straightforward mapping from - Python to ES REST endpoints. - - The instance has attributes ``cat``, ``cluster``, ``indices``, ``ingest``, - ``nodes``, ``snapshot`` and ``tasks`` that provide access to instances of - :class:`~elasticsearch.client.CatClient`, - :class:`~elasticsearch.client.ClusterClient`, - :class:`~elasticsearch.client.IndicesClient`, - :class:`~elasticsearch.client.IngestClient`, - :class:`~elasticsearch.client.NodesClient`, - :class:`~elasticsearch.client.SnapshotClient` and - :class:`~elasticsearch.client.TasksClient` respectively. This is the - preferred (and only supported) way to get access to those classes and their - methods. 
- - You can specify your own connection class which should be used by providing - the ``connection_class`` parameter:: - - # create connection to localhost using the ThriftConnection - es = Elasticsearch(connection_class=ThriftConnection) - - If you want to turn on :ref:`sniffing` you have several options (described - in :class:`~elasticsearch.Transport`):: - - # create connection that will automatically inspect the cluster to get - # the list of active nodes. Start with nodes running on 'esnode1' and - # 'esnode2' - es = Elasticsearch( - ['esnode1', 'esnode2'], - # sniff before doing anything - sniff_on_start=True, - # refresh nodes after a node fails to respond - sniff_on_connection_fail=True, - # and also every 60 seconds - sniffer_timeout=60 - ) - - Different hosts can have different parameters, use a dictionary per node to - specify those:: - - # connect to localhost directly and another node using SSL on port 443 - # and an url_prefix. Note that ``port`` needs to be an int. - es = Elasticsearch([ - {'host': 'localhost'}, - {'host': 'othernode', 'port': 443, 'url_prefix': 'es', 'use_ssl': True}, - ]) - - If using SSL, there are several parameters that control how we deal with - certificates (see :class:`~elasticsearch.Urllib3HttpConnection` for - detailed description of the options):: - - es = Elasticsearch( - ['localhost:443', 'other_host:443'], - # turn on SSL - use_ssl=True, - # make sure we verify SSL certificates - verify_certs=True, - # provide a path to CA certs on disk - ca_certs='/path/to/CA_certs' - ) - - SSL client authentication is supported - (see :class:`~elasticsearch.Urllib3HttpConnection` for - detailed description of the options):: - - es = Elasticsearch( - ['localhost:443', 'other_host:443'], - # turn on SSL - use_ssl=True, - # make sure we verify SSL certificates - verify_certs=True, - # provide a path to CA certs on disk - ca_certs='/path/to/CA_certs', - # PEM formatted SSL client certificate - client_cert='/path/to/clientcert.pem', - # 
PEM formatted SSL client key - client_key='/path/to/clientkey.pem' - ) - - Alternatively you can use RFC-1738 formatted URLs, as long as they are not - in conflict with other options:: - - es = Elasticsearch( - [ - 'http://user:secret@localhost:9200/', - 'https://user:secret@other_host:443/production' - ], - verify_certs=True - ) - - By default, `JSONSerializer - `_ - is used to encode all outgoing requests. - However, you can implement your own custom serializer:: - - from elasticsearch.serializer import JSONSerializer - - class SetEncoder(JSONSerializer): - def default(self, obj): - if isinstance(obj, set): - return list(obj) - if isinstance(obj, Something): - return 'CustomSomethingRepresentation' - return JSONSerializer.default(self, obj) - - es = Elasticsearch(serializer=SetEncoder()) - - """ - - def __init__(self, hosts=None, transport_class=Transport, **kwargs): - """ - :arg hosts: list of nodes we should connect to. Node should be a - dictionary ({"host": "localhost", "port": 9200}), the entire dictionary - will be passed to the :class:`~elasticsearch.Connection` class as - kwargs, or a string in the format of ``host[:port]`` which will be - translated to a dictionary automatically. If no value is given the - :class:`~elasticsearch.Urllib3HttpConnection` class defaults will be used. - - :arg transport_class: :class:`~elasticsearch.Transport` subclass to use. - - :arg kwargs: any additional arguments will be passed on to the - :class:`~elasticsearch.Transport` class and, subsequently, to the - :class:`~elasticsearch.Connection` instances. 
- """ - self.transport = transport_class(_normalize_hosts(hosts), **kwargs) - - # namespaced clients for compatibility with API names - self.indices = IndicesClient(self) - self.ingest = IngestClient(self) - self.cluster = ClusterClient(self) - self.cat = CatClient(self) - self.nodes = NodesClient(self) - self.remote = RemoteClient(self) - self.snapshot = SnapshotClient(self) - self.tasks = TasksClient(self) - self.xpack = XPackClient(self) - - def __repr__(self): - try: - # get a list of all connections - cons = self.transport.hosts - # truncate to 5 if there are too many - if len(cons) > 5: - cons = cons[:5] + ["..."] - return "<{cls}({cons})>".format(cls=self.__class__.__name__, cons=cons) - except Exception: - # probably operating on custom transport and connection_pool, ignore - return super(Elasticsearch, self).__repr__() - - def _bulk_body(self, body): - # if not passed in a string, serialize items and join by newline - if not isinstance(body, string_types): - body = "\n".join(map(self.transport.serializer.dumps, body)) - - # bulk body must end with a newline - if isinstance(body, bytes): - if not body.endswith(b"\n"): - body += b"\n" - elif isinstance(body, string_types) and not body.endswith("\n"): - body += "\n" - - return body - - @query_params() - def ping(self, params=None): - """ - Returns True if the cluster is up, False otherwise. - ``_ - """ - try: - return self.transport.perform_request("HEAD", "/", params=params) - except TransportError: - return False - - @query_params() - def info(self, params=None): - """ - Get the basic info from the current cluster. - ``_ - """ - return self.transport.perform_request("GET", "/", params=params) - - @query_params( - "parent", - "pipeline", - "refresh", - "routing", - "timeout", - "timestamp", - "ttl", - "version", - "version_type", - "wait_for_active_shards", - ) - def create(self, index, doc_type, id, body, params=None): - """ - Adds a typed JSON document in a specific index, making it searchable. 
- Behind the scenes this method calls index(..., op_type='create') - ``_ - - :arg index: The name of the index - :arg doc_type: The type of the document - :arg id: Document ID - :arg body: The document - :arg parent: ID of the parent document - :arg pipeline: The pipeline id to preprocess incoming documents with - :arg refresh: If `true` then refresh the affected shards to make this - operation visible to search, if `wait_for` then wait for a refresh - to make this operation visible to search, if `false` (the default) - then do nothing with refreshes., valid choices are: 'true', 'false', - 'wait_for' - :arg routing: Specific routing value - :arg timeout: Explicit operation timeout - :arg timestamp: Explicit timestamp for the document - :arg ttl: Expiration time for the document - :arg version: Explicit version number for concurrency control - :arg version_type: Specific version type, valid choices are: 'internal', - 'external', 'external_gte', 'force' - :arg wait_for_active_shards: Sets the number of shard copies that must - be active before proceeding with the index operation. Defaults to 1, - meaning the primary shard only. Set to `all` for all shard copies, - otherwise set to any non-negative value less than or equal to the - total number of copies for the shard (number of replicas + 1) - """ - for param in (index, doc_type, id, body): - if param in SKIP_IN_PATH: - raise ValueError("Empty value passed for a required argument.") - return self.transport.perform_request( - "PUT", _make_path(index, doc_type, id, "_create"), params=params, body=body - ) - - @query_params( - "op_type", - "parent", - "pipeline", - "refresh", - "routing", - "timeout", - "timestamp", - "ttl", - "version", - "version_type", - "wait_for_active_shards", - "if_primary_term", - "if_seq_no", - ) - def index(self, index, doc_type, body, id=None, params=None): - """ - Adds or updates a typed JSON document in a specific index, making it searchable. 
- ``_ - - :arg index: The name of the index - :arg doc_type: The type of the document - :arg body: The document - :arg id: Document ID - :arg op_type: Explicit operation type, default 'index', valid choices - are: 'index', 'create' - :arg parent: ID of the parent document - :arg pipeline: The pipeline id to preprocess incoming documents with - :arg refresh: If `true` then refresh the affected shards to make this - operation visible to search, if `wait_for` then wait for a refresh - to make this operation visible to search, if `false` (the default) - then do nothing with refreshes., valid choices are: 'true', 'false', - 'wait_for' - :arg routing: Specific routing value - :arg timeout: Explicit operation timeout - :arg timestamp: Explicit timestamp for the document - :arg ttl: Expiration time for the document - :arg version: Explicit version number for concurrency control - :arg version_type: Specific version type, valid choices are: 'internal', - 'external', 'external_gte', 'force' - :arg wait_for_active_shards: Sets the number of shard copies that must - be active before proceeding with the index operation. Defaults to 1, - meaning the primary shard only. 
Set to `all` for all shard copies, - otherwise set to any non-negative value less than or equal to the - total number of copies for the shard (number of replicas + 1) - :arg if_primary_term: only perform the index operation if the last - operation that has changed the document has the specified primary - term - :arg if_seq_no: only perform the index operation if the last operation - that has changed the document has the specified sequence number - """ - for param in (index, doc_type, body): - if param in SKIP_IN_PATH: - raise ValueError("Empty value passed for a required argument.") - return self.transport.perform_request( - "POST" if id in SKIP_IN_PATH else "PUT", - _make_path(index, doc_type, id), - params=params, - body=body, - ) - - @query_params( - "_source", - "_source_exclude", - "_source_include", - "_source_excludes", - "_source_includes", - "parent", - "preference", - "realtime", - "refresh", - "routing", - "stored_fields", - "version", - "version_type", - ) - def exists(self, index, doc_type, id, params=None): - """ - Returns a boolean indicating whether or not given document exists in Elasticsearch. 
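The deleted `create`/`index` pair above encodes the client's verb-and-path selection: `create` is `index(..., op_type='create')` behind the scenes and always PUTs to `.../_create`, while plain `index` POSTs when Elasticsearch should auto-generate the ID and PUTs when the caller supplies one. A sketch of just that routing decision (the function name `plan_index_request` is hypothetical; the real methods handed the result to `transport.perform_request`):

```python
def plan_index_request(index, doc_type, body, id=None, create=False):
    """Sketch of the removed create/index routing: return the
    (HTTP method, path) the deleted client would have used."""
    SKIP_IN_PATH = (None, "", b"", [], ())  # mirrors the deleted constant
    for required in (index, doc_type, body):
        if required in SKIP_IN_PATH:
            raise ValueError("Empty value passed for a required argument.")
    if create:
        # create() requires an explicit ID and targets the _create endpoint
        if id in SKIP_IN_PATH:
            raise ValueError("Empty value passed for a required argument.")
        return ("PUT", "/%s/%s/%s/_create" % (index, doc_type, id))
    if id in SKIP_IN_PATH:
        return ("POST", "/%s/%s" % (index, doc_type))  # server assigns the ID
    return ("PUT", "/%s/%s/%s" % (index, doc_type, id))
```

The `_create` path is what makes `create` fail on an existing document instead of silently overwriting it, which is the practical difference between the two methods.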
- ``_ - - :arg index: The name of the index - :arg doc_type: The type of the document (use `_all` to fetch the first - document matching the ID across all types) - :arg id: The document ID - :arg _source: True or false to return the _source field or not, or a - list of fields to return - :arg _source_exclude: A list of fields to exclude from the returned - _source field - :arg _source_include: A list of fields to extract and return from the - _source field - :arg _source_excludes: A list of fields to exclude from the returned - _source field - :arg _source_includes: A list of fields to extract and return from the - _source field - :arg parent: The ID of the parent document - :arg preference: Specify the node or shard the operation should be - performed on (default: random) - :arg realtime: Specify whether to perform the operation in realtime or - search mode - :arg refresh: Refresh the shard containing the document before - performing the operation - :arg routing: Specific routing value - :arg stored_fields: A comma-separated list of stored fields to return in - the response - :arg version: Explicit version number for concurrency control - :arg version_type: Specific version type, valid choices are: 'internal', - 'external', 'external_gte', 'force' - """ - for param in (index, doc_type, id): - if param in SKIP_IN_PATH: - raise ValueError("Empty value passed for a required argument.") - return self.transport.perform_request( - "HEAD", _make_path(index, doc_type, id), params=params - ) - - @query_params( - "_source", - "_source_exclude", - "_source_include", - "_source_excludes", - "_source_includes", - "parent", - "preference", - "realtime", - "refresh", - "routing", - "version", - "version_type", - ) - def exists_source(self, index, doc_type, id, params=None): - """ - ``_ - - :arg index: The name of the index - :arg doc_type: The type of the document; use `_all` to fetch the first - document matching the ID across all types - :arg id: The document ID - :arg 
_source: True or false to return the _source field or not, or a - list of fields to return - :arg _source_exclude: A list of fields to exclude from the returned - _source field - :arg _source_include: A list of fields to extract and return from the - _source field - :arg _source_excludes: A list of fields to exclude from the returned - _source field - :arg _source_includes: A list of fields to extract and return from the - _source field - :arg parent: The ID of the parent document - :arg preference: Specify the node or shard the operation should be - performed on (default: random) - :arg realtime: Specify whether to perform the operation in realtime or - search mode - :arg refresh: Refresh the shard containing the document before - performing the operation - :arg routing: Specific routing value - :arg version: Explicit version number for concurrency control - :arg version_type: Specific version type, valid choices are: 'internal', - 'external', 'external_gte', 'force' - """ - for param in (index, doc_type, id): - if param in SKIP_IN_PATH: - raise ValueError("Empty value passed for a required argument.") - return self.transport.perform_request( - "HEAD", _make_path(index, doc_type, id, "_source"), params=params - ) - - @query_params( - "_source", - "_source_exclude", - "_source_include", - "_source_excludes", - "_source_includes", - "parent", - "preference", - "realtime", - "refresh", - "routing", - "stored_fields", - "version", - "version_type", - ) - def get(self, index, doc_type, id, params=None): - """ - Get a typed JSON document from the index based on its id. 
- ``_ - - :arg index: The name of the index - :arg doc_type: The type of the document (use `_all` to fetch the first - document matching the ID across all types) - :arg id: The document ID - :arg _source: True or false to return the _source field or not, or a - list of fields to return - :arg _source_exclude: A list of fields to exclude from the returned - _source field - :arg _source_include: A list of fields to extract and return from the - _source field - :arg _source_excludes: A list of fields to exclude from the returned - _source field - :arg _source_includes: A list of fields to extract and return from the - _source field - :arg parent: The ID of the parent document - :arg preference: Specify the node or shard the operation should be - performed on (default: random) - :arg realtime: Specify whether to perform the operation in realtime or - search mode - :arg refresh: Refresh the shard containing the document before - performing the operation - :arg routing: Specific routing value - :arg stored_fields: A comma-separated list of stored fields to return in - the response - :arg version: Explicit version number for concurrency control - :arg version_type: Specific version type, valid choices are: 'internal', - 'external', 'external_gte', 'force' - """ - for param in (index, doc_type, id): - if param in SKIP_IN_PATH: - raise ValueError("Empty value passed for a required argument.") - return self.transport.perform_request( - "GET", _make_path(index, doc_type, id), params=params - ) - - @query_params( - "_source", - "_source_exclude", - "_source_include", - "_source_excludes", - "_source_includes", - "parent", - "preference", - "realtime", - "refresh", - "routing", - "version", - "version_type", - ) - def get_source(self, index, doc_type, id, params=None): - """ - Get the source of a document by it's index, type and id. 
- ``_ - - :arg index: The name of the index - :arg doc_type: The type of the document; use `_all` to fetch the first - document matching the ID across all types - :arg id: The document ID - :arg _source: True or false to return the _source field or not, or a - list of fields to return - :arg _source_exclude: A list of fields to exclude from the returned - _source field - :arg _source_include: A list of fields to extract and return from the - _source field - :arg _source_excludes: A list of fields to exclude from the returned - _source field - :arg _source_includes: A list of fields to extract and return from the - _source field - :arg parent: The ID of the parent document - :arg preference: Specify the node or shard the operation should be - performed on (default: random) - :arg realtime: Specify whether to perform the operation in realtime or - search mode - :arg refresh: Refresh the shard containing the document before - performing the operation - :arg routing: Specific routing value - :arg version: Explicit version number for concurrency control - :arg version_type: Specific version type, valid choices are: 'internal', - 'external', 'external_gte', 'force' - """ - for param in (index, doc_type, id): - if param in SKIP_IN_PATH: - raise ValueError("Empty value passed for a required argument.") - return self.transport.perform_request( - "GET", _make_path(index, doc_type, id, "_source"), params=params - ) - - @query_params( - "_source", - "_source_exclude", - "_source_include", - "_source_excludes", - "_source_includes", - "preference", - "realtime", - "refresh", - "routing", - "stored_fields", - ) - def mget(self, body, index=None, doc_type=None, params=None): - """ - Get multiple documents based on an index, type (optional) and ids. - ``_ - - :arg body: Document identifiers; can be either `docs` (containing full - document information) or `ids` (when index and type is provided in - the URL. 
- :arg index: The name of the index - :arg doc_type: The type of the document - :arg _source: True or false to return the _source field or not, or a - list of fields to return - :arg _source_exclude: A list of fields to exclude from the returned - _source field - :arg _source_include: A list of fields to extract and return from the - _source field - :arg _source_excludes: A list of fields to exclude from the returned - _source field - :arg _source_includes: A list of fields to extract and return from the - _source field - :arg preference: Specify the node or shard the operation should be - performed on (default: random) - :arg realtime: Specify whether to perform the operation in realtime or - search mode - :arg refresh: Refresh the shard containing the document before - performing the operation - :arg routing: Specific routing value - :arg stored_fields: A comma-separated list of stored fields to return in - the response - """ - if body in SKIP_IN_PATH: - raise ValueError("Empty value passed for a required argument 'body'.") - return self.transport.perform_request( - "GET", _make_path(index, doc_type, "_mget"), params=params, body=body - ) - - @query_params( - "_source", - "_source_exclude", - "_source_include", - "_source_excludes", - "_source_includes", - "fields", - "if_primary_term", - "if_seq_no", - "lang", - "parent", - "refresh", - "retry_on_conflict", - "routing", - "timeout", - "timestamp", - "ttl", - "version", - "version_type", - "wait_for_active_shards", - ) - def update(self, index, doc_type, id, body=None, params=None): - """ - Update a document based on a script or partial data provided. 
- ``_ - - :arg index: The name of the index - :arg doc_type: The type of the document - :arg id: Document ID - :arg body: The request definition using either `script` or partial `doc` - :arg _source: True or false to return the _source field or not, or a - list of fields to return - :arg _source_exclude: A list of fields to exclude from the returned - _source field - :arg _source_include: A list of fields to extract and return from the - _source field - :arg _source_excludes: A list of fields to exclude from the returned - _source field - :arg _source_includes: A list of fields to extract and return from the - _source field - :arg fields: A comma-separated list of fields to return in the response - :arg if_primary_term: only perform the update operation if the last - operation that has changed the document has the specified primary - term - :arg if_seq_no: only perform the update operation if the last operation - that has changed the document has the specified sequence number - :arg lang: The script language (default: painless) - :arg parent: ID of the parent document. 
Is is only used for routing and - when for the upsert request - :arg refresh: If `true` then refresh the effected shards to make this - operation visible to search, if `wait_for` then wait for a refresh - to make this operation visible to search, if `false` (the default) - then do nothing with refreshes., valid choices are: 'true', 'false', - 'wait_for' - :arg retry_on_conflict: Specify how many times should the operation be - retried when a conflict occurs (default: 0) - :arg routing: Specific routing value - :arg timeout: Explicit operation timeout - :arg timestamp: Explicit timestamp for the document - :arg ttl: Expiration time for the document - :arg version: Explicit version number for concurrency control - :arg version_type: Specific version type, valid choices are: 'internal', - 'force' - :arg wait_for_active_shards: Sets the number of shard copies that must - be active before proceeding with the update operation. Defaults to - 1, meaning the primary shard only. Set to `all` for all shard - copies, otherwise set to any non-negative value less than or equal - to the total number of copies for the shard (number of replicas + 1) - """ - for param in (index, doc_type, id): - if param in SKIP_IN_PATH: - raise ValueError("Empty value passed for a required argument.") - return self.transport.perform_request( - "POST", _make_path(index, doc_type, id, "_update"), params=params, body=body - ) - - @query_params( - "_source", - "_source_exclude", - "_source_include", - "_source_excludes", - "_source_includes", - "allow_no_indices", - "allow_partial_search_results", - "analyze_wildcard", - "analyzer", - "batched_reduce_size", - "default_operator", - "df", - "docvalue_fields", - "expand_wildcards", - "explain", - "from_", - "ignore_throttled", - "ignore_unavailable", - "lenient", - "max_concurrent_shard_requests", - "pre_filter_shard_size", - "preference", - "q", - "request_cache", - "rest_total_hits_as_int", - "routing", - "scroll", - "search_type", - 
"seq_no_primary_term", - "size", - "sort", - "stats", - "stored_fields", - "suggest_field", - "suggest_mode", - "suggest_size", - "suggest_text", - "terminate_after", - "timeout", - "track_scores", - "track_total_hits", - "typed_keys", - "version", - ) - def search(self, index=None, doc_type=None, body=None, params=None): - """ - Execute a search query and get back search hits that match the query. - ``_ - - :arg index: A comma-separated list of index names to search; use `_all` - or empty string to perform the operation on all indices - :arg doc_type: A comma-separated list of document types to search; leave - empty to perform the operation on all types - :arg body: The search definition using the Query DSL - :arg _source: True or false to return the _source field or not, or a - list of fields to return - :arg _source_exclude: A list of fields to exclude from the returned - _source field - :arg _source_include: A list of fields to extract and return from the - _source field - :arg _source_excludes: A list of fields to exclude from the returned - _source field - :arg _source_includes: A list of fields to extract and return from the - _source field - :arg allow_no_indices: Whether to ignore if a wildcard indices - expression resolves into no concrete indices. (This includes `_all` - string or when no indices have been specified) - :arg allow_partial_search_results: Set to false to return an overall - failure if the request would produce partial results. Defaults to - True, which will allow partial results in the case of timeouts or - partial failures - :arg analyze_wildcard: Specify whether wildcard and prefix queries - should be analyzed (default: false) - :arg analyzer: The analyzer to use for the query string - :arg batched_reduce_size: The number of shard results that should be - reduced at once on the coordinating node. 
This value should be used - as a protection mechanism to reduce the memory overhead per search - request if the potential number of shards in the request can be - large., default 512 - :arg default_operator: The default operator for query string query (AND - or OR), default 'OR', valid choices are: 'AND', 'OR' - :arg df: The field to use as default where no field prefix is given in - the query string - :arg docvalue_fields: A comma-separated list of fields to return as the - docvalue representation of a field for each hit - :arg expand_wildcards: Whether to expand wildcard expression to concrete - indices that are open, closed or both., default 'open', valid - choices are: 'open', 'closed', 'none', 'all' - :arg explain: Specify whether to return detailed information about score - computation as part of a hit - :arg from\\_: Starting offset (default: 0) - :arg ignore_throttled: Whether specified concrete, expanded or aliased - indices should be ignored when throttled - :arg ignore_unavailable: Whether specified concrete indices should be - ignored when unavailable (missing or closed) - :arg lenient: Specify whether format-based query failures (such as - providing text to a numeric field) should be ignored - :arg max_concurrent_shard_requests: The number of concurrent shard - requests this search executes concurrently. This value should be - used to limit the impact of the search on the cluster in order to - limit the number of concurrent shard requests, default 'The default - grows with the number of nodes in the cluster but is at most 256.' - :arg pre_filter_shard_size: A threshold that enforces a pre-filter - roundtrip to prefilter search shards based on query rewriting if - the number of shards the search request expands to exceeds the - threshold. This filter roundtrip can limit the number of shards - significantly if for instance a shard can not match any documents - based on it's rewrite method ie. 
if date filters are mandatory to - match but the shard bounds and the query are disjoint., default 128 - :arg preference: Specify the node or shard the operation should be - performed on (default: random) - :arg q: Query in the Lucene query string syntax - :arg request_cache: Specify if request cache should be used for this - request or not, defaults to index level setting - :arg rest_total_hits_as_int: This parameter is ignored in this version. - It is used in the next major version to control whether the rest - response should render the total.hits as an object or a number, - default False - :arg routing: A comma-separated list of specific routing values - :arg scroll: Specify how long a consistent view of the index should be - maintained for scrolled search - :arg search_type: Search operation type, valid choices are: - 'query_then_fetch', 'dfs_query_then_fetch' - :arg seq_no_primary_term: Specify whether to return sequence number and - primary term of the last modification of each hit - :arg size: Number of hits to return (default: 10) - :arg sort: A comma-separated list of : pairs - :arg stats: Specific 'tag' of the request for logging and statistical - purposes - :arg stored_fields: A comma-separated list of stored fields to return as - part of a hit - :arg suggest_field: Specify which field to use for suggestions - :arg suggest_mode: Specify suggest mode, default 'missing', valid - choices are: 'missing', 'popular', 'always' - :arg suggest_size: How many suggestions to return in response - :arg suggest_text: The source text for which the suggestions should be - returned - :arg terminate_after: The maximum number of documents to collect for - each shard, upon reaching which the query execution will terminate - early. 
- :arg timeout: Explicit operation timeout - :arg track_scores: Whether to calculate and return scores even if they - are not used for sorting - :arg track_total_hits: Indicate if the number of documents that match - the query should be tracked - :arg typed_keys: Specify whether aggregation and suggester names should - be prefixed by their respective types in the response - :arg version: Specify whether to return document version as part of a - hit - """ - # from is a reserved word so it cannot be used, use from_ instead - if "from_" in params: - params["from"] = params.pop("from_") - - if doc_type and not index: - index = "_all" - return self.transport.perform_request( - "GET", _make_path(index, doc_type, "_search"), params=params, body=body - ) - - @query_params( - "_source", - "_source_exclude", - "_source_include", - "_source_excludes", - "_source_includes", - "allow_no_indices", - "analyze_wildcard", - "analyzer", - "conflicts", - "default_operator", - "df", - "expand_wildcards", - "from_", - "ignore_unavailable", - "lenient", - "pipeline", - "preference", - "q", - "refresh", - "request_cache", - "requests_per_second", - "routing", - "scroll", - "scroll_size", - "search_timeout", - "search_type", - "size", - "slices", - "sort", - "stats", - "terminate_after", - "timeout", - "version", - "version_type", - "wait_for_active_shards", - "wait_for_completion", - ) - def update_by_query(self, index, doc_type=None, body=None, params=None): - """ - Perform an update on all documents matching a query. 
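The `search` implementation above notes that `from` is a reserved word in Python, so the client accepts `from_` and renames it before the request is sent. A minimal standalone sketch of that translation (the function name is illustrative, not part of the client):

```python
# Sketch of the reserved-word handling in `search` above: Python forbids
# "from" as a keyword argument, so callers pass "from_" and the client
# renames it before building the query string.
def normalize_search_params(params):
    params = dict(params)  # avoid mutating the caller's dict
    if "from_" in params:
        params["from"] = params.pop("from_")
    return params
```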
- ``_ - - :arg index: A comma-separated list of index names to search; use `_all` - or empty string to perform the operation on all indices - :arg doc_type: A comma-separated list of document types to search; leave - empty to perform the operation on all types - :arg body: The search definition using the Query DSL - :arg _source: True or false to return the _source field or not, or a - list of fields to return - :arg _source_exclude: A list of fields to exclude from the returned - _source field - :arg _source_include: A list of fields to extract and return from the - _source field - :arg _source_excludes: A list of fields to exclude from the returned - _source field - :arg _source_includes: A list of fields to extract and return from the - _source field - :arg allow_no_indices: Whether to ignore if a wildcard indices - expression resolves into no concrete indices. (This includes `_all` - string or when no indices have been specified) - :arg analyze_wildcard: Specify whether wildcard and prefix queries - should be analyzed (default: false) - :arg analyzer: The analyzer to use for the query string - :arg conflicts: What to do when the update by query hits version - conflicts?, default 'abort', valid choices are: 'abort', 'proceed' - :arg default_operator: The default operator for query string query (AND - or OR), default 'OR', valid choices are: 'AND', 'OR' - :arg df: The field to use as default where no field prefix is given in - the query string - :arg expand_wildcards: Whether to expand wildcard expression to concrete - indices that are open, closed or both., default 'open', valid - choices are: 'open', 'closed', 'none', 'all' - :arg from_: Starting offset (default: 0) - :arg ignore_unavailable: Whether specified concrete indices should be - ignored when unavailable (missing or closed) - :arg lenient: Specify whether format-based query failures (such as - providing text to a numeric field) should be ignored - :arg pipeline: Ingest pipeline to set on index requests 
made by this - action. (default: none) - :arg preference: Specify the node or shard the operation should be - performed on (default: random) - :arg q: Query in the Lucene query string syntax - :arg refresh: Should the affected indices be refreshed? - :arg request_cache: Specify if request cache should be used for this - request or not, defaults to index level setting - :arg requests_per_second: The throttle to set on this request in - sub-requests per second. -1 means no throttle., default 0 - :arg routing: A comma-separated list of specific routing values - :arg scroll: Specify how long a consistent view of the index should be - maintained for scrolled search - :arg scroll_size: Size on the scroll request powering the update by - query - :arg search_timeout: Explicit timeout for each search request. Defaults - to no timeout. - :arg search_type: Search operation type, valid choices are: - 'query_then_fetch', 'dfs_query_then_fetch' - :arg size: Number of hits to return (default: 10) - :arg slices: The number of slices this task should be divided into. - Defaults to 1 meaning the task isn't sliced into subtasks., default - 1 - :arg sort: A comma-separated list of : pairs - :arg stats: Specific 'tag' of the request for logging and statistical - purposes - :arg terminate_after: The maximum number of documents to collect for - each shard, upon reaching which the query execution will terminate - early. - :arg timeout: Time each individual bulk request should wait for shards - that are unavailable., default '1m' - :arg version: Specify whether to return document version as part of a - hit - :arg version_type: Should the document increment the version number - (internal) on hit or not (reindex) - :arg wait_for_active_shards: Sets the number of shard copies that must - be active before proceeding with the update by query operation. - Defaults to 1, meaning the primary shard only.
Set to `all` for all - shard copies, otherwise set to any non-negative value less than or - equal to the total number of copies for the shard (number of - replicas + 1) - :arg wait_for_completion: Should the request block until the - update by query operation is complete., default True - """ - if index in SKIP_IN_PATH: - raise ValueError("Empty value passed for a required argument 'index'.") - return self.transport.perform_request( - "POST", - _make_path(index, doc_type, "_update_by_query"), - params=params, - body=body, - ) - - @query_params("requests_per_second") - def update_by_query_rethrottle(self, task_id, params=None): - """ - ``_ - - :arg task_id: The task id to rethrottle - :arg requests_per_second: The throttle to set on this request in - floating sub-requests per second. -1 means set no throttle. - """ - if task_id in SKIP_IN_PATH: - raise ValueError("Empty value passed for a required argument 'task_id'.") - return self.transport.perform_request( - "POST", - _make_path("_update_by_query", task_id, "_rethrottle"), - params=params, - ) - - @query_params( - "refresh", - "requests_per_second", - "slices", - "timeout", - "wait_for_active_shards", - "wait_for_completion", - ) - def reindex(self, body, params=None): - """ - Reindex all documents from one index to another. - ``_ - - :arg body: The search definition using the Query DSL and the prototype - for the index request. - :arg refresh: Should the affected indices be refreshed? - :arg requests_per_second: The throttle to set on this request in - sub-requests per second. -1 means no throttle., default 0 - :arg slices: The number of slices this task should be divided into. - Defaults to 1 meaning the task isn't sliced into subtasks., default - 1 - :arg timeout: Time each individual bulk request should wait for shards - that are unavailable., default '1m' - :arg wait_for_active_shards: Sets the number of shard copies that must - be active before proceeding with the reindex operation.
Defaults to - 1, meaning the primary shard only. Set to `all` for all shard - copies, otherwise set to any non-negative value less than or equal - to the total number of copies for the shard (number of replicas + 1) - :arg wait_for_completion: Should the request block until the - reindex is complete., default True - """ - if body in SKIP_IN_PATH: - raise ValueError("Empty value passed for a required argument 'body'.") - return self.transport.perform_request( - "POST", "/_reindex", params=params, body=body - ) - - @query_params("requests_per_second") - def reindex_rethrottle(self, task_id=None, params=None): - """ - Change the value of ``requests_per_second`` of a running ``reindex`` task. - ``_ - - :arg task_id: The task id to rethrottle - :arg requests_per_second: The throttle to set on this request in - floating sub-requests per second. -1 means set no throttle. - """ - return self.transport.perform_request( - "POST", _make_path("_reindex", task_id, "_rethrottle"), params=params - ) - - @query_params( - "_source", - "_source_exclude", - "_source_include", - "_source_excludes", - "_source_includes", - "allow_no_indices", - "analyze_wildcard", - "analyzer", - "conflicts", - "default_operator", - "df", - "expand_wildcards", - "from_", - "ignore_unavailable", - "lenient", - "preference", - "q", - "refresh", - "request_cache", - "requests_per_second", - "routing", - "scroll", - "scroll_size", - "search_timeout", - "search_type", - "size", - "slices", - "sort", - "stats", - "terminate_after", - "timeout", - "version", - "wait_for_active_shards", - "wait_for_completion", - ) - def delete_by_query(self, index, body, doc_type=None, params=None): - """ - Delete all documents matching a query.
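The methods above all build their request URL through `_make_path`, which drops empty components and percent-encodes the rest. A rough standalone approximation (not the client's exact implementation, but the same idea):

```python
from urllib.parse import quote

# Sentinel values treated as "not supplied" for path components,
# mirroring the SKIP_IN_PATH checks in the source above.
SKIP_IN_PATH = (None, "", b"", [], ())

def make_path(*parts):
    # Drop empty components and percent-encode each remaining one,
    # keeping "," and "*" literal so multi-index and wildcard
    # expressions survive in the URL.
    return "/" + "/".join(
        quote(str(p), safe=",*") for p in parts if p not in SKIP_IN_PATH
    )
```

For example, `make_path("my-index", None, "_update_by_query")` yields `/my-index/_update_by_query`, which is why an omitted `doc_type` simply disappears from the path.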
- ``_ - - :arg index: A comma-separated list of index names to search; use `_all` - or empty string to perform the operation on all indices - :arg body: The search definition using the Query DSL - :arg doc_type: A comma-separated list of document types to search; leave - empty to perform the operation on all types - :arg _source: True or false to return the _source field or not, or a - list of fields to return - :arg _source_exclude: A list of fields to exclude from the returned - _source field - :arg _source_include: A list of fields to extract and return from the - _source field - :arg _source_excludes: A list of fields to exclude from the returned - _source field - :arg _source_includes: A list of fields to extract and return from the - _source field - :arg allow_no_indices: Whether to ignore if a wildcard indices - expression resolves into no concrete indices. (This includes `_all` - string or when no indices have been specified) - :arg analyze_wildcard: Specify whether wildcard and prefix queries - should be analyzed (default: false) - :arg analyzer: The analyzer to use for the query string - :arg conflicts: What to do when the delete-by-query hits version - conflicts?, default 'abort', valid choices are: 'abort', 'proceed' - :arg default_operator: The default operator for query string query (AND - or OR), default 'OR', valid choices are: 'AND', 'OR' - :arg df: The field to use as default where no field prefix is given in - the query string - :arg expand_wildcards: Whether to expand wildcard expression to concrete - indices that are open, closed or both., default 'open', valid - choices are: 'open', 'closed', 'none', 'all' - :arg from_: Starting offset (default: 0) - :arg ignore_unavailable: Whether specified concrete indices should be - ignored when unavailable (missing or closed) - :arg lenient: Specify whether format-based query failures (such as - providing text to a numeric field) should be ignored - :arg preference: Specify the node or shard the
operation should be - performed on (default: random) - :arg q: Query in the Lucene query string syntax - :arg refresh: Should the affected indices be refreshed? - :arg request_cache: Specify if request cache should be used for this - request or not, defaults to index level setting - :arg requests_per_second: The throttle for this request in sub-requests - per second. -1 means no throttle., default 0 - :arg routing: A comma-separated list of specific routing values - :arg scroll: Specify how long a consistent view of the index should be - maintained for scrolled search - :arg scroll_size: Size on the scroll request powering the - update_by_query - :arg search_timeout: Explicit timeout for each search request. Defaults - to no timeout. - :arg search_type: Search operation type, valid choices are: - 'query_then_fetch', 'dfs_query_then_fetch' - :arg size: Number of hits to return (default: 10) - :arg slices: The number of slices this task should be divided into. - Defaults to 1 meaning the task isn't sliced into subtasks., default - 1 - :arg sort: A comma-separated list of : pairs - :arg stats: Specific 'tag' of the request for logging and statistical - purposes - :arg terminate_after: The maximum number of documents to collect for - each shard, upon reaching which the query execution will terminate - early. - :arg timeout: Time each individual bulk request should wait for shards - that are unavailable., default '1m' - :arg version: Specify whether to return document version as part of a - hit - :arg wait_for_active_shards: Sets the number of shard copies that must - be active before proceeding with the delete by query operation. - Defaults to 1, meaning the primary shard only.
Set to `all` for all - shard copies, otherwise set to any non-negative value less than or - equal to the total number of copies for the shard (number of - replicas + 1) - :arg wait_for_completion: Should the request block until the - delete-by-query is complete., default True - """ - for param in (index, body): - if param in SKIP_IN_PATH: - raise ValueError("Empty value passed for a required argument.") - return self.transport.perform_request( - "POST", - _make_path(index, doc_type, "_delete_by_query"), - params=params, - body=body, - ) - - @query_params("requests_per_second") - def delete_by_query_rethrottle(self, task_id, params=None): - """ - ``_ - - :arg task_id: The task id to rethrottle - :arg requests_per_second: The throttle to set on this request in - floating sub-requests per second. -1 means set no throttle. - """ - if task_id in SKIP_IN_PATH: - raise ValueError("Empty value passed for a required argument 'task_id'.") - return self.transport.perform_request( - "POST", - _make_path("_delete_by_query", task_id, "_rethrottle"), - params=params, - ) - - @query_params( - "allow_no_indices", - "expand_wildcards", - "ignore_unavailable", - "local", - "preference", - "routing", - ) - def search_shards(self, index=None, doc_type=None, params=None): - """ - The search shards api returns the indices and shards that a search - request would be executed against. This can give useful feedback for working - out issues or planning optimizations with routing and shard preferences. - ``_ - - :arg index: A comma-separated list of index names to search; use `_all` - or empty string to perform the operation on all indices - :arg allow_no_indices: Whether to ignore if a wildcard indices - expression resolves into no concrete indices.
(This includes `_all` - string or when no indices have been specified) - :arg expand_wildcards: Whether to expand wildcard expression to concrete - indices that are open, closed or both., default 'open', valid - choices are: 'open', 'closed', 'none', 'all' - :arg ignore_unavailable: Whether specified concrete indices should be - ignored when unavailable (missing or closed) - :arg local: Return local information, do not retrieve the state from - master node (default: false) - :arg preference: Specify the node or shard the operation should be - performed on (default: random) - :arg routing: Specific routing value - """ - return self.transport.perform_request( - "GET", _make_path(index, doc_type, "_search_shards"), params=params - ) - - @query_params( - "allow_no_indices", - "expand_wildcards", - "explain", - "ignore_throttled", - "ignore_unavailable", - "preference", - "profile", - "rest_total_hits_as_int", - "routing", - "scroll", - "search_type", - "typed_keys", - ) - def search_template(self, index=None, doc_type=None, body=None, params=None): - """ - A query that accepts a query template and a map of key/value pairs to - fill in template parameters. - ``_ - - :arg index: A comma-separated list of index names to search; use `_all` - or empty string to perform the operation on all indices - :arg doc_type: A comma-separated list of document types to search; leave - empty to perform the operation on all types - :arg allow_no_indices: Whether to ignore if a wildcard indices - expression resolves into no concrete indices. 
(This includes `_all` - string or when no indices have been specified) - :arg expand_wildcards: Whether to expand wildcard expression to concrete - indices that are open, closed or both., default 'open', valid - choices are: 'open', 'closed', 'none', 'all' - :arg explain: Specify whether to return detailed information about score - computation as part of a hit - :arg ignore_throttled: Whether specified concrete, expanded or aliased - indices should be ignored when throttled - :arg ignore_unavailable: Whether specified concrete indices should be - ignored when unavailable (missing or closed) - :arg preference: Specify the node or shard the operation should be - performed on (default: random) - :arg profile: Specify whether to profile the query execution - :arg rest_total_hits_as_int: This parameter is ignored in this version. - It is used in the next major version to control whether the rest - response should render the total.hits as an object or a number, - default False - :arg routing: A comma-separated list of specific routing values - :arg scroll: Specify how long a consistent view of the index should be - maintained for scrolled search - :arg search_type: Search operation type, valid choices are: - 'query_then_fetch', 'query_and_fetch', 'dfs_query_then_fetch', - 'dfs_query_and_fetch' - :arg typed_keys: Specify whether aggregation and suggester names should - be prefixed by their respective types in the response - """ - return self.transport.perform_request( - "GET", - _make_path(index, doc_type, "_search", "template"), - params=params, - body=body, - ) - - @query_params( - "_source", - "_source_exclude", - "_source_include", - "_source_excludes", - "_source_includes", - "analyze_wildcard", - "analyzer", - "default_operator", - "df", - "lenient", - "parent", - "preference", - "q", - "routing", - "stored_fields", - ) - def explain(self, index, doc_type, id, body=None, params=None): - """ - The explain api computes a score explanation for a query and a specific - 
document. This can give useful feedback whether a document matches or - didn't match a specific query. - ``_ - - :arg index: The name of the index - :arg doc_type: The type of the document - :arg id: The document ID - :arg body: The query definition using the Query DSL - :arg _source: True or false to return the _source field or not, or a - list of fields to return - :arg _source_exclude: A list of fields to exclude from the returned - _source field - :arg _source_include: A list of fields to extract and return from the - _source field - :arg _source_excludes: A list of fields to exclude from the returned - _source field - :arg _source_includes: A list of fields to extract and return from the - _source field - :arg analyze_wildcard: Specify whether wildcards and prefix queries in - the query string query should be analyzed (default: false) - :arg analyzer: The analyzer for the query string query - :arg default_operator: The default operator for query string query (AND - or OR), default 'OR', valid choices are: 'AND', 'OR' - :arg df: The default field for query string query (default: _all) - :arg lenient: Specify whether format-based query failures (such as - providing text to a numeric field) should be ignored - :arg parent: The ID of the parent document - :arg preference: Specify the node or shard the operation should be - performed on (default: random) - :arg q: Query in the Lucene query string syntax - :arg routing: Specific routing value - :arg stored_fields: A comma-separated list of stored fields to return in - the response - """ - for param in (index, doc_type, id): - if param in SKIP_IN_PATH: - raise ValueError("Empty value passed for a required argument.") - return self.transport.perform_request( - "GET", _make_path(index, doc_type, id, "_explain"), params=params, body=body - ) - - @query_params("scroll", "rest_total_hits_as_int") - def scroll(self, body=None, scroll_id=None, params=None): - """ - Scroll a search request created by specifying the scroll 
parameter. - ``_ - - :arg scroll_id: The scroll ID - :arg body: The scroll ID if not passed by URL or query parameter. - :arg scroll: Specify how long a consistent view of the index should be - maintained for scrolled search - :arg rest_total_hits_as_int: This parameter is used to restore the total hits as a number - in the response. This param is added version 6.x to handle mixed cluster queries where nodes - are in multiple versions (7.0 and 6.latest) - """ - if scroll_id in SKIP_IN_PATH and body in SKIP_IN_PATH: - raise ValueError("You need to supply scroll_id or body.") - elif scroll_id and not body: - body = {"scroll_id": scroll_id} - elif scroll_id: - params["scroll_id"] = scroll_id - - return self.transport.perform_request( - "GET", "/_search/scroll", params=params, body=body - ) - - @query_params() - def clear_scroll(self, scroll_id=None, body=None, params=None): - """ - Clear the scroll request created by specifying the scroll parameter to - search. - ``_ - - :arg scroll_id: A comma-separated list of scroll IDs to clear - :arg body: A comma-separated list of scroll IDs to clear if none was - specified via the scroll_id parameter - """ - if scroll_id in SKIP_IN_PATH and body in SKIP_IN_PATH: - raise ValueError("You need to supply scroll_id or body.") - elif scroll_id and not body: - body = {"scroll_id": [scroll_id]} - elif scroll_id: - params["scroll_id"] = scroll_id - - return self.transport.perform_request( - "DELETE", "/_search/scroll", params=params, body=body - ) - - @query_params( - "parent", - "refresh", - "routing", - "timeout", - "version", - "version_type", - "wait_for_active_shards", - "if_primary_term", - "if_seq_no", - ) - def delete(self, index, doc_type, id, params=None): - """ - Delete a typed JSON document from a specific index based on its id. 
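The `scroll` method above accepts the scroll ID either as an argument or inside the body, and routes it accordingly. That selection logic, extracted into a standalone sketch (the function name is illustrative):

```python
# Sketch of the scroll_id/body selection in `scroll` above: a bare
# scroll_id is wrapped into a request body; if a body is also given,
# the scroll_id is sent as a query parameter instead.
def build_scroll_request(scroll_id=None, body=None, params=None):
    params = dict(params or {})
    if scroll_id is None and body is None:
        raise ValueError("You need to supply scroll_id or body.")
    elif scroll_id and not body:
        body = {"scroll_id": scroll_id}
    elif scroll_id:
        params["scroll_id"] = scroll_id
    return body, params
```

`clear_scroll` follows the same pattern, except it wraps the ID in a list (`{"scroll_id": [scroll_id]}`) because multiple scroll IDs may be cleared at once.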
- ``_ - - :arg index: The name of the index - :arg doc_type: The type of the document - :arg id: The document ID - :arg parent: ID of parent document - :arg refresh: If `true` then refresh the affected shards to make this - operation visible to search, if `wait_for` then wait for a refresh - to make this operation visible to search, if `false` (the default) - then do nothing with refreshes., valid choices are: 'true', 'false', - 'wait_for' - :arg routing: Specific routing value - :arg timeout: Explicit operation timeout - :arg version: Explicit version number for concurrency control - :arg version_type: Specific version type, valid choices are: 'internal', - 'external', 'external_gte', 'force' - :arg wait_for_active_shards: Sets the number of shard copies that must - be active before proceeding with the delete operation. Defaults to - 1, meaning the primary shard only. Set to `all` for all shard - copies, otherwise set to any non-negative value less than or equal - to the total number of copies for the shard (number of replicas + 1) - :arg if_primary_term: only perform the delete operation if the last - operation that has changed the document has the specified primary - term - :arg if_seq_no: only perform the delete operation if the last operation - that has changed the document has the specified sequence number - """ - for param in (index, doc_type, id): - if param in SKIP_IN_PATH: - raise ValueError("Empty value passed for a required argument.") - return self.transport.perform_request( - "DELETE", _make_path(index, doc_type, id), params=params - ) - - @query_params( - "allow_no_indices", - "analyze_wildcard", - "analyzer", - "default_operator", - "df", - "expand_wildcards", - "ignore_unavailable", - "lenient", - "min_score", - "preference", - "q", - "routing", - "terminate_after", - ) - def count(self, index=None, doc_type=None, body=None, params=None): - """ - Execute a query and get the number of matches for that query.
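`delete`, like most methods here, guards its required path components against `SKIP_IN_PATH` before issuing the request, so an empty `index` or `id` fails fast instead of silently shortening the URL. The pattern in isolation (helper name is illustrative):

```python
# Values the client treats as "not supplied" for a path component.
SKIP_IN_PATH = (None, "", b"", [], ())

def require_path_args(*values):
    # Mirror the guard used by `delete` above: reject any empty value
    # that would otherwise drop a segment from the request path.
    for value in values:
        if value in SKIP_IN_PATH:
            raise ValueError("Empty value passed for a required argument.")
```

Without this check, `delete(index="logs", doc_type="_doc", id="")` would hit `/logs/_doc`, i.e. an entirely different endpoint.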
- ``_ - - :arg index: A comma-separated list of indices to restrict the results - :arg doc_type: A comma-separated list of types to restrict the results - :arg body: A query to restrict the results specified with the Query DSL - (optional) - :arg allow_no_indices: Whether to ignore if a wildcard indices - expression resolves into no concrete indices. (This includes `_all` - string or when no indices have been specified) - :arg analyze_wildcard: Specify whether wildcard and prefix queries - should be analyzed (default: false) - :arg analyzer: The analyzer to use for the query string - :arg default_operator: The default operator for query string query (AND - or OR), default 'OR', valid choices are: 'AND', 'OR' - :arg df: The field to use as default where no field prefix is given in - the query string - :arg expand_wildcards: Whether to expand wildcard expression to concrete - indices that are open, closed or both., default 'open', valid - choices are: 'open', 'closed', 'none', 'all' - :arg ignore_unavailable: Whether specified concrete indices should be - ignored when unavailable (missing or closed) - :arg lenient: Specify whether format-based query failures (such as - providing text to a numeric field) should be ignored - :arg min_score: Include only documents with a specific `_score` value in - the result - :arg preference: Specify the node or shard the operation should be - performed on (default: random) - :arg q: Query in the Lucene query string syntax - :arg routing: Specific routing value - """ - if doc_type and not index: - index = "_all" - - return self.transport.perform_request( - "GET", _make_path(index, doc_type, "_count"), params=params, body=body - ) - - @query_params( - "_source", - "_source_exclude", - "_source_excludes", - "_source_include", - "_source_includes", - "fields", - "pipeline", - "refresh", - "routing", - "timeout", - "wait_for_active_shards", - ) - def bulk(self, body, index=None, doc_type=None, params=None): - """ - Perform many 
index/delete operations in a single API call. - - See the :func:`~elasticsearch.helpers.bulk` helper function for a more - friendly API. - ``_ - - :arg body: The operation definition and data (action-data pairs), - separated by newlines - :arg index: Default index for items which don't provide one - :arg doc_type: Default document type for items which don't provide one - :arg _source: True or false to return the _source field or not, or - default list of fields to return, can be overridden on each - sub-request - :arg _source_exclude: Default list of fields to exclude from the - returned _source field, can be overridden on each sub-request - :arg _source_include: Default list of fields to extract and return from - the _source field, can be overridden on each sub-request - :arg _source_excludes: Default list of fields to exclude from the - returned _source field, can be overridden on each sub-request - :arg _source_includes: Default list of fields to extract and return from - the _source field, can be overridden on each sub-request - :arg fields: Default comma-separated list of fields to return in the - response for updates, can be overridden on each sub-request - :arg pipeline: The pipeline id to preprocess incoming documents with - :arg refresh: If `true` then refresh the affected shards to make this - operation visible to search, if `wait_for` then wait for a refresh - to make this operation visible to search, if `false` (the default) - then do nothing with refreshes., valid choices are: 'true', 'false', - 'wait_for' - :arg routing: Specific routing value - :arg timeout: Explicit operation timeout - :arg wait_for_active_shards: Sets the number of shard copies that must - be active before proceeding with the bulk operation. Defaults to 1, - meaning the primary shard only.
Set to `all` for all shard copies, - otherwise set to any non-negative value less than or equal to the - total number of copies for the shard (number of replicas + 1) - """ - if body in SKIP_IN_PATH: - raise ValueError("Empty value passed for a required argument 'body'.") - return self.transport.perform_request( - "POST", - _make_path(index, doc_type, "_bulk"), - params=params, - body=self._bulk_body(body), - headers={"content-type": "application/x-ndjson"}, - ) - - @query_params( - "max_concurrent_searches", - "max_concurrent_shard_requests", - "pre_filter_shard_size", - "rest_total_hits_as_int", - "search_type", - "typed_keys", - ) - def msearch(self, body, index=None, doc_type=None, params=None): - """ - Execute several search requests within the same API. - ``_ - - :arg body: The request definitions (metadata-search request definition - pairs), separated by newlines - :arg index: A comma-separated list of index names to use as default - :arg doc_type: A comma-separated list of document types to use as - default - :arg max_concurrent_searches: Controls the maximum number of concurrent - searches the multi search api will execute - :arg pre_filter_shard_size: A threshold that enforces a pre-filter - roundtrip to prefilter search shards based on query rewriting if - the number of shards the search request expands to exceeds the - threshold. This filter roundtrip can limit the number of shards - significantly if for instance a shard cannot match any documents - based on its rewrite method, i.e. if date filters are mandatory to - match but the shard bounds and the query are disjoint., default 128 - :arg rest_total_hits_as_int: This parameter is ignored in this version.
- It is used in the next major version to control whether the rest - response should render the total.hits as an object or a number, - default False - :arg search_type: Search operation type, valid choices are: - 'query_then_fetch', 'query_and_fetch', 'dfs_query_then_fetch', - 'dfs_query_and_fetch' - :arg typed_keys: Specify whether aggregation and suggester names should - be prefixed by their respective types in the response - """ - if body in SKIP_IN_PATH: - raise ValueError("Empty value passed for a required argument 'body'.") - return self.transport.perform_request( - "GET", - _make_path(index, doc_type, "_msearch"), - params=params, - body=self._bulk_body(body), - headers={"content-type": "application/x-ndjson"}, - ) - - @query_params( - "field_statistics", - "fields", - "offsets", - "parent", - "payloads", - "positions", - "preference", - "realtime", - "routing", - "term_statistics", - "version", - "version_type", - ) - def termvectors(self, index, doc_type, id=None, body=None, params=None): - """ - Returns information and statistics on terms in the fields of a - particular document. The document could be stored in the index or - artificially provided by the user (Added in 1.4). Note that for - documents stored in the index, this is a near realtime API as the term - vectors are not available until the next refresh. - ``_ - - :arg index: The index in which the document resides. - :arg doc_type: The type of the document. - :arg id: The id of the document, when not specified a doc param should - be supplied. - :arg body: Define parameters and or supply a document to get termvectors - for. See documentation. - :arg field_statistics: Specifies if document count, sum of document - frequencies and sum of total term frequencies should be returned., - default True - :arg fields: A comma-separated list of fields to return. - :arg offsets: Specifies if term offsets should be returned., default - True - :arg parent: Parent id of documents. 
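Both `bulk` and `msearch` above serialize their payload through `self._bulk_body` into newline-delimited JSON before sending it with the `application/x-ndjson` content type. An approximate standalone version of that serialization (not the client's exact code):

```python
import json

def bulk_body(actions):
    # Serialize a list of actions/documents to newline-delimited JSON.
    # Both the bulk and msearch APIs require a trailing newline, so one
    # is appended; pre-serialized strings are passed through with the
    # trailing newline ensured.
    if isinstance(actions, str):
        return actions if actions.endswith("\n") else actions + "\n"
    return "\n".join(json.dumps(a) for a in actions) + "\n"
```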
- :arg payloads: Specifies if term payloads should be returned., default - True - :arg positions: Specifies if term positions should be returned., default - True - :arg preference: Specify the node or shard the operation should be - performed on (default: random). - :arg realtime: Specifies if request is real-time as opposed to near- - real-time (default: true). - :arg routing: Specific routing value. - :arg term_statistics: Specifies if total term frequency and document - frequency should be returned., default False - :arg version: Explicit version number for concurrency control - :arg version_type: Specific version type, valid choices are: 'internal', - 'external', 'external_gte', 'force' - """ - for param in (index, doc_type): - if param in SKIP_IN_PATH: - raise ValueError("Empty value passed for a required argument.") - return self.transport.perform_request( - "GET", - _make_path(index, doc_type, id, "_termvectors"), - params=params, - body=body, - ) - - @query_params( - "field_statistics", - "fields", - "ids", - "offsets", - "parent", - "payloads", - "positions", - "preference", - "realtime", - "routing", - "term_statistics", - "version", - "version_type", - ) - def mtermvectors(self, index=None, doc_type=None, body=None, params=None): - """ - Multi termvectors API allows to get multiple termvectors based on an - index, type and id. - ``_ - - :arg index: The index in which the document resides. - :arg doc_type: The type of the document. - :arg body: Define ids, documents, parameters or a list of parameters per - document here. You must at least provide a list of document ids. See - documentation. - :arg field_statistics: Specifies if document count, sum of document - frequencies and sum of total term frequencies should be returned. - Applies to all returned documents unless otherwise specified in body - "params" or "docs"., default True - :arg fields: A comma-separated list of fields to return. 
Applies to all - returned documents unless otherwise specified in body "params" or - "docs". - :arg ids: A comma-separated list of documents ids. You must define ids - as parameter or set "ids" or "docs" in the request body - :arg offsets: Specifies if term offsets should be returned. Applies to - all returned documents unless otherwise specified in body "params" - or "docs"., default True - :arg parent: Parent id of documents. Applies to all returned documents - unless otherwise specified in body "params" or "docs". - :arg payloads: Specifies if term payloads should be returned. Applies to - all returned documents unless otherwise specified in body "params" - or "docs"., default True - :arg positions: Specifies if term positions should be returned. Applies - to all returned documents unless otherwise specified in body - "params" or "docs"., default True - :arg preference: Specify the node or shard the operation should be - performed on (default: random) .Applies to all returned documents - unless otherwise specified in body "params" or "docs". - :arg realtime: Specifies if requests are real-time as opposed to near- - real-time (default: true). - :arg routing: Specific routing value. Applies to all returned documents - unless otherwise specified in body "params" or "docs". - :arg term_statistics: Specifies if total term frequency and document - frequency should be returned. Applies to all returned documents - unless otherwise specified in body "params" or "docs"., default - False - :arg version: Explicit version number for concurrency control - :arg version_type: Specific version type, valid choices are: 'internal', - 'external', 'external_gte', 'force' - """ - return self.transport.perform_request( - "GET", - _make_path(index, doc_type, "_mtermvectors"), - params=params, - body=body, - ) - - @query_params("master_timeout", "timeout") - def put_script(self, id, body, context=None, params=None): - """ - Create a script in given language with specified ID. 
- ``_ - - :arg id: Script ID - :arg body: The document - :arg master_timeout: Specify timeout for connection to master - :arg timeout: Explicit operation timeout - """ - for param in (id, body): - if param in SKIP_IN_PATH: - raise ValueError("Empty value passed for a required argument.") - return self.transport.perform_request( - "PUT", _make_path("_scripts", id, context), params=params, body=body - ) - - @query_params("allow_no_indices", "expand_wildcards", "ignore_unavailable") - def rank_eval(self, body, index=None, params=None): - """ - ``_ - - :arg body: The ranking evaluation search definition, including search - requests, document ratings and ranking metric definition. - :arg index: A comma-separated list of index names to search; use `_all` - or empty string to perform the operation on all indices - :arg allow_no_indices: Whether to ignore if a wildcard indices - expression resolves into no concrete indices. (This includes `_all` - string or when no indices have been specified) - :arg expand_wildcards: Whether to expand wildcard expression to concrete - indices that are open, closed or both., default 'open', valid - choices are: 'open', 'closed', 'none', 'all' - :arg ignore_unavailable: Whether specified concrete indices should be - ignored when unavailable (missing or closed) - """ - if body in SKIP_IN_PATH: - raise ValueError("Empty value passed for a required argument 'body'.") - return self.transport.perform_request( - "GET", _make_path(index, "_rank_eval"), params=params, body=body - ) - - @query_params("master_timeout") - def get_script(self, id, params=None): - """ - Retrieve a script from the API. 
- ``_ - - :arg id: Script ID - :arg master_timeout: Specify timeout for connection to master - """ - if id in SKIP_IN_PATH: - raise ValueError("Empty value passed for a required argument 'id'.") - return self.transport.perform_request( - "GET", _make_path("_scripts", id), params=params - ) - - @query_params("master_timeout", "timeout") - def delete_script(self, id, params=None): - """ - Remove a stored script from elasticsearch. - ``_ - - :arg id: Script ID - :arg master_timeout: Specify timeout for connection to master - :arg timeout: Explicit operation timeout - """ - if id in SKIP_IN_PATH: - raise ValueError("Empty value passed for a required argument 'id'.") - return self.transport.perform_request( - "DELETE", _make_path("_scripts", id), params=params - ) - - @query_params() - def render_search_template(self, id=None, body=None, params=None): - """ - ``_ - - :arg id: The id of the stored search template - :arg body: The search definition template and its params - """ - return self.transport.perform_request( - "GET", _make_path("_render", "template", id), params=params, body=body - ) - - @query_params() - def scripts_painless_execute(self, body=None, params=None): - """ - ``_ - - :arg body: The script to execute - """ - return self.transport.perform_request( - "GET", "/_scripts/painless/_execute", params=params, body=body - ) - - @query_params( - "max_concurrent_searches", "rest_total_hits_as_int", "search_type", "typed_keys" - ) - def msearch_template(self, body, index=None, doc_type=None, params=None): - """ - The /_search/template endpoint allows to use the mustache language to - pre render search requests, before they are executed and fill existing - templates with template parameters. 
- ``_ - - :arg body: The request definitions (metadata-search request definition - pairs), separated by newlines - :arg index: A comma-separated list of index names to use as default - :arg max_concurrent_searches: Controls the maximum number of concurrent - searches the multi search api will execute - :arg rest_total_hits_as_int: This parameter is ignored in this version. - It is used in the next major version to control whether the rest - response should render the total.hits as an object or a number, - default False - :arg search_type: Search operation type, valid choices are: - 'query_then_fetch', 'query_and_fetch', 'dfs_query_then_fetch', - 'dfs_query_and_fetch' - :arg typed_keys: Specify whether aggregation and suggester names should - be prefixed by their respective types in the response - """ - if body in SKIP_IN_PATH: - raise ValueError("Empty value passed for a required argument 'body'.") - return self.transport.perform_request( - "GET", - _make_path(index, doc_type, "_msearch", "template"), - params=params, - body=self._bulk_body(body), - headers={"content-type": "application/x-ndjson"}, - ) - - @query_params( - "allow_no_indices", "expand_wildcards", "fields", "ignore_unavailable" - ) - def field_caps(self, index=None, body=None, params=None): - """ - The field capabilities API allows to retrieve the capabilities of fields among multiple indices. - ``_ - - :arg index: A comma-separated list of index names; use `_all` or empty - string to perform the operation on all indices - :arg body: Field json objects containing an array of field names - :arg allow_no_indices: Whether to ignore if a wildcard indices - expression resolves into no concrete indices. 
(This includes `_all` - string or when no indices have been specified) - :arg expand_wildcards: Whether to expand wildcard expression to concrete - indices that are open, closed or both., default 'open', valid - choices are: 'open', 'closed', 'none', 'all' - :arg fields: A comma-separated list of field names - :arg ignore_unavailable: Whether specified concrete indices should be - ignored when unavailable (missing or closed) - """ - return self.transport.perform_request( - "GET", _make_path(index, "_field_caps"), params=params, body=body - ) diff --git a/infrastructure/sandbox/Data/lambda/elasticsearch/client/__pycache__/__init__.cpython-310.pyc b/infrastructure/sandbox/Data/lambda/elasticsearch/client/__pycache__/__init__.cpython-310.pyc deleted file mode 100644 index e114ad011..000000000 Binary files a/infrastructure/sandbox/Data/lambda/elasticsearch/client/__pycache__/__init__.cpython-310.pyc and /dev/null differ diff --git a/infrastructure/sandbox/Data/lambda/elasticsearch/client/__pycache__/cat.cpython-310.pyc b/infrastructure/sandbox/Data/lambda/elasticsearch/client/__pycache__/cat.cpython-310.pyc deleted file mode 100644 index c3ca14c46..000000000 Binary files a/infrastructure/sandbox/Data/lambda/elasticsearch/client/__pycache__/cat.cpython-310.pyc and /dev/null differ diff --git a/infrastructure/sandbox/Data/lambda/elasticsearch/client/__pycache__/cluster.cpython-310.pyc b/infrastructure/sandbox/Data/lambda/elasticsearch/client/__pycache__/cluster.cpython-310.pyc deleted file mode 100644 index b27b8d6c3..000000000 Binary files a/infrastructure/sandbox/Data/lambda/elasticsearch/client/__pycache__/cluster.cpython-310.pyc and /dev/null differ diff --git a/infrastructure/sandbox/Data/lambda/elasticsearch/client/__pycache__/indices.cpython-310.pyc b/infrastructure/sandbox/Data/lambda/elasticsearch/client/__pycache__/indices.cpython-310.pyc deleted file mode 100644 index 16bac92b6..000000000 Binary files 
a/infrastructure/sandbox/Data/lambda/elasticsearch/client/__pycache__/indices.cpython-310.pyc and /dev/null differ diff --git a/infrastructure/sandbox/Data/lambda/elasticsearch/client/__pycache__/ingest.cpython-310.pyc b/infrastructure/sandbox/Data/lambda/elasticsearch/client/__pycache__/ingest.cpython-310.pyc deleted file mode 100644 index a4b75f4be..000000000 Binary files a/infrastructure/sandbox/Data/lambda/elasticsearch/client/__pycache__/ingest.cpython-310.pyc and /dev/null differ diff --git a/infrastructure/sandbox/Data/lambda/elasticsearch/client/__pycache__/nodes.cpython-310.pyc b/infrastructure/sandbox/Data/lambda/elasticsearch/client/__pycache__/nodes.cpython-310.pyc deleted file mode 100644 index 7367e3e58..000000000 Binary files a/infrastructure/sandbox/Data/lambda/elasticsearch/client/__pycache__/nodes.cpython-310.pyc and /dev/null differ diff --git a/infrastructure/sandbox/Data/lambda/elasticsearch/client/__pycache__/remote.cpython-310.pyc b/infrastructure/sandbox/Data/lambda/elasticsearch/client/__pycache__/remote.cpython-310.pyc deleted file mode 100644 index 1f134e285..000000000 Binary files a/infrastructure/sandbox/Data/lambda/elasticsearch/client/__pycache__/remote.cpython-310.pyc and /dev/null differ diff --git a/infrastructure/sandbox/Data/lambda/elasticsearch/client/__pycache__/snapshot.cpython-310.pyc b/infrastructure/sandbox/Data/lambda/elasticsearch/client/__pycache__/snapshot.cpython-310.pyc deleted file mode 100644 index 2202684cd..000000000 Binary files a/infrastructure/sandbox/Data/lambda/elasticsearch/client/__pycache__/snapshot.cpython-310.pyc and /dev/null differ diff --git a/infrastructure/sandbox/Data/lambda/elasticsearch/client/__pycache__/tasks.cpython-310.pyc b/infrastructure/sandbox/Data/lambda/elasticsearch/client/__pycache__/tasks.cpython-310.pyc deleted file mode 100644 index 52c78b82f..000000000 Binary files a/infrastructure/sandbox/Data/lambda/elasticsearch/client/__pycache__/tasks.cpython-310.pyc and /dev/null differ diff 
--git a/infrastructure/sandbox/Data/lambda/elasticsearch/client/__pycache__/utils.cpython-310.pyc b/infrastructure/sandbox/Data/lambda/elasticsearch/client/__pycache__/utils.cpython-310.pyc deleted file mode 100644 index 6aebad802..000000000 Binary files a/infrastructure/sandbox/Data/lambda/elasticsearch/client/__pycache__/utils.cpython-310.pyc and /dev/null differ diff --git a/infrastructure/sandbox/Data/lambda/elasticsearch/client/cat.py b/infrastructure/sandbox/Data/lambda/elasticsearch/client/cat.py deleted file mode 100644 index 2f5bdb55c..000000000 --- a/infrastructure/sandbox/Data/lambda/elasticsearch/client/cat.py +++ /dev/null @@ -1,470 +0,0 @@ -# Licensed to Elasticsearch B.V. under one or more contributor -# license agreements. See the NOTICE file distributed with -# this work for additional information regarding copyright -# ownership. Elasticsearch B.V. licenses this file to you under -# the Apache License, Version 2.0 (the "License"); you may -# not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, -# software distributed under the License is distributed on an -# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY -# KIND, either express or implied. See the License for the -# specific language governing permissions and limitations -# under the License. - -from .utils import NamespacedClient, query_params, _make_path - - -class CatClient(NamespacedClient): - @query_params("format", "h", "help", "local", "master_timeout", "s", "v") - def aliases(self, name=None, params=None): - """ - ``_ - - :arg name: A comma-separated list of alias names to return - :arg format: a short version of the Accept header, e.g. 
json, yaml - :arg h: Comma-separated list of column names to display - :arg help: Return help information, default False - :arg local: Return local information, do not retrieve the state from - master node (default: false) - :arg master_timeout: Explicit operation timeout for connection to master - node - :arg s: Comma-separated list of column names or column aliases to sort - by - :arg v: Verbose mode. Display column headers, default False - """ - return self.transport.perform_request( - "GET", _make_path("_cat", "aliases", name), params=params - ) - - @query_params("bytes", "format", "h", "help", "local", "master_timeout", "s", "v") - def allocation(self, node_id=None, params=None): - """ - Allocation provides a snapshot of how shards have located around the - cluster and the state of disk usage. - - ``_ - - :arg node_id: A comma-separated list of node IDs or names to limit the - returned information - :arg bytes: The unit in which to display byte values, valid choices are: - 'b', 'k', 'kb', 'm', 'mb', 'g', 'gb', 't', 'tb', 'p', 'pb' - :arg format: a short version of the Accept header, e.g. json, yaml - :arg h: Comma-separated list of column names to display - :arg help: Return help information, default False - :arg local: Return local information, do not retrieve the state from - master node (default: false) - :arg master_timeout: Explicit operation timeout for connection to master - node - :arg s: Comma-separated list of column names or column aliases to sort - by - :arg v: Verbose mode. Display column headers, default False - """ - return self.transport.perform_request( - "GET", _make_path("_cat", "allocation", node_id), params=params - ) - - @query_params("format", "h", "help", "local", "master_timeout", "s", "v") - def count(self, index=None, params=None): - """ - Count provides quick access to the document count of the entire cluster, - or individual indices. 
- - ``_ - - :arg index: A comma-separated list of index names to limit the returned - information - :arg format: a short version of the Accept header, e.g. json, yaml - :arg h: Comma-separated list of column names to display - :arg help: Return help information, default False - :arg local: Return local information, do not retrieve the state from - master node (default: false) - :arg master_timeout: Explicit operation timeout for connection to master - node - :arg s: Comma-separated list of column names or column aliases to sort - by - :arg v: Verbose mode. Display column headers, default False - """ - return self.transport.perform_request( - "GET", _make_path("_cat", "count", index), params=params - ) - - @query_params("bytes", "format", "h", "help", "local", "master_timeout", "s", "v") - def fielddata(self, fields=None, params=None): - """ - ``_ - - :arg fields: A comma-separated list of fields to return the fielddata - size - :arg bytes: The unit in which to display byte values, valid choices are: - 'b', 'k', 'kb', 'm', 'mb', 'g', 'gb', 't', 'tb', 'p', 'pb' - :arg format: a short version of the Accept header, e.g. json, yaml - :arg h: Comma-separated list of column names to display - :arg help: Return help information, default False - :arg local: Return local information, do not retrieve the state from - master node (default: false) - :arg master_timeout: Explicit operation timeout for connection to master - node - :arg s: Comma-separated list of column names or column aliases to sort - by - :arg v: Verbose mode. 
Display column headers, default False - """ - return self.transport.perform_request( - "GET", _make_path("_cat", "fielddata", fields), params=params - ) - - @query_params("format", "h", "help", "local", "master_timeout", "s", "ts", "v") - def health(self, params=None): - """ - health is a terse, one-line representation of the same information from - :meth:`~elasticsearch.client.cluster.ClusterClient.health` API - - ``_ - - :arg format: a short version of the Accept header, e.g. json, yaml - :arg h: Comma-separated list of column names to display - :arg help: Return help information, default False - :arg local: Return local information, do not retrieve the state from - master node (default: false) - :arg master_timeout: Explicit operation timeout for connection to master - node - :arg s: Comma-separated list of column names or column aliases to sort - by - :arg ts: Set to false to disable timestamping, default True - :arg v: Verbose mode. Display column headers, default False - """ - return self.transport.perform_request("GET", "/_cat/health", params=params) - - @query_params("help", "s") - def help(self, params=None): - """ - A simple help for the cat api. - ``_ - - :arg help: Return help information, default False - :arg s: Comma-separated list of column names or column aliases to sort - by - """ - return self.transport.perform_request("GET", "/_cat", params=params) - - @query_params( - "bytes", - "format", - "h", - "health", - "help", - "local", - "master_timeout", - "pri", - "s", - "v", - ) - def indices(self, index=None, params=None): - """ - The indices command provides a cross-section of each index. - ``_ - - :arg index: A comma-separated list of index names to limit the returned - information - :arg bytes: The unit in which to display byte values, valid choices are: - 'b', 'k', 'm', 'g' - :arg format: a short version of the Accept header, e.g. 
json, yaml - :arg h: Comma-separated list of column names to display - :arg health: A health status ("green", "yellow", or "red" to filter only - indices matching the specified health status, default None, valid - choices are: 'green', 'yellow', 'red' - :arg help: Return help information, default False - :arg local: Return local information, do not retrieve the state from - master node (default: false) - :arg master_timeout: Explicit operation timeout for connection to master - node - :arg pri: Set to true to return stats only for primary shards, default - False - :arg s: Comma-separated list of column names or column aliases to sort - by - :arg v: Verbose mode. Display column headers, default False - """ - return self.transport.perform_request( - "GET", _make_path("_cat", "indices", index), params=params - ) - - @query_params("format", "h", "help", "local", "master_timeout", "s", "v") - def master(self, params=None): - """ - Displays the master's node ID, bound IP address, and node name. - ``_ - - :arg format: a short version of the Accept header, e.g. json, yaml - :arg h: Comma-separated list of column names to display - :arg help: Return help information, default False - :arg local: Return local information, do not retrieve the state from - master node (default: false) - :arg master_timeout: Explicit operation timeout for connection to master - node - :arg s: Comma-separated list of column names or column aliases to sort - by - :arg v: Verbose mode. Display column headers, default False - """ - return self.transport.perform_request("GET", "/_cat/master", params=params) - - @query_params("format", "h", "help", "local", "master_timeout", "s", "v") - def nodeattrs(self, params=None): - """ - ``_ - - :arg format: a short version of the Accept header, e.g. 
json, yaml - :arg h: Comma-separated list of column names to display - :arg help: Return help information, default False - :arg local: Return local information, do not retrieve the state from - master node (default: false) - :arg master_timeout: Explicit operation timeout for connection to master - node - :arg s: Comma-separated list of column names or column aliases to sort - by - :arg v: Verbose mode. Display column headers, default False - """ - return self.transport.perform_request("GET", "/_cat/nodeattrs", params=params) - - @query_params("format", "full_id", "h", "help", "local", "master_timeout", "s", "v") - def nodes(self, params=None): - """ - The nodes command shows the cluster topology. - ``_ - - :arg format: a short version of the Accept header, e.g. json, yaml - :arg full_id: Return the full node ID instead of the shortened version - (default: false) - :arg h: Comma-separated list of column names to display - :arg help: Return help information, default False - :arg local: Return local information, do not retrieve the state from - master node (default: false) - :arg master_timeout: Explicit operation timeout for connection to master - node - :arg s: Comma-separated list of column names or column aliases to sort - by - :arg v: Verbose mode. Display column headers, default False - """ - return self.transport.perform_request("GET", "/_cat/nodes", params=params) - - @query_params("format", "h", "help", "local", "master_timeout", "s", "v") - def pending_tasks(self, params=None): - """ - ``_ - - :arg format: a short version of the Accept header, e.g. json, yaml - :arg h: Comma-separated list of column names to display - :arg help: Return help information, default False - :arg local: Return local information, do not retrieve the state from - master node (default: false) - :arg master_timeout: Explicit operation timeout for connection to master - node - :arg s: Comma-separated list of column names or column aliases to sort - by - :arg v: Verbose mode. 
Display column headers, default False - """ - return self.transport.perform_request( - "GET", "/_cat/pending_tasks", params=params - ) - - @query_params("format", "h", "help", "local", "master_timeout", "s", "v") - def plugins(self, params=None): - """ - ``_ - - :arg format: a short version of the Accept header, e.g. json, yaml - :arg h: Comma-separated list of column names to display - :arg help: Return help information, default False - :arg local: Return local information, do not retrieve the state from - master node (default: false) - :arg master_timeout: Explicit operation timeout for connection to master - node - :arg s: Comma-separated list of column names or column aliases to sort - by - :arg v: Verbose mode. Display column headers, default False - """ - return self.transport.perform_request("GET", "/_cat/plugins", params=params) - - @query_params("bytes", "format", "h", "help", "master_timeout", "s", "v") - def recovery(self, index=None, params=None): - """ - ``_ - - :arg index: A comma-separated list of index names to limit the returned - information - :arg bytes: The unit in which to display byte values, valid choices are: - 'b', 'k', 'kb', 'm', 'mb', 'g', 'gb', 't', 'tb', 'p', 'pb' - :arg format: a short version of the Accept header, e.g. json, yaml - :arg h: Comma-separated list of column names to display - :arg help: Return help information, default False - :arg master_timeout: Explicit operation timeout for connection to master - node - :arg s: Comma-separated list of column names or column aliases to sort - by - :arg v: Verbose mode. Display column headers, default False - """ - return self.transport.perform_request( - "GET", _make_path("_cat", "recovery", index), params=params - ) - - @query_params("format", "h", "help", "local", "master_timeout", "s", "v") - def repositories(self, params=None): - """ - ``_ - - :arg format: a short version of the Accept header, e.g. 
json, yaml - :arg h: Comma-separated list of column names to display - :arg help: Return help information, default False - :arg local: Return local information, do not retrieve the state from - master node, default False - :arg master_timeout: Explicit operation timeout for connection to master - node - :arg s: Comma-separated list of column names or column aliases to sort - by - :arg v: Verbose mode. Display column headers, default False - """ - return self.transport.perform_request( - "GET", "/_cat/repositories", params=params - ) - - @query_params("bytes", "format", "h", "help", "s", "v") - def segments(self, index=None, params=None): - """ - ``_ - - :arg index: A comma-separated list of index names to limit the returned - information - :arg bytes: The unit in which to display byte values, valid choices are: - 'b', 'k', 'kb', 'm', 'mb', 'g', 'gb', 't', 'tb', 'p', 'pb' - :arg format: a short version of the Accept header, e.g. json, yaml - :arg h: Comma-separated list of column names to display - :arg help: Return help information, default False - :arg s: Comma-separated list of column names or column aliases to sort - by - :arg v: Verbose mode. Display column headers, default False - """ - return self.transport.perform_request( - "GET", _make_path("_cat", "segments", index), params=params - ) - - @query_params("bytes", "format", "h", "help", "local", "master_timeout", "s", "v") - def shards(self, index=None, params=None): - """ - ``_ - - :arg index: A comma-separated list of index names to limit the returned - information - :arg bytes: The unit in which to display byte values, valid choices are: - 'b', 'k', 'kb', 'm', 'mb', 'g', 'gb', 't', 'tb', 'p', 'pb' - :arg format: a short version of the Accept header, e.g. 
json, yaml - :arg h: Comma-separated list of column names to display - :arg help: Return help information, default False - :arg local: Return local information, do not retrieve the state from - master node (default: false) - :arg master_timeout: Explicit operation timeout for connection to master - node - :arg s: Comma-separated list of column names or column aliases to sort - by - :arg v: Verbose mode. Display column headers, default False - """ - return self.transport.perform_request( - "GET", _make_path("_cat", "shards", index), params=params - ) - - @query_params( - "format", "h", "help", "ignore_unavailable", "master_timeout", "s", "v" - ) - def snapshots(self, repository=None, params=None): - """ - ``_ - - :arg repository: Name of repository from which to fetch the snapshot - information - :arg format: a short version of the Accept header, e.g. json, yaml - :arg h: Comma-separated list of column names to display - :arg help: Return help information, default False - :arg ignore_unavailable: Set to true to ignore unavailable snapshots, - default False - :arg master_timeout: Explicit operation timeout for connection to master - node - :arg s: Comma-separated list of column names or column aliases to sort - by - :arg v: Verbose mode. Display column headers, default False - """ - return self.transport.perform_request( - "GET", _make_path("_cat", "snapshots", repository), params=params - ) - - @query_params( - "actions", "detailed", "format", "h", "help", "node_id", "parent_task", "s", "v" - ) - def tasks(self, params=None): - """ - ``_ - - :arg actions: A comma-separated list of actions that should be returned. - Leave empty to return all. - :arg detailed: Return detailed task information (default: false) - :arg format: a short version of the Accept header, e.g. 
json, yaml - :arg h: Comma-separated list of column names to display - :arg help: Return help information, default False - :arg node_id: A comma-separated list of node IDs or names to limit the - returned information; use `_local` to return information from the - node you're connecting to, leave empty to get information from all - nodes - :arg parent_task: Return tasks with specified parent task id. Set to -1 - to return all. - :arg s: Comma-separated list of column names or column aliases to sort - by - :arg v: Verbose mode. Display column headers, default False - """ - return self.transport.perform_request("GET", "/_cat/tasks", params=params) - - @query_params("format", "h", "help", "local", "master_timeout", "s", "v") - def templates(self, name=None, params=None): - """ - ``_ - - :arg name: A pattern that returned template names must match - :arg format: a short version of the Accept header, e.g. json, yaml - :arg h: Comma-separated list of column names to display - :arg help: Return help information, default False - :arg local: Return local information, do not retrieve the state from - master node (default: false) - :arg master_timeout: Explicit operation timeout for connection to master - node - :arg s: Comma-separated list of column names or column aliases to sort - by - :arg v: Verbose mode. Display column headers, default False - """ - return self.transport.perform_request( - "GET", _make_path("_cat", "templates", name), params=params - ) - - @query_params("format", "h", "help", "local", "master_timeout", "s", "size", "v") - def thread_pool(self, thread_pool_patterns=None, params=None): - """ - ``_ - - :arg thread_pool_patterns: A comma-separated list of regular-expressions - to filter the thread pools in the output - :arg format: a short version of the Accept header, e.g. 
json, yaml - :arg h: Comma-separated list of column names to display - :arg help: Return help information, default False - :arg local: Return local information, do not retrieve the state from - master node (default: false) - :arg master_timeout: Explicit operation timeout for connection to master - node - :arg s: Comma-separated list of column names or column aliases to sort - by - :arg size: The multiplier in which to display values, valid choices are: - '', 'k', 'm', 'g', 't', 'p' - :arg v: Verbose mode. Display column headers, default False - """ - return self.transport.perform_request( - "GET", - _make_path("_cat", "thread_pool", thread_pool_patterns), - params=params, - ) diff --git a/infrastructure/sandbox/Data/lambda/elasticsearch/client/cluster.py b/infrastructure/sandbox/Data/lambda/elasticsearch/client/cluster.py deleted file mode 100644 index 9d11376a5..000000000 --- a/infrastructure/sandbox/Data/lambda/elasticsearch/client/cluster.py +++ /dev/null @@ -1,221 +0,0 @@ -# Licensed to Elasticsearch B.V. under one or more contributor -# license agreements. See the NOTICE file distributed with -# this work for additional information regarding copyright -# ownership. Elasticsearch B.V. licenses this file to you under -# the Apache License, Version 2.0 (the "License"); you may -# not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, -# software distributed under the License is distributed on an -# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY -# KIND, either express or implied. See the License for the -# specific language governing permissions and limitations -# under the License. 
- -from .utils import NamespacedClient, query_params, _make_path - - -class ClusterClient(NamespacedClient): - @query_params( - "level", - "local", - "master_timeout", - "timeout", - "wait_for_active_shards", - "wait_for_events", - "wait_for_no_relocating_shards", - "wait_for_nodes", - "wait_for_status", - "wait_for_no_initializing_shards", - ) - def health(self, index=None, params=None): - """ - Get a very simple status on the health of the cluster. - ``_ - - :arg index: Limit the information returned to a specific index - :arg level: Specify the level of detail for returned information, - default 'cluster', valid choices are: 'cluster', 'indices', 'shards' - :arg local: Return local information, do not retrieve the state from - master node (default: false) - :arg master_timeout: Explicit operation timeout for connection to master - node - :arg timeout: Explicit operation timeout - :arg wait_for_active_shards: Wait until the specified number of shards - is active - :arg wait_for_events: Wait until all currently queued events with the - given priority are processed, valid choices are: 'immediate', - 'urgent', 'high', 'normal', 'low', 'languid' - :arg wait_for_no_relocating_shards: Whether to wait until there are no - relocating shards in the cluster - :arg wait_for_nodes: Wait until the specified number of nodes is - available - :arg wait_for_status: Wait until cluster is in a specific state, default - None, valid choices are: 'green', 'yellow', 'red' - """ - return self.transport.perform_request( - "GET", _make_path("_cluster", "health", index), params=params - ) - - @query_params("local", "master_timeout") - def pending_tasks(self, params=None): - """ - The pending cluster tasks API returns a list of any cluster-level - changes (e.g. create index, update mapping, allocate or fail shard) - which have not yet been executed. 
- ``_ - - :arg local: Return local information, do not retrieve the state from - master node (default: false) - :arg master_timeout: Specify timeout for connection to master - """ - return self.transport.perform_request( - "GET", "/_cluster/pending_tasks", params=params - ) - - @query_params( - "allow_no_indices", - "expand_wildcards", - "flat_settings", - "ignore_unavailable", - "local", - "master_timeout", - "wait_for_metadata_version", - "wait_for_timeout", - ) - def state(self, metric=None, index=None, params=None): - """ - Get a comprehensive state information of the whole cluster. - ``_ - - :arg metric: Limit the information returned to the specified metrics - :arg index: A comma-separated list of index names; use `_all` or empty - string to perform the operation on all indices - :arg allow_no_indices: Whether to ignore if a wildcard indices - expression resolves into no concrete indices. (This includes `_all` - string or when no indices have been specified) - :arg expand_wildcards: Whether to expand wildcard expression to concrete - indices that are open, closed or both., default 'open', valid - choices are: 'open', 'closed', 'none', 'all' - :arg flat_settings: Return settings in flat format (default: false) - :arg ignore_unavailable: Whether specified concrete indices should be - ignored when unavailable (missing or closed) - :arg local: Return local information, do not retrieve the state from - master node (default: false) - :arg master_timeout: Specify timeout for connection to master - :arg wait_for_metadata_version: Wait for the metadata version to be - equal or greater than the specified metadata version - :arg wait_for_timeout: The maximum time to wait for - wait_for_metadata_version before timing out - """ - if index and not metric: - metric = "_all" - return self.transport.perform_request( - "GET", _make_path("_cluster", "state", metric, index), params=params - ) - - @query_params("flat_settings", "timeout") - def stats(self, node_id=None, 
params=None): - """ - The Cluster Stats API allows to retrieve statistics from a cluster wide - perspective. The API returns basic index metrics and information about - the current nodes that form the cluster. - ``_ - - :arg node_id: A comma-separated list of node IDs or names to limit the - returned information; use `_local` to return information from the - node you're connecting to, leave empty to get information from all - nodes - :arg flat_settings: Return settings in flat format (default: false) - :arg timeout: Explicit operation timeout - """ - url = "/_cluster/stats" - if node_id: - url = _make_path("_cluster/stats/nodes", node_id) - return self.transport.perform_request("GET", url, params=params) - - @query_params( - "dry_run", "explain", "master_timeout", "metric", "retry_failed", "timeout" - ) - def reroute(self, body=None, params=None): - """ - Explicitly execute a cluster reroute allocation command including specific commands. - ``_ - - :arg body: The definition of `commands` to perform (`move`, `cancel`, - `allocate`) - :arg dry_run: Simulate the operation only and return the resulting state - :arg explain: Return an explanation of why the commands can or cannot be - executed - :arg master_timeout: Explicit operation timeout for connection to master - node - :arg metric: Limit the information returned to the specified metrics. - Defaults to all but metadata, valid choices are: '_all', 'blocks', - 'metadata', 'nodes', 'routing_table', 'master_node', 'version' - :arg retry_failed: Retries allocation of shards that are blocked due to - too many subsequent allocation failures - :arg timeout: Explicit operation timeout - """ - return self.transport.perform_request( - "POST", "/_cluster/reroute", params=params, body=body - ) - - @query_params("flat_settings", "include_defaults", "master_timeout", "timeout") - def get_settings(self, params=None): - """ - Get cluster settings. 
- ``_ - - :arg flat_settings: Return settings in flat format (default: false) - :arg include_defaults: Whether to return all default clusters setting., - default False - :arg master_timeout: Explicit operation timeout for connection to master - node - :arg timeout: Explicit operation timeout - """ - return self.transport.perform_request( - "GET", "/_cluster/settings", params=params - ) - - @query_params("flat_settings", "master_timeout", "timeout") - def put_settings(self, body=None, params=None): - """ - Update cluster wide specific settings. - ``_ - - :arg body: The settings to be updated. Can be either `transient` or - `persistent` (survives cluster restart). - :arg flat_settings: Return settings in flat format (default: false) - :arg master_timeout: Explicit operation timeout for connection to master - node - :arg timeout: Explicit operation timeout - """ - return self.transport.perform_request( - "PUT", "/_cluster/settings", params=params, body=body - ) - - @query_params("include_disk_info", "include_yes_decisions") - def allocation_explain(self, body=None, params=None): - """ - ``_ - - :arg body: The index, shard, and primary flag to explain. 
Empty means - 'explain the first unassigned shard' - :arg include_disk_info: Return information about disk usage and shard - sizes (default: false) - :arg include_yes_decisions: Return 'YES' decisions in explanation - (default: false) - """ - return self.transport.perform_request( - "GET", "/_cluster/allocation/explain", params=params, body=body - ) - - @query_params() - def remote_info(self, params=None): - """ - ``_ - """ - return self.transport.perform_request("GET", "/_remote/info", params=params) diff --git a/infrastructure/sandbox/Data/lambda/elasticsearch/client/indices.py b/infrastructure/sandbox/Data/lambda/elasticsearch/client/indices.py deleted file mode 100644 index bea264564..000000000 --- a/infrastructure/sandbox/Data/lambda/elasticsearch/client/indices.py +++ /dev/null @@ -1,1105 +0,0 @@ -# Licensed to Elasticsearch B.V. under one or more contributor -# license agreements. See the NOTICE file distributed with -# this work for additional information regarding copyright -# ownership. Elasticsearch B.V. licenses this file to you under -# the Apache License, Version 2.0 (the "License"); you may -# not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, -# software distributed under the License is distributed on an -# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY -# KIND, either express or implied. See the License for the -# specific language governing permissions and limitations -# under the License. - -from .utils import NamespacedClient, query_params, _make_path, SKIP_IN_PATH - - -class IndicesClient(NamespacedClient): - @query_params("format", "prefer_local") - def analyze(self, index=None, body=None, params=None): - """ - Perform the analysis process on a text and return the tokens breakdown of the text. 
- ``_ - - :arg index: The name of the index to scope the operation - :arg body: Define analyzer/tokenizer parameters and the text on which - the analysis should be performed - :arg format: Format of the output, default 'detailed', valid choices - are: 'detailed', 'text' - :arg prefer_local: With `true`, specify that a local shard should be - used if available, with `false`, use a random shard (default: true) - """ - return self.transport.perform_request( - "GET", _make_path(index, "_analyze"), params=params, body=body - ) - - @query_params("allow_no_indices", "expand_wildcards", "ignore_unavailable") - def refresh(self, index=None, params=None): - """ - Explicitly refresh one or more index, making all operations performed - since the last refresh available for search. - ``_ - - :arg index: A comma-separated list of index names; use `_all` or empty - string to perform the operation on all indices - :arg allow_no_indices: Whether to ignore if a wildcard indices - expression resolves into no concrete indices. (This includes `_all` - string or when no indices have been specified) - :arg expand_wildcards: Whether to expand wildcard expression to concrete - indices that are open, closed or both., default 'open', valid - choices are: 'open', 'closed', 'none', 'all' - :arg ignore_unavailable: Whether specified concrete indices should be - ignored when unavailable (missing or closed) - """ - return self.transport.perform_request( - "POST", _make_path(index, "_refresh"), params=params - ) - - @query_params( - "allow_no_indices", - "expand_wildcards", - "force", - "ignore_unavailable", - "wait_if_ongoing", - ) - def flush(self, index=None, params=None): - """ - Explicitly flush one or more indices. - ``_ - - :arg index: A comma-separated list of index names; use `_all` or empty - string for all indices - :arg allow_no_indices: Whether to ignore if a wildcard indices - expression resolves into no concrete indices. 
(This includes `_all` - string or when no indices have been specified) - :arg expand_wildcards: Whether to expand wildcard expression to concrete - indices that are open, closed or both., default 'open', valid - choices are: 'open', 'closed', 'none', 'all' - :arg force: Whether a flush should be forced even if it is not - necessarily needed ie. if no changes will be committed to the index. - This is useful if transaction log IDs should be incremented even if - no uncommitted changes are present. (This setting can be considered - as internal) - :arg ignore_unavailable: Whether specified concrete indices should be - ignored when unavailable (missing or closed) - :arg wait_if_ongoing: If set to true the flush operation will block - until the flush can be executed if another flush operation is - already executing. The default is true. If set to false the flush - will be skipped iff if another flush operation is already running. - """ - return self.transport.perform_request( - "POST", _make_path(index, "_flush"), params=params - ) - - @query_params( - "master_timeout", - "timeout", - "wait_for_active_shards", - "include_type_name", - "update_all_types", - ) - def create(self, index, body=None, params=None): - """ - Create an index in Elasticsearch. - ``_ - - :arg index: The name of the index - :arg body: The configuration for the index (`settings` and `mappings`) - :arg master_timeout: Specify timeout for connection to master - :arg timeout: Explicit operation timeout - :arg wait_for_active_shards: Set the number of active shards to wait for - before the operation returns. - :arg update_all_types: Whether to update the mapping for all fields with - the same name across all types or not - :arg include_type_name: Specify whether requests and responses should include a - type name (default: depends on Elasticsearch version). 
- """ - if index in SKIP_IN_PATH: - raise ValueError("Empty value passed for a required argument 'index'.") - return self.transport.perform_request( - "PUT", _make_path(index), params=params, body=body - ) - - @query_params( - "allow_no_indices", - "expand_wildcards", - "flat_settings", - "ignore_unavailable", - "include_defaults", - "local", - "include_type_name", - "master_timeout", - ) - def get(self, index, feature=None, params=None): - """ - The get index API allows to retrieve information about one or more indexes. - ``_ - - :arg index: A comma-separated list of index names - :arg allow_no_indices: Ignore if a wildcard expression resolves to no - concrete indices (default: false) - :arg expand_wildcards: Whether wildcard expressions should get expanded - to open or closed indices (default: open), default 'open', valid - choices are: 'open', 'closed', 'none', 'all' - :arg flat_settings: Return settings in flat format (default: false) - :arg ignore_unavailable: Ignore unavailable indexes (default: false) - :arg include_defaults: Whether to return all default setting for each of - the indices., default False - :arg local: Return local information, do not retrieve the state from - master node (default: false) - :arg include_type_name: Specify whether requests and responses should include a - type name (default: depends on Elasticsearch version). - :arg master_timeout: Specify timeout for connection to master - """ - if index in SKIP_IN_PATH: - raise ValueError("Empty value passed for a required argument 'index'.") - return self.transport.perform_request( - "GET", _make_path(index, feature), params=params - ) - - @query_params( - "allow_no_indices", - "expand_wildcards", - "ignore_unavailable", - "master_timeout", - "timeout", - "wait_for_active_shards", - ) - def open(self, index, params=None): - """ - Open a closed index to make it available for search. 
- ``_ - - :arg index: The name of the index - :arg allow_no_indices: Whether to ignore if a wildcard indices - expression resolves into no concrete indices. (This includes `_all` - string or when no indices have been specified) - :arg expand_wildcards: Whether to expand wildcard expression to concrete - indices that are open, closed or both., default 'closed', valid - choices are: 'open', 'closed', 'none', 'all' - :arg ignore_unavailable: Whether specified concrete indices should be - ignored when unavailable (missing or closed) - :arg master_timeout: Specify timeout for connection to master - :arg timeout: Explicit operation timeout - :arg wait_for_active_shards: Sets the number of active shards to wait - for before the operation returns. - """ - if index in SKIP_IN_PATH: - raise ValueError("Empty value passed for a required argument 'index'.") - return self.transport.perform_request( - "POST", _make_path(index, "_open"), params=params - ) - - @query_params( - "allow_no_indices", - "expand_wildcards", - "ignore_unavailable", - "master_timeout", - "timeout", - ) - def close(self, index, params=None): - """ - Close an index to remove it's overhead from the cluster. Closed index - is blocked for read/write operations. - ``_ - - :arg index: The name of the index - :arg allow_no_indices: Whether to ignore if a wildcard indices - expression resolves into no concrete indices. 
(This includes `_all` - string or when no indices have been specified) - :arg expand_wildcards: Whether to expand wildcard expression to concrete - indices that are open, closed or both., default 'open', valid - choices are: 'open', 'closed', 'none', 'all' - :arg ignore_unavailable: Whether specified concrete indices should be - ignored when unavailable (missing or closed) - :arg master_timeout: Specify timeout for connection to master - :arg timeout: Explicit operation timeout - """ - if index in SKIP_IN_PATH: - raise ValueError("Empty value passed for a required argument 'index'.") - return self.transport.perform_request( - "POST", _make_path(index, "_close"), params=params - ) - - @query_params( - "allow_no_indices", - "expand_wildcards", - "ignore_unavailable", - "master_timeout", - "timeout", - ) - def delete(self, index, params=None): - """ - Delete an index in Elasticsearch - ``_ - - :arg index: A comma-separated list of indices to delete; use `_all` or - `*` string to delete all indices - :arg allow_no_indices: Ignore if a wildcard expression resolves to no - concrete indices (default: false) - :arg expand_wildcards: Whether wildcard expressions should get expanded - to open or closed indices (default: open), default 'open', valid - choices are: 'open', 'closed', 'none', 'all' - :arg ignore_unavailable: Ignore unavailable indexes (default: false) - :arg master_timeout: Specify timeout for connection to master - :arg timeout: Explicit operation timeout - """ - if index in SKIP_IN_PATH: - raise ValueError("Empty value passed for a required argument 'index'.") - return self.transport.perform_request( - "DELETE", _make_path(index), params=params - ) - - @query_params( - "allow_no_indices", - "expand_wildcards", - "flat_settings", - "ignore_unavailable", - "include_defaults", - "local", - ) - def exists(self, index, params=None): - """ - Return a boolean indicating whether given index exists. 
- ``_ - - :arg index: A comma-separated list of index names - :arg allow_no_indices: Ignore if a wildcard expression resolves to no - concrete indices (default: false) - :arg expand_wildcards: Whether wildcard expressions should get expanded - to open or closed indices (default: open), default 'open', valid - choices are: 'open', 'closed', 'none', 'all' - :arg flat_settings: Return settings in flat format (default: false) - :arg ignore_unavailable: Ignore unavailable indexes (default: false) - :arg include_defaults: Whether to return all default setting for each of - the indices., default False - :arg local: Return local information, do not retrieve the state from - master node (default: false) - """ - if index in SKIP_IN_PATH: - raise ValueError("Empty value passed for a required argument 'index'.") - return self.transport.perform_request("HEAD", _make_path(index), params=params) - - @query_params("allow_no_indices", "expand_wildcards", "ignore_unavailable", "local") - def exists_type(self, index, doc_type, params=None): - """ - Check if a type/types exists in an index/indices. - ``_ - - :arg index: A comma-separated list of index names; use `_all` to check - the types across all indices - :arg doc_type: A comma-separated list of document types to check - :arg allow_no_indices: Whether to ignore if a wildcard indices - expression resolves into no concrete indices. 
(This includes `_all` - string or when no indices have been specified) - :arg expand_wildcards: Whether to expand wildcard expression to concrete - indices that are open, closed or both., default 'open', valid - choices are: 'open', 'closed', 'none', 'all' - :arg ignore_unavailable: Whether specified concrete indices should be - ignored when unavailable (missing or closed) - :arg local: Return local information, do not retrieve the state from - master node (default: false) - """ - for param in (index, doc_type): - if param in SKIP_IN_PATH: - raise ValueError("Empty value passed for a required argument.") - return self.transport.perform_request( - "HEAD", _make_path(index, "_mapping", doc_type), params=params - ) - - @query_params( - "allow_no_indices", - "expand_wildcards", - "ignore_unavailable", - "master_timeout", - "timeout", - "include_type_name", - "update_all_types", - ) - def put_mapping(self, body, doc_type=None, index=None, params=None): - """ - Register specific mapping definition for a specific type. - ``_ - - :arg doc_type: The name of the document type - :arg body: The mapping definition - :arg index: A comma-separated list of index names the mapping should be - added to (supports wildcards); use `_all` or omit to add the mapping - on all indices. - :arg allow_no_indices: Whether to ignore if a wildcard indices - expression resolves into no concrete indices. 
(This includes `_all` - string or when no indices have been specified) - :arg expand_wildcards: Whether to expand wildcard expression to concrete - indices that are open, closed or both., default 'open', valid - choices are: 'open', 'closed', 'none', 'all' - :arg ignore_unavailable: Whether specified concrete indices should be - ignored when unavailable (missing or closed) - :arg master_timeout: Specify timeout for connection to master - :arg timeout: Explicit operation timeout - :arg include_type_name: Specify whether requests and responses should include a - type name (default: depends on Elasticsearch version). - :arg update_all_types: Whether to update the mapping for all fields with - the same name across all types or not - """ - for param in (body,): - if param in SKIP_IN_PATH: - raise ValueError("Empty value passed for a required argument.") - return self.transport.perform_request( - "PUT", _make_path(index, "_mapping", doc_type), params=params, body=body - ) - - @query_params( - "allow_no_indices", - "expand_wildcards", - "ignore_unavailable", - "local", - "include_type_name", - ) - def get_mapping(self, index=None, doc_type=None, params=None): - """ - Retrieve mapping definition of index or index/type. - ``_ - - :arg index: A comma-separated list of index names - :arg doc_type: A comma-separated list of document types - :arg allow_no_indices: Whether to ignore if a wildcard indices - expression resolves into no concrete indices. 
(This includes `_all` - string or when no indices have been specified) - :arg expand_wildcards: Whether to expand wildcard expression to concrete - indices that are open, closed or both., default 'open', valid - choices are: 'open', 'closed', 'none', 'all' - :arg ignore_unavailable: Whether specified concrete indices should be - ignored when unavailable (missing or closed) - :arg local: Return local information, do not retrieve the state from - master node (default: false) - :arg include_type_name: Specify whether requests and responses should include a - type name (default: depends on Elasticsearch version). - """ - return self.transport.perform_request( - "GET", _make_path(index, "_mapping", doc_type), params=params - ) - - @query_params( - "allow_no_indices", - "expand_wildcards", - "ignore_unavailable", - "include_defaults", - "local", - "include_type_name", - ) - def get_field_mapping(self, fields, index=None, doc_type=None, params=None): - """ - Retrieve mapping definition of a specific field. - ``_ - - :arg fields: A comma-separated list of fields - :arg index: A comma-separated list of index names - :arg doc_type: A comma-separated list of document types - :arg allow_no_indices: Whether to ignore if a wildcard indices - expression resolves into no concrete indices. (This includes `_all` - string or when no indices have been specified) - :arg expand_wildcards: Whether to expand wildcard expression to concrete - indices that are open, closed or both., default 'open', valid - choices are: 'open', 'closed', 'none', 'all' - :arg ignore_unavailable: Whether specified concrete indices should be - ignored when unavailable (missing or closed) - :arg include_defaults: Whether the default mapping values should be - returned as well - :arg local: Return local information, do not retrieve the state from - master node (default: false) - :arg include_type_name: Specify whether requests and responses should include a - type name (default: depends on Elasticsearch version). 
- """ - if fields in SKIP_IN_PATH: - raise ValueError("Empty value passed for a required argument 'fields'.") - return self.transport.perform_request( - "GET", - _make_path(index, "_mapping", doc_type, "field", fields), - params=params, - ) - - @query_params("master_timeout", "timeout") - def put_alias(self, index, name, body=None, params=None): - """ - Create an alias for a specific index/indices. - ``_ - - :arg index: A comma-separated list of index names the alias should point - to (supports wildcards); use `_all` to perform the operation on all - indices. - :arg name: The name of the alias to be created or updated - :arg body: The settings for the alias, such as `routing` or `filter` - :arg master_timeout: Specify timeout for connection to master - :arg timeout: Explicit timeout for the operation - """ - for param in (index, name): - if param in SKIP_IN_PATH: - raise ValueError("Empty value passed for a required argument.") - return self.transport.perform_request( - "PUT", _make_path(index, "_alias", name), params=params, body=body - ) - - @query_params("allow_no_indices", "expand_wildcards", "ignore_unavailable", "local") - def exists_alias(self, index=None, name=None, params=None): - """ - Return a boolean indicating whether given alias exists. - ``_ - - :arg index: A comma-separated list of index names to filter aliases - :arg name: A comma-separated list of alias names to return - :arg allow_no_indices: Whether to ignore if a wildcard indices - expression resolves into no concrete indices. 
(This includes `_all` - string or when no indices have been specified) - :arg expand_wildcards: Whether to expand wildcard expression to concrete - indices that are open, closed or both., default 'all', valid choices - are: 'open', 'closed', 'none', 'all' - :arg ignore_unavailable: Whether specified concrete indices should be - ignored when unavailable (missing or closed) - :arg local: Return local information, do not retrieve the state from - master node (default: false) - """ - return self.transport.perform_request( - "HEAD", _make_path(index, "_alias", name), params=params - ) - - @query_params("allow_no_indices", "expand_wildcards", "ignore_unavailable", "local") - def get_alias(self, index=None, name=None, params=None): - """ - Retrieve a specified alias. - ``_ - - :arg index: A comma-separated list of index names to filter aliases - :arg name: A comma-separated list of alias names to return - :arg allow_no_indices: Whether to ignore if a wildcard indices - expression resolves into no concrete indices. (This includes `_all` - string or when no indices have been specified) - :arg expand_wildcards: Whether to expand wildcard expression to concrete - indices that are open, closed or both., default 'all', valid choices - are: 'open', 'closed', 'none', 'all' - :arg ignore_unavailable: Whether specified concrete indices should be - ignored when unavailable (missing or closed) - :arg local: Return local information, do not retrieve the state from - master node (default: false) - """ - return self.transport.perform_request( - "GET", _make_path(index, "_alias", name), params=params - ) - - @query_params("master_timeout", "timeout") - def update_aliases(self, body, params=None): - """ - Update specified aliases. 
- ``_ - - :arg body: The definition of `actions` to perform - :arg master_timeout: Specify timeout for connection to master - :arg timeout: Request timeout - """ - if body in SKIP_IN_PATH: - raise ValueError("Empty value passed for a required argument 'body'.") - return self.transport.perform_request( - "POST", "/_aliases", params=params, body=body - ) - - @query_params("master_timeout", "timeout") - def delete_alias(self, index, name, params=None): - """ - Delete specific alias. - ``_ - - :arg index: A comma-separated list of index names (supports wildcards); - use `_all` for all indices - :arg name: A comma-separated list of aliases to delete (supports - wildcards); use `_all` to delete all aliases for the specified - indices. - :arg master_timeout: Specify timeout for connection to master - :arg timeout: Explicit timeout for the operation - """ - for param in (index, name): - if param in SKIP_IN_PATH: - raise ValueError("Empty value passed for a required argument.") - return self.transport.perform_request( - "DELETE", _make_path(index, "_alias", name), params=params - ) - - @query_params( - "create", - "flat_settings", - "master_timeout", - "order", - "timeout", - "include_type_name", - ) - def put_template(self, name, body, params=None): - """ - Create an index template that will automatically be applied to new - indices created. 
- ``_ - - :arg name: The name of the template - :arg body: The template definition - :arg create: Whether the index template should only be added if new or - can also replace an existing one, default False - :arg flat_settings: Return settings in flat format (default: false) - :arg master_timeout: Specify timeout for connection to master - :arg order: The order for this template when merging multiple matching - ones (higher numbers are merged later, overriding the lower numbers) - :arg timeout: Explicit operation timeout - :arg include_type_name: Specify whether requests and responses should include a - type name (default: depends on Elasticsearch version). - """ - for param in (name, body): - if param in SKIP_IN_PATH: - raise ValueError("Empty value passed for a required argument.") - return self.transport.perform_request( - "PUT", _make_path("_template", name), params=params, body=body - ) - - @query_params("flat_settings", "local", "master_timeout") - def exists_template(self, name, params=None): - """ - Return a boolean indicating whether given template exists. - ``_ - - :arg name: The comma separated names of the index templates - :arg flat_settings: Return settings in flat format (default: false) - :arg local: Return local information, do not retrieve the state from - master node (default: false) - :arg master_timeout: Explicit operation timeout for connection to master - node - """ - if name in SKIP_IN_PATH: - raise ValueError("Empty value passed for a required argument 'name'.") - return self.transport.perform_request( - "HEAD", _make_path("_template", name), params=params - ) - - @query_params("flat_settings", "local", "master_timeout", "include_type_name") - def get_template(self, name=None, params=None): - """ - Retrieve an index template by its name. 
- ``_ - - :arg name: The name of the template - :arg flat_settings: Return settings in flat format (default: false) - :arg local: Return local information, do not retrieve the state from - master node (default: false) - :arg master_timeout: Explicit operation timeout for connection to master - node - :arg include_type_name: Specify whether requests and responses should include a - type name (default: depends on Elasticsearch version). - """ - return self.transport.perform_request( - "GET", _make_path("_template", name), params=params - ) - - @query_params("master_timeout", "timeout") - def delete_template(self, name, params=None): - """ - Delete an index template by its name. - ``_ - - :arg name: The name of the template - :arg master_timeout: Specify timeout for connection to master - :arg timeout: Explicit operation timeout - """ - if name in SKIP_IN_PATH: - raise ValueError("Empty value passed for a required argument 'name'.") - return self.transport.perform_request( - "DELETE", _make_path("_template", name), params=params - ) - - @query_params( - "allow_no_indices", - "expand_wildcards", - "flat_settings", - "ignore_unavailable", - "include_defaults", - "local", - "master_timeout", - ) - def get_settings(self, index=None, name=None, params=None): - """ - Retrieve settings for one or more (or all) indices. - ``_ - - :arg index: A comma-separated list of index names; use `_all` or empty - string to perform the operation on all indices - :arg name: The name of the settings that should be included - :arg allow_no_indices: Whether to ignore if a wildcard indices - expression resolves into no concrete indices. 
(This includes `_all` - string or when no indices have been specified) - :arg expand_wildcards: Whether to expand wildcard expression to concrete - indices that are open, closed or both., default ['open', 'closed'], - valid choices are: 'open', 'closed', 'none', 'all' - :arg flat_settings: Return settings in flat format (default: false) - :arg ignore_unavailable: Whether specified concrete indices should be - ignored when unavailable (missing or closed) - :arg include_defaults: Whether to return all default setting for each of - the indices., default False - :arg local: Return local information, do not retrieve the state from - master node (default: false) - :arg master_timeout: Specify timeout for connection to master - """ - return self.transport.perform_request( - "GET", _make_path(index, "_settings", name), params=params - ) - - @query_params( - "allow_no_indices", - "expand_wildcards", - "flat_settings", - "ignore_unavailable", - "master_timeout", - "preserve_existing", - "timeout", - ) - def put_settings(self, body, index=None, params=None): - """ - Change specific index level settings in real time. - ``_ - - :arg body: The index settings to be updated - :arg index: A comma-separated list of index names; use `_all` or empty - string to perform the operation on all indices - :arg allow_no_indices: Whether to ignore if a wildcard indices - expression resolves into no concrete indices. (This includes `_all` - string or when no indices have been specified) - :arg expand_wildcards: Whether to expand wildcard expression to concrete - indices that are open, closed or both., default 'open', valid - choices are: 'open', 'closed', 'none', 'all' - :arg flat_settings: Return settings in flat format (default: false) - :arg ignore_unavailable: Whether specified concrete indices should be - ignored when unavailable (missing or closed) - :arg master_timeout: Specify timeout for connection to master - :arg preserve_existing: Whether to update existing settings. 
If set to - `true` existing settings on an index remain unchanged, the default - is `false` - :arg timeout: Explicit operation timeout - """ - if body in SKIP_IN_PATH: - raise ValueError("Empty value passed for a required argument 'body'.") - return self.transport.perform_request( - "PUT", _make_path(index, "_settings"), params=params, body=body - ) - - @query_params( - "completion_fields", - "fielddata_fields", - "fields", - "groups", - "include_segment_file_sizes", - "level", - "types", - ) - def stats(self, index=None, metric=None, params=None): - """ - Retrieve statistics on different operations happening on an index. - ``_ - - :arg index: A comma-separated list of index names; use `_all` or empty - string to perform the operation on all indices - :arg metric: Limit the information returned the specific metrics. - :arg completion_fields: A comma-separated list of fields for `fielddata` - and `suggest` index metric (supports wildcards) - :arg fielddata_fields: A comma-separated list of fields for `fielddata` - index metric (supports wildcards) - :arg fields: A comma-separated list of fields for `fielddata` and - `completion` index metric (supports wildcards) - :arg groups: A comma-separated list of search groups for `search` index - metric - :arg include_segment_file_sizes: Whether to report the aggregated disk - usage of each one of the Lucene index files (only applies if segment - stats are requested), default False - :arg level: Return stats aggregated at cluster, index or shard level, - default 'indices', valid choices are: 'cluster', 'indices', 'shards' - :arg types: A comma-separated list of document types for the `indexing` - index metric - """ - return self.transport.perform_request( - "GET", _make_path(index, "_stats", metric), params=params - ) - - @query_params( - "allow_no_indices", "expand_wildcards", "ignore_unavailable", "verbose" - ) - def segments(self, index=None, params=None): - """ - Provide low level segments information that a Lucene index 
(shard level) is built with. - ``_ - - :arg index: A comma-separated list of index names; use `_all` or empty - string to perform the operation on all indices - :arg allow_no_indices: Whether to ignore if a wildcard indices - expression resolves into no concrete indices. (This includes `_all` - string or when no indices have been specified) - :arg expand_wildcards: Whether to expand wildcard expression to concrete - indices that are open, closed or both., default 'open', valid - choices are: 'open', 'closed', 'none', 'all' - :arg ignore_unavailable: Whether specified concrete indices should be - ignored when unavailable (missing or closed) - :arg verbose: Includes detailed memory usage by Lucene., default False - """ - return self.transport.perform_request( - "GET", _make_path(index, "_segments"), params=params - ) - - @query_params( - "all_shards", - "allow_no_indices", - "analyze_wildcard", - "analyzer", - "default_operator", - "df", - "expand_wildcards", - "explain", - "ignore_unavailable", - "lenient", - "q", - "rewrite", - ) - def validate_query(self, index=None, doc_type=None, body=None, params=None): - """ - Validate a potentially expensive query without executing it. - ``_ - - :arg index: A comma-separated list of index names to restrict the - operation; use `_all` or empty string to perform the operation on - all indices - :arg doc_type: A comma-separated list of document types to restrict the - operation; leave empty to perform the operation on all types - :arg body: The query definition specified with the Query DSL - :arg all_shards: Execute validation on all shards instead of one random - shard per index - :arg allow_no_indices: Whether to ignore if a wildcard indices - expression resolves into no concrete indices. 
(This includes `_all` - string or when no indices have been specified) - :arg analyze_wildcard: Specify whether wildcard and prefix queries - should be analyzed (default: false) - :arg analyzer: The analyzer to use for the query string - :arg default_operator: The default operator for query string query (AND - or OR), default 'OR', valid choices are: 'AND', 'OR' - :arg df: The field to use as default where no field prefix is given in - the query string - :arg expand_wildcards: Whether to expand wildcard expression to concrete - indices that are open, closed or both., default 'open', valid - choices are: 'open', 'closed', 'none', 'all' - :arg explain: Return detailed information about the error - :arg ignore_unavailable: Whether specified concrete indices should be - ignored when unavailable (missing or closed) - :arg lenient: Specify whether format-based query failures (such as - providing text to a numeric field) should be ignored - :arg q: Query in the Lucene query string syntax - :arg rewrite: Provide a more detailed explanation showing the actual - Lucene query that will be executed. - """ - return self.transport.perform_request( - "GET", - _make_path(index, doc_type, "_validate", "query"), - params=params, - body=body, - ) - - @query_params( - "allow_no_indices", - "expand_wildcards", - "field_data", - "fielddata", - "fields", - "ignore_unavailable", - "query", - "request", - "request_cache", - ) - def clear_cache(self, index=None, params=None): - """ - Clear either all caches or specific cached associated with one ore more indices. - ``_ - - :arg index: A comma-separated list of index name to limit the operation - :arg allow_no_indices: Whether to ignore if a wildcard indices - expression resolves into no concrete indices. 
(This includes `_all` - string or when no indices have been specified) - :arg expand_wildcards: Whether to expand wildcard expression to concrete - indices that are open, closed or both., default 'open', valid - choices are: 'open', 'closed', 'none', 'all' - :arg field_data: Clear field data - :arg fielddata: Clear field data - :arg fields: A comma-separated list of fields to clear when using the - `field_data` parameter (default: all) - :arg ignore_unavailable: Whether specified concrete indices should be - ignored when unavailable (missing or closed) - :arg query: Clear query caches - :arg request: Clear request cache - :arg request_cache: Clear request cache - """ - return self.transport.perform_request( - "POST", _make_path(index, "_cache", "clear"), params=params - ) - - @query_params("active_only", "detailed") - def recovery(self, index=None, params=None): - """ - The indices recovery API provides insight into on-going shard - recoveries. Recovery status may be reported for specific indices, or - cluster-wide. - ``_ - - :arg index: A comma-separated list of index names; use `_all` or empty - string to perform the operation on all indices - :arg active_only: Display only those recoveries that are currently on- - going, default False - :arg detailed: Whether to display detailed information about shard - recovery, default False - """ - return self.transport.perform_request( - "GET", _make_path(index, "_recovery"), params=params - ) - - @query_params( - "allow_no_indices", - "expand_wildcards", - "ignore_unavailable", - "only_ancient_segments", - "wait_for_completion", - ) - def upgrade(self, index=None, params=None): - """ - Upgrade one or more indices to the latest format through an API. - ``_ - - :arg index: A comma-separated list of index names; use `_all` or empty - string to perform the operation on all indices - :arg allow_no_indices: Whether to ignore if a wildcard indices - expression resolves into no concrete indices. 
(This includes `_all` - string or when no indices have been specified) - :arg expand_wildcards: Whether to expand wildcard expression to concrete - indices that are open, closed or both., default 'open', valid - choices are: 'open', 'closed', 'none', 'all' - :arg ignore_unavailable: Whether specified concrete indices should be - ignored when unavailable (missing or closed) - :arg only_ancient_segments: If true, only ancient (an older Lucene major - release) segments will be upgraded - :arg wait_for_completion: Specify whether the request should block until - the all segments are upgraded (default: false) - """ - return self.transport.perform_request( - "POST", _make_path(index, "_upgrade"), params=params - ) - - @query_params("allow_no_indices", "expand_wildcards", "ignore_unavailable") - def get_upgrade(self, index=None, params=None): - """ - Monitor how much of one or more index is upgraded. - ``_ - - :arg index: A comma-separated list of index names; use `_all` or empty - string to perform the operation on all indices - :arg allow_no_indices: Whether to ignore if a wildcard indices - expression resolves into no concrete indices. (This includes `_all` - string or when no indices have been specified) - :arg expand_wildcards: Whether to expand wildcard expression to concrete - indices that are open, closed or both., default 'open', valid - choices are: 'open', 'closed', 'none', 'all' - :arg ignore_unavailable: Whether specified concrete indices should be - ignored when unavailable (missing or closed) - """ - return self.transport.perform_request( - "GET", _make_path(index, "_upgrade"), params=params - ) - - @query_params("allow_no_indices", "expand_wildcards", "ignore_unavailable") - def flush_synced(self, index=None, params=None): - """ - Perform a normal flush, then add a generated unique marker (sync_id) to all shards. 
- ``_ - - :arg index: A comma-separated list of index names; use `_all` or empty - string for all indices - :arg allow_no_indices: Whether to ignore if a wildcard indices - expression resolves into no concrete indices. (This includes `_all` - string or when no indices have been specified) - :arg expand_wildcards: Whether to expand wildcard expression to concrete - indices that are open, closed or both., default 'open', valid - choices are: 'open', 'closed', 'none', 'all' - :arg ignore_unavailable: Whether specified concrete indices should be - ignored when unavailable (missing or closed) - """ - return self.transport.perform_request( - "POST", _make_path(index, "_flush", "synced"), params=params - ) - - @query_params( - "allow_no_indices", - "expand_wildcards", - "ignore_unavailable", - "operation_threading", - "status", - ) - def shard_stores(self, index=None, params=None): - """ - Provides store information for shard copies of indices. Store - information reports on which nodes shard copies exist, the shard copy - version, indicating how recent they are, and any exceptions encountered - while opening the shard index or from earlier engine failure. - ``_ - - :arg index: A comma-separated list of index names; use `_all` or empty - string to perform the operation on all indices - :arg allow_no_indices: Whether to ignore if a wildcard indices - expression resolves into no concrete indices. (This includes `_all` - string or when no indices have been specified) - :arg expand_wildcards: Whether to expand wildcard expression to concrete - indices that are open, closed or both., default 'open', valid - choices are: 'open', 'closed', 'none', 'all' - :arg ignore_unavailable: Whether specified concrete indices should be - ignored when unavailable (missing or closed) - :arg operation_threading: TODO: ? 
- :arg status: A comma-separated list of statuses used to filter on shards - to get store information for, valid choices are: 'green', 'yellow', - 'red', 'all' - """ - return self.transport.perform_request( - "GET", _make_path(index, "_shard_stores"), params=params - ) - - @query_params( - "allow_no_indices", - "expand_wildcards", - "flush", - "ignore_unavailable", - "max_num_segments", - "only_expunge_deletes", - ) - def forcemerge(self, index=None, params=None): - """ - The force merge API allows to force merging of one or more indices - through an API. The merge relates to the number of segments a Lucene - index holds within each shard. The force merge operation allows to - reduce the number of segments by merging them. - - This call will block until the merge is complete. If the http - connection is lost, the request will continue in the background, and - any new requests will block until the previous force merge is complete. - ``_ - - :arg index: A comma-separated list of index names; use `_all` or empty - string to perform the operation on all indices - :arg allow_no_indices: Whether to ignore if a wildcard indices - expression resolves into no concrete indices. 
(This includes `_all` - string or when no indices have been specified) - :arg expand_wildcards: Whether to expand wildcard expression to concrete - indices that are open, closed or both., default 'open', valid - choices are: 'open', 'closed', 'none', 'all' - :arg flush: Specify whether the index should be flushed after performing - the operation (default: true) - :arg ignore_unavailable: Whether specified concrete indices should be - ignored when unavailable (missing or closed) - :arg max_num_segments: The number of segments the index should be merged - into (default: dynamic) - :arg only_expunge_deletes: Specify whether the operation should only - expunge deleted documents - """ - return self.transport.perform_request( - "POST", _make_path(index, "_forcemerge"), params=params - ) - - @query_params( - "copy_settings", "master_timeout", "timeout", "wait_for_active_shards" - ) - def shrink(self, index, target, body=None, params=None): - """ - The shrink index API allows you to shrink an existing index into a new - index with fewer primary shards. The number of primary shards in the - target index must be a factor of the shards in the source index. For - example an index with 8 primary shards can be shrunk into 4, 2 or 1 - primary shards or an index with 15 primary shards can be shrunk into 5, - 3 or 1. If the number of shards in the index is a prime number it can - only be shrunk into a single primary shard. Before shrinking, a - (primary or replica) copy of every shard in the index must be present - on the same node. 
- ``_ - - :arg index: The name of the source index to shrink - :arg target: The name of the target index to shrink into - :arg body: The configuration for the target index (`settings` and - `aliases`) - :arg copy_settings: whether or not to copy settings from the source - index (defaults to false) - :arg master_timeout: Specify timeout for connection to master - :arg timeout: Explicit operation timeout - :arg wait_for_active_shards: Set the number of active shards to wait for - on the shrunken index before the operation returns. - """ - for param in (index, target): - if param in SKIP_IN_PATH: - raise ValueError("Empty value passed for a required argument.") - return self.transport.perform_request( - "PUT", _make_path(index, "_shrink", target), params=params, body=body - ) - - @query_params( - "copy_settings", "master_timeout", "timeout", "wait_for_active_shards" - ) - def split(self, index, target, body=None, params=None): - """ - ``_ - - :arg index: The name of the source index to split - :arg target: The name of the target index to split into - :arg body: The configuration for the target index (`settings` and - `aliases`) - :arg copy_settings: whether or not to copy settings from the source - index (defaults to false) - :arg master_timeout: Specify timeout for connection to master - :arg timeout: Explicit operation timeout - :arg wait_for_active_shards: Set the number of active shards to wait for - on the shrunken index before the operation returns. 
- """ - for param in (index, target): - if param in SKIP_IN_PATH: - raise ValueError("Empty value passed for a required argument.") - return self.transport.perform_request( - "PUT", _make_path(index, "_split", target), params=params, body=body - ) - - @query_params( - "dry_run", - "master_timeout", - "timeout", - "wait_for_active_shards", - "include_type_name", - ) - def rollover(self, alias, new_index=None, body=None, params=None): - """ - The rollover index API rolls an alias over to a new index when the - existing index is considered to be too large or too old. - - The API accepts a single alias name and a list of conditions. The alias - must point to a single index only. If the index satisfies the specified - conditions then a new index is created and the alias is switched to - point to the new alias. - ``_ - - :arg alias: The name of the alias to rollover - :arg new_index: The name of the rollover index - :arg body: The conditions that needs to be met for executing rollover - :arg dry_run: If set to true the rollover action will only be validated - but not actually performed even if a condition matches. The default - is false - :arg master_timeout: Specify timeout for connection to master - :arg timeout: Explicit operation timeout - :arg wait_for_active_shards: Set the number of active shards to wait for - on the newly created rollover index before the operation returns. - :arg include_type_name: Specify whether requests and responses should include a - type name (default: depends on Elasticsearch version). 
- """ - if alias in SKIP_IN_PATH: - raise ValueError("Empty value passed for a required argument 'alias'.") - return self.transport.perform_request( - "POST", _make_path(alias, "_rollover", new_index), params=params, body=body - ) diff --git a/infrastructure/sandbox/Data/lambda/elasticsearch/client/ingest.py b/infrastructure/sandbox/Data/lambda/elasticsearch/client/ingest.py deleted file mode 100644 index 1f5762652..000000000 --- a/infrastructure/sandbox/Data/lambda/elasticsearch/client/ingest.py +++ /dev/null @@ -1,95 +0,0 @@ -# Licensed to Elasticsearch B.V. under one or more contributor -# license agreements. See the NOTICE file distributed with -# this work for additional information regarding copyright -# ownership. Elasticsearch B.V. licenses this file to you under -# the Apache License, Version 2.0 (the "License"); you may -# not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, -# software distributed under the License is distributed on an -# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY -# KIND, either express or implied. See the License for the -# specific language governing permissions and limitations -# under the License. - -from .utils import NamespacedClient, query_params, _make_path, SKIP_IN_PATH - - -class IngestClient(NamespacedClient): - @query_params("master_timeout") - def get_pipeline(self, id=None, params=None): - """ - ``_ - - :arg id: Comma separated list of pipeline ids. 
Wildcards supported - :arg master_timeout: Explicit operation timeout for connection to master - node - """ - return self.transport.perform_request( - "GET", _make_path("_ingest", "pipeline", id), params=params - ) - - @query_params("master_timeout", "timeout") - def put_pipeline(self, id, body, params=None): - """ - ``_ - - :arg id: Pipeline ID - :arg body: The ingest definition - :arg master_timeout: Explicit operation timeout for connection to master - node - :arg timeout: Explicit operation timeout - """ - for param in (id, body): - if param in SKIP_IN_PATH: - raise ValueError("Empty value passed for a required argument.") - return self.transport.perform_request( - "PUT", _make_path("_ingest", "pipeline", id), params=params, body=body - ) - - @query_params("master_timeout", "timeout") - def delete_pipeline(self, id, params=None): - """ - ``_ - - :arg id: Pipeline ID - :arg master_timeout: Explicit operation timeout for connection to master - node - :arg timeout: Explicit operation timeout - """ - if id in SKIP_IN_PATH: - raise ValueError("Empty value passed for a required argument 'id'.") - return self.transport.perform_request( - "DELETE", _make_path("_ingest", "pipeline", id), params=params - ) - - @query_params("verbose") - def simulate(self, body, id=None, params=None): - """ - ``_ - - :arg body: The simulate definition - :arg id: Pipeline ID - :arg verbose: Verbose mode. 
Display data output for each processor in - executed pipeline, default False - """ - if body in SKIP_IN_PATH: - raise ValueError("Empty value passed for a required argument 'body'.") - return self.transport.perform_request( - "GET", - _make_path("_ingest", "pipeline", id, "_simulate"), - params=params, - body=body, - ) - - @query_params() - def processor_grok(self, params=None): - """ - ``_ - """ - return self.transport.perform_request( - "GET", "/_ingest/processor/grok", params=params - ) diff --git a/infrastructure/sandbox/Data/lambda/elasticsearch/client/nodes.py b/infrastructure/sandbox/Data/lambda/elasticsearch/client/nodes.py deleted file mode 100644 index c0e8c255e..000000000 --- a/infrastructure/sandbox/Data/lambda/elasticsearch/client/nodes.py +++ /dev/null @@ -1,154 +0,0 @@ -# Licensed to Elasticsearch B.V. under one or more contributor -# license agreements. See the NOTICE file distributed with -# this work for additional information regarding copyright -# ownership. Elasticsearch B.V. licenses this file to you under -# the Apache License, Version 2.0 (the "License"); you may -# not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, -# software distributed under the License is distributed on an -# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY -# KIND, either express or implied. See the License for the -# specific language governing permissions and limitations -# under the License. - -from .utils import NamespacedClient, query_params, _make_path - - -class NodesClient(NamespacedClient): - @query_params("flat_settings", "timeout") - def info(self, node_id=None, metric=None, params=None): - """ - The cluster nodes info API allows to retrieve one or more (or all) of - the cluster nodes information. 
- ``_ - - :arg node_id: A comma-separated list of node IDs or names to limit the - returned information; use `_local` to return information from the - node you're connecting to, leave empty to get information from all - nodes - :arg metric: A comma-separated list of metrics you wish returned. Leave - empty to return all. - :arg flat_settings: Return settings in flat format (default: false) - :arg timeout: Explicit operation timeout - """ - return self.transport.perform_request( - "GET", _make_path("_nodes", node_id, metric), params=params - ) - - @query_params( - "completion_fields", - "fielddata_fields", - "fields", - "groups", - "include_segment_file_sizes", - "level", - "timeout", - "types", - ) - def stats(self, node_id=None, metric=None, index_metric=None, params=None): - """ - The cluster nodes stats API allows to retrieve one or more (or all) of - the cluster nodes statistics. - ``_ - - :arg node_id: A comma-separated list of node IDs or names to limit the - returned information; use `_local` to return information from the - node you're connecting to, leave empty to get information from all - nodes - :arg metric: Limit the information returned to the specified metrics - :arg index_metric: Limit the information returned for `indices` metric - to the specific index metrics. Isn't used if `indices` (or `all`) - metric isn't specified. 
- :arg completion_fields: A comma-separated list of fields for `fielddata` - and `suggest` index metric (supports wildcards) - :arg fielddata_fields: A comma-separated list of fields for `fielddata` - index metric (supports wildcards) - :arg fields: A comma-separated list of fields for `fielddata` and - `completion` index metric (supports wildcards) - :arg groups: A comma-separated list of search groups for `search` index - metric - :arg include_segment_file_sizes: Whether to report the aggregated disk - usage of each one of the Lucene index files (only applies if segment - stats are requested), default False - :arg level: Return indices stats aggregated at index, node or shard - level, default 'node', valid choices are: 'indices', 'node', - 'shards' - :arg timeout: Explicit operation timeout - :arg types: A comma-separated list of document types for the `indexing` - index metric - """ - return self.transport.perform_request( - "GET", - _make_path("_nodes", node_id, "stats", metric, index_metric), - params=params, - ) - - @query_params( - "type", "ignore_idle_threads", "interval", "snapshots", "threads", "timeout" - ) - def hot_threads(self, node_id=None, params=None): - """ - An API allowing to get the current hot threads on each node in the cluster. 
- ``_ - - :arg node_id: A comma-separated list of node IDs or names to limit the - returned information; use `_local` to return information from the - node you're connecting to, leave empty to get information from all - nodes - :arg type: The type to sample (default: cpu), valid choices are: - 'cpu', 'wait', 'block' - :arg ignore_idle_threads: Don't show threads that are in known-idle - places, such as waiting on a socket select or pulling from an empty - task queue (default: true) - :arg interval: The interval for the second sampling of threads - :arg snapshots: Number of samples of thread stacktrace (default: 10) - :arg threads: Specify the number of threads to provide information for - (default: 3) - :arg timeout: Explicit operation timeout - """ - # avoid python reserved words - if params and "type_" in params: - params["type"] = params.pop("type_") - return self.transport.perform_request( - "GET", _make_path("_cluster", "nodes", node_id, "hotthreads"), params=params - ) - - @query_params("human", "timeout") - def usage(self, node_id=None, metric=None, params=None): - """ - The cluster nodes usage API allows to retrieve information on the usage - of features for each node. - ``_ - - :arg node_id: A comma-separated list of node IDs or names to limit the - returned information; use `_local` to return information from the - node you're connecting to, leave empty to get information from all - nodes - :arg metric: Limit the information returned to the specified metrics - :arg human: Whether to return time and byte values in human-readable - format., default False - :arg timeout: Explicit operation timeout - """ - return self.transport.perform_request( - "GET", _make_path("_nodes", node_id, "usage", metric), params=params - ) - - @query_params("timeout") - def reload_secure_settings(self, node_id=None, params=None): - """ - ``_ - - :arg node_id: A comma-separated list of node IDs to span the - reload/reinit call. 
Should stay empty because reloading usually - involves all cluster nodes. - :arg timeout: Explicit operation timeout - """ - return self.transport.perform_request( - "POST", - _make_path("_nodes", node_id, "reload_secure_settings"), - params=params, - ) diff --git a/infrastructure/sandbox/Data/lambda/elasticsearch/client/remote.py b/infrastructure/sandbox/Data/lambda/elasticsearch/client/remote.py deleted file mode 100644 index d76998d94..000000000 --- a/infrastructure/sandbox/Data/lambda/elasticsearch/client/remote.py +++ /dev/null @@ -1,27 +0,0 @@ -# Licensed to Elasticsearch B.V. under one or more contributor -# license agreements. See the NOTICE file distributed with -# this work for additional information regarding copyright -# ownership. Elasticsearch B.V. licenses this file to you under -# the Apache License, Version 2.0 (the "License"); you may -# not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, -# software distributed under the License is distributed on an -# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY -# KIND, either express or implied. See the License for the -# specific language governing permissions and limitations -# under the License. - -from .utils import NamespacedClient, query_params - - -class RemoteClient(NamespacedClient): - @query_params() - def info(self, params=None): - """ - ``_ - """ - return self.transport.perform_request("GET", "/_remote/info", params=params) diff --git a/infrastructure/sandbox/Data/lambda/elasticsearch/client/snapshot.py b/infrastructure/sandbox/Data/lambda/elasticsearch/client/snapshot.py deleted file mode 100644 index 0c837d126..000000000 --- a/infrastructure/sandbox/Data/lambda/elasticsearch/client/snapshot.py +++ /dev/null @@ -1,200 +0,0 @@ -# Licensed to Elasticsearch B.V. under one or more contributor -# license agreements. 
See the NOTICE file distributed with -# this work for additional information regarding copyright -# ownership. Elasticsearch B.V. licenses this file to you under -# the Apache License, Version 2.0 (the "License"); you may -# not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, -# software distributed under the License is distributed on an -# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY -# KIND, either express or implied. See the License for the -# specific language governing permissions and limitations -# under the License. - -from .utils import NamespacedClient, query_params, _make_path, SKIP_IN_PATH - - -class SnapshotClient(NamespacedClient): - @query_params("master_timeout", "wait_for_completion") - def create(self, repository, snapshot, body=None, params=None): - """ - Create a snapshot in repository - ``_ - - :arg repository: A repository name - :arg snapshot: A snapshot name - :arg body: The snapshot definition - :arg master_timeout: Explicit operation timeout for connection to master - node - :arg wait_for_completion: Should this request wait until the operation - has completed before returning, default False - """ - for param in (repository, snapshot): - if param in SKIP_IN_PATH: - raise ValueError("Empty value passed for a required argument.") - return self.transport.perform_request( - "PUT", - _make_path("_snapshot", repository, snapshot), - params=params, - body=body, - ) - - @query_params("master_timeout") - def delete(self, repository, snapshot, params=None): - """ - Deletes a snapshot from a repository. 
- ``_ - - :arg repository: A repository name - :arg snapshot: A snapshot name - :arg master_timeout: Explicit operation timeout for connection to master - node - """ - for param in (repository, snapshot): - if param in SKIP_IN_PATH: - raise ValueError("Empty value passed for a required argument.") - return self.transport.perform_request( - "DELETE", _make_path("_snapshot", repository, snapshot), params=params - ) - - @query_params("ignore_unavailable", "master_timeout", "verbose") - def get(self, repository, snapshot, params=None): - """ - Retrieve information about a snapshot. - ``_ - - :arg repository: A repository name - :arg snapshot: A comma-separated list of snapshot names - :arg ignore_unavailable: Whether to ignore unavailable snapshots, - defaults to false which means a NotFoundError `snapshot_missing_exception` is thrown - :arg master_timeout: Explicit operation timeout for connection to master - node - :arg verbose: Whether to show verbose snapshot info or only show the - basic info found in the repository index blob - """ - for param in (repository, snapshot): - if param in SKIP_IN_PATH: - raise ValueError("Empty value passed for a required argument.") - return self.transport.perform_request( - "GET", _make_path("_snapshot", repository, snapshot), params=params - ) - - @query_params("master_timeout", "timeout") - def delete_repository(self, repository, params=None): - """ - Removes a shared file system repository. 
- ``_ - - :arg repository: A comma-separated list of repository names - :arg master_timeout: Explicit operation timeout for connection to master - node - :arg timeout: Explicit operation timeout - """ - if repository in SKIP_IN_PATH: - raise ValueError("Empty value passed for a required argument 'repository'.") - return self.transport.perform_request( - "DELETE", _make_path("_snapshot", repository), params=params - ) - - @query_params("local", "master_timeout") - def get_repository(self, repository=None, params=None): - """ - Return information about registered repositories. - ``_ - - :arg repository: A comma-separated list of repository names - :arg local: Return local information, do not retrieve the state from - master node (default: false) - :arg master_timeout: Explicit operation timeout for connection to master - node - """ - return self.transport.perform_request( - "GET", _make_path("_snapshot", repository), params=params - ) - - @query_params("master_timeout", "timeout", "verify") - def create_repository(self, repository, body, params=None): - """ - Registers a shared file system repository. - ``_ - - :arg repository: A repository name - :arg body: The repository definition - :arg master_timeout: Explicit operation timeout for connection to master - node - :arg timeout: Explicit operation timeout - :arg verify: Whether to verify the repository after creation - """ - for param in (repository, body): - if param in SKIP_IN_PATH: - raise ValueError("Empty value passed for a required argument.") - return self.transport.perform_request( - "PUT", _make_path("_snapshot", repository), params=params, body=body - ) - - @query_params("master_timeout", "wait_for_completion") - def restore(self, repository, snapshot, body=None, params=None): - """ - Restore a snapshot. 
- ``_ - - :arg repository: A repository name - :arg snapshot: A snapshot name - :arg body: Details of what to restore - :arg master_timeout: Explicit operation timeout for connection to master - node - :arg wait_for_completion: Should this request wait until the operation - has completed before returning, default False - """ - for param in (repository, snapshot): - if param in SKIP_IN_PATH: - raise ValueError("Empty value passed for a required argument.") - return self.transport.perform_request( - "POST", - _make_path("_snapshot", repository, snapshot, "_restore"), - params=params, - body=body, - ) - - @query_params("ignore_unavailable", "master_timeout") - def status(self, repository=None, snapshot=None, params=None): - """ - Return information about all currently running snapshots. By specifying - a repository name, it's possible to limit the results to a particular - repository. - ``_ - - :arg repository: A repository name - :arg snapshot: A comma-separated list of snapshot names - :arg ignore_unavailable: Whether to ignore unavailable snapshots, - defaults to false which means a NotFoundError `snapshot_missing_exception` is thrown - :arg master_timeout: Explicit operation timeout for connection to master - node - """ - return self.transport.perform_request( - "GET", - _make_path("_snapshot", repository, snapshot, "_status"), - params=params, - ) - - @query_params("master_timeout", "timeout") - def verify_repository(self, repository, params=None): - """ - Returns a list of nodes where repository was successfully verified or - an error message if verification process failed. 
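The snapshot methods being deleted above are thin wrappers that pair an HTTP verb with a `/_snapshot/...` path and hand both to `transport.perform_request`. A summary of that mapping, using hypothetical repository and snapshot names (the verbs and paths are taken from the `perform_request` calls in the removed code):

```python
# Hypothetical names; verbs and paths mirror the perform_request calls above.
repo, snap = "my_repo", "snap_1"

snapshot_endpoints = {
    "create_repository": ("PUT", "/_snapshot/{}".format(repo)),
    "verify_repository": ("POST", "/_snapshot/{}/_verify".format(repo)),
    "get_repository":    ("GET", "/_snapshot/{}".format(repo)),
    "delete_repository": ("DELETE", "/_snapshot/{}".format(repo)),
    "get":               ("GET", "/_snapshot/{}/{}".format(repo, snap)),
    "delete":            ("DELETE", "/_snapshot/{}/{}".format(repo, snap)),
    "restore":           ("POST", "/_snapshot/{}/{}/_restore".format(repo, snap)),
    "status":            ("GET", "/_snapshot/{}/{}/_status".format(repo, snap)),
}
```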
- ``_ - - :arg repository: A repository name - :arg master_timeout: Explicit operation timeout for connection to master - node - :arg timeout: Explicit operation timeout - """ - if repository in SKIP_IN_PATH: - raise ValueError("Empty value passed for a required argument 'repository'.") - return self.transport.perform_request( - "POST", _make_path("_snapshot", repository, "_verify"), params=params - ) diff --git a/infrastructure/sandbox/Data/lambda/elasticsearch/client/tasks.py b/infrastructure/sandbox/Data/lambda/elasticsearch/client/tasks.py deleted file mode 100644 index 5867e83f3..000000000 --- a/infrastructure/sandbox/Data/lambda/elasticsearch/client/tasks.py +++ /dev/null @@ -1,86 +0,0 @@ -# Licensed to Elasticsearch B.V. under one or more contributor -# license agreements. See the NOTICE file distributed with -# this work for additional information regarding copyright -# ownership. Elasticsearch B.V. licenses this file to you under -# the Apache License, Version 2.0 (the "License"); you may -# not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, -# software distributed under the License is distributed on an -# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY -# KIND, either express or implied. See the License for the -# specific language governing permissions and limitations -# under the License. - -from .utils import NamespacedClient, query_params, _make_path - - -class TasksClient(NamespacedClient): - @query_params( - "actions", - "detailed", - "group_by", - "nodes", - "parent_task_id", - "wait_for_completion", - "timeout", - ) - def list(self, params=None): - """ - ``_ - - :arg actions: A comma-separated list of actions that should be returned. - Leave empty to return all. 
- :arg detailed: Return detailed task information (default: false) - :arg group_by: Group tasks by nodes or parent/child relationships, - default 'nodes', valid choices are: 'nodes', 'parents' - :arg nodes: A comma-separated list of node IDs or names to limit the - returned information; use `_local` to return information from the - node you're connecting to, leave empty to get information from all - nodes - :arg parent_task_id: Return tasks with specified parent task id - (node_id:task_number). Set to -1 to return all. - :arg wait_for_completion: Wait for the matching tasks to complete - (default: false) - :arg timeout: Maximum waiting time for `wait_for_completion` - """ - return self.transport.perform_request("GET", "/_tasks", params=params) - - @query_params("actions", "nodes", "parent_task_id") - def cancel(self, task_id=None, params=None): - """ - - ``_ - - :arg task_id: Cancel the task with specified task id - (node_id:task_number) - :arg actions: A comma-separated list of actions that should be - cancelled. Leave empty to cancel all. - :arg nodes: A comma-separated list of node IDs or names to limit the - returned information; use `_local` to return information from the - node you're connecting to, leave empty to get information from all - nodes - :arg parent_task_id: Cancel tasks with specified parent task id - (node_id:task_number). Set to -1 to cancel all. - """ - return self.transport.perform_request( - "POST", _make_path("_tasks", task_id, "_cancel"), params=params - ) - - @query_params("wait_for_completion", "timeout") - def get(self, task_id=None, params=None): - """ - Retrieve information for a particular task. 
- ``_ - - :arg task_id: Return the task with specified id (node_id:task_number) - :arg wait_for_completion: Wait for the matching tasks to complete - (default: false) - :arg timeout: Maximum waiting time for `wait_for_completion` - """ - return self.transport.perform_request( - "GET", _make_path("_tasks", task_id), params=params - ) diff --git a/infrastructure/sandbox/Data/lambda/elasticsearch/client/utils.py b/infrastructure/sandbox/Data/lambda/elasticsearch/client/utils.py deleted file mode 100644 index ca03258c6..000000000 --- a/infrastructure/sandbox/Data/lambda/elasticsearch/client/utils.py +++ /dev/null @@ -1,122 +0,0 @@ -# Licensed to Elasticsearch B.V. under one or more contributor -# license agreements. See the NOTICE file distributed with -# this work for additional information regarding copyright -# ownership. Elasticsearch B.V. licenses this file to you under -# the Apache License, Version 2.0 (the "License"); you may -# not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, -# software distributed under the License is distributed on an -# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY -# KIND, either express or implied. See the License for the -# specific language governing permissions and limitations -# under the License. - -from __future__ import unicode_literals - -import weakref -from datetime import date, datetime -from functools import wraps -from ..compat import string_types, quote, PY2 - -# parts of URL to be omitted -SKIP_IN_PATH = (None, "", b"", [], ()) - - -def _escape(value): - """ - Escape a single value of a URL string or a query parameter. If it is a list - or tuple, turn it into a comma-separated string first. 
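The `TasksClient` methods removed above follow the same pattern, with task ids in the `node_id:task_number` form the docstrings describe. A sketch of the verb/path pairs, using a hypothetical task id:

```python
# Hypothetical task id in the "node_id:task_number" form described above.
tid = "node-1:42"

task_endpoints = {
    "list":   ("GET", "/_tasks"),
    "cancel": ("POST", "/_tasks/{}/_cancel".format(tid)),
    "get":    ("GET", "/_tasks/{}".format(tid)),
}
```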
- """ - - # make sequences into comma-separated stings - if isinstance(value, (list, tuple)): - value = ",".join(value) - - # dates and datetimes into isoformat - elif isinstance(value, (date, datetime)): - value = value.isoformat() - - # make bools into true/false strings - elif isinstance(value, bool): - value = str(value).lower() - - # don't decode bytestrings - elif isinstance(value, bytes): - return value - - # encode strings to utf-8 - if isinstance(value, string_types): - if PY2 and isinstance(value, unicode): # noqa: F821 - return value.encode("utf-8") - if not PY2 and isinstance(value, str): - return value.encode("utf-8") - - return str(value) - - -def _make_path(*parts): - """ - Create a URL string from parts, omit all `None` values and empty strings. - Convert lists nad tuples to comma separated values. - """ - # TODO: maybe only allow some parts to be lists/tuples ? - return "/" + "/".join( - # preserve ',' and '*' in url for nicer URLs in logs - quote(_escape(p), b",*") - for p in parts - if p not in SKIP_IN_PATH - ) - - -# parameters that apply to all methods -GLOBAL_PARAMS = ("pretty", "human", "error_trace", "format", "filter_path") - - -def query_params(*es_query_params): - """ - Decorator that pops all accepted parameters from method's kwargs and puts - them in the params argument. 
- """ - - def _wrapper(func): - @wraps(func) - def _wrapped(*args, **kwargs): - params = {} - if "params" in kwargs: - params = kwargs.pop("params").copy() - for p in es_query_params + GLOBAL_PARAMS: - if p in kwargs: - v = kwargs.pop(p) - if v is not None: - params[p] = _escape(v) - - # don't treat ignore and request_timeout as other params to avoid escaping - for p in ("ignore", "request_timeout"): - if p in kwargs: - params[p] = kwargs.pop(p) - return func(*args, params=params, **kwargs) - - return _wrapped - - return _wrapper - - -class NamespacedClient(object): - def __init__(self, client): - self.client = client - - @property - def transport(self): - return self.client.transport - - -class AddonClient(NamespacedClient): - @classmethod - def infect_client(cls, client): - addon = cls(weakref.proxy(client)) - setattr(client, cls.namespace, addon) - return client diff --git a/infrastructure/sandbox/Data/lambda/elasticsearch/client/xpack/__init__.py b/infrastructure/sandbox/Data/lambda/elasticsearch/client/xpack/__init__.py deleted file mode 100644 index e1eb25db5..000000000 --- a/infrastructure/sandbox/Data/lambda/elasticsearch/client/xpack/__init__.py +++ /dev/null @@ -1,64 +0,0 @@ -# Licensed to Elasticsearch B.V. under one or more contributor -# license agreements. See the NOTICE file distributed with -# this work for additional information regarding copyright -# ownership. Elasticsearch B.V. licenses this file to you under -# the Apache License, Version 2.0 (the "License"); you may -# not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, -# software distributed under the License is distributed on an -# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY -# KIND, either express or implied. See the License for the -# specific language governing permissions and limitations -# under the License. 
- -from ..utils import NamespacedClient, query_params - -from .graph import GraphClient -from .license import LicenseClient -from .monitoring import MonitoringClient -from .security import SecurityClient -from .watcher import WatcherClient -from .ml import MlClient -from .migration import MigrationClient -from .deprecation import DeprecationClient - - -class XPackClient(NamespacedClient): - namespace = "xpack" - - def __init__(self, *args, **kwargs): - super(XPackClient, self).__init__(*args, **kwargs) - self.graph = GraphClient(self.client) - self.license = LicenseClient(self.client) - self.monitoring = MonitoringClient(self.client) - self.security = SecurityClient(self.client) - self.watcher = WatcherClient(self.client) - self.ml = MlClient(self.client) - self.migration = MigrationClient(self.client) - self.deprecation = DeprecationClient(self.client) - - @query_params("categories", "human") - def info(self, params=None): - """ - Retrieve information about xpack, including build number/timestamp and license status - ``_ - - :arg categories: Comma-separated list of info categories. 
Can be any of: - build, license, features - :arg human: Presents additional info for humans (feature descriptions - and X-Pack tagline) - """ - return self.transport.perform_request("GET", "/_xpack", params=params) - - @query_params("master_timeout") - def usage(self, params=None): - """ - Retrieve information about xpack features usage - - :arg master_timeout: Specify timeout for watch write operation - """ - return self.transport.perform_request("GET", "/_xpack/usage", params=params) diff --git a/infrastructure/sandbox/Data/lambda/elasticsearch/client/xpack/__pycache__/__init__.cpython-310.pyc b/infrastructure/sandbox/Data/lambda/elasticsearch/client/xpack/__pycache__/__init__.cpython-310.pyc deleted file mode 100644 index 1b6768795..000000000 Binary files a/infrastructure/sandbox/Data/lambda/elasticsearch/client/xpack/__pycache__/__init__.cpython-310.pyc and /dev/null differ diff --git a/infrastructure/sandbox/Data/lambda/elasticsearch/client/xpack/__pycache__/deprecation.cpython-310.pyc b/infrastructure/sandbox/Data/lambda/elasticsearch/client/xpack/__pycache__/deprecation.cpython-310.pyc deleted file mode 100644 index e2d1eba3f..000000000 Binary files a/infrastructure/sandbox/Data/lambda/elasticsearch/client/xpack/__pycache__/deprecation.cpython-310.pyc and /dev/null differ diff --git a/infrastructure/sandbox/Data/lambda/elasticsearch/client/xpack/__pycache__/graph.cpython-310.pyc b/infrastructure/sandbox/Data/lambda/elasticsearch/client/xpack/__pycache__/graph.cpython-310.pyc deleted file mode 100644 index ebc7d2fe0..000000000 Binary files a/infrastructure/sandbox/Data/lambda/elasticsearch/client/xpack/__pycache__/graph.cpython-310.pyc and /dev/null differ diff --git a/infrastructure/sandbox/Data/lambda/elasticsearch/client/xpack/__pycache__/license.cpython-310.pyc b/infrastructure/sandbox/Data/lambda/elasticsearch/client/xpack/__pycache__/license.cpython-310.pyc deleted file mode 100644 index fa16e5f24..000000000 Binary files 
a/infrastructure/sandbox/Data/lambda/elasticsearch/client/xpack/__pycache__/license.cpython-310.pyc and /dev/null differ diff --git a/infrastructure/sandbox/Data/lambda/elasticsearch/client/xpack/__pycache__/migration.cpython-310.pyc b/infrastructure/sandbox/Data/lambda/elasticsearch/client/xpack/__pycache__/migration.cpython-310.pyc deleted file mode 100644 index fd8b65414..000000000 Binary files a/infrastructure/sandbox/Data/lambda/elasticsearch/client/xpack/__pycache__/migration.cpython-310.pyc and /dev/null differ diff --git a/infrastructure/sandbox/Data/lambda/elasticsearch/client/xpack/__pycache__/ml.cpython-310.pyc b/infrastructure/sandbox/Data/lambda/elasticsearch/client/xpack/__pycache__/ml.cpython-310.pyc deleted file mode 100644 index 346206ecd..000000000 Binary files a/infrastructure/sandbox/Data/lambda/elasticsearch/client/xpack/__pycache__/ml.cpython-310.pyc and /dev/null differ diff --git a/infrastructure/sandbox/Data/lambda/elasticsearch/client/xpack/__pycache__/monitoring.cpython-310.pyc b/infrastructure/sandbox/Data/lambda/elasticsearch/client/xpack/__pycache__/monitoring.cpython-310.pyc deleted file mode 100644 index a1c160b66..000000000 Binary files a/infrastructure/sandbox/Data/lambda/elasticsearch/client/xpack/__pycache__/monitoring.cpython-310.pyc and /dev/null differ diff --git a/infrastructure/sandbox/Data/lambda/elasticsearch/client/xpack/__pycache__/security.cpython-310.pyc b/infrastructure/sandbox/Data/lambda/elasticsearch/client/xpack/__pycache__/security.cpython-310.pyc deleted file mode 100644 index 0aaab5f45..000000000 Binary files a/infrastructure/sandbox/Data/lambda/elasticsearch/client/xpack/__pycache__/security.cpython-310.pyc and /dev/null differ diff --git a/infrastructure/sandbox/Data/lambda/elasticsearch/client/xpack/__pycache__/watcher.cpython-310.pyc b/infrastructure/sandbox/Data/lambda/elasticsearch/client/xpack/__pycache__/watcher.cpython-310.pyc deleted file mode 100644 index 7bb962e27..000000000 Binary files 
a/infrastructure/sandbox/Data/lambda/elasticsearch/client/xpack/__pycache__/watcher.cpython-310.pyc and /dev/null differ diff --git a/infrastructure/sandbox/Data/lambda/elasticsearch/client/xpack/deprecation.py b/infrastructure/sandbox/Data/lambda/elasticsearch/client/xpack/deprecation.py deleted file mode 100644 index 4c1548b92..000000000 --- a/infrastructure/sandbox/Data/lambda/elasticsearch/client/xpack/deprecation.py +++ /dev/null @@ -1,33 +0,0 @@ -# Licensed to Elasticsearch B.V. under one or more contributor -# license agreements. See the NOTICE file distributed with -# this work for additional information regarding copyright -# ownership. Elasticsearch B.V. licenses this file to you under -# the Apache License, Version 2.0 (the "License"); you may -# not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, -# software distributed under the License is distributed on an -# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY -# KIND, either express or implied. See the License for the -# specific language governing permissions and limitations -# under the License. - -from ..utils import NamespacedClient, query_params, _make_path - - -class DeprecationClient(NamespacedClient): - @query_params() - def info(self, index=None, params=None): - """ - ``_ - - :arg index: Index pattern - """ - return self.transport.perform_request( - "GET", - _make_path(index, "_xpack", "migration", "deprecations"), - params=params, - ) diff --git a/infrastructure/sandbox/Data/lambda/elasticsearch/client/xpack/graph.py b/infrastructure/sandbox/Data/lambda/elasticsearch/client/xpack/graph.py deleted file mode 100644 index 6e905e369..000000000 --- a/infrastructure/sandbox/Data/lambda/elasticsearch/client/xpack/graph.py +++ /dev/null @@ -1,40 +0,0 @@ -# Licensed to Elasticsearch B.V. 
under one or more contributor -# license agreements. See the NOTICE file distributed with -# this work for additional information regarding copyright -# ownership. Elasticsearch B.V. licenses this file to you under -# the Apache License, Version 2.0 (the "License"); you may -# not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, -# software distributed under the License is distributed on an -# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY -# KIND, either express or implied. See the License for the -# specific language governing permissions and limitations -# under the License. - -from ..utils import NamespacedClient, query_params, _make_path - - -class GraphClient(NamespacedClient): - @query_params("routing", "timeout") - def explore(self, index=None, doc_type=None, body=None, params=None): - """ - ``_ - - :arg index: A comma-separated list of index names to search; use `_all` - or empty string to perform the operation on all indices - :arg doc_type: A comma-separated list of document types to search; leave - empty to perform the operation on all types - :arg body: Graph Query DSL - :arg routing: Specific routing value - :arg timeout: Explicit operation timeout - """ - return self.transport.perform_request( - "GET", - _make_path(index, doc_type, "_xpack", "graph", "_explore"), - params=params, - body=body, - ) diff --git a/infrastructure/sandbox/Data/lambda/elasticsearch/client/xpack/license.py b/infrastructure/sandbox/Data/lambda/elasticsearch/client/xpack/license.py deleted file mode 100644 index a8a45101b..000000000 --- a/infrastructure/sandbox/Data/lambda/elasticsearch/client/xpack/license.py +++ /dev/null @@ -1,58 +0,0 @@ -# Licensed to Elasticsearch B.V. under one or more contributor -# license agreements. 
See the NOTICE file distributed with -# this work for additional information regarding copyright -# ownership. Elasticsearch B.V. licenses this file to you under -# the Apache License, Version 2.0 (the "License"); you may -# not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, -# software distributed under the License is distributed on an -# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY -# KIND, either express or implied. See the License for the -# specific language governing permissions and limitations -# under the License. - -from ..utils import ( - NamespacedClient, - query_params, -) - - -class LicenseClient(NamespacedClient): - @query_params() - def delete(self, params=None): - """ - - ``_ - """ - return self.transport.perform_request( - "DELETE", "/_xpack/license", params=params - ) - - @query_params("local") - def get(self, params=None): - """ - - ``_ - - :arg local: Return local information, do not retrieve the state from - master node (default: false) - """ - return self.transport.perform_request("GET", "/_xpack/license", params=params) - - @query_params("acknowledge") - def post(self, body=None, params=None): - """ - - ``_ - - :arg body: licenses to be installed - :arg acknowledge: whether the user has acknowledged acknowledge messages - (default: false) - """ - return self.transport.perform_request( - "PUT", "/_xpack/license", params=params, body=body - ) diff --git a/infrastructure/sandbox/Data/lambda/elasticsearch/client/xpack/migration.py b/infrastructure/sandbox/Data/lambda/elasticsearch/client/xpack/migration.py deleted file mode 100644 index dc530f612..000000000 --- a/infrastructure/sandbox/Data/lambda/elasticsearch/client/xpack/migration.py +++ /dev/null @@ -1,61 +0,0 @@ -# Licensed to Elasticsearch B.V. under one or more contributor -# license agreements. 
See the NOTICE file distributed with -# this work for additional information regarding copyright -# ownership. Elasticsearch B.V. licenses this file to you under -# the Apache License, Version 2.0 (the "License"); you may -# not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, -# software distributed under the License is distributed on an -# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY -# KIND, either express or implied. See the License for the -# specific language governing permissions and limitations -# under the License. - -from ..utils import ( - NamespacedClient, - query_params, - _make_path, - SKIP_IN_PATH, -) - - -class MigrationClient(NamespacedClient): - @query_params("allow_no_indices", "expand_wildcards", "ignore_unavailable") - def get_assistance(self, index=None, params=None): - """ - ``_ - - :arg index: A comma-separated list of index names; use `_all` or empty - string to perform the operation on all indices - :arg allow_no_indices: Whether to ignore if a wildcard indices - expression resolves into no concrete indices. 
(This includes `_all` - string or when no indices have been specified) - :arg expand_wildcards: Whether to expand wildcard expression to concrete - indices that are open, closed or both., default 'open', valid - choices are: 'open', 'closed', 'none', 'all' - :arg ignore_unavailable: Whether specified concrete indices should be - ignored when unavailable (missing or closed) - """ - return self.transport.perform_request( - "GET", _make_path("_xpack", "migration", "assistance", index), params=params - ) - - @query_params("wait_for_completion") - def upgrade(self, index, params=None): - """ - - ``_ - - :arg index: The name of the index - :arg wait_for_completion: Should the request block until the upgrade - operation is completed, default True - """ - if index in SKIP_IN_PATH: - raise ValueError("Empty value passed for a required argument 'index'.") - return self.transport.perform_request( - "POST", _make_path("_xpack", "migration", "upgrade", index), params=params - ) diff --git a/infrastructure/sandbox/Data/lambda/elasticsearch/client/xpack/ml.py b/infrastructure/sandbox/Data/lambda/elasticsearch/client/xpack/ml.py deleted file mode 100644 index d71f20110..000000000 --- a/infrastructure/sandbox/Data/lambda/elasticsearch/client/xpack/ml.py +++ /dev/null @@ -1,695 +0,0 @@ -# Licensed to Elasticsearch B.V. under one or more contributor -# license agreements. See the NOTICE file distributed with -# this work for additional information regarding copyright -# ownership. Elasticsearch B.V. licenses this file to you under -# the Apache License, Version 2.0 (the "License"); you may -# not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, -# software distributed under the License is distributed on an -# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY -# KIND, either express or implied. 
See the License for the -# specific language governing permissions and limitations -# under the License. - -from ..utils import NamespacedClient, query_params, _make_path, SKIP_IN_PATH - - -class MlClient(NamespacedClient): - @query_params("from_", "size") - def get_filters(self, filter_id=None, params=None): - """ - - :arg filter_id: The ID of the filter to fetch - :arg from_: skips a number of filters - :arg size: specifies a max number of filters to get - """ - return self.transport.perform_request( - "GET", _make_path("_xpack", "ml", "filters", filter_id), params=params - ) - - @query_params() - def get_datafeeds(self, datafeed_id=None, params=None): - """ - - ``_ - - :arg datafeed_id: The ID of the datafeeds to fetch - """ - return self.transport.perform_request( - "GET", _make_path("_xpack", "ml", "datafeeds", datafeed_id), params=params - ) - - @query_params() - def get_datafeed_stats(self, datafeed_id=None, params=None): - """ - - ``_ - - :arg datafeed_id: The ID of the datafeeds stats to fetch - """ - return self.transport.perform_request( - "GET", - _make_path("_xpack", "ml", "datafeeds", datafeed_id, "_stats"), - params=params, - ) - - @query_params( - "anomaly_score", - "desc", - "end", - "exclude_interim", - "expand", - "from_", - "size", - "sort", - "start", - ) - def get_buckets(self, job_id, timestamp=None, body=None, params=None): - """ - - ``_ - - :arg job_id: ID of the job to get bucket results from - :arg timestamp: The timestamp of the desired single bucket result - :arg body: Bucket selection details if not provided in URI - :arg anomaly_score: Filter for the most anomalous buckets - :arg desc: Set the sort direction - :arg end: End time filter for buckets - :arg exclude_interim: Exclude interim results - :arg expand: Include anomaly records - :arg from_: skips a number of buckets - :arg size: specifies a max number of buckets to get - :arg sort: Sort buckets by a particular field - :arg start: Start time filter for buckets - """ - if job_id 
in SKIP_IN_PATH: - raise ValueError("Empty value passed for a required argument 'job_id'.") - return self.transport.perform_request( - "GET", - _make_path( - "_xpack", - "ml", - "anomaly_detectors", - job_id, - "results", - "buckets", - timestamp, - ), - params=params, - body=body, - ) - - @query_params("reset_end", "reset_start") - def post_data(self, job_id, body, params=None): - """ - - ``_ - - :arg job_id: The name of the job receiving the data - :arg body: The data to process - :arg reset_end: Optional parameter to specify the end of the bucket - resetting range - :arg reset_start: Optional parameter to specify the start of the bucket - resetting range - """ - for param in (job_id, body): - if param in SKIP_IN_PATH: - raise ValueError("Empty value passed for a required argument.") - return self.transport.perform_request( - "POST", - _make_path("_xpack", "ml", "anomaly_detectors", job_id, "_data"), - params=params, - body=self.client._bulk_body(body), - ) - - @query_params("force", "timeout") - def stop_datafeed(self, datafeed_id, params=None): - """ - - ``_ - - :arg datafeed_id: The ID of the datafeed to stop - :arg force: True if the datafeed should be forcefully stopped. - :arg timeout: Controls the time to wait until a datafeed has stopped. - Default to 20 seconds - """ - if datafeed_id in SKIP_IN_PATH: - raise ValueError( - "Empty value passed for a required argument 'datafeed_id'." 
- ) - return self.transport.perform_request( - "POST", - _make_path("_xpack", "ml", "datafeeds", datafeed_id, "_stop"), - params=params, - ) - - @query_params() - def get_jobs(self, job_id=None, params=None): - """ - - ``_ - - :arg job_id: The ID of the jobs to fetch - """ - return self.transport.perform_request( - "GET", - _make_path("_xpack", "ml", "anomaly_detectors", job_id), - params=params, - ) - - @query_params() - def delete_expired_data(self, params=None): - """""" - return self.transport.perform_request( - "DELETE", "/_xpack/ml/_delete_expired_data", params=params - ) - - @query_params() - def put_job(self, job_id, body, params=None): - """ - - ``_ - - :arg job_id: The ID of the job to create - :arg body: The job - """ - for param in (job_id, body): - if param in SKIP_IN_PATH: - raise ValueError("Empty value passed for a required argument.") - return self.transport.perform_request( - "PUT", - _make_path("_xpack", "ml", "anomaly_detectors", job_id), - params=params, - body=body, - ) - - @query_params() - def validate_detector(self, body, params=None): - """ - - :arg body: The detector - """ - if body in SKIP_IN_PATH: - raise ValueError("Empty value passed for a required argument 'body'.") - return self.transport.perform_request( - "POST", - "/_xpack/ml/anomaly_detectors/_validate/detector", - params=params, - body=body, - ) - - @query_params("end", "start", "timeout") - def start_datafeed(self, datafeed_id, body=None, params=None): - """ - - ``_ - - :arg datafeed_id: The ID of the datafeed to start - :arg body: The start datafeed parameters - :arg end: The end time when the datafeed should stop. When not set, the - datafeed continues in real time - :arg start: The start time from where the datafeed should begin - :arg timeout: Controls the time to wait until a datafeed has started. - Default to 20 seconds - """ - if datafeed_id in SKIP_IN_PATH: - raise ValueError( - "Empty value passed for a required argument 'datafeed_id'." 
- ) - return self.transport.perform_request( - "POST", - _make_path("_xpack", "ml", "datafeeds", datafeed_id, "_start"), - params=params, - body=body, - ) - - @query_params( - "desc", - "end", - "exclude_interim", - "from_", - "record_score", - "size", - "sort", - "start", - ) - def get_records(self, job_id, body=None, params=None): - """ - - ``_ - - :arg job_id: None - :arg body: Record selection criteria - :arg desc: Set the sort direction - :arg end: End time filter for records - :arg exclude_interim: Exclude interim results - :arg from_: skips a number of records - :arg record_score: - :arg size: specifies a max number of records to get - :arg sort: Sort records by a particular field - :arg start: Start time filter for records - """ - if job_id in SKIP_IN_PATH: - raise ValueError("Empty value passed for a required argument 'job_id'.") - return self.transport.perform_request( - "GET", - _make_path( - "_xpack", "ml", "anomaly_detectors", job_id, "results", "records" - ), - params=params, - body=body, - ) - - @query_params() - def update_job(self, job_id, body, params=None): - """ - - ``_ - - :arg job_id: The ID of the job to create - :arg body: The job update settings - """ - for param in (job_id, body): - if param in SKIP_IN_PATH: - raise ValueError("Empty value passed for a required argument.") - return self.transport.perform_request( - "POST", - _make_path("_xpack", "ml", "anomaly_detectors", job_id, "_update"), - params=params, - body=body, - ) - - @query_params() - def put_filter(self, filter_id, body, params=None): - """ - - :arg filter_id: The ID of the filter to create - :arg body: The filter details - """ - for param in (filter_id, body): - if param in SKIP_IN_PATH: - raise ValueError("Empty value passed for a required argument.") - return self.transport.perform_request( - "PUT", - _make_path("_xpack", "ml", "filters", filter_id), - params=params, - body=body, - ) - - @query_params() - def update_datafeed(self, datafeed_id, body, params=None): - """ - - 
``_ - - :arg datafeed_id: The ID of the datafeed to update - :arg body: The datafeed update settings - """ - for param in (datafeed_id, body): - if param in SKIP_IN_PATH: - raise ValueError("Empty value passed for a required argument.") - return self.transport.perform_request( - "POST", - _make_path("_xpack", "ml", "datafeeds", datafeed_id, "_update"), - params=params, - body=body, - ) - - @query_params() - def preview_datafeed(self, datafeed_id, params=None): - """ - - ``_ - - :arg datafeed_id: The ID of the datafeed to preview - """ - if datafeed_id in SKIP_IN_PATH: - raise ValueError( - "Empty value passed for a required argument 'datafeed_id'." - ) - return self.transport.perform_request( - "GET", - _make_path("_xpack", "ml", "datafeeds", datafeed_id, "_preview"), - params=params, - ) - - @query_params("advance_time", "calc_interim", "end", "skip_time", "start") - def flush_job(self, job_id, body=None, params=None): - """ - - ``_ - - :arg job_id: The name of the job to flush - :arg body: Flush parameters - :arg advance_time: Advances time to the given value generating results - and updating the model for the advanced interval - :arg calc_interim: Calculates interim results for the most recent bucket - or all buckets within the latency period - :arg end: When used in conjunction with calc_interim, specifies the - range of buckets on which to calculate interim results - :arg skip_time: Skips time to the given value without generating results - or updating the model for the skipped interval - :arg start: When used in conjunction with calc_interim, specifies the - range of buckets on which to calculate interim results - """ - if job_id in SKIP_IN_PATH: - raise ValueError("Empty value passed for a required argument 'job_id'.") - return self.transport.perform_request( - "POST", - _make_path("_xpack", "ml", "anomaly_detectors", job_id, "_flush"), - params=params, - body=body, - ) - - @query_params("force", "timeout") - def close_job(self, job_id, params=None): - """ - 
-        ``_
-
-        :arg job_id: The name of the job to close
-        :arg force: True if the job should be forcefully closed
-        :arg timeout: Controls the time to wait until a job has closed. Default
-            to 30 minutes
-        """
-        if job_id in SKIP_IN_PATH:
-            raise ValueError("Empty value passed for a required argument 'job_id'.")
-        return self.transport.perform_request(
-            "POST",
-            _make_path("_xpack", "ml", "anomaly_detectors", job_id, "_close"),
-            params=params,
-        )
-
-    @query_params()
-    def open_job(self, job_id, params=None):
-        """
-
-        ``_
-
-        :arg job_id: The ID of the job to open
-        """
-        if job_id in SKIP_IN_PATH:
-            raise ValueError("Empty value passed for a required argument 'job_id'.")
-        return self.transport.perform_request(
-            "POST",
-            _make_path("_xpack", "ml", "anomaly_detectors", job_id, "_open"),
-            params=params,
-        )
-
-    @query_params("force")
-    def delete_job(self, job_id, params=None):
-        """
-
-        ``_
-
-        :arg job_id: The ID of the job to delete
-        :arg force: True if the job should be forcefully deleted
-        """
-        if job_id in SKIP_IN_PATH:
-            raise ValueError("Empty value passed for a required argument 'job_id'.")
-        return self.transport.perform_request(
-            "DELETE",
-            _make_path("_xpack", "ml", "anomaly_detectors", job_id),
-            params=params,
-        )
-
-    @query_params("duration", "expires_in")
-    def forecast_job(self, job_id, params=None):
-        """
-
-        ``_
-
-        :arg job_id: The name of the job to close
-        :arg duration: A period of time that indicates how far into the future to forecast
-        :arg expires_in: The period of time that forecast results are retained.
-        """
-        if job_id in SKIP_IN_PATH:
-            raise ValueError("Empty value passed for a required argument 'job_id'.")
-        return self.transport.perform_request(
-            "POST",
-            _make_path("_xpack", "ml", "anomaly_detectors", job_id, "_forecast"),
-            params=params,
-        )
-
-    @query_params()
-    def update_model_snapshot(self, job_id, snapshot_id, body, params=None):
-        """
-
-        ``_
-
-        :arg job_id: The ID of the job to fetch
-        :arg snapshot_id: The ID of the snapshot to update
-        :arg body: The model snapshot properties to update
-        """
-        for param in (job_id, snapshot_id, body):
-            if param in SKIP_IN_PATH:
-                raise ValueError("Empty value passed for a required argument.")
-        return self.transport.perform_request(
-            "POST",
-            _make_path(
-                "_xpack",
-                "ml",
-                "anomaly_detectors",
-                job_id,
-                "model_snapshots",
-                snapshot_id,
-                "_update",
-            ),
-            params=params,
-            body=body,
-        )
-
-    @query_params()
-    def delete_filter(self, filter_id, params=None):
-        """
-
-        :arg filter_id: The ID of the filter to delete
-        """
-        if filter_id in SKIP_IN_PATH:
-            raise ValueError("Empty value passed for a required argument 'filter_id'.")
-        return self.transport.perform_request(
-            "DELETE", _make_path("_xpack", "ml", "filters", filter_id), params=params
-        )
-
-    @query_params()
-    def validate(self, body, params=None):
-        """
-
-        :arg body: The job config
-        """
-        if body in SKIP_IN_PATH:
-            raise ValueError("Empty value passed for a required argument 'body'.")
-        return self.transport.perform_request(
-            "POST", "/_xpack/ml/anomaly_detectors/_validate", params=params, body=body
-        )
-
-    @query_params("from_", "size")
-    def get_categories(self, job_id, category_id=None, body=None, params=None):
-        """
-
-        ``_
-
-        :arg job_id: The name of the job
-        :arg category_id: The identifier of the category definition of interest
-        :arg body: Category selection details if not provided in URI
-        :arg from_: skips a number of categories
-        :arg size: specifies a max number of categories to get
-        """
-        if job_id in SKIP_IN_PATH:
-            raise ValueError("Empty value passed for a required argument 'job_id'.")
-        return self.transport.perform_request(
-            "GET",
-            _make_path(
-                "_xpack",
-                "ml",
-                "anomaly_detectors",
-                job_id,
-                "results",
-                "categories",
-                category_id,
-            ),
-            params=params,
-            body=body,
-        )
-
-    @query_params(
-        "desc",
-        "end",
-        "exclude_interim",
-        "from_",
-        "influencer_score",
-        "size",
-        "sort",
-        "start",
-    )
-    def get_influencers(self, job_id, body=None, params=None):
-        """
-
-        ``_
-
-        :arg job_id: None
-        :arg body: Influencer selection criteria
-        :arg desc: whether the results should be sorted in decending order
-        :arg end: end timestamp for the requested influencers
-        :arg exclude_interim: Exclude interim results
-        :arg from_: skips a number of influencers
-        :arg influencer_score: influencer score threshold for the requested
-            influencers
-        :arg size: specifies a max number of influencers to get
-        :arg sort: sort field for the requested influencers
-        :arg start: start timestamp for the requested influencers
-        """
-        if job_id in SKIP_IN_PATH:
-            raise ValueError("Empty value passed for a required argument 'job_id'.")
-        return self.transport.perform_request(
-            "GET",
-            _make_path(
-                "_xpack", "ml", "anomaly_detectors", job_id, "results", "influencers"
-            ),
-            params=params,
-            body=body,
-        )
-
-    @query_params()
-    def put_datafeed(self, datafeed_id, body, params=None):
-        """
-
-        ``_
-
-        :arg datafeed_id: The ID of the datafeed to create
-        :arg body: The datafeed config
-        """
-        for param in (datafeed_id, body):
-            if param in SKIP_IN_PATH:
-                raise ValueError("Empty value passed for a required argument.")
-        return self.transport.perform_request(
-            "PUT",
-            _make_path("_xpack", "ml", "datafeeds", datafeed_id),
-            params=params,
-            body=body,
-        )
-
-    @query_params("force")
-    def delete_datafeed(self, datafeed_id, params=None):
-        """
-
-        ``_
-
-        :arg datafeed_id: The ID of the datafeed to delete
-        :arg force: True if the datafeed should be forcefully deleted
-        """
-        if datafeed_id in SKIP_IN_PATH:
-            raise ValueError(
-                "Empty value passed for a required argument 'datafeed_id'."
-            )
-        return self.transport.perform_request(
-            "DELETE",
-            _make_path("_xpack", "ml", "datafeeds", datafeed_id),
-            params=params,
-        )
-
-    @query_params()
-    def get_job_stats(self, job_id=None, params=None):
-        """
-
-        ``_
-
-        :arg job_id: The ID of the jobs stats to fetch
-        """
-        return self.transport.perform_request(
-            "GET",
-            _make_path("_xpack", "ml", "anomaly_detectors", job_id, "_stats"),
-            params=params,
-        )
-
-    @query_params("delete_intervening_results")
-    def revert_model_snapshot(self, job_id, snapshot_id, body=None, params=None):
-        """
-
-        ``_
-
-        :arg job_id: The ID of the job to fetch
-        :arg snapshot_id: The ID of the snapshot to revert to
-        :arg body: Reversion options
-        :arg delete_intervening_results: Should we reset the results back to the
-            time of the snapshot?
-        """
-        for param in (job_id, snapshot_id):
-            if param in SKIP_IN_PATH:
-                raise ValueError("Empty value passed for a required argument.")
-        return self.transport.perform_request(
-            "POST",
-            _make_path(
-                "_xpack",
-                "ml",
-                "anomaly_detectors",
-                job_id,
-                "model_snapshots",
-                snapshot_id,
-                "_revert",
-            ),
-            params=params,
-            body=body,
-        )
-
-    @query_params("desc", "end", "from_", "size", "sort", "start")
-    def get_model_snapshots(self, job_id, snapshot_id=None, body=None, params=None):
-        """
-
-        ``_
-
-        :arg job_id: The ID of the job to fetch
-        :arg snapshot_id: The ID of the snapshot to fetch
-        :arg body: Model snapshot selection criteria
-        :arg desc: True if the results should be sorted in descending order
-        :arg end: The filter 'end' query parameter
-        :arg from_: Skips a number of documents
-        :arg size: The default number of documents returned in queries as a
-            string.
-        :arg sort: Name of the field to sort on
-        :arg start: The filter 'start' query parameter
-        """
-        if job_id in SKIP_IN_PATH:
-            raise ValueError("Empty value passed for a required argument 'job_id'.")
-        return self.transport.perform_request(
-            "GET",
-            _make_path(
-                "_xpack",
-                "ml",
-                "anomaly_detectors",
-                job_id,
-                "model_snapshots",
-                snapshot_id,
-            ),
-            params=params,
-            body=body,
-        )
-
-    @query_params()
-    def delete_model_snapshot(self, job_id, snapshot_id, params=None):
-        """
-
-        ``_
-
-        :arg job_id: The ID of the job to fetch
-        :arg snapshot_id: The ID of the snapshot to delete
-        """
-        for param in (job_id, snapshot_id):
-            if param in SKIP_IN_PATH:
-                raise ValueError("Empty value passed for a required argument.")
-        return self.transport.perform_request(
-            "DELETE",
-            _make_path(
-                "_xpack",
-                "ml",
-                "anomaly_detectors",
-                job_id,
-                "model_snapshots",
-                snapshot_id,
-            ),
-            params=params,
-        )
diff --git a/infrastructure/sandbox/Data/lambda/elasticsearch/client/xpack/monitoring.py b/infrastructure/sandbox/Data/lambda/elasticsearch/client/xpack/monitoring.py
deleted file mode 100644
index fac9b5ab2..000000000
--- a/infrastructure/sandbox/Data/lambda/elasticsearch/client/xpack/monitoring.py
+++ /dev/null
@@ -1,47 +0,0 @@
-# Licensed to Elasticsearch B.V. under one or more contributor
-# license agreements. See the NOTICE file distributed with
-# this work for additional information regarding copyright
-# ownership. Elasticsearch B.V. licenses this file to you under
-# the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing,
-# software distributed under the License is distributed on an
-# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-# KIND, either express or implied. See the License for the
-# specific language governing permissions and limitations
-# under the License.
-
-from ..utils import (
-    NamespacedClient,
-    query_params,
-    _make_path,
-    SKIP_IN_PATH,
-)
-
-
-class MonitoringClient(NamespacedClient):
-    @query_params("interval", "system_api_version", "system_id")
-    def bulk(self, body, doc_type=None, params=None):
-        """
-        ``_
-
-        :arg body: The operation definition and data (action-data pairs),
-            separated by newlines
-        :arg doc_type: Default document type for items which don't provide one
-        :arg interval: Collection interval (e.g., '10s' or '10000ms') of the
-            payload
-        :arg system_api_version: API Version of the monitored system
-        :arg system_id: Identifier of the monitored system
-        """
-        if body in SKIP_IN_PATH:
-            raise ValueError("Empty value passed for a required argument 'body'.")
-        return self.transport.perform_request(
-            "POST",
-            _make_path("_xpack", "monitoring", doc_type, "_bulk"),
-            params=params,
-            body=self.client._bulk_body(body),
-        )
diff --git a/infrastructure/sandbox/Data/lambda/elasticsearch/client/xpack/security.py b/infrastructure/sandbox/Data/lambda/elasticsearch/client/xpack/security.py
deleted file mode 100644
index f95cc2e78..000000000
--- a/infrastructure/sandbox/Data/lambda/elasticsearch/client/xpack/security.py
+++ /dev/null
@@ -1,318 +0,0 @@
-# Licensed to Elasticsearch B.V. under one or more contributor
-# license agreements. See the NOTICE file distributed with
-# this work for additional information regarding copyright
-# ownership. Elasticsearch B.V. licenses this file to you under
-# the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing,
-# software distributed under the License is distributed on an
-# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-# KIND, either express or implied. See the License for the
-# specific language governing permissions and limitations
-# under the License.
-
-from ..utils import (
-    NamespacedClient,
-    query_params,
-    _make_path,
-    SKIP_IN_PATH,
-)
-
-
-class SecurityClient(NamespacedClient):
-    @query_params("refresh")
-    def delete_user(self, username, params=None):
-        """
-
-        ``_
-
-        :arg username: username
-        :arg refresh: If `true` (the default) then refresh the affected shards
-            to make this operation visible to search, if `wait_for` then wait
-            for a refresh to make this operation visible to search, if `false`
-            then do nothing with refreshes., valid choices are: 'true', 'false',
-            'wait_for'
-        """
-        if username in SKIP_IN_PATH:
-            raise ValueError("Empty value passed for a required argument 'username'.")
-        return self.transport.perform_request(
-            "DELETE", _make_path("_xpack", "security", "user", username), params=params
-        )
-
-    @query_params()
-    def get_user(self, username=None, params=None):
-        """
-
-        ``_
-
-        :arg username: A comma-separated list of usernames
-        """
-        return self.transport.perform_request(
-            "GET", _make_path("_xpack", "security", "user", username), params=params
-        )
-
-    @query_params("refresh")
-    def put_role(self, name, body, params=None):
-        """
-
-        ``_
-
-        :arg name: Role name
-        :arg body: The role to add
-        :arg refresh: If `true` (the default) then refresh the affected shards
-            to make this operation visible to search, if `wait_for` then wait
-            for a refresh to make this operation visible to search, if `false`
-            then do nothing with refreshes., valid choices are: 'true', 'false',
-            'wait_for'
-        """
-        for param in (name, body):
-            if param in SKIP_IN_PATH:
-                raise ValueError("Empty value passed for a required argument.")
-        return self.transport.perform_request(
-            "PUT",
-            _make_path("_xpack", "security", "role", name),
-            params=params,
-            body=body,
-        )
-
-    @query_params()
-    def authenticate(self, params=None):
-        """
-
-        ``_
-        """
-        return self.transport.perform_request(
-            "GET", "/_xpack/security/_authenticate", params=params
-        )
-
-    @query_params("refresh")
-    def put_user(self, username, body, params=None):
-        """
-
-        ``_
-
-        :arg username: The username of the User
-        :arg body: The user to add
-        :arg refresh: If `true` (the default) then refresh the affected shards
-            to make this operation visible to search, if `wait_for` then wait
-            for a refresh to make this operation visible to search, if `false`
-            then do nothing with refreshes., valid choices are: 'true', 'false',
-            'wait_for'
-        """
-        for param in (username, body):
-            if param in SKIP_IN_PATH:
-                raise ValueError("Empty value passed for a required argument.")
-        return self.transport.perform_request(
-            "PUT",
-            _make_path("_xpack", "security", "user", username),
-            params=params,
-            body=body,
-        )
-
-    @query_params("usernames")
-    def clear_cached_realms(self, realms, params=None):
-        """
-
-        ``_
-
-        :arg realms: Comma-separated list of realms to clear
-        :arg usernames: Comma-separated list of usernames to clear from the
-            cache
-        """
-        if realms in SKIP_IN_PATH:
-            raise ValueError("Empty value passed for a required argument 'realms'.")
-        return self.transport.perform_request(
-            "POST",
-            _make_path("_xpack", "security", "realm", realms, "_clear_cache"),
-            params=params,
-        )
-
-    @query_params("refresh")
-    def change_password(self, body, username=None, params=None):
-        """
-
-        ``_
-
-        :arg body: the new password for the user
-        :arg username: The username of the user to change the password for
-        :arg refresh: If `true` (the default) then refresh the affected shards
-            to make this operation visible to search, if `wait_for` then wait
-            for a refresh to make this operation visible to search, if `false`
-            then do nothing with refreshes., valid choices are: 'true', 'false',
-            'wait_for'
-        """
-        if body in SKIP_IN_PATH:
-            raise ValueError("Empty value passed for a required argument 'body'.")
-        return self.transport.perform_request(
-            "PUT",
-            _make_path("_xpack", "security", "user", username, "_password"),
-            params=params,
-            body=body,
-        )
-
-    @query_params()
-    def get_role(self, name=None, params=None):
-        """
-
-        ``_
-
-        :arg name: Role name
-        """
-        return self.transport.perform_request(
-            "GET", _make_path("_xpack", "security", "role", name), params=params
-        )
-
-    @query_params()
-    def clear_cached_roles(self, name, params=None):
-        """
-
-        ``_
-
-        :arg name: Role name
-        """
-        if name in SKIP_IN_PATH:
-            raise ValueError("Empty value passed for a required argument 'name'.")
-        return self.transport.perform_request(
-            "POST",
-            _make_path("_xpack", "security", "role", name, "_clear_cache"),
-            params=params,
-        )
-
-    @query_params("refresh")
-    def delete_role(self, name, params=None):
-        """
-
-        ``_
-
-        :arg name: Role name
-        :arg refresh: If `true` (the default) then refresh the affected shards
-            to make this operation visible to search, if `wait_for` then wait
-            for a refresh to make this operation visible to search, if `false`
-            then do nothing with refreshes., valid choices are: 'true', 'false',
-            'wait_for'
-        """
-        if name in SKIP_IN_PATH:
-            raise ValueError("Empty value passed for a required argument 'name'.")
-        return self.transport.perform_request(
-            "DELETE", _make_path("_xpack", "security", "role", name), params=params
-        )
-
-    @query_params("refresh")
-    def delete_role_mapping(self, name, params=None):
-        """
-        ``_
-
-        :arg name: Role-mapping name
-        :arg refresh: If `true` (the default) then refresh the affected shards
-            to make this operation visible to search, if `wait_for` then wait
-            for a refresh to make this operation visible to search, if `false`
-            then do nothing with refreshes., valid choices are: 'true', 'false',
-            'wait_for'
-        """
-        if name in SKIP_IN_PATH:
-            raise ValueError("Empty value passed for a required argument 'name'.")
-        return self.transport.perform_request(
-            "DELETE",
-            _make_path("_xpack", "security", "role_mapping", name),
-            params=params,
-        )
-
-    @query_params("refresh")
-    def disable_user(self, username=None, params=None):
-        """
-        ``_
-
-        :arg username: The username of the user to disable
-        :arg refresh: If `true` (the default) then refresh the affected shards
-            to make this operation visible to search, if `wait_for` then wait
-            for a refresh to make this operation visible to search, if `false`
-            then do nothing with refreshes., valid choices are: 'true', 'false',
-            'wait_for'
-        """
-        return self.transport.perform_request(
-            "PUT",
-            _make_path("_xpack", "security", "user", username, "_disable"),
-            params=params,
-        )
-
-    @query_params("refresh")
-    def enable_user(self, username=None, params=None):
-        """
-        ``_
-
-        :arg username: The username of the user to enable
-        :arg refresh: If `true` (the default) then refresh the affected shards
-            to make this operation visible to search, if `wait_for` then wait
-            for a refresh to make this operation visible to search, if `false`
-            then do nothing with refreshes., valid choices are: 'true', 'false',
-            'wait_for'
-        """
-        return self.transport.perform_request(
-            "PUT",
-            _make_path("_xpack", "security", "user", username, "_enable"),
-            params=params,
-        )
-
-    @query_params()
-    def get_role_mapping(self, name=None, params=None):
-        """
-        ``_
-
-        :arg name: Role-Mapping name
-        """
-        return self.transport.perform_request(
-            "GET", _make_path("_xpack", "security", "role_mapping", name), params=params
-        )
-
-    @query_params()
-    def get_token(self, body, params=None):
-        """
-        ``_
-
-        :arg body: The token request to get
-        """
-        if body in SKIP_IN_PATH:
-            raise ValueError("Empty value passed for a required argument 'body'.")
-        return self.transport.perform_request(
-            "POST", "/_xpack/security/oauth2/token", params=params, body=body
-        )
-
-    @query_params()
-    def invalidate_token(self, body, params=None):
-        """
-        ``_
-
-        :arg body: The token to invalidate
-        """
-        if body in SKIP_IN_PATH:
-            raise ValueError("Empty value passed for a required argument 'body'.")
-        return self.transport.perform_request(
-            "DELETE", "/_xpack/security/oauth2/token", params=params, body=body
-        )
-
-    @query_params("refresh")
-    def put_role_mapping(self, name, body, params=None):
-        """
-        ``_
-
-        :arg name: Role-mapping name
-        :arg body: The role to add
-        :arg refresh: If `true` (the default) then refresh the affected shards
-            to make this operation visible to search, if `wait_for` then wait
-            for a refresh to make this operation visible to search, if `false`
-            then do nothing with refreshes., valid choices are: 'true', 'false',
-            'wait_for'
-        """
-        for param in (name, body):
-            if param in SKIP_IN_PATH:
-                raise ValueError("Empty value passed for a required argument.")
-        return self.transport.perform_request(
-            "PUT",
-            _make_path("_xpack", "security", "role_mapping", name),
-            params=params,
-            body=body,
-        )
diff --git a/infrastructure/sandbox/Data/lambda/elasticsearch/client/xpack/watcher.py b/infrastructure/sandbox/Data/lambda/elasticsearch/client/xpack/watcher.py
deleted file mode 100644
index d529fc012..000000000
--- a/infrastructure/sandbox/Data/lambda/elasticsearch/client/xpack/watcher.py
+++ /dev/null
@@ -1,193 +0,0 @@
-# Licensed to Elasticsearch B.V. under one or more contributor
-# license agreements. See the NOTICE file distributed with
-# this work for additional information regarding copyright
-# ownership. Elasticsearch B.V. licenses this file to you under
-# the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing,
-# software distributed under the License is distributed on an
-# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-# KIND, either express or implied. See the License for the
-# specific language governing permissions and limitations
-# under the License.
-
-from ..utils import (
-    NamespacedClient,
-    query_params,
-    _make_path,
-    SKIP_IN_PATH,
-)
-
-
-class WatcherClient(NamespacedClient):
-    @query_params()
-    def stop(self, params=None):
-        """
-
-        ``_
-        """
-        return self.transport.perform_request(
-            "POST", "/_xpack/watcher/_stop", params=params
-        )
-
-    @query_params("master_timeout")
-    def ack_watch(self, watch_id, action_id=None, params=None):
-        """
-
-        ``_
-
-        :arg watch_id: Watch ID
-        :arg action_id: A comma-separated list of the action ids to be acked
-        :arg master_timeout: Explicit operation timeout for connection to master
-            node
-        """
-        if watch_id in SKIP_IN_PATH:
-            raise ValueError("Empty value passed for a required argument 'watch_id'.")
-        return self.transport.perform_request(
-            "PUT",
-            _make_path("_xpack", "watcher", "watch", watch_id, "_ack", action_id),
-            params=params,
-        )
-
-    @query_params("debug")
-    def execute_watch(self, id=None, body=None, params=None):
-        """
-
-        ``_
-
-        :arg id: Watch ID
-        :arg body: Execution control
-        :arg debug: indicates whether the watch should execute in debug mode
-        """
-        return self.transport.perform_request(
-            "PUT",
-            _make_path("_xpack", "watcher", "watch", id, "_execute"),
-            params=params,
-            body=body,
-        )
-
-    @query_params()
-    def start(self, params=None):
-        """
-
-        ``_
-        """
-        return self.transport.perform_request(
-            "POST", "/_xpack/watcher/_start", params=params
-        )
-
-    @query_params("master_timeout")
-    def activate_watch(self, watch_id, params=None):
-        """
-
-        ``_
-
-        :arg watch_id: Watch ID
-        :arg master_timeout: Explicit operation timeout for connection to master
-            node
-        """
-        if watch_id in SKIP_IN_PATH:
-            raise ValueError("Empty value passed for a required argument 'watch_id'.")
-        return self.transport.perform_request(
-            "PUT",
-            _make_path("_xpack", "watcher", "watch", watch_id, "_activate"),
-            params=params,
-        )
-
-    @query_params("master_timeout")
-    def deactivate_watch(self, watch_id, params=None):
-        """
-
-        ``_
-
-        :arg watch_id: Watch ID
-        :arg master_timeout: Explicit operation timeout for connection to master
-            node
-        """
-        if watch_id in SKIP_IN_PATH:
-            raise ValueError("Empty value passed for a required argument 'watch_id'.")
-        return self.transport.perform_request(
-            "PUT",
-            _make_path("_xpack", "watcher", "watch", watch_id, "_deactivate"),
-            params=params,
-        )
-
-    @query_params("active", "master_timeout")
-    def put_watch(self, id, body, params=None):
-        """
-
-        ``_
-
-        :arg id: Watch ID
-        :arg body: The watch
-        :arg active: Specify whether the watch is in/active by default
-        :arg master_timeout: Explicit operation timeout for connection to master
-            node
-        """
-        for param in (id, body):
-            if param in SKIP_IN_PATH:
-                raise ValueError("Empty value passed for a required argument.")
-        return self.transport.perform_request(
-            "PUT",
-            _make_path("_xpack", "watcher", "watch", id),
-            params=params,
-            body=body,
-        )
-
-    @query_params("master_timeout")
-    def delete_watch(self, id, params=None):
-        """
-
-        ``_
-
-        :arg id: Watch ID
-        :arg master_timeout: Explicit operation timeout for connection to master
-            node
-        """
-        if id in SKIP_IN_PATH:
-            raise ValueError("Empty value passed for a required argument 'id'.")
-        return self.transport.perform_request(
-            "DELETE", _make_path("_xpack", "watcher", "watch", id), params=params
-        )
-
-    @query_params()
-    def get_watch(self, id, params=None):
-        """
-
-        ``_
-
-        :arg id: Watch ID
-        """
-        if id in SKIP_IN_PATH:
-            raise ValueError("Empty value passed for a required argument 'id'.")
-        return self.transport.perform_request(
-            "GET", _make_path("_xpack", "watcher", "watch", id), params=params
-        )
-
-    @query_params("emit_stacktraces")
-    def stats(self, metric=None, params=None):
-        """
-
-        ``_
-
-        :arg metric: Controls what additional stat metrics should be include in
-            the response
-        :arg emit_stacktraces: Emits stack traces of currently running watches
-        """
-        return self.transport.perform_request(
-            "GET", _make_path("_xpack", "watcher", "stats", metric), params=params
-        )
-
-    @query_params()
-    def restart(self, params=None):
-        """
-
-        ``_
-        """
-        return self.transport.perform_request(
-            "POST", "/_xpack/watcher/_restart", params=params
-        )
diff --git a/infrastructure/sandbox/Data/lambda/elasticsearch/compat.py b/infrastructure/sandbox/Data/lambda/elasticsearch/compat.py
deleted file mode 100644
index fe4f6bad1..000000000
--- a/infrastructure/sandbox/Data/lambda/elasticsearch/compat.py
+++ /dev/null
@@ -1,44 +0,0 @@
-# Licensed to Elasticsearch B.V. under one or more contributor
-# license agreements. See the NOTICE file distributed with
-# this work for additional information regarding copyright
-# ownership. Elasticsearch B.V. licenses this file to you under
-# the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing,
-# software distributed under the License is distributed on an
-# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-# KIND, either express or implied. See the License for the
-# specific language governing permissions and limitations
-# under the License.
-
-import sys
-
-PY2 = sys.version_info[0] == 2
-
-if PY2:
-    string_types = (basestring,)  # noqa: F821
-    from urllib import quote, quote_plus, urlencode, unquote
-    from urlparse import urlparse
-    from itertools import imap as map
-    from Queue import Queue
-else:
-    string_types = str, bytes
-    from urllib.parse import quote, quote_plus, urlencode, urlparse, unquote
-
-    map = map
-    from queue import Queue
-
-__all__ = [
-    "string_types",
-    "quote",
-    "quote_plus",
-    "urlencode",
-    "unquote",
-    "urlparse",
-    "map",
-    "Queue",
-]
diff --git a/infrastructure/sandbox/Data/lambda/elasticsearch/connection/__init__.py b/infrastructure/sandbox/Data/lambda/elasticsearch/connection/__init__.py
deleted file mode 100644
index b85ea5570..000000000
--- a/infrastructure/sandbox/Data/lambda/elasticsearch/connection/__init__.py
+++ /dev/null
@@ -1,27 +0,0 @@
-# Licensed to Elasticsearch B.V. under one or more contributor
-# license agreements. See the NOTICE file distributed with
-# this work for additional information regarding copyright
-# ownership. Elasticsearch B.V. licenses this file to you under
-# the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing,
-# software distributed under the License is distributed on an
-# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-# KIND, either express or implied. See the License for the
-# specific language governing permissions and limitations
-# under the License.
-
-from .base import Connection
-from .http_requests import RequestsHttpConnection
-from .http_urllib3 import Urllib3HttpConnection, create_ssl_context
-
-__all__ = [
-    "Connection",
-    "RequestsHttpConnection",
-    "Urllib3HttpConnection",
-    "create_ssl_context",
-]
diff --git a/infrastructure/sandbox/Data/lambda/elasticsearch/connection/__pycache__/__init__.cpython-310.pyc b/infrastructure/sandbox/Data/lambda/elasticsearch/connection/__pycache__/__init__.cpython-310.pyc
deleted file mode 100644
index 380a63f05..000000000
Binary files a/infrastructure/sandbox/Data/lambda/elasticsearch/connection/__pycache__/__init__.cpython-310.pyc and /dev/null differ
diff --git a/infrastructure/sandbox/Data/lambda/elasticsearch/connection/__pycache__/base.cpython-310.pyc b/infrastructure/sandbox/Data/lambda/elasticsearch/connection/__pycache__/base.cpython-310.pyc
deleted file mode 100644
index 855b4d245..000000000
Binary files a/infrastructure/sandbox/Data/lambda/elasticsearch/connection/__pycache__/base.cpython-310.pyc and /dev/null differ
diff --git a/infrastructure/sandbox/Data/lambda/elasticsearch/connection/__pycache__/http_requests.cpython-310.pyc b/infrastructure/sandbox/Data/lambda/elasticsearch/connection/__pycache__/http_requests.cpython-310.pyc
deleted file mode 100644
index b1d611f19..000000000
Binary files a/infrastructure/sandbox/Data/lambda/elasticsearch/connection/__pycache__/http_requests.cpython-310.pyc and /dev/null differ
diff --git a/infrastructure/sandbox/Data/lambda/elasticsearch/connection/__pycache__/http_urllib3.cpython-310.pyc b/infrastructure/sandbox/Data/lambda/elasticsearch/connection/__pycache__/http_urllib3.cpython-310.pyc
deleted file mode 100644
index d424888d0..000000000
Binary files a/infrastructure/sandbox/Data/lambda/elasticsearch/connection/__pycache__/http_urllib3.cpython-310.pyc and /dev/null differ
diff --git a/infrastructure/sandbox/Data/lambda/elasticsearch/connection/__pycache__/pooling.cpython-310.pyc b/infrastructure/sandbox/Data/lambda/elasticsearch/connection/__pycache__/pooling.cpython-310.pyc
deleted file mode 100644
index e7c19fad4..000000000
Binary files a/infrastructure/sandbox/Data/lambda/elasticsearch/connection/__pycache__/pooling.cpython-310.pyc and /dev/null differ
diff --git a/infrastructure/sandbox/Data/lambda/elasticsearch/connection/base.py b/infrastructure/sandbox/Data/lambda/elasticsearch/connection/base.py
deleted file mode 100644
index a23eb9f87..000000000
--- a/infrastructure/sandbox/Data/lambda/elasticsearch/connection/base.py
+++ /dev/null
@@ -1,258 +0,0 @@
-# Licensed to Elasticsearch B.V. under one or more contributor
-# license agreements. See the NOTICE file distributed with
-# this work for additional information regarding copyright
-# ownership. Elasticsearch B.V. licenses this file to you under
-# the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing,
-# software distributed under the License is distributed on an
-# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-# KIND, either express or implied. See the License for the
-# specific language governing permissions and limitations
-# under the License.
-
-import logging
-import binascii
-import gzip
-import io
-from platform import python_version
-
-try:
-    import simplejson as json
-except ImportError:
-    import json
-
-from ..exceptions import TransportError, ImproperlyConfigured, HTTP_EXCEPTIONS
-from .. import __versionstr__
-
-logger = logging.getLogger("elasticsearch")
-
-# create the elasticsearch.trace logger, but only set propagate to False if the
-# logger hasn't already been configured
-_tracer_already_configured = "elasticsearch.trace" in logging.Logger.manager.loggerDict
-tracer = logging.getLogger("elasticsearch.trace")
-if not _tracer_already_configured:
-    tracer.propagate = False
-
-
-class Connection(object):
-    """
-    Class responsible for maintaining a connection to an Elasticsearch node. It
-    holds persistent connection pool to it and it's main interface
-    (`perform_request`) is thread-safe.
-
-    Also responsible for logging.
-
-    :arg host: hostname of the node (default: localhost)
-    :arg port: port to use (integer, default: 9200)
-    :arg use_ssl: use ssl for the connection if `True`
-    :arg url_prefix: optional url prefix for elasticsearch
-    :arg timeout: default timeout in seconds (float, default: 10)
-    :arg http_compress: Use gzip compression
-    :arg cloud_id: The Cloud ID from ElasticCloud. Convenient way to connect to cloud instances.
-    """
-
-    HTTP_CLIENT_META = None
-
-    def __init__(
-        self,
-        host="localhost",
-        port=None,
-        use_ssl=False,
-        url_prefix="",
-        timeout=10,
-        headers=None,
-        http_compress=None,
-        cloud_id=None,
-        meta_header=True,
-        **kwargs
-    ):
-
-        if cloud_id:
-            try:
-                _, cloud_id = cloud_id.split(":")
-                parent_dn, es_uuid = (
-                    binascii.a2b_base64(cloud_id.encode("utf-8"))
-                    .decode("utf-8")
-                    .split("$")[:2]
-                )
-                if ":" in parent_dn:
-                    parent_dn, _, parent_port = parent_dn.rpartition(":")
-                    if port is None and parent_port != "443":
-                        port = int(parent_port)
-            except (ValueError, IndexError):
-                raise ImproperlyConfigured("'cloud_id' is not properly formatted")
-
-            host = "%s.%s" % (es_uuid, parent_dn)
-            use_ssl = True
-            if http_compress is None:
-                http_compress = True
-
-        # If cloud_id isn't set and port is default then use 9200.
-        # Cloud should use '443' by default via the 'https' scheme.
- elif port is None: - port = 9200 - - # Work-around if the implementing class doesn't - # define the headers property before calling super().__init__() - if not hasattr(self, "headers"): - self.headers = {} - - headers = headers or {} - for key in headers: - self.headers[key.lower()] = headers[key] - - self.headers.setdefault("content-type", "application/json") - self.headers.setdefault("user-agent", self._get_default_user_agent()) - - if http_compress: - self.headers["accept-encoding"] = "gzip,deflate" - - scheme = kwargs.get("scheme", "http") - if use_ssl or scheme == "https": - scheme = "https" - use_ssl = True - self.use_ssl = use_ssl - self.http_compress = http_compress or False - - self.hostname = host - self.port = port - self.host = "%s://%s" % (scheme, host) - if self.port is not None: - self.host += ":%s" % self.port - if url_prefix: - url_prefix = "/" + url_prefix.strip("/") - self.url_prefix = url_prefix - self.timeout = timeout - - if not isinstance(meta_header, bool): - raise TypeError("meta_header must be of type bool") - self.meta_header = meta_header - - def __repr__(self): - return "<%s: %s>" % (self.__class__.__name__, self.host) - - def _gzip_compress(self, body): - buf = io.BytesIO() - with gzip.GzipFile(fileobj=buf, mode="wb") as f: - f.write(body) - return buf.getvalue() - - def _pretty_json(self, data): - # pretty JSON in tracer curl logs - try: - return json.dumps( - json.loads(data), sort_keys=True, indent=2, separators=(",", ": ") - ).replace("'", r"\u0027") - except (ValueError, TypeError): - # non-json data or a bulk request - return data - - def _log_trace(self, method, path, body, status_code, response, duration): - if not tracer.isEnabledFor(logging.INFO) or not tracer.handlers: - return - - # include pretty in trace curls - path = path.replace("?", "?pretty&", 1) if "?" 
in path else path + "?pretty" - if self.url_prefix: - path = path.replace(self.url_prefix, "", 1) - tracer.info( - "curl %s-X%s 'http://localhost:9200%s' -d '%s'", - "-H 'Content-Type: application/json' " if body else "", - method, - path, - self._pretty_json(body) if body else "", - ) - - if tracer.isEnabledFor(logging.DEBUG): - tracer.debug( - "#[%s] (%.3fs)\n#%s", - status_code, - duration, - self._pretty_json(response).replace("\n", "\n#") if response else "", - ) - - def log_request_success( - self, method, full_url, path, body, status_code, response, duration - ): - """ Log a successful API call. """ - # TODO: optionally pass in params instead of full_url and do urlencode only when needed - - # body has already been serialized to utf-8, deserialize it for logging - # TODO: find a better way to avoid (de)encoding the body back and forth - if body: - try: - body = body.decode("utf-8", "ignore") - except AttributeError: - pass - - logger.info( - "%s %s [status:%s request:%.3fs]", method, full_url, status_code, duration - ) - logger.debug("> %s", body) - logger.debug("< %s", response) - - self._log_trace(method, path, body, status_code, response, duration) - - def log_request_fail( - self, - method, - full_url, - path, - body, - duration, - status_code=None, - response=None, - exception=None, - ): - """ Log an unsuccessful API call. 
""" - # do not log 404s on HEAD requests - if method == "HEAD" and status_code == 404: - return - logger.warning( - "%s %s [status:%s request:%.3fs]", - method, - full_url, - status_code or "N/A", - duration, - exc_info=exception is not None, - ) - - # body has already been serialized to utf-8, deserialize it for logging - # TODO: find a better way to avoid (de)encoding the body back and forth - if body: - try: - body = body.decode("utf-8", "ignore") - except AttributeError: - pass - - logger.debug("> %s", body) - - self._log_trace(method, path, body, status_code, response, duration) - - if response is not None: - logger.debug("< %s", response) - - def _raise_error(self, status_code, raw_data): - """ Locate appropriate exception and raise it. """ - error_message = raw_data - additional_info = None - try: - if raw_data: - additional_info = json.loads(raw_data) - error_message = additional_info.get("error", error_message) - if isinstance(error_message, dict) and "type" in error_message: - error_message = error_message["type"] - except (ValueError, TypeError) as err: - logger.warning("Undecodable raw error response from server: %s", err) - - raise HTTP_EXCEPTIONS.get(status_code, TransportError)( - status_code, error_message, additional_info - ) - - def _get_default_user_agent(self): - return "elasticsearch-py/%s (Python %s)" % (__versionstr__, python_version()) diff --git a/infrastructure/sandbox/Data/lambda/elasticsearch/connection/http_requests.py b/infrastructure/sandbox/Data/lambda/elasticsearch/connection/http_requests.py deleted file mode 100644 index 8c17bffbf..000000000 --- a/infrastructure/sandbox/Data/lambda/elasticsearch/connection/http_requests.py +++ /dev/null @@ -1,208 +0,0 @@ -# Licensed to Elasticsearch B.V. under one or more contributor -# license agreements. See the NOTICE file distributed with -# this work for additional information regarding copyright -# ownership. Elasticsearch B.V. 
licenses this file to you under -# the Apache License, Version 2.0 (the "License"); you may -# not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, -# software distributed under the License is distributed on an -# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY -# KIND, either express or implied. See the License for the -# specific language governing permissions and limitations -# under the License. - -import time -import warnings - -from .base import Connection -from ..exceptions import ( - ConnectionError, - ImproperlyConfigured, - ConnectionTimeout, - SSLError, -) -from ..compat import urlencode, string_types -from ..utils import _client_meta_version - -try: - import requests - - REQUESTS_AVAILABLE = True - _REQUESTS_META_VERSION = _client_meta_version(requests.__version__) -except ImportError: - REQUESTS_AVAILABLE = False - _REQUESTS_META_VERSION = "" - - -class RequestsHttpConnection(Connection): - """ - Connection using the `requests` library. - - :arg http_auth: optional http auth information as either ':' separated - string or a tuple. Any value will be passed into requests as `auth`. - :arg use_ssl: use ssl for the connection if `True` - :arg verify_certs: whether to verify SSL certificates - :arg ca_certs: optional path to CA bundle. By default standard requests' - bundle will be used. - :arg client_cert: path to the file containing the private key and the - certificate, or cert only if using client_key - :arg client_key: path to the file containing the private key if using - separate cert and key files (client_cert will contain only the cert) - :arg headers: any custom http headers to be add to requests - :arg http_compress: Use gzip compression - :arg cloud_id: The Cloud ID from ElasticCloud. Convenient way to connect to cloud instances. - Other host connection params will be ignored. 
- """ - - HTTP_CLIENT_META = ("rq", _REQUESTS_META_VERSION) - - def __init__( - self, - host="localhost", - port=None, - http_auth=None, - use_ssl=False, - verify_certs=True, - ca_certs=None, - client_cert=None, - client_key=None, - headers=None, - http_compress=None, - cloud_id=None, - **kwargs - ): - if not REQUESTS_AVAILABLE: - raise ImproperlyConfigured( - "Please install requests to use RequestsHttpConnection." - ) - - # Initialize Session so .headers works before calling super().__init__(). - self.session = requests.Session() - for key in list(self.session.headers): - self.session.headers.pop(key) - - super(RequestsHttpConnection, self).__init__( - host=host, - port=port, - use_ssl=use_ssl, - headers=headers, - http_compress=http_compress, - cloud_id=cloud_id, - **kwargs - ) - - if not self.http_compress: - # Need to set this to 'None' otherwise Requests adds its own. - self.session.headers["accept-encoding"] = None - - if http_auth is not None: - if isinstance(http_auth, (tuple, list)): - http_auth = tuple(http_auth) - elif isinstance(http_auth, string_types): - http_auth = tuple(http_auth.split(":", 1)) - self.session.auth = http_auth - - self.base_url = "%s%s" % ( - self.host, - self.url_prefix, - ) - self.session.verify = verify_certs - if not client_key: - self.session.cert = client_cert - elif client_cert: - # cert is a tuple of (certfile, keyfile) - self.session.cert = (client_cert, client_key) - if ca_certs: - if not verify_certs: - raise ImproperlyConfigured( - "You cannot pass CA certificates when verify SSL is off." - ) - self.session.verify = ca_certs - - if self.use_ssl and not verify_certs: - warnings.warn( - "Connecting to %s using SSL with verify_certs=False is insecure." 
- % self.host - ) - - def perform_request( - self, method, url, params=None, body=None, timeout=None, ignore=(), headers=None - ): - url = self.base_url + url - headers = headers or {} - if params: - url = "%s?%s" % (url, urlencode(params)) - - orig_body = body - if self.http_compress and body: - body = self._gzip_compress(body) - headers["content-encoding"] = "gzip" - - start = time.time() - request = requests.Request(method=method, headers=headers, url=url, data=body) - prepared_request = self.session.prepare_request(request) - settings = self.session.merge_environment_settings( - prepared_request.url, {}, None, None, None - ) - send_kwargs = {"timeout": timeout or self.timeout} - send_kwargs.update(settings) - try: - response = self.session.send(prepared_request, **send_kwargs) - duration = time.time() - start - raw_data = response.content.decode("utf-8", "surrogatepass") - except Exception as e: - self.log_request_fail( - method, - url, - prepared_request.path_url, - body, - time.time() - start, - exception=e, - ) - if isinstance(e, requests.exceptions.SSLError): - raise SSLError("N/A", str(e), e) - if isinstance(e, requests.Timeout): - raise ConnectionTimeout("TIMEOUT", str(e), e) - raise ConnectionError("N/A", str(e), e) - - # raise errors based on http status codes, let the client handle those if needed - if ( - not (200 <= response.status_code < 300) - and response.status_code not in ignore - ): - self.log_request_fail( - method, - url, - response.request.path_url, - orig_body, - duration, - response.status_code, - raw_data, - ) - self._raise_error(response.status_code, raw_data) - - self.log_request_success( - method, - url, - response.request.path_url, - orig_body, - response.status_code, - raw_data, - duration, - ) - - return response.status_code, response.headers, raw_data - - @property - def headers(self): - return self.session.headers - - def close(self): - """ - Explicitly closes connections - """ - self.session.close() diff --git 
a/infrastructure/sandbox/Data/lambda/elasticsearch/connection/http_urllib3.py b/infrastructure/sandbox/Data/lambda/elasticsearch/connection/http_urllib3.py deleted file mode 100644 index b1b655367..000000000 --- a/infrastructure/sandbox/Data/lambda/elasticsearch/connection/http_urllib3.py +++ /dev/null @@ -1,264 +0,0 @@ -# Licensed to Elasticsearch B.V. under one or more contributor -# license agreements. See the NOTICE file distributed with -# this work for additional information regarding copyright -# ownership. Elasticsearch B.V. licenses this file to you under -# the Apache License, Version 2.0 (the "License"); you may -# not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, -# software distributed under the License is distributed on an -# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY -# KIND, either express or implied. See the License for the -# specific language governing permissions and limitations -# under the License. - -import time -import ssl -import urllib3 -from urllib3.exceptions import ReadTimeoutError, SSLError as UrllibSSLError -from urllib3.util.retry import Retry -import warnings - -from .base import Connection -from ..exceptions import ( - ConnectionError, - ImproperlyConfigured, - ConnectionTimeout, - SSLError, -) -from ..compat import urlencode -from ..utils import _client_meta_version - -# sentinel value for `verify_certs`. -# This is used to detect if a user is passing in a value for `verify_certs` -# so we can raise a warning if using SSL kwargs AND SSLContext. 
-VERIFY_CERTS_DEFAULT = None - -CA_CERTS = None - -try: - import certifi - - CA_CERTS = certifi.where() -except ImportError: - pass - - -def create_ssl_context(**kwargs): - """ - A helper function around creating an SSL context - - https://docs.python.org/3/library/ssl.html#context-creation - - Accepts kwargs in the same manner as `create_default_context`. - """ - ctx = ssl.create_default_context(**kwargs) - return ctx - - -class Urllib3HttpConnection(Connection): - """ - Default connection class using the `urllib3` library and the http protocol. - - :arg host: hostname of the node (default: localhost) - :arg port: port to use (integer, default: 9200) - :arg url_prefix: optional url prefix for elasticsearch - :arg timeout: default timeout in seconds (float, default: 10) - :arg http_auth: optional http auth information as either ':' separated - string or a tuple - :arg use_ssl: use ssl for the connection if `True` - :arg verify_certs: whether to verify SSL certificates - :arg ca_certs: optional path to CA bundle. - See https://urllib3.readthedocs.io/en/latest/security.html#using-certifi-with-urllib3 - for instructions how to get default set - :arg client_cert: path to the file containing the private key and the - certificate, or cert only if using client_key - :arg client_key: path to the file containing the private key if using - separate cert and key files (client_cert will contain only the cert) - :arg ssl_version: version of the SSL protocol to use. Choices are: - SSLv23 (default) SSLv2 SSLv3 TLSv1 (see ``PROTOCOL_*`` constants in the - ``ssl`` module for exact options for your environment). - :arg ssl_assert_hostname: use hostname verification if not `False` - :arg ssl_assert_fingerprint: verify the supplied certificate fingerprint if not `None` - :arg maxsize: the number of connections which will be kept open to this - host. See https://urllib3.readthedocs.io/en/1.4/pools.html#api for more - information. 
- :arg headers: any custom http headers to be add to requests - :arg http_compress: Use gzip compression - :arg cloud_id: The Cloud ID from ElasticCloud. Convenient way to connect to cloud instances. - Other host connection params will be ignored. - """ - - HTTP_CLIENT_META = ("ur", _client_meta_version(urllib3.__version__)) - - def __init__( - self, - host="localhost", - port=None, - http_auth=None, - use_ssl=False, - verify_certs=VERIFY_CERTS_DEFAULT, - ca_certs=None, - client_cert=None, - client_key=None, - ssl_version=None, - ssl_assert_hostname=None, - ssl_assert_fingerprint=None, - maxsize=10, - headers=None, - ssl_context=None, - http_compress=None, - cloud_id=None, - **kwargs - ): - # Initialize headers before calling super().__init__(). - self.headers = urllib3.make_headers(keep_alive=True) - - super(Urllib3HttpConnection, self).__init__( - host=host, - port=port, - use_ssl=use_ssl, - headers=headers, - http_compress=http_compress, - cloud_id=cloud_id, - **kwargs - ) - if http_auth is not None: - if isinstance(http_auth, (tuple, list)): - http_auth = ":".join(http_auth) - self.headers.update(urllib3.make_headers(basic_auth=http_auth)) - - pool_class = urllib3.HTTPConnectionPool - kw = {} - - # if providing an SSL context, raise error if any other SSL related flag is used - if ssl_context and ( - (verify_certs is not VERIFY_CERTS_DEFAULT) - or ca_certs - or client_cert - or client_key - or ssl_version - ): - warnings.warn( - "When using `ssl_context`, all other SSL related kwargs are ignored" - ) - - # if ssl_context provided use SSL by default - if ssl_context and self.use_ssl: - pool_class = urllib3.HTTPSConnectionPool - kw.update( - { - "assert_fingerprint": ssl_assert_fingerprint, - "ssl_context": ssl_context, - } - ) - - elif self.use_ssl: - pool_class = urllib3.HTTPSConnectionPool - kw.update( - { - "ssl_version": ssl_version, - "assert_hostname": ssl_assert_hostname, - "assert_fingerprint": ssl_assert_fingerprint, - } - ) - - # If `verify_certs` is 
sentinal value, default `verify_certs` to `True` - if verify_certs is VERIFY_CERTS_DEFAULT: - verify_certs = True - - ca_certs = CA_CERTS if ca_certs is None else ca_certs - if verify_certs: - if not ca_certs: - raise ImproperlyConfigured( - "Root certificates are missing for certificate " - "validation. Either pass them in using the ca_certs parameter or " - "install certifi to use it automatically." - ) - - kw.update( - { - "cert_reqs": "CERT_REQUIRED", - "ca_certs": ca_certs, - "cert_file": client_cert, - "key_file": client_key, - } - ) - else: - warnings.warn( - "Connecting to %s using SSL with verify_certs=False is insecure." - % self.host - ) - kw["cert_reqs"] = "CERT_NONE" - - self.pool = pool_class( - self.hostname, port=self.port, timeout=self.timeout, maxsize=maxsize, **kw - ) - - def perform_request( - self, method, url, params=None, body=None, timeout=None, ignore=(), headers=None - ): - url = self.url_prefix + url - if params: - url = "%s?%s" % (url, urlencode(params)) - - full_url = self.host + url - - start = time.time() - orig_body = body - try: - kw = {} - if timeout: - kw["timeout"] = timeout - - # in python2 we need to make sure the url and method are not - # unicode. Otherwise the body will be decoded into unicode too and - # that will fail (#133, #201). 
- if not isinstance(url, str): - url = url.encode("utf-8") - if not isinstance(method, str): - method = method.encode("utf-8") - - request_headers = self.headers.copy() - request_headers.update(headers or ()) - - if self.http_compress and body: - body = self._gzip_compress(body) - request_headers["content-encoding"] = "gzip" - - response = self.pool.urlopen( - method, url, body, retries=Retry(False), headers=request_headers, **kw - ) - duration = time.time() - start - raw_data = response.data.decode("utf-8", "surrogatepass") - except Exception as e: - self.log_request_fail( - method, full_url, url, orig_body, time.time() - start, exception=e - ) - if isinstance(e, UrllibSSLError): - raise SSLError("N/A", str(e), e) - if isinstance(e, ReadTimeoutError): - raise ConnectionTimeout("TIMEOUT", str(e), e) - raise ConnectionError("N/A", str(e), e) - - # raise errors based on http status codes, let the client handle those if needed - if not (200 <= response.status < 300) and response.status not in ignore: - self.log_request_fail( - method, full_url, url, orig_body, duration, response.status, raw_data - ) - self._raise_error(response.status, raw_data) - - self.log_request_success( - method, full_url, url, orig_body, response.status, raw_data, duration - ) - - return response.status, response.getheaders(), raw_data - - def close(self): - """ - Explicitly closes connection - """ - self.pool.close() diff --git a/infrastructure/sandbox/Data/lambda/elasticsearch/connection/pooling.py b/infrastructure/sandbox/Data/lambda/elasticsearch/connection/pooling.py deleted file mode 100644 index de2ccd35f..000000000 --- a/infrastructure/sandbox/Data/lambda/elasticsearch/connection/pooling.py +++ /dev/null @@ -1,50 +0,0 @@ -# Licensed to Elasticsearch B.V. under one or more contributor -# license agreements. See the NOTICE file distributed with -# this work for additional information regarding copyright -# ownership. Elasticsearch B.V. 
licenses this file to you under -# the Apache License, Version 2.0 (the "License"); you may -# not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, -# software distributed under the License is distributed on an -# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY -# KIND, either express or implied. See the License for the -# specific language governing permissions and limitations -# under the License. - -try: - import queue -except ImportError: - import Queue as queue -from .base import Connection - - -class PoolingConnection(Connection): - """ - Base connection class for connections that use libraries without thread - safety and no capacity for connection pooling. To use this just implement a - ``_make_connection`` method that constructs a new connection and returns - it. - """ - - def __init__(self, *args, **kwargs): - self._free_connections = queue.Queue() - super(PoolingConnection, self).__init__(*args, **kwargs) - - def _get_connection(self): - try: - return self._free_connections.get_nowait() - except queue.Empty: - return self._make_connection() - - def _release_connection(self, con): - self._free_connections.put(con) - - def close(self): - """ - Explicitly close connection - """ - pass diff --git a/infrastructure/sandbox/Data/lambda/elasticsearch/connection_pool.py b/infrastructure/sandbox/Data/lambda/elasticsearch/connection_pool.py deleted file mode 100644 index 3fd3c0a64..000000000 --- a/infrastructure/sandbox/Data/lambda/elasticsearch/connection_pool.py +++ /dev/null @@ -1,295 +0,0 @@ -# Licensed to Elasticsearch B.V. under one or more contributor -# license agreements. See the NOTICE file distributed with -# this work for additional information regarding copyright -# ownership. Elasticsearch B.V. 
licenses this file to you under -# the Apache License, Version 2.0 (the "License"); you may -# not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, -# software distributed under the License is distributed on an -# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY -# KIND, either express or implied. See the License for the -# specific language governing permissions and limitations -# under the License. - -import time -import random -import logging -import threading - -try: - from Queue import PriorityQueue, Empty -except ImportError: - from queue import PriorityQueue, Empty - -from .exceptions import ImproperlyConfigured - -logger = logging.getLogger("elasticsearch") - - -class ConnectionSelector(object): - """ - Simple class used to select a connection from a list of currently live - connection instances. In init time it is passed a dictionary containing all - the connections' options which it can then use during the selection - process. When the `select` method is called it is given a list of - *currently* live connections to choose from. - - The options dictionary is the one that has been passed to - :class:`~elasticsearch.Transport` as `hosts` param and the same that is - used to construct the Connection object itself. When the Connection was - created from information retrieved from the cluster via the sniffing - process it will be the dictionary returned by the `host_info_callback`. - - Example of where this would be useful is a zone-aware selector that would - only select connections from it's own zones and only fall back to other - connections where there would be none in it's zones. 
- """ - - def __init__(self, opts): - """ - :arg opts: dictionary of connection instances and their options - """ - self.connection_opts = opts - - def select(self, connections): - """ - Select a connection from the given list. - - :arg connections: list of live connections to choose from - """ - pass - - -class RandomSelector(ConnectionSelector): - """ - Select a connection at random - """ - - def select(self, connections): - return random.choice(connections) - - -class RoundRobinSelector(ConnectionSelector): - """ - Selector using round-robin. - """ - - def __init__(self, opts): - super(RoundRobinSelector, self).__init__(opts) - self.data = threading.local() - - def select(self, connections): - self.data.rr = getattr(self.data, "rr", -1) + 1 - self.data.rr %= len(connections) - return connections[self.data.rr] - - -class ConnectionPool(object): - """ - Container holding the :class:`~elasticsearch.Connection` instances, - managing the selection process (via a - :class:`~elasticsearch.ConnectionSelector`) and dead connections. - - It's only interactions are with the :class:`~elasticsearch.Transport` class - that drives all the actions within `ConnectionPool`. - - Initially connections are stored on the class as a list and, along with the - connection options, get passed to the `ConnectionSelector` instance for - future reference. - - Upon each request the `Transport` will ask for a `Connection` via the - `get_connection` method. If the connection fails (it's `perform_request` - raises a `ConnectionError`) it will be marked as dead (via `mark_dead`) and - put on a timeout (if it fails N times in a row the timeout is exponentially - longer - the formula is `default_timeout * 2 ** (fail_count - 1)`). When - the timeout is over the connection will be resurrected and returned to the - live pool. A connection that has been previously marked as dead and - succeeds will be marked as live (its fail count will be deleted). 
- """ - - def __init__( - self, - connections, - dead_timeout=60, - timeout_cutoff=5, - selector_class=RoundRobinSelector, - randomize_hosts=True, - **kwargs - ): - """ - :arg connections: list of tuples containing the - :class:`~elasticsearch.Connection` instance and it's options - :arg dead_timeout: number of seconds a connection should be retired for - after a failure, increases on consecutive failures - :arg timeout_cutoff: number of consecutive failures after which the - timeout doesn't increase - :arg selector_class: :class:`~elasticsearch.ConnectionSelector` - subclass to use if more than one connection is live - :arg randomize_hosts: shuffle the list of connections upon arrival to - avoid dog piling effect across processes - """ - if not connections: - raise ImproperlyConfigured( - "No defined connections, you need to " "specify at least one host." - ) - self.connection_opts = connections - self.connections = [c for (c, opts) in connections] - # remember original connection list for resurrect(force=True) - self.orig_connections = tuple(self.connections) - # PriorityQueue for thread safety and ease of timeout management - self.dead = PriorityQueue(len(self.connections)) - self.dead_count = {} - - if randomize_hosts: - # randomize the connection list to avoid all clients hitting same node - # after startup/restart - random.shuffle(self.connections) - - # default timeout after which to try resurrecting a connection - self.dead_timeout = dead_timeout - self.timeout_cutoff = timeout_cutoff - - self.selector = selector_class(dict(connections)) - - def mark_dead(self, connection, now=None): - """ - Mark the connection as dead (failed). Remove it from the live pool and - put it on a timeout. 
- - :arg connection: the failed instance - """ - # allow inject for testing purposes - now = now if now else time.time() - try: - self.connections.remove(connection) - except ValueError: - # connection not alive or another thread marked it already, ignore - return - else: - dead_count = self.dead_count.get(connection, 0) + 1 - self.dead_count[connection] = dead_count - timeout = self.dead_timeout * 2 ** min(dead_count - 1, self.timeout_cutoff) - self.dead.put((now + timeout, connection)) - logger.warning( - "Connection %r has failed for %i times in a row, putting on %i second timeout.", - connection, - dead_count, - timeout, - ) - - def mark_live(self, connection): - """ - Mark connection as healthy after a resurrection. Resets the fail - counter for the connection. - - :arg connection: the connection to redeem - """ - try: - del self.dead_count[connection] - except KeyError: - # race condition, safe to ignore - pass - - def resurrect(self, force=False): - """ - Attempt to resurrect a connection from the dead pool. It will try to - locate one (not all) eligible (it's timeout is over) connection to - return to the live pool. Any resurrected connection is also returned. - - :arg force: resurrect a connection even if there is none eligible (used - when we have no live connections). If force is specified resurrect - always returns a connection. - - """ - # no dead connections - if self.dead.empty(): - # we are forced to return a connection, take one from the original - # list. This is to avoid a race condition where get_connection can - # see no live connections but when it calls resurrect self.dead is - # also empty. We assume that other threat has resurrected all - # available connections so we can safely return one at random. - if force: - return random.choice(self.orig_connections) - return - - try: - # retrieve a connection to check - timeout, connection = self.dead.get(block=False) - except Empty: - # other thread has been faster and the queue is now empty. 
If we - # are forced, return a connection at random again. - if force: - return random.choice(self.orig_connections) - return - - if not force and timeout > time.time(): - # return it back if not eligible and not forced - self.dead.put((timeout, connection)) - return - - # either we were forced or the connection is elligible to be retried - self.connections.append(connection) - logger.info("Resurrecting connection %r (force=%s).", connection, force) - return connection - - def get_connection(self): - """ - Return a connection from the pool using the `ConnectionSelector` - instance. - - It tries to resurrect eligible connections, forces a resurrection when - no connections are availible and passes the list of live connections to - the selector instance to choose from. - - Returns a connection instance and it's current fail count. - """ - self.resurrect() - connections = self.connections[:] - - # no live nodes, resurrect one by force and return it - if not connections: - return self.resurrect(True) - - # only call selector if we have a selection - if len(connections) > 1: - return self.selector.select(connections) - - # only one connection, no need for a selector - return connections[0] - - def close(self): - """ - Explicitly closes connections - """ - for conn in self.orig_connections: - conn.close() - - -class DummyConnectionPool(ConnectionPool): - def __init__(self, connections, **kwargs): - if len(connections) != 1: - raise ImproperlyConfigured( - "DummyConnectionPool needs exactly one " "connection defined." 
- ) - # we need connection opts for sniffing logic - self.connection_opts = connections - self.connection = connections[0][0] - self.connections = (self.connection,) - - def get_connection(self): - return self.connection - - def close(self): - """ - Explicitly closes connections - """ - self.connection.close() - - def _noop(self, *args, **kwargs): - pass - - mark_dead = mark_live = resurrect = _noop diff --git a/infrastructure/sandbox/Data/lambda/elasticsearch/exceptions.py b/infrastructure/sandbox/Data/lambda/elasticsearch/exceptions.py deleted file mode 100644 index 49b0da622..000000000 --- a/infrastructure/sandbox/Data/lambda/elasticsearch/exceptions.py +++ /dev/null @@ -1,156 +0,0 @@ -# Licensed to Elasticsearch B.V. under one or more contributor -# license agreements. See the NOTICE file distributed with -# this work for additional information regarding copyright -# ownership. Elasticsearch B.V. licenses this file to you under -# the Apache License, Version 2.0 (the "License"); you may -# not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, -# software distributed under the License is distributed on an -# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY -# KIND, either express or implied. See the License for the -# specific language governing permissions and limitations -# under the License. - -__all__ = [ - "ImproperlyConfigured", - "ElasticsearchException", - "SerializationError", - "TransportError", - "NotFoundError", - "ConflictError", - "RequestError", - "ConnectionError", - "SSLError", - "ConnectionTimeout", - "AuthenticationException", - "AuthorizationException", -] - - -class ImproperlyConfigured(Exception): - """ - Exception raised when the config passed to the client is inconsistent or invalid. 
- """ - - -class ElasticsearchException(Exception): - """ - Base class for all exceptions raised by this package's operations (doesn't - apply to :class:`~elasticsearch.ImproperlyConfigured`). - """ - - -class SerializationError(ElasticsearchException): - """ - Data passed in failed to serialize properly in the ``Serializer`` being - used. - """ - - -class TransportError(ElasticsearchException): - """ - Exception raised when ES returns a non-OK (>=400) HTTP status code. Or when - an actual connection error happens; in that case the ``status_code`` will - be set to ``'N/A'``. - """ - - @property - def status_code(self): - """ - The HTTP status code of the response that precipitated the error or - ``'N/A'`` if not applicable. - """ - return self.args[0] - - @property - def error(self): - """ A string error message. """ - return self.args[1] - - @property - def info(self): - """ - Dict of returned error info from ES, where available, underlying - exception when not. - """ - return self.args[2] - - def __str__(self): - cause = "" - try: - if self.info and "error" in self.info: - if isinstance(self.info["error"], dict): - cause = ", %r" % self.info["error"]["root_cause"][0]["reason"] - else: - cause = ", %r" % self.info["error"] - except LookupError: - pass - return "%s(%s, %r%s)" % ( - self.__class__.__name__, - self.status_code, - self.error, - cause, - ) - - -class ConnectionError(TransportError): - """ - Error raised when there was an exception while talking to ES. Original - exception from the underlying :class:`~elasticsearch.Connection` - implementation is available as ``.info.`` - """ - - def __str__(self): - return "ConnectionError(%s) caused by: %s(%s)" % ( - self.error, - self.info.__class__.__name__, - self.info, - ) - - -class SSLError(ConnectionError): - """ Error raised when encountering SSL errors. """ - - -class ConnectionTimeout(ConnectionError): - """ A network timeout. Doesn't cause a node retry by default. 
""" - - def __str__(self): - return "ConnectionTimeout caused by - %s(%s)" % ( - self.info.__class__.__name__, - self.info, - ) - - -class NotFoundError(TransportError): - """ Exception representing a 404 status code. """ - - -class ConflictError(TransportError): - """ Exception representing a 409 status code. """ - - -class RequestError(TransportError): - """ Exception representing a 400 status code. """ - - -class AuthenticationException(TransportError): - """ Exception representing a 401 status code. """ - - -class AuthorizationException(TransportError): - """ Exception representing a 403 status code. """ - - -# more generic mappings from status_code to python exceptions -HTTP_EXCEPTIONS = { - 400: RequestError, - 401: AuthenticationException, - 403: AuthorizationException, - 404: NotFoundError, - 409: ConflictError, -} diff --git a/infrastructure/sandbox/Data/lambda/elasticsearch/helpers/__init__.py b/infrastructure/sandbox/Data/lambda/elasticsearch/helpers/__init__.py deleted file mode 100644 index 32fbbde32..000000000 --- a/infrastructure/sandbox/Data/lambda/elasticsearch/helpers/__init__.py +++ /dev/null @@ -1,34 +0,0 @@ -# Licensed to Elasticsearch B.V. under one or more contributor -# license agreements. See the NOTICE file distributed with -# this work for additional information regarding copyright -# ownership. Elasticsearch B.V. licenses this file to you under -# the Apache License, Version 2.0 (the "License"); you may -# not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, -# software distributed under the License is distributed on an -# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY -# KIND, either express or implied. See the License for the -# specific language governing permissions and limitations -# under the License. 
- -from .errors import BulkIndexError, ScanError -from .actions import expand_action, streaming_bulk, bulk, parallel_bulk -from .actions import scan, reindex -from .actions import _chunk_actions, _process_bulk_chunk - -__all__ = [ - "BulkIndexError", - "ScanError", - "expand_action", - "streaming_bulk", - "bulk", - "parallel_bulk", - "scan", - "reindex", - "_chunk_actions", - "_process_bulk_chunk", -] diff --git a/infrastructure/sandbox/Data/lambda/elasticsearch/helpers/__pycache__/__init__.cpython-310.pyc b/infrastructure/sandbox/Data/lambda/elasticsearch/helpers/__pycache__/__init__.cpython-310.pyc deleted file mode 100644 index 9c4fccf5f..000000000 Binary files a/infrastructure/sandbox/Data/lambda/elasticsearch/helpers/__pycache__/__init__.cpython-310.pyc and /dev/null differ diff --git a/infrastructure/sandbox/Data/lambda/elasticsearch/helpers/__pycache__/actions.cpython-310.pyc b/infrastructure/sandbox/Data/lambda/elasticsearch/helpers/__pycache__/actions.cpython-310.pyc deleted file mode 100644 index b96842cae..000000000 Binary files a/infrastructure/sandbox/Data/lambda/elasticsearch/helpers/__pycache__/actions.cpython-310.pyc and /dev/null differ diff --git a/infrastructure/sandbox/Data/lambda/elasticsearch/helpers/__pycache__/errors.cpython-310.pyc b/infrastructure/sandbox/Data/lambda/elasticsearch/helpers/__pycache__/errors.cpython-310.pyc deleted file mode 100644 index 01aef7401..000000000 Binary files a/infrastructure/sandbox/Data/lambda/elasticsearch/helpers/__pycache__/errors.cpython-310.pyc and /dev/null differ diff --git a/infrastructure/sandbox/Data/lambda/elasticsearch/helpers/__pycache__/test.cpython-310.pyc b/infrastructure/sandbox/Data/lambda/elasticsearch/helpers/__pycache__/test.cpython-310.pyc deleted file mode 100644 index 52d2405cb..000000000 Binary files a/infrastructure/sandbox/Data/lambda/elasticsearch/helpers/__pycache__/test.cpython-310.pyc and /dev/null differ diff --git 
a/infrastructure/sandbox/Data/lambda/elasticsearch/helpers/actions.py b/infrastructure/sandbox/Data/lambda/elasticsearch/helpers/actions.py deleted file mode 100644 index 6b0efd3b9..000000000 --- a/infrastructure/sandbox/Data/lambda/elasticsearch/helpers/actions.py +++ /dev/null @@ -1,565 +0,0 @@ -# Licensed to Elasticsearch B.V. under one or more contributor -# license agreements. See the NOTICE file distributed with -# this work for additional information regarding copyright -# ownership. Elasticsearch B.V. licenses this file to you under -# the Apache License, Version 2.0 (the "License"); you may -# not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, -# software distributed under the License is distributed on an -# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY -# KIND, either express or implied. See the License for the -# specific language governing permissions and limitations -# under the License. - -from operator import methodcaller -import time - -from ..exceptions import TransportError -from ..compat import map, string_types, Queue - -from .errors import ScanError, BulkIndexError - -import logging - - -logger = logging.getLogger("elasticsearch.helpers") - - -def expand_action(data): - """ - From one document or action definition passed in by the user extract the - action/data lines needed for elasticsearch's - :meth:`~elasticsearch.Elasticsearch.bulk` api. 
- """ - # when given a string, assume user wants to index raw json - if isinstance(data, string_types): - return '{"index":{}}', data - - # make sure we don't alter the action - data = data.copy() - op_type = data.pop("_op_type", "index") - action = {op_type: {}} - for key in ( - "_index", - "_parent", - "_percolate", - "_routing", - "_timestamp", - "routing", - "_type", - "_version", - "_version_type", - "_id", - "retry_on_conflict", - "pipeline", - ): - if key in data: - action[op_type][key] = data.pop(key) - - # no data payload for delete - if op_type == "delete": - return action, None - - return action, data.get("_source", data) - - -def _chunk_actions(actions, chunk_size, max_chunk_bytes, serializer): - """ - Split actions into chunks by number or size, serialize them into strings in - the process. - """ - bulk_actions, bulk_data = [], [] - size, action_count = 0, 0 - for action, data in actions: - raw_data, raw_action = data, action - action = serializer.dumps(action) - # +1 to account for the trailing new line character - cur_size = len(action.encode("utf-8")) + 1 - - if data is not None: - data = serializer.dumps(data) - cur_size += len(data.encode("utf-8")) + 1 - - # full chunk, send it and start a new one - if bulk_actions and ( - size + cur_size > max_chunk_bytes or action_count == chunk_size - ): - yield bulk_data, bulk_actions - bulk_actions, bulk_data = [], [] - size, action_count = 0, 0 - - bulk_actions.append(action) - if data is not None: - bulk_actions.append(data) - bulk_data.append((raw_action, raw_data)) - else: - bulk_data.append((raw_action,)) - - size += cur_size - action_count += 1 - - if bulk_actions: - yield bulk_data, bulk_actions - - -def _process_bulk_chunk( - client, - bulk_actions, - bulk_data, - raise_on_exception=True, - raise_on_error=True, - *args, - **kwargs -): - """ - Send a bulk request to elasticsearch and process the output. 
- """ - # if raise on error is set, we need to collect errors per chunk before raising them - errors = [] - - try: - # send the actual request - resp = client.bulk("\n".join(bulk_actions) + "\n", *args, **kwargs) - except TransportError as e: - # default behavior - just propagate exception - if raise_on_exception: - raise e - - # if we are not propagating, mark all actions in current chunk as failed - err_message = str(e) - exc_errors = [] - - for data in bulk_data: - # collect all the information about failed actions - op_type, action = data[0].copy().popitem() - info = {"error": err_message, "status": e.status_code, "exception": e} - if op_type != "delete": - info["data"] = data[1] - info.update(action) - exc_errors.append({op_type: info}) - - # emulate standard behavior for failed actions - if raise_on_error: - raise BulkIndexError( - "%i document(s) failed to index." % len(exc_errors), exc_errors - ) - else: - for err in exc_errors: - yield False, err - return - - # go through request-response pairs and detect failures - for data, (op_type, item) in zip( - bulk_data, map(methodcaller("popitem"), resp["items"]) - ): - ok = 200 <= item.get("status", 500) < 300 - if not ok and raise_on_error: - # include original document source - if len(data) > 1: - item["data"] = data[1] - errors.append({op_type: item}) - - if ok or not errors: - # if we are not just recording all errors to be able to raise - # them all at once, yield items individually - yield ok, {op_type: item} - - if errors: - raise BulkIndexError("%i document(s) failed to index." % len(errors), errors) - - -def streaming_bulk( - client, - actions, - chunk_size=500, - max_chunk_bytes=100 * 1024 * 1024, - raise_on_error=True, - expand_action_callback=expand_action, - raise_on_exception=True, - max_retries=0, - initial_backoff=2, - max_backoff=600, - yield_ok=True, - *args, - **kwargs -): - - """ - Streaming bulk consumes actions from the iterable passed in and yields - results per action. 
For non-streaming usecases use - :func:`~elasticsearch.helpers.bulk` which is a wrapper around streaming - bulk that returns summary information about the bulk operation once the - entire input is consumed and sent. - - If you specify ``max_retries`` it will also retry any documents that were - rejected with a ``429`` status code. To do this it will wait (**by calling - time.sleep which will block**) for ``initial_backoff`` seconds and then, - every subsequent rejection for the same chunk, for double the time every - time up to ``max_backoff`` seconds. - - :arg client: instance of :class:`~elasticsearch.Elasticsearch` to use - :arg actions: iterable containing the actions to be executed - :arg chunk_size: number of docs in one chunk sent to es (default: 500) - :arg max_chunk_bytes: the maximum size of the request in bytes (default: 100MB) - :arg raise_on_error: raise ``BulkIndexError`` containing errors (as `.errors`) - from the execution of the last chunk when some occur. By default we raise. - :arg raise_on_exception: if ``False`` then don't propagate exceptions from - call to ``bulk`` and just report the items that failed as failed. - :arg expand_action_callback: callback executed on each action passed in, - should return a tuple containing the action line and the data line - (`None` if data line should be omitted). - :arg max_retries: maximum number of times a document will be retried when - ``429`` is received, set to 0 (default) for no retries on ``429`` - :arg initial_backoff: number of seconds we should wait before the first - retry. 
Any subsequent retries will be powers of ``initial_backoff * - 2**retry_number`` - :arg max_backoff: maximum number of seconds a retry will wait - :arg yield_ok: if set to False will skip successful documents in the output - """ - actions = map(expand_action_callback, actions) - - for bulk_data, bulk_actions in _chunk_actions( - actions, chunk_size, max_chunk_bytes, client.transport.serializer - ): - - for attempt in range(max_retries + 1): - to_retry, to_retry_data = [], [] - if attempt: - time.sleep(min(max_backoff, initial_backoff * 2 ** (attempt - 1))) - - try: - for data, (ok, info) in zip( - bulk_data, - _process_bulk_chunk( - client, - bulk_actions, - bulk_data, - raise_on_exception, - raise_on_error, - *args, - **kwargs - ), - ): - - if not ok: - action, info = info.popitem() - # retry if retries enabled, we get 429, and we are not - # in the last attempt - if ( - max_retries - and info["status"] == 429 - and (attempt + 1) <= max_retries - ): - # _process_bulk_chunk expects strings so we need to - # re-serialize the data - to_retry.extend( - map(client.transport.serializer.dumps, data) - ) - to_retry_data.append(data) - else: - yield ok, {action: info} - elif yield_ok: - yield ok, info - - except TransportError as e: - # suppress 429 errors since we will retry them - if attempt == max_retries or e.status_code != 429: - raise - else: - if not to_retry: - break - # retry only subset of documents that didn't succeed - bulk_actions, bulk_data = to_retry, to_retry_data - - -def bulk(client, actions, stats_only=False, *args, **kwargs): - """ - Helper for the :meth:`~elasticsearch.Elasticsearch.bulk` api that provides - a more human friendly interface - it consumes an iterator of actions and - sends them to elasticsearch in chunks. It returns a tuple with summary - information - number of successfully executed actions and either list of - errors or number of errors if ``stats_only`` is set to ``True``. 
Note that - by default we raise a ``BulkIndexError`` when we encounter an error so - options like ``stats_only`` only apply when ``raise_on_error`` is set to - ``False``. - - When errors are being collected, original document data is included in the - error dictionary which can lead to an extra high memory usage. If you need - to process a lot of data and want to ignore/collect errors please consider - using the :func:`~elasticsearch.helpers.streaming_bulk` helper which will - just return the errors and not store them in memory. - - - :arg client: instance of :class:`~elasticsearch.Elasticsearch` to use - :arg actions: iterator containing the actions - :arg stats_only: if `True` only report number of successful/failed - operations instead of just number of successful and a list of error responses - - Any additional keyword arguments will be passed to - :func:`~elasticsearch.helpers.streaming_bulk` which is used to execute - the operation, see :func:`~elasticsearch.helpers.streaming_bulk` for more - accepted parameters. - """ - success, failed = 0, 0 - - # list of errors to be collected if not stats_only - errors = [] - - # make streaming_bulk yield successful results so we can count them - kwargs["yield_ok"] = True - for ok, item in streaming_bulk(client, actions, *args, **kwargs): - # go through request-response pairs and detect failures - if not ok: - if not stats_only: - errors.append(item) - failed += 1 - else: - success += 1 - - return success, failed if stats_only else errors - - -def parallel_bulk( - client, - actions, - thread_count=4, - chunk_size=500, - max_chunk_bytes=100 * 1024 * 1024, - queue_size=4, - expand_action_callback=expand_action, - *args, - **kwargs -): - """ - Parallel version of the bulk helper run in multiple threads at once.
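The removed `bulk()` helper above is essentially a reducer over the `(ok, item)` pairs that `streaming_bulk` yields. A self-contained sketch of that summary logic, with a hypothetical `summarize()` name and no Elasticsearch client involved:

```python
# Sketch of the summary step in the removed bulk() helper: reduce the
# (ok, item) pairs yielded by streaming_bulk into either (success, failed)
# counts or (success, list-of-error-items). summarize() is a made-up name
# for illustration; it takes any iterable of such pairs.
def summarize(results, stats_only=False):
    success, failed = 0, 0
    errors = []  # only populated when stats_only is False
    for ok, item in results:
        if not ok:
            failed += 1
            if not stats_only:
                errors.append(item)
        else:
            success += 1
    return success, failed if stats_only else errors
```

This mirrors why `stats_only` matters in the original docstring: with it off, every failed item's response (and potentially its source document) is held in memory until the end.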
- - :arg client: instance of :class:`~elasticsearch.Elasticsearch` to use - :arg actions: iterator containing the actions - :arg thread_count: size of the threadpool to use for the bulk requests - :arg chunk_size: number of docs in one chunk sent to es (default: 500) - :arg max_chunk_bytes: the maximum size of the request in bytes (default: 100MB) - :arg raise_on_error: raise ``BulkIndexError`` containing errors (as `.errors`) - from the execution of the last chunk when some occur. By default we raise. - :arg raise_on_exception: if ``False`` then don't propagate exceptions from - call to ``bulk`` and just report the items that failed as failed. - :arg expand_action_callback: callback executed on each action passed in, - should return a tuple containing the action line and the data line - (`None` if data line should be omitted). - :arg queue_size: size of the task queue between the main thread (producing - chunks to send) and the processing threads. - """ - # Avoid importing multiprocessing unless parallel_bulk is used - # to avoid exceptions on restricted environments like App Engine - from multiprocessing.pool import ThreadPool - - actions = map(expand_action_callback, actions) - - class BlockingPool(ThreadPool): - def _setup_queues(self): - super(BlockingPool, self)._setup_queues() - self._inqueue = Queue(queue_size) - self._quick_put = self._inqueue.put - - pool = BlockingPool(thread_count) - - try: - for result in pool.imap( - lambda bulk_chunk: list( - _process_bulk_chunk( - client, bulk_chunk[1], bulk_chunk[0], *args, **kwargs - ) - ), - _chunk_actions( - actions, chunk_size, max_chunk_bytes, client.transport.serializer - ), - ): - for item in result: - yield item - - finally: - pool.close() - pool.join() - - -def scan( - client, - query=None, - scroll="5m", - raise_on_error=True, - preserve_order=False, - size=1000, - request_timeout=None, - clear_scroll=True, - scroll_kwargs=None, - **kwargs -): - """ - Simple abstraction on top of the - 
:meth:`~elasticsearch.Elasticsearch.scroll` api - a simple iterator that - yields all hits as returned by underlying scroll requests. - - By default scan does not return results in any pre-determined order. To - have a standard order in the returned documents (either by score or - explicit sort definition) when scrolling, use ``preserve_order=True``. This - may be an expensive operation and will negate the performance benefits of - using ``scan``. - - :arg client: instance of :class:`~elasticsearch.Elasticsearch` to use - :arg query: body for the :meth:`~elasticsearch.Elasticsearch.search` api - :arg scroll: Specify how long a consistent view of the index should be - maintained for scrolled search - :arg raise_on_error: raises an exception (``ScanError``) if an error is - encountered (some shards fail to execute). By default we raise. - :arg preserve_order: don't set the ``search_type`` to ``scan`` - this will - cause the scroll to paginate with preserving the order. Note that this - can be an extremely expensive operation and can easily lead to - unpredictable results, use with caution. - :arg size: size (per shard) of the batch sent at each iteration. - :arg request_timeout: explicit timeout for each call to ``scan`` - :arg clear_scroll: explicitly calls delete on the scroll id via the clear - scroll API at the end of the method on completion or error, defaults - to true.
- :arg scroll_kwargs: additional kwargs to be passed to - :meth:`~elasticsearch.Elasticsearch.scroll` - - Any additional keyword arguments will be passed to the initial - :meth:`~elasticsearch.Elasticsearch.search` call:: - - scan(es, - query={"query": {"match": {"title": "python"}}}, - index="orders-*", - doc_type="books" - ) - - """ - scroll_kwargs = scroll_kwargs or {} - _add_helper_meta_to_kwargs(scroll_kwargs, "s") - - if not preserve_order: - query = query.copy() if query else {} - query["sort"] = "_doc" - # initial search - resp = client.search( - body=query, scroll=scroll, size=size, request_timeout=request_timeout, **kwargs - ) - - scroll_id = resp.get("_scroll_id") - if scroll_id is None: - return - - try: - first_run = True - while True: - # if we didn't set search_type to scan initial search contains data - if first_run: - first_run = False - else: - resp = client.scroll( - scroll_id=scroll_id, - scroll=scroll, - request_timeout=request_timeout, - **scroll_kwargs - ) - - for hit in resp["hits"]["hits"]: - yield hit - - # check if we have any errors - if resp["_shards"]["successful"] < resp["_shards"]["total"]: - logger.warning( - "Scroll request has only succeeded on %d shards out of %d.", - resp["_shards"]["successful"], - resp["_shards"]["total"], - ) - if raise_on_error: - raise ScanError( - scroll_id, - "Scroll request has only succeeded on %d shards out of %d."
- % (resp["_shards"]["successful"], resp["_shards"]["total"]), - ) - - scroll_id = resp.get("_scroll_id") - # end of scroll - if scroll_id is None or not resp["hits"]["hits"]: - break - finally: - if scroll_id and clear_scroll: - client.clear_scroll( - body={"scroll_id": [scroll_id]}, - ignore=(404,), - params={"__elastic_client_meta": (("h", "s"),)}, - ) - - -def reindex( - client, - source_index, - target_index, - query=None, - target_client=None, - chunk_size=500, - scroll="5m", - scan_kwargs={}, - bulk_kwargs={}, -): - - """ - Reindex all documents from one index that satisfy a given query - to another, potentially (if `target_client` is specified) on a different cluster. - If you don't specify the query you will reindex all the documents. - - Since ``2.3`` a :meth:`~elasticsearch.Elasticsearch.reindex` api is - available as part of elasticsearch itself. It is recommended to use the api - instead of this helper wherever possible. The helper is here mostly for - backwards compatibility and for situations where more flexibility is - needed. - - .. note:: - - This helper doesn't transfer mappings, just the data. 
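The core of the removed `scan()` helper is one initial search followed by repeated scroll calls until a page comes back empty. A simplified, self-contained sketch of that loop; the `iter_hits` name is hypothetical, the client is any object exposing `search()`/`scroll()` with these assumed signatures, and the original's shard-failure and `clear_scroll` handling is omitted:

```python
# Simplified version of the scroll loop in the removed scan() helper:
# issue one search, then keep calling scroll until a page is empty.
# Error handling and scroll cleanup from the original are omitted.
def iter_hits(client, scroll="5m", **search_kwargs):
    resp = client.search(scroll=scroll, **search_kwargs)
    scroll_id = resp.get("_scroll_id")
    while True:
        hits = resp["hits"]["hits"]
        if not hits:
            return  # empty page ends the scan
        for hit in hits:
            yield hit
        if scroll_id is None:
            return  # server did not hand back a scroll cursor
        resp = client.scroll(scroll_id=scroll_id, scroll=scroll)
        scroll_id = resp.get("_scroll_id")
```

Because this is a generator, callers pay for one page of results at a time, which is the whole point of `scan` over a plain `search` with a huge `size`.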
- - :arg client: instance of :class:`~elasticsearch.Elasticsearch` to use (for - read if `target_client` is specified as well) - :arg source_index: index (or list of indices) to read documents from - :arg target_index: name of the index in the target cluster to populate - :arg query: body for the :meth:`~elasticsearch.Elasticsearch.search` api - :arg target_client: optional, is specified will be used for writing (thus - enabling reindex between clusters) - :arg chunk_size: number of docs in one chunk sent to es (default: 500) - :arg scroll: Specify how long a consistent view of the index should be - maintained for scrolled search - :arg scan_kwargs: additional kwargs to be passed to - :func:`~elasticsearch.helpers.scan` - :arg bulk_kwargs: additional kwargs to be passed to - :func:`~elasticsearch.helpers.bulk` - """ - target_client = client if target_client is None else target_client - - docs = scan(client, query=query, index=source_index, scroll=scroll, **scan_kwargs) - - def _change_doc_index(hits, index): - for h in hits: - h["_index"] = index - if "fields" in h: - h.update(h.pop("fields")) - yield h - - kwargs = {"stats_only": True} - kwargs.update(bulk_kwargs) - return bulk( - target_client, - _change_doc_index(docs, target_index), - chunk_size=chunk_size, - **kwargs - ) - - -def _add_helper_meta_to_kwargs(kwargs, helper_meta): - params = (kwargs or {}).pop("params", {}) - params["__elastic_client_meta"] = (("h", helper_meta),) - kwargs["params"] = params - return kwargs diff --git a/infrastructure/sandbox/Data/lambda/elasticsearch/helpers/errors.py b/infrastructure/sandbox/Data/lambda/elasticsearch/helpers/errors.py deleted file mode 100644 index e6292cf9b..000000000 --- a/infrastructure/sandbox/Data/lambda/elasticsearch/helpers/errors.py +++ /dev/null @@ -1,31 +0,0 @@ -# Licensed to Elasticsearch B.V. under one or more contributor -# license agreements. 
See the NOTICE file distributed with -# this work for additional information regarding copyright -# ownership. Elasticsearch B.V. licenses this file to you under -# the Apache License, Version 2.0 (the "License"); you may -# not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, -# software distributed under the License is distributed on an -# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY -# KIND, either express or implied. See the License for the -# specific language governing permissions and limitations -# under the License. - -from ..exceptions import ElasticsearchException - - -class BulkIndexError(ElasticsearchException): - @property - def errors(self): - """ List of errors from execution of the last chunk. """ - return self.args[1] - - -class ScanError(ElasticsearchException): - def __init__(self, scroll_id, *args, **kwargs): - super(ScanError, self).__init__(*args, **kwargs) - self.scroll_id = scroll_id diff --git a/infrastructure/sandbox/Data/lambda/elasticsearch/helpers/test.py b/infrastructure/sandbox/Data/lambda/elasticsearch/helpers/test.py deleted file mode 100644 index 8d9480f2d..000000000 --- a/infrastructure/sandbox/Data/lambda/elasticsearch/helpers/test.py +++ /dev/null @@ -1,81 +0,0 @@ -# Licensed to Elasticsearch B.V. under one or more contributor -# license agreements. See the NOTICE file distributed with -# this work for additional information regarding copyright -# ownership. Elasticsearch B.V. licenses this file to you under -# the Apache License, Version 2.0 (the "License"); you may -# not use this file except in compliance with the License. 
-# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, -# software distributed under the License is distributed on an -# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY -# KIND, either express or implied. See the License for the -# specific language governing permissions and limitations -# under the License. - -import time -import os - -try: - # python 2.6 - from unittest2 import TestCase, SkipTest -except ImportError: - from unittest import TestCase, SkipTest - -from elasticsearch import Elasticsearch -from elasticsearch.exceptions import ConnectionError - - -def get_test_client(nowait=False, **kwargs): - # construct kwargs from the environment - kw = {"timeout": 5} - if "TEST_ES_CONNECTION" in os.environ: - from elasticsearch import connection - - kw["connection_class"] = getattr(connection, os.environ["TEST_ES_CONNECTION"]) - - kw.update(kwargs) - client = Elasticsearch([os.environ.get("TEST_ES_SERVER", {})], **kw) - - # wait for yellow status - for _ in range(1 if nowait else 1): - try: - client.cluster.health(wait_for_status="yellow") - return client - except ConnectionError: - time.sleep(0.1) - else: - # timeout - raise SkipTest("Elasticsearch failed to start.") - - -def _get_version(version_string): - if "." 
not in version_string: - return () - version = version_string.strip().split(".") - return tuple(int(v) if v.isdigit() else 999 for v in version) - - -class ElasticsearchTestCase(TestCase): - @staticmethod - def _get_client(): - return get_test_client() - - @classmethod - def setUpClass(cls): - super(ElasticsearchTestCase, cls).setUpClass() - cls.client = cls._get_client() - - def tearDown(self): - super(ElasticsearchTestCase, self).tearDown() - self.client.indices.delete(index="*", ignore=404) - self.client.indices.delete_template(name="*", ignore=404) - - @property - def es_version(self): - if not hasattr(self, "_es_version"): - version_string = self.client.info()["version"]["number"] - self._es_version = _get_version(version_string) - return self._es_version diff --git a/infrastructure/sandbox/Data/lambda/elasticsearch/serializer.py b/infrastructure/sandbox/Data/lambda/elasticsearch/serializer.py deleted file mode 100644 index 62f66de18..000000000 --- a/infrastructure/sandbox/Data/lambda/elasticsearch/serializer.py +++ /dev/null @@ -1,156 +0,0 @@ -# Licensed to Elasticsearch B.V. under one or more contributor -# license agreements. See the NOTICE file distributed with -# this work for additional information regarding copyright -# ownership. Elasticsearch B.V. licenses this file to you under -# the Apache License, Version 2.0 (the "License"); you may -# not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, -# software distributed under the License is distributed on an -# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY -# KIND, either express or implied. See the License for the -# specific language governing permissions and limitations -# under the License. 
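The `_get_version` test helper above turns a version string into a tuple that compares numerically, mapping non-numeric components to 999 so pre-release suffixes sort after any real patch number. A standalone sketch mirroring that behavior:

```python
# Mirrors the removed _get_version() test helper: "7.10.2" becomes a
# tuple that compares numerically; non-numeric parts map to 999 and a
# string with no dots yields an empty tuple.
def parse_version(version_string):
    if "." not in version_string:
        return ()
    parts = version_string.strip().split(".")
    return tuple(int(p) if p.isdigit() else 999 for p in parts)
```

Tuples compare element by element, so version gates like `es_version < (8, 0, 0)` fall out of plain Python comparison with no extra code.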
- -try: - import simplejson as json -except ImportError: - import json - -import uuid -from datetime import date, datetime -from decimal import Decimal - -from .exceptions import SerializationError, ImproperlyConfigured -from .compat import string_types - -INTEGER_TYPES = () -FLOAT_TYPES = (Decimal,) -TIME_TYPES = (date, datetime) - -try: - import numpy as np - - INTEGER_TYPES += ( - np.int_, - np.intc, - np.int8, - np.int16, - np.int32, - np.int64, - np.uint8, - np.uint16, - np.uint32, - np.uint64, - ) - FLOAT_TYPES += ( - np.float_, - np.float16, - np.float32, - np.float64, - ) -except ImportError: - np = None - -try: - import pandas as pd - - TIME_TYPES += (pd.Timestamp,) -except ImportError: - pd = None - - -class TextSerializer(object): - mimetype = "text/plain" - - def loads(self, s): - return s - - def dumps(self, data): - if isinstance(data, string_types): - return data - - raise SerializationError("Cannot serialize %r into text." % data) - - -class JSONSerializer(object): - mimetype = "application/json" - - def default(self, data): - if isinstance(data, TIME_TYPES): - return data.isoformat() - elif isinstance(data, uuid.UUID): - return str(data) - elif isinstance(data, FLOAT_TYPES): - return float(data) - elif INTEGER_TYPES and isinstance(data, INTEGER_TYPES): - return int(data) - - # Special cases for numpy and pandas types - elif np: - if isinstance(data, np.bool_): - return bool(data) - elif isinstance(data, np.datetime64): - return data.item().isoformat() - elif isinstance(data, np.ndarray): - return data.tolist() - if pd: - if isinstance(data, (pd.Series, pd.Categorical)): - return data.tolist() - elif hasattr(pd, "NA") and pd.isna(data): - return None - - raise TypeError("Unable to serialize %r (type: %s)" % (data, type(data))) - - def loads(self, s): - try: - return json.loads(s) - except (ValueError, TypeError) as e: - raise SerializationError(s, e) - - def dumps(self, data): - # don't serialize strings - if isinstance(data, string_types): - return 
data - - try: - return json.dumps( - data, default=self.default, ensure_ascii=False, separators=(",", ":") - ) - except (ValueError, TypeError) as e: - raise SerializationError(data, e) - - -DEFAULT_SERIALIZERS = { - JSONSerializer.mimetype: JSONSerializer(), - TextSerializer.mimetype: TextSerializer(), -} - - -class Deserializer(object): - def __init__(self, serializers, default_mimetype="application/json"): - try: - self.default = serializers[default_mimetype] - except KeyError: - raise ImproperlyConfigured( - "Cannot find default serializer (%s)" % default_mimetype - ) - self.serializers = serializers - - def loads(self, s, mimetype=None): - if not mimetype: - deserializer = self.default - else: - # split out charset - mimetype, _, _ = mimetype.partition(";") - try: - deserializer = self.serializers[mimetype] - except KeyError: - raise SerializationError( - "Unknown mimetype, unable to deserialize: %s" % mimetype - ) - - return deserializer.loads(s) diff --git a/infrastructure/sandbox/Data/lambda/elasticsearch/transport.py b/infrastructure/sandbox/Data/lambda/elasticsearch/transport.py deleted file mode 100644 index 569862ada..000000000 --- a/infrastructure/sandbox/Data/lambda/elasticsearch/transport.py +++ /dev/null @@ -1,450 +0,0 @@ -# Licensed to Elasticsearch B.V. under one or more contributor -# license agreements. See the NOTICE file distributed with -# this work for additional information regarding copyright -# ownership. Elasticsearch B.V. licenses this file to you under -# the Apache License, Version 2.0 (the "License"); you may -# not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, -# software distributed under the License is distributed on an -# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY -# KIND, either express or implied. 
See the License for the -# specific language governing permissions and limitations -# under the License. - -import time -from platform import python_version -from itertools import chain - -from .connection import Urllib3HttpConnection -from .connection_pool import ConnectionPool, DummyConnectionPool -from .serializer import JSONSerializer, Deserializer, DEFAULT_SERIALIZERS -from .exceptions import ( - ConnectionError, - TransportError, - SerializationError, - ConnectionTimeout, -) -from .utils import _client_meta_version - - -def get_host_info(node_info, host): - """ - Simple callback that takes the node info from `/_cluster/nodes` and a - parsed connection information and return the connection information. If - `None` is returned this node will be skipped. - - Useful for filtering nodes (by proximity for example) or if additional - information needs to be provided for the :class:`~elasticsearch.Connection` - class. By default master only nodes are filtered out since they shouldn't - typically be used for API operations. - - :arg node_info: node information from `/_cluster/nodes` - :arg host: connection information (host, port) extracted from the node info - """ - # ignore master only nodes - if node_info.get("roles", []) == ["master"]: - return None - return host - - -class Transport(object): - """ - Encapsulation of transport-related to logic. Handles instantiation of the - individual connections as well as creating a connection pool to hold them. - - Main interface is the `perform_request` method. 
- """ - - def __init__( - self, - hosts, - connection_class=Urllib3HttpConnection, - connection_pool_class=ConnectionPool, - host_info_callback=get_host_info, - sniff_on_start=False, - sniffer_timeout=None, - sniff_timeout=0.1, - sniff_on_connection_fail=False, - serializer=JSONSerializer(), - serializers=None, - default_mimetype="application/json", - max_retries=3, - retry_on_status=(502, 503, 504), - retry_on_timeout=False, - send_get_body_as="GET", - meta_header=True, - **kwargs - ): - """ - :arg hosts: list of dictionaries, each containing keyword arguments to - create a `connection_class` instance - :arg connection_class: subclass of :class:`~elasticsearch.Connection` to use - :arg connection_pool_class: subclass of :class:`~elasticsearch.ConnectionPool` to use - :arg host_info_callback: callback responsible for taking the node information from - `/_cluser/nodes`, along with already extracted information, and - producing a list of arguments (same as `hosts` parameter) - :arg sniff_on_start: flag indicating whether to obtain a list of nodes - from the cluser at startup time - :arg sniffer_timeout: number of seconds between automatic sniffs - :arg sniff_on_connection_fail: flag controlling if connection failure triggers a sniff - :arg sniff_timeout: timeout used for the sniff request - it should be a - fast api call and we are talking potentially to more nodes so we want - to fail quickly. Not used during initial sniffing (if - ``sniff_on_start`` is on) when the connection still isn't - initialized. - :arg serializer: serializer instance - :arg serializers: optional dict of serializer instances that will be - used for deserializing data coming from the server. 
(key is the mimetype) - :arg default_mimetype: when no mimetype is specified by the server - response assume this mimetype, defaults to `'application/json'` - :arg max_retries: maximum number of retries before an exception is propagated - :arg retry_on_status: set of HTTP status codes on which we should retry - on a different node. defaults to ``(502, 503, 504)`` - :arg retry_on_timeout: should timeout trigger a retry on different - node? (default `False`) - :arg send_get_body_as: for GET requests with body this option allows - you to specify an alternate way of execution for environments that - don't support passing bodies with GET requests. If you set this to - 'POST' a POST method will be used instead, if to 'source' then the body - will be serialized and passed as a query parameter `source`. - :arg meta_header: If True will send the 'X-Elastic-Client-Meta' HTTP header containing - simple client metadata. Setting to False will disable the header. Defaults to True. - - Any extra keyword arguments will be passed to the `connection_class` - when creating and instance unless overridden by that connection's - options provided as part of the hosts parameter. - """ - if not isinstance(meta_header, bool): - raise TypeError("meta_header must be of type bool") - - # serialization config - _serializers = DEFAULT_SERIALIZERS.copy() - # if a serializer has been specified, use it for deserialization as well - _serializers[serializer.mimetype] = serializer - # if custom serializers map has been supplied, override the defaults with it - if serializers: - _serializers.update(serializers) - # create a deserializer with our config - self.deserializer = Deserializer(_serializers, default_mimetype) - - self.max_retries = max_retries - self.retry_on_timeout = retry_on_timeout - self.retry_on_status = retry_on_status - self.send_get_body_as = send_get_body_as - self.meta_header = meta_header - - # data serializer - self.serializer = serializer - - # store all strategies... 
- self.connection_pool_class = connection_pool_class - self.connection_class = connection_class - - # ...save kwargs to be passed to the connections - self.kwargs = kwargs - self.hosts = hosts - - # ...and instantiate them - self.set_connections(hosts) - # retain the original connection instances for sniffing - self.seed_connections = self.connection_pool.connections[:] - - # Don't enable sniffing on Cloud instances. - if kwargs.get("cloud_id", False): - sniff_on_start = False - sniff_on_connection_fail = False - - # sniffing data - self.sniffer_timeout = sniffer_timeout - self.sniff_on_connection_fail = sniff_on_connection_fail - self.last_sniff = time.time() - self.sniff_timeout = sniff_timeout - - # callback to construct host dict from data in /_cluster/nodes - self.host_info_callback = host_info_callback - - if sniff_on_start: - self.sniff_hosts(True) - - # Create the default metadata for the x-elastic-client-meta - # HTTP header. Only requires adding the (service, service_version) - # tuple to the beginning of the client_meta - from . import __versionstr__ - - self._client_meta = ( - ("es", _client_meta_version(__versionstr__)), - ("py", _client_meta_version(python_version())), - ("t", _client_meta_version(__versionstr__)), - ) - - # Grab the 'HTTP_CLIENT_META' property from the connection class - http_client_meta = getattr(connection_class, "HTTP_CLIENT_META", None) - if http_client_meta: - self._client_meta += (http_client_meta,) - - def add_connection(self, host): - """ - Create a new :class:`~elasticsearch.Connection` instance and add it to the pool. - - :arg host: kwargs that will be used to create the instance - """ - self.hosts.append(host) - self.set_connections(self.hosts) - - def set_connections(self, hosts): - """ - Instantiate all the connections and create new connection pool to hold them. - Tries to identify unchanged hosts and re-use existing - :class:`~elasticsearch.Connection` instances. 
- - :arg hosts: same as `__init__` - """ - # construct the connections - def _create_connection(host): - # if this is not the initial setup look at the existing connection - # options and identify connections that haven't changed and can be - # kept around. - if hasattr(self, "connection_pool"): - for (connection, old_host) in self.connection_pool.connection_opts: - if old_host == host: - return connection - - # previously unseen params, create new connection - kwargs = self.kwargs.copy() - kwargs.update(host) - return self.connection_class(**kwargs) - - connections = map(_create_connection, hosts) - - connections = list(zip(connections, hosts)) - if len(connections) == 1: - self.connection_pool = DummyConnectionPool(connections) - else: - # pass the hosts dicts to the connection pool to optionally extract parameters from - self.connection_pool = self.connection_pool_class( - connections, **self.kwargs - ) - - def get_connection(self): - """ - Retreive a :class:`~elasticsearch.Connection` instance from the - :class:`~elasticsearch.ConnectionPool` instance. - """ - if self.sniffer_timeout: - if time.time() >= self.last_sniff + self.sniffer_timeout: - self.sniff_hosts() - return self.connection_pool.get_connection() - - def _get_sniff_data(self, initial=False): - """ - Perform the request to get sniffins information. Returns a list of - dictionaries (one per node) containing all the information from the - cluster. - - It also sets the last_sniff attribute in case of a successful attempt. - - In rare cases it might be possible to override this method in your - custom Transport class to serve data from alternative source like - configuration management. 
- """ - previous_sniff = self.last_sniff - - try: - # reset last_sniff timestamp - self.last_sniff = time.time() - # go through all current connections as well as the - # seed_connections for good measure - for c in chain(self.connection_pool.connections, self.seed_connections): - try: - # use small timeout for the sniffing request, should be a fast api call - _, headers, node_info = c.perform_request( - "GET", - "/_nodes/_all/http", - timeout=self.sniff_timeout if not initial else None, - ) - node_info = self.deserializer.loads( - node_info, headers.get("content-type") - ) - break - except (ConnectionError, SerializationError): - pass - else: - raise TransportError("N/A", "Unable to sniff hosts.") - except Exception: - # keep the previous value on error - self.last_sniff = previous_sniff - raise - - return list(node_info["nodes"].values()) - - def _get_host_info(self, host_info): - host = {} - address = host_info.get("http", {}).get("publish_address") - - # malformed or no address given - if not address or ":" not in address: - return None - - host["host"], host["port"] = address.rsplit(":", 1) - host["port"] = int(host["port"]) - - return self.host_info_callback(host_info, host) - - def sniff_hosts(self, initial=False): - """ - Obtain a list of nodes from the cluster and create a new connection - pool using the information retrieved. - - To extract the node connection parameters use the ``nodes_to_host_callback``. - - :arg initial: flag indicating if this is during startup - (``sniff_on_start``), ignore the ``sniff_timeout`` if ``True`` - """ - node_info = self._get_sniff_data(initial) - - hosts = list(filter(None, (self._get_host_info(n) for n in node_info))) - - # we weren't able to get any nodes or host_info_callback blocked all - - # raise error. - if not hosts: - raise TransportError( - "N/A", "Unable to sniff hosts - no viable hosts found." 
- ) - - self.set_connections(hosts) - - def mark_dead(self, connection): - """ - Mark a connection as dead (failed) in the connection pool. If sniffing - on failure is enabled this will initiate the sniffing process. - - :arg connection: instance of :class:`~elasticsearch.Connection` that failed - """ - # mark as dead even when sniffing to avoid hitting this host during the sniff process - self.connection_pool.mark_dead(connection) - if self.sniff_on_connection_fail: - self.sniff_hosts() - - def perform_request(self, method, url, headers=None, params=None, body=None): - """ - Perform the actual request. Retrieve a connection from the connection - pool, pass all the information to it's perform_request method and - return the data. - - If an exception was raised, mark the connection as failed and retry (up - to `max_retries` times). - - If the operation was succesful and the connection used was previously - marked as dead, mark it as live, resetting it's failure count. - - :arg method: HTTP method to use - :arg url: absolute url (without host) to target - :arg headers: dictionary of headers, will be handed over to the - underlying :class:`~elasticsearch.Connection` class - :arg params: dictionary of query parameters, will be handed over to the - underlying :class:`~elasticsearch.Connection` class for serialization - :arg body: body of the request, will be serializes using serializer and - passed to the connection - """ - if body is not None: - body = self.serializer.dumps(body) - - # some clients or environments don't support sending GET with body - if method in ("HEAD", "GET") and self.send_get_body_as != "GET": - # send it as post instead - if self.send_get_body_as == "POST": - method = "POST" - - # or as source parameter - elif self.send_get_body_as == "source": - if params is None: - params = {} - params["source"] = body - body = None - - if body is not None: - try: - body = body.encode("utf-8", "surrogatepass") - except (UnicodeDecodeError, AttributeError): - # 
bytes/str - no need to re-encode - pass - - ignore = () - timeout = None - if params: - timeout = params.pop("request_timeout", None) - ignore = params.pop("ignore", ()) - if isinstance(ignore, int): - ignore = (ignore,) - client_meta = params.pop("__elastic_client_meta", ()) - else: - client_meta = () - - if self.meta_header: - headers = headers or {} - client_meta = self._client_meta + client_meta - headers["x-elastic-client-meta"] = ",".join( - "%s=%s" % (k, v) for k, v in client_meta - ) - - for attempt in range(self.max_retries + 1): - connection = self.get_connection() - - try: - # add a delay before attempting the next retry - # 0, 1, 3, 7, etc... - delay = 2 ** attempt - 1 - time.sleep(delay) - status, headers_response, data = connection.perform_request( - method, - url, - params, - body, - headers=headers, - ignore=ignore, - timeout=timeout, - ) - - except TransportError as e: - if method == "HEAD" and e.status_code == 404: - return False - - retry = False - if isinstance(e, ConnectionTimeout): - retry = self.retry_on_timeout - elif isinstance(e, ConnectionError): - retry = True - elif e.status_code in self.retry_on_status: - retry = True - - if retry: - # only mark as dead if we are retrying - self.mark_dead(connection) - # raise exception on last retry - if attempt == self.max_retries: - raise - else: - raise - - else: - # connection didn't fail, confirm it's live status - self.connection_pool.mark_live(connection) - - if method == "HEAD": - return 200 <= status < 300 - - if data: - data = self.deserializer.loads( - data, headers_response.get("content-type") - ) - return data - - def close(self): - """ - Explicitly closes connections - """ - self.connection_pool.close() diff --git a/infrastructure/sandbox/Data/lambda/elasticsearch/utils.py b/infrastructure/sandbox/Data/lambda/elasticsearch/utils.py deleted file mode 100644 index 2aad4eb71..000000000 --- a/infrastructure/sandbox/Data/lambda/elasticsearch/utils.py +++ /dev/null @@ -1,31 +0,0 @@ -# Licensed 
to Elasticsearch B.V. under one or more contributor -# license agreements. See the NOTICE file distributed with -# this work for additional information regarding copyright -# ownership. Elasticsearch B.V. licenses this file to you under -# the Apache License, Version 2.0 (the "License"); you may -# not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, -# software distributed under the License is distributed on an -# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY -# KIND, either express or implied. See the License for the -# specific language governing permissions and limitations -# under the License. - -import re - - -def _client_meta_version(version): - """Transforms a Python package version to one - compatible with 'X-Elastic-Client-Meta'. Essentially - replaces any pre-release information with a 'p' suffix. - """ - version, version_pre = re.match( - r"^([0-9][0-9.]*[0-9]|[0-9])(.*)$", version - ).groups() - if version_pre: - version += "p" - return version diff --git a/infrastructure/sandbox/Data/lambda/geohash.py b/infrastructure/sandbox/Data/lambda/geohash.py deleted file mode 100644 index 72b036698..000000000 --- a/infrastructure/sandbox/Data/lambda/geohash.py +++ /dev/null @@ -1,465 +0,0 @@ -# coding: UTF-8 -""" -Copyright (C) 2009 Hiroaki Kawai -""" -try: - import _geohash -except ImportError: - _geohash = None - -__version__ = "0.8.5" -__all__ = ['encode','decode','decode_exactly','bbox', 'neighbors', 'expand'] - -_base32 = '0123456789bcdefghjkmnpqrstuvwxyz' -_base32_map = {} -for i in range(len(_base32)): - _base32_map[_base32[i]] = i -del i - -LONG_ZERO = 0 -import sys -if sys.version_info[0] < 3: - LONG_ZERO = long(0) - -def _float_hex_to_int(f): - if f<-1.0 or f>=1.0: - return None - - if f==0.0: - return 1,1 - - h = f.hex() - x = h.find("0x1.") - assert(x>=0) - p = h.find("p") - 
assert(p>0) - - half_len = len(h[x+4:p])*4-int(h[p+1:]) - if x==0: - r = (1<= half: - i = i-half - return float.fromhex(("0x0.%0"+str(s)+"xp1") % (i<<(s*4-l),)) - else: - i = half-i - return float.fromhex(("-0x0.%0"+str(s)+"xp1") % (i<<(s*4-l),)) - -def _encode_i2c(lat,lon,lat_length,lon_length): - precision = int((lat_length+lon_length)/5) - if lat_length < lon_length: - a = lon - b = lat - else: - a = lat - b = lon - - boost = (0,1,4,5,16,17,20,21) - ret = '' - for i in range(precision): - ret+=_base32[(boost[a&7]+(boost[b&3]<<1))&0x1F] - t = a>>3 - a = b>>2 - b = t - - return ret[::-1] - -def encode(latitude, longitude, precision=12): - if latitude >= 90.0 or latitude < -90.0: - raise Exception("invalid latitude.") - while longitude < -180.0: - longitude += 360.0 - while longitude >= 180.0: - longitude -= 360.0 - - if _geohash: - basecode=_geohash.encode(latitude,longitude) - if len(basecode)>precision: - return basecode[0:precision] - return basecode+'0'*(precision-len(basecode)) - - xprecision=precision+1 - lat_length = lon_length = int(xprecision*5/2) - if xprecision%2==1: - lon_length+=1 - - if hasattr(float, "fromhex"): - a = _float_hex_to_int(latitude/90.0) - o = _float_hex_to_int(longitude/180.0) - if a[1] > lat_length: - ai = a[0]>>(a[1]-lat_length) - else: - ai = a[0]<<(lat_length-a[1]) - - if o[1] > lon_length: - oi = o[0]>>(o[1]-lon_length) - else: - oi = o[0]<<(lon_length-o[1]) - - return _encode_i2c(ai, oi, lat_length, lon_length)[:precision] - - lat = latitude/180.0 - lon = longitude/360.0 - - if lat>0: - lat = int((1<0: - lon = int((1<>2)&4 - lat += (t>>2)&2 - lon += (t>>1)&2 - lat += (t>>1)&1 - lon += t&1 - lon_length+=3 - lat_length+=2 - else: - lon = lon<<2 - lat = lat<<3 - lat += (t>>2)&4 - lon += (t>>2)&2 - lat += (t>>1)&2 - lon += (t>>1)&1 - lat += t&1 - lon_length+=2 - lat_length+=3 - - bit_length+=5 - - return (lat,lon,lat_length,lon_length) - -def decode(hashcode, delta=False): - ''' - decode a hashcode and get center coordinate, and 
distance between center and outer border - ''' - if _geohash: - (lat,lon,lat_bits,lon_bits) = _geohash.decode(hashcode) - latitude_delta = 90.0/(1<> lat_length: - for tlon in (lon-1, lon, lon+1): - ret.append(_encode_i2c(tlat,tlon,lat_length,lon_length)) - - tlat = lat-1 - if tlat >= 0: - for tlon in (lon-1, lon, lon+1): - ret.append(_encode_i2c(tlat,tlon,lat_length,lon_length)) - - return ret - -def expand(hashcode): - ret = neighbors(hashcode) - ret.append(hashcode) - return ret - -def _uint64_interleave(lat32, lon32): - intr = 0 - boost = (0,1,4,5,16,17,20,21,64,65,68,69,80,81,84,85) - for i in range(8): - intr = (intr<<8) + (boost[(lon32>>(28-i*4))%16]<<1) + boost[(lat32>>(28-i*4))%16] - - return intr - -def _uint64_deinterleave(ui64): - lat = lon = 0 - boost = ((0,0),(0,1),(1,0),(1,1),(0,2),(0,3),(1,2),(1,3), - (2,0),(2,1),(3,0),(3,1),(2,2),(2,3),(3,2),(3,3)) - for i in range(16): - p = boost[(ui64>>(60-i*4))%16] - lon = (lon<<2) + p[0] - lat = (lat<<2) + p[1] - - return (lat, lon) - -def encode_uint64(latitude, longitude): - if latitude >= 90.0 or latitude < -90.0: - raise ValueError("Latitude must be in the range of (-90.0, 90.0)") - while longitude < -180.0: - longitude += 360.0 - while longitude >= 180.0: - longitude -= 360.0 - - if _geohash: - ui128 = _geohash.encode_int(latitude,longitude) - if _geohash.intunit == 64: - return ui128[0] - elif _geohash.intunit == 32: - return (ui128[0]<<32) + ui128[1] - elif _geohash.intunit == 16: - return (ui128[0]<<48) + (ui128[1]<<32) + (ui128[2]<<16) + ui128[3] - - lat = int(((latitude + 90.0)/180.0)*(1<<32)) - lon = int(((longitude+180.0)/360.0)*(1<<32)) - return _uint64_interleave(lat, lon) - -def decode_uint64(ui64): - if _geohash: - latlon = _geohash.decode_int(ui64 % 0xFFFFFFFFFFFFFFFF, LONG_ZERO) - if latlon: - return latlon - - lat,lon = _uint64_deinterleave(ui64) - return (180.0*lat/(1<<32) - 90.0, 360.0*lon/(1<<32) - 180.0) - -def expand_uint64(ui64, precision=50): - ui64 = ui64 & (0xFFFFFFFFFFFFFFFF << 
(64-precision)) - lat,lon = _uint64_deinterleave(ui64) - lat_grid = 1<<(32-int(precision/2)) - lon_grid = lat_grid>>(precision%2) - - if precision<=2: # expand becomes to the whole range - return [] - - ranges = [] - if lat & lat_grid: - if lon & lon_grid: - ui64 = _uint64_interleave(lat-lat_grid, lon-lon_grid) - ranges.append((ui64, ui64 + (1<<(64-precision+2)))) - if precision%2==0: - # lat,lon = (1, 1) and even precision - ui64 = _uint64_interleave(lat-lat_grid, lon+lon_grid) - ranges.append((ui64, ui64 + (1<<(64-precision+1)))) - - if lat + lat_grid < 0xFFFFFFFF: - ui64 = _uint64_interleave(lat+lat_grid, lon-lon_grid) - ranges.append((ui64, ui64 + (1<<(64-precision)))) - ui64 = _uint64_interleave(lat+lat_grid, lon) - ranges.append((ui64, ui64 + (1<<(64-precision)))) - ui64 = _uint64_interleave(lat+lat_grid, lon+lon_grid) - ranges.append((ui64, ui64 + (1<<(64-precision)))) - else: - # lat,lon = (1, 1) and odd precision - if lat + lat_grid < 0xFFFFFFFF: - ui64 = _uint64_interleave(lat+lat_grid, lon-lon_grid) - ranges.append((ui64, ui64 + (1<<(64-precision+1)))) - - ui64 = _uint64_interleave(lat+lat_grid, lon+lon_grid) - ranges.append((ui64, ui64 + (1<<(64-precision)))) - - ui64 = _uint64_interleave(lat, lon+lon_grid) - ranges.append((ui64, ui64 + (1<<(64-precision)))) - ui64 = _uint64_interleave(lat-lat_grid, lon+lon_grid) - ranges.append((ui64, ui64 + (1<<(64-precision)))) - else: - ui64 = _uint64_interleave(lat-lat_grid, lon) - ranges.append((ui64, ui64 + (1<<(64-precision+2)))) - if precision%2==0: - # lat,lon = (1, 0) and odd precision - ui64 = _uint64_interleave(lat-lat_grid, lon-lon_grid) - ranges.append((ui64, ui64 + (1<<(64-precision+1)))) - - if lat + lat_grid < 0xFFFFFFFF: - ui64 = _uint64_interleave(lat+lat_grid, lon-lon_grid) - ranges.append((ui64, ui64 + (1<<(64-precision)))) - ui64 = _uint64_interleave(lat+lat_grid, lon) - ranges.append((ui64, ui64 + (1<<(64-precision)))) - ui64 = _uint64_interleave(lat+lat_grid, lon+lon_grid) - ranges.append((ui64, 
ui64 + (1<<(64-precision)))) - else: - # lat,lon = (1, 0) and odd precision - if lat + lat_grid < 0xFFFFFFFF: - ui64 = _uint64_interleave(lat+lat_grid, lon) - ranges.append((ui64, ui64 + (1<<(64-precision+1)))) - - ui64 = _uint64_interleave(lat+lat_grid, lon-lon_grid) - ranges.append((ui64, ui64 + (1<<(64-precision)))) - ui64 = _uint64_interleave(lat, lon-lon_grid) - ranges.append((ui64, ui64 + (1<<(64-precision)))) - ui64 = _uint64_interleave(lat-lat_grid, lon-lon_grid) - ranges.append((ui64, ui64 + (1<<(64-precision)))) - else: - if lon & lon_grid: - ui64 = _uint64_interleave(lat, lon-lon_grid) - ranges.append((ui64, ui64 + (1<<(64-precision+2)))) - if precision%2==0: - # lat,lon = (0, 1) and even precision - ui64 = _uint64_interleave(lat, lon+lon_grid) - ranges.append((ui64, ui64 + (1<<(64-precision+1)))) - - if lat > 0: - ui64 = _uint64_interleave(lat-lat_grid, lon-lon_grid) - ranges.append((ui64, ui64 + (1<<(64-precision)))) - ui64 = _uint64_interleave(lat-lat_grid, lon) - ranges.append((ui64, ui64 + (1<<(64-precision)))) - ui64 = _uint64_interleave(lat-lat_grid, lon+lon_grid) - ranges.append((ui64, ui64 + (1<<(64-precision)))) - else: - # lat,lon = (0, 1) and odd precision - if lat > 0: - ui64 = _uint64_interleave(lat-lat_grid, lon-lon_grid) - ranges.append((ui64, ui64 + (1<<(64-precision+1)))) - - ui64 = _uint64_interleave(lat-lat_grid, lon+lon_grid) - ranges.append((ui64, ui64 + (1<<(64-precision)))) - ui64 = _uint64_interleave(lat, lon+lon_grid) - ranges.append((ui64, ui64 + (1<<(64-precision)))) - ui64 = _uint64_interleave(lat+lat_grid, lon+lon_grid) - ranges.append((ui64, ui64 + (1<<(64-precision)))) - else: - ui64 = _uint64_interleave(lat, lon) - ranges.append((ui64, ui64 + (1<<(64-precision+2)))) - if precision%2==0: - # lat,lon = (0, 0) and even precision - ui64 = _uint64_interleave(lat, lon-lon_grid) - ranges.append((ui64, ui64 + (1<<(64-precision+1)))) - - if lat > 0: - ui64 = _uint64_interleave(lat-lat_grid, lon-lon_grid) - ranges.append((ui64, 
ui64 + (1<<(64-precision)))) - ui64 = _uint64_interleave(lat-lat_grid, lon) - ranges.append((ui64, ui64 + (1<<(64-precision)))) - ui64 = _uint64_interleave(lat-lat_grid, lon+lon_grid) - ranges.append((ui64, ui64 + (1<<(64-precision)))) - else: - # lat,lon = (0, 0) and odd precision - if lat > 0: - ui64 = _uint64_interleave(lat-lat_grid, lon) - ranges.append((ui64, ui64 + (1<<(64-precision+1)))) - - ui64 = _uint64_interleave(lat-lat_grid, lon-lon_grid) - ranges.append((ui64, ui64 + (1<<(64-precision)))) - ui64 = _uint64_interleave(lat, lon-lon_grid) - ranges.append((ui64, ui64 + (1<<(64-precision)))) - ui64 = _uint64_interleave(lat+lat_grid, lon-lon_grid) - ranges.append((ui64, ui64 + (1<<(64-precision)))) - - ranges.sort() - - # merge the conditions - shrink = [] - prev = None - for i in ranges: - if prev: - if prev[1] != i[0]: - shrink.append(prev) - prev = i - else: - prev = (prev[0], i[1]) - else: - prev = i - - shrink.append(prev) - - ranges = [] - for i in shrink: - a,b=i - if a == 0: - a = None # we can remove the condition because it is the lowest value - if b == 0x10000000000000000: - b = None # we can remove the condition because it is the highest value - - ranges.append((a,b)) - - return ranges diff --git a/infrastructure/sandbox/Data/lambda/geoip.py b/infrastructure/sandbox/Data/lambda/geoip.py deleted file mode 100644 index c10df792f..000000000 --- a/infrastructure/sandbox/Data/lambda/geoip.py +++ /dev/null @@ -1,509 +0,0 @@ -import sys -import mmap -import socket -import urllib - -from threading import Lock -from datetime import datetime -from struct import Struct - - -MMDB_METADATA_START = b'\xAB\xCD\xEFMaxMind.com' -MMDB_METADATA_BLOCK_MAX_SIZE = 131072 -MMDB_DATA_SECTION_SEPARATOR = 16 - -_int_unpack = Struct('>I').unpack -_long_unpack = Struct('>Q').unpack -_short_unpack = Struct('>H').unpack - - -def _native_str(x): - """Attempts to coerce a string into native if it's ASCII safe.""" - try: - return str(x) - except UnicodeError: - return x - - -def 
pack_ip(ip): - """Given an IP string, converts it into packed format for internal - usage. - """ - for fmly in socket.AF_INET, socket.AF_INET6: - try: - return socket.inet_pton(fmly, ip) - except socket.error: - continue - raise ValueError('Malformed IP address') - - -class DatabaseInfo(object): - """Provides information about the GeoIP database.""" - - def __init__(self, filename=None, date=None, - internal_name=None, provider=None): - #: If available the filename which backs the database. - self.filename = filename - #: Optionally the build date of the database as datetime object. - self.date = date - #: Optionally the internal name of the database. - self.internal_name = internal_name - #: Optionally the name of the database provider. - self.provider = provider - - def __repr__(self): - return '<%s filename=%r date=%r internal_name=%r provider=%r>' % ( - self.__class__.__name__, - self.filename, - self.date, - self.internal_name, - self.provider, - ) - - -class IPInfo(object): - """Provides information about the located IP as returned by - :meth:`Database.lookup`. - """ - __slots__ = ('ip', '_data') - - def __init__(self, ip, data): - #: The IP that was looked up. 
- self.ip = ip - self._data = data - - @property - def country(self): - """The country code as ISO code if available.""" - if 'country' in self._data: - return _native_str(self._data['country']['iso_code']) - - @property - def continent(self): - """The continent as ISO code if available.""" - if 'continent' in self._data: - return _native_str(self._data['continent']['code']) - - @property - def subdivisions(self): - """The subdivisions as a list of ISO codes as an immutable set.""" - return frozenset(_native_str(x['iso_code']) for x in - self._data.get('subdivisions') or () if 'iso_code' - in x) - - @property - def timezone(self): - """The timezone if available as tzinfo name.""" - if 'location' in self._data: - return _native_str(self._data['location'].get('time_zone')) - - @property - def location(self): - """The location as ``(lat, long)`` tuple if available.""" - if 'location' in self._data: - lat = self._data['location'].get('latitude') - long = self._data['location'].get('longitude') - if lat is not None and long is not None: - return lat, long - - def to_dict(self): - """A dict representation of the available information. This - is a dictionary with the same keys as the attributes of this - object. - """ - return { - 'ip': self.ip, - 'country': self.country, - 'continent': self.continent, - 'subdivisions': self.subdivisions, - 'timezone': self.timezone, - 'location': self.location, - } - - def get_info_dict(self): - """Returns the internal info dictionary. For a maxmind database - this is the metadata dictionary. - """ - return self._data - - def __hash__(self): - return hash(self.addr) - - def __eq__(self, other): - return type(self) is type(other) and self.addr == other.addr - - def __ne__(self, other): - return not self.__eq__(other) - - def __repr__(self): - return ('') % ( - self.ip, - self.country, - self.continent, - self.subdivisions, - self.timezone, - self.location, - ) - - -class Database(object): - """Provides access to a GeoIP database. 
This is an abstract class - that is implemented by different providers. The :func:`open_database` - function can be used to open a MaxMind database. - - Example usage:: - - from geoip import open_database - - with open_database('data/GeoLite2-City.mmdb') as db: - match = db.lookup_mine() - print 'My IP info:', match - """ - - def __init__(self): - self.closed = False - - def __enter__(self): - return self - - def __exit__(self, exc_type, exc_value, tb): - self.close() - - def close(self): - """Closes the database. The whole object can also be used as a - context manager. Databases that are packaged up (such as the - :data:`geolite2` database) do not need to be closed. - """ - self.closed = True - - def get_info(self): - """Returns an info object about the database. This can be used to - check for the build date of the database or what provides the GeoIP - data. - - :rtype: :class:`DatabaseInfo` - """ - raise NotImplementedError('This database does not provide info') - - def get_metadata(self): - """Return the metadata dictionary of the loaded database. This - dictionary is specific to the database provider. - """ - raise NotImplementedError('This database does not provide metadata') - - def lookup(self, ip_addr): - """Looks up the IP information in the database and returns a - :class:`IPInfo`. If it does not exist, `None` is returned. What - IP addresses are supported is specific to the GeoIP provider. - - :rtype: :class:`IPInfo` - """ - if self.closed: - raise RuntimeError('Database is closed.') - return self._lookup(ip_addr) - - def lookup_mine(self): - """Looks up the computer's IP by asking a web service and then - checks the database for a match. 
- - :rtype: :class:`IPInfo` - """ - ip = urllib.urlopen('http://icanhazip.com/').read().strip() - return self.lookup(ip) - - -class MaxMindDatabase(Database): - """Provides access to a maxmind database.""" - - def __init__(self, filename, buf, md): - Database.__init__(self) - self.filename = filename - self.is_ipv6 = md['ip_version'] == 6 - self.nodes = md['node_count'] - self.record_size = md['record_size'] - self.node_size = int(self.record_size / 4) - self.db_size = self.nodes * self.node_size - - self._buf = buf - self._md = md - self._reader = _MaxMindParser(buf, self.db_size) - self._ipv4_start = None - - def close(self): - Database.close(self) - self._buf.close() - - def get_metadata(self): - return self._md - - def get_info(self): - return DatabaseInfo( - filename=self.filename, - date=datetime.utcfromtimestamp(self._md['build_epoch']), - internal_name=_native_str(self._md['database_type']), - provider='maxmind', - ) - - def _lookup(self, ip_addr): - packed_addr = pack_ip(ip_addr) - bits = len(packed_addr) * 8 - - node = self._find_start_node(bits) - - seen = set() - for i in range(bits): - if node >= self.nodes: - break - bit = (packed_addr[i >> 3] >> (7 - (i % 8))) & 1 - node = self._parse_node(node, bit) - if node in seen: - raise LookupError('Circle in tree detected') - seen.add(node) - - if node > self.nodes: - offset = node - self.nodes + self.db_size - return IPInfo(ip_addr, self._reader.read(offset)[0]) - - def _find_start_node(self, bits): - if bits == 128 or not self.is_ipv6: - return 0 - - if self._ipv4_start is not None: - return self._ipv4_start - - # XXX: technically the next code is racy if used concurrently but - # the worst thing that can happen is that the ipv4 start node is - # calculated multiple times. 
- node = 0 - for netmask in range(96): - if node >= self.nodes: - break - node = self._parse_node(netmask, 0) - - self._ipv4_start = node - return node - - def _parse_node(self, node, index): - offset = node * self.node_size - - if self.record_size == 24: - offset += index * 3 - bytes = b'\x00' + self._buf[offset:offset + 3] - elif self.record_size == 28: - b = ord(self._buf[offset + 3:offset + 4]) - if index: - b &= 0x0F - else: - b = (0xF0 & b) >> 4 - offset += index * 4 - bytes = chr(b).encode('utf8') + self._buf[offset:offset + 3] - elif self.record_size == 32: - offset += index * 4 - bytes = self._buf[offset:offset + 4] - else: - raise LookupError('Invalid record size') - return _int_unpack(bytes)[0] - - def __repr__(self): - return '<%s %r>' % ( - self.__class__.__name__, - self.filename, - ) - - -class PackagedDatabase(Database): - """Provides access to a packaged database. Upon first usage the - system will import the provided package and invoke the ``loader`` - function to construct the actual database object. - - This is used for instance to implement the ``geolite2`` database - that is provided. - """ - - def __init__(self, name, package, pypi_name=None): - Database.__init__(self) - self.name = name - self.package = package - self.pypi_name = pypi_name - self._lock = Lock() - self._db = None - - def _load_database(self): - try: - mod = __import__(self.package, None, None, ['loader']) - except ImportError: - msg = 'Cannot use packaged database "%s" ' \ - 'because package "%s" is not available.' 
% (self.name, - self.package) - if self.pypi_name is not None: - msg += ' It\'s provided by PyPI package "%s"' % self.pypi_name - raise RuntimeError(msg) - return mod.loader(self, sys.modules[__name__]) - - def _get_actual_db(self): - if self._db is not None: - return self._db - with self._lock: - if self._db is not None: - return self._db - rv = self._load_database() - self._db = rv - return rv - - def close(self): - pass - - def get_info(self): - return self._get_actual_db().get_info() - - def get_metadata(self): - return self._get_actual_db().get_metadata() - - def lookup(self, ip_addr): - return self._get_actual_db().lookup(ip_addr) - - def __repr__(self): - return '<%s %r>' % ( - self.__class__.__name__, - self.name, - ) - - -#: Provides access to the geolite2 cities database. In order to use this -#: database the ``python-geoip-geolite2`` package needs to be installed. -geolite2 = PackagedDatabase('geolite2', '_geoip_geolite2', - pypi_name='python-geoip-geolite2') - - -def _read_mmdb_metadata(buf): - """Reads metadata from a given memory mapped buffer.""" - offset = buf.rfind(MMDB_METADATA_START, - buf.size() - MMDB_METADATA_BLOCK_MAX_SIZE) - if offset < 0: - raise ValueError('Could not find metadata') - offset += len(MMDB_METADATA_START) - return _MaxMindParser(buf, offset).read(offset)[0] - - -def make_struct_parser(code): - struct = Struct('>' + code) - def unpack_func(self, size, offset): - new_offset = offset + struct.size - bytes = self._buf[offset:new_offset].rjust(struct.size, b'\x00') - value = struct.unpack(bytes)[0] - return value, new_offset - return unpack_func - - -class _MaxMindParser(object): - - def __init__(self, buf, data_offset=0): - self._buf = buf - self._data_offset = data_offset - - def _parse_ptr(self, size, offset): - ptr_size = ((size >> 3) & 0x3) + 1 - bytes = self._buf[offset:offset + ptr_size] - if ptr_size != 4: - bytes = chr(size & 0x7).encode('utf8') + bytes - - ptr = ( - _int_unpack(bytes.rjust(4, b'\x00'))[0] + - 
self._data_offset + - MMDB_DATA_SECTION_SEPARATOR + - (0, 2048, 526336, 0)[ptr_size - 1] - ) - - return self.read(ptr)[0], offset + ptr_size - - def _parse_str(self, size, offset): - bytes = self._buf[offset:offset + size] - return bytes.decode('utf-8', 'replace'), offset + size - - _parse_double = make_struct_parser('d') - - def _parse_bytes(self, size, offset): - return self._buf[offset:offset + size], offset + size - - def _parse_uint(self, size, offset): - bytes = self._buf[offset:offset + size] - return _long_unpack(bytes.rjust(8, b'\x00'))[0], offset + size - - def _parse_dict(self, size, offset): - container = {} - for _ in range(size): - key, offset = self.read(offset) - value, offset = self.read(offset) - container[key] = value - return container, offset - - _parse_int32 = make_struct_parser('i') - - def _parse_list(self, size, offset): - rv = [None] * size - for idx in range(size): - rv[idx], offset = self.read(offset) - return rv, offset - - def _parse_error(self, size, offset): - raise AssertionError('Read invalid type code') - - def _parse_bool(self, size, offset): - return size != 0, offset - - _parse_float = make_struct_parser('f') - - _callbacks = ( - _parse_error, # 0 - _parse_ptr, # 1 pointer - _parse_str, # 2 utf-8 string - _parse_double, # 3 double - _parse_bytes, # 4 bytes - _parse_uint, # 5 uint16 - _parse_uint, # 6 uint32 - _parse_dict, # 7 map - _parse_int32, # 8 int32 - _parse_uint, # 9 uint64 - _parse_uint, # 10 uint128 - _parse_list, # 11 array - _parse_error, # 12 - _parse_error, # 13 - _parse_bool, # 14 boolean - _parse_float, # 15 float - ) - - def read(self, offset): - new_offset = offset + 1 - byte = ord(self._buf[offset:new_offset]) - size = byte & 0x1f - ty = byte >> 5 - - if ty == 0: - byte = ord(self._buf[new_offset:new_offset + 1]) - ty = byte + 7 - new_offset += 1 - - if ty != 1 and size >= 29: - to_read = size - 28 - bytes = self._buf[new_offset:new_offset + to_read] - new_offset += to_read - if size == 29: - size = 29 + 
ord(bytes) - elif size == 30: - size = 285 + _short_unpack(bytes)[0] - elif size > 30: - size = 65821 + _int_unpack(bytes.rjust(4, b'\x00'))[0] - - return self._callbacks[ty](self, size, new_offset) - - -def open_database(filename): - """Open a given database. This currently only supports maxmind - databases (mmdb). If the file cannot be opened an ``IOError`` is - raised. - """ - with open(filename, 'rb') as f: - buf = mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ) - md = _read_mmdb_metadata(buf) - return MaxMindDatabase(filename, buf, md) diff --git a/infrastructure/sandbox/Data/lambda/jpgrid.py b/infrastructure/sandbox/Data/lambda/jpgrid.py deleted file mode 100644 index e7216f7e5..000000000 --- a/infrastructure/sandbox/Data/lambda/jpgrid.py +++ /dev/null @@ -1,161 +0,0 @@ -# coding: UTF-8 -# Coder for Japanese grid square code. (JIS C 6304 / JIS X 0410) -# 行政管理庁告示第143号 http://www.stat.go.jp/data/mesh/ - -def _encode_i2c(lat, lon, base1): - t=[] - while base1>80: - t.append(1 + (lat&1)*2 + (lon&1)) - lat = lat>>1 - lon = lon>>1 - base1 = base1>>1 - - if base1==80: - t.append(lon%10) - t.append(lat%10) - lat = int(lat/10) - lon = int(lon/10) - base1 = int(base1/10) - elif base1==16: # Uni5 - t.append(1 + (lat&1)*2 + (lon&1)) - lat = lat>>1 - lon = lon>>1 - base1 = base1>>1 - elif base1==40: # Uni2 - t.append(5) - t.append(lon%5*2) - t.append(lat%5*2) - lat = int(lat/5) - lon = int(lon/5) - base1 = int(base1/5) - - if base1==8: - t.append(lon%8) - t.append(lat%8) - lat = lat>>3 - lon = lon>>3 - base1 = base1>>3 - - t.append(lon) - t.append(lat) - t.reverse() - return ''.join([str(i) for i in t]) - -def encode(latitude, longitude, base1=80): - return _encode_i2c(int(latitude*base1*1.5), int(longitude*base1-100.0*base1), base1) - -#def _encode_i2c(lat, lon, base1): -def _decode_c2i(gridcode): - base1 = 1 - lat = lon = 0 - codelen = len(gridcode) - if codelen>0: - lat = int(gridcode[0:2]) - lon = int(gridcode[2:4]) - - if codelen>4: - lat = (lat<<3) + 
int(gridcode[4:5]) - lon = (lon<<3) + int(gridcode[5:6]) - base1 = base1<<3 - - if codelen>6: - if codelen==7: - i = int(gridcode[6:7])-1 - lat = (lat<<1) + int(i/2) - lon = (lon<<1) + i%2 - base1 = base1<<1 - else: - lat = lat*10 + int(gridcode[6:7]) - lon = lon*10 + int(gridcode[7:8]) - base1 = base1*10 - - if codelen>8: - if gridcode[8:]=='5': - lat = lat>>1 - lon = lon>>1 - base1 = base1>>1 - else: - for i in gridcode[8:]: - i = int(i)-1 - lat = (lat<<1) + int(i/2) - lon = (lon<<1) + i%2 - base1 = base1<<1 - - return (lat, lon, base1) - -def decode_sw(gridcode, delta=False): - (lat, lon, base1) = _decode_c2i(gridcode) - - lat = lat/(base1*1.5) - lon = lon/float(base1) + 100.0 - - if delta: - return (lat, lon, 1.0/(base1*1.5), 1.0/base1) - else: - return (lat, lon) - -def decode(gridcode): - (lat, lon, base1) = _decode_c2i(gridcode) - - # center position of the meshcode. - lat = (lat<<1) + 1 - lon = (lon<<1) + 1 - base1 = base1<<1 - return (lat/(base1*1.5), lon/float(base1) + 100.0) - -def bbox(gridcode): - (a,b,c,d) = decode_sw(gridcode, True) - return {'w':a, 's':b, 'n':b+d, 'e':a+c} - - -## short-cut methods -def encodeLv1(lat, lon): - return encode(lat,lon,1) - -def encodeLv2(lat, lon): - return encode(lat,lon,8) - -def encodeLv3(lat, lon): - return encode(lat,lon,80) - -def encodeBase(lat,lon): - return encodeLv3(lat,lon) - -def encodeHalf(lat,lon): - return encode(lat,lon,160) - -def encodeQuarter(lat,lon): - return encode(lat,lon,320) - -def encodeEighth(lat,lon): - return encode(lat,lon,640) - -def encodeUni10(lat,lon): - return encodeLv2(lat,lon) - -def encodeUni5(lat, lon): - return encode(lat,lon,16) - -def encodeUni2(lat, lon): - return encode(lat,lon,40) - - -def neighbors(gridcode): - (lat,lon,base1)=_decode_c2i(gridcode) - ret = [] - for i in ((0,-1),(0,1),(1,-1),(1,0),(1,1),(-1,-1),(-1,0),(-1,1)): - tlat=lat+i[0] - tlon=lon+i[1] - if tlat<0 or tlat>(90*base1): - continue - if tlon<0 or tlon>(100*base1): - continue - - 
ret.append(_encode_i2c(tlat,tlon,base1)) - - return ret - -def expand(gridcode): - ret = neighbors(gridcode) - ret.append(gridcode) - return ret diff --git a/infrastructure/sandbox/Data/lambda/jpiarea.py b/infrastructure/sandbox/Data/lambda/jpiarea.py deleted file mode 100644 index 4158211ef..000000000 --- a/infrastructure/sandbox/Data/lambda/jpiarea.py +++ /dev/null @@ -1,87 +0,0 @@ -# coding: UTF-8 -# Coder for Japanese iarea grid code. -# NTT DoCoMo's Open iArea in Japan use a gridcode which is very similar to -# JIS X 0410, but absolutely different in detail. - -def _encode_i2c(lat,lon,basebits): - t=[] - for i in range(basebits-3): - t.append((lat&1)*2 + (lon&1)) - lat = lat>>1 - lon = lon>>1 - - if basebits>=3: - t.append(lon&7) - t.append(lat&7) - lat = lat>>3 - lon = lon>>3 - - t.append(lon) - t.append(lat) - t.reverse() - return ''.join([str(i) for i in t]) - -def encode(lat, lon): - if lat<7 or lon<100: - raise Exception('Unsupported location') - - basebits = 8 - return _encode_i2c(int(lat * (1<6: - for i in gridcode[6:]: - lat = (lat<<1) + int(int(i)/2) - lon = (lon<<1) + int(i)%2 - base = base<<1 - basebits += 1 - - if len(gridcode)>4: - lat = int(gridcode[4:5])*base + lat - lon = int(gridcode[5:6])*base + lon - base = base<<3 - basebits += 3 - - lat = int(gridcode[0:2])*base + lat - lon = int(gridcode[2:4])*base + lon - - return (lat, lon, basebits) - -def decode_sw(gridcode, delta=False): - lat, lon, basebits = _decode_c2i(gridcode) - - if delta: - return (float(lat)/(1.5*(1<(90<(100< -""" -try: - import _geohash -except ImportError: - _geohash = None - -def _encode_i2c(lat,lon,bitlength): - digits='0123' - r = '' - while bitlength>0: - r += digits[((lat&1)<<1)+(lon&1)] - lat = lat>>1 - lon = lon>>1 - bitlength -= 1 - - return r[::-1] - -def _decode_c2i(treecode): - lat = 0 - lon = 0 - for i in treecode: - b = ord(i)-48 - lat = (lat<<1)+int(b/2) - lon = (lon<<1)+b%2 - - return (lat,lon,len(treecode)) - -def encode(lat,lon,precision=12): - if _geohash 
and precision<=64: - ints = _geohash.encode_int(lat, lon) - ret = "" - for intu in ints: - for i in range(int(_geohash.intunit/2)): - if len(ret) > precision: - break - ret += "0213"[(intu>>(_geohash.intunit-2-i*2))&0x03] - - return ret[:precision] - - b = 1<>bitlength: - for tlon in (lon-1, lon, lon+1): - r.append(_encode_i2c(tlat, tlon, bitlength)) - - tlat = lat-1 - if tlat>=0: - for tlon in (lon-1, lon, lon+1): - r.append(_encode_i2c(tlat, tlon, bitlength)) - - return r - -def expand(treecode): - r = neighbors(treecode) - r.append(treecode) - return r - diff --git a/infrastructure/sandbox/Data/lambda/urllib3-1.26.12.dist-info/INSTALLER b/infrastructure/sandbox/Data/lambda/urllib3-1.26.12.dist-info/INSTALLER deleted file mode 100644 index a1b589e38..000000000 --- a/infrastructure/sandbox/Data/lambda/urllib3-1.26.12.dist-info/INSTALLER +++ /dev/null @@ -1 +0,0 @@ -pip diff --git a/infrastructure/sandbox/Data/lambda/urllib3-1.26.12.dist-info/METADATA b/infrastructure/sandbox/Data/lambda/urllib3-1.26.12.dist-info/METADATA deleted file mode 100644 index 1239f90c1..000000000 --- a/infrastructure/sandbox/Data/lambda/urllib3-1.26.12.dist-info/METADATA +++ /dev/null @@ -1,1448 +0,0 @@ -Metadata-Version: 2.1 -Name: urllib3 -Version: 1.26.12 -Summary: HTTP library with thread-safe connection pooling, file post, and more. 
-Home-page: https://urllib3.readthedocs.io/ -Author: Andrey Petrov -Author-email: andrey.petrov@shazow.net -License: MIT -Project-URL: Documentation, https://urllib3.readthedocs.io/ -Project-URL: Code, https://github.com/urllib3/urllib3 -Project-URL: Issue tracker, https://github.com/urllib3/urllib3/issues -Keywords: urllib httplib threadsafe filepost http https ssl pooling -Classifier: Environment :: Web Environment -Classifier: Intended Audience :: Developers -Classifier: License :: OSI Approved :: MIT License -Classifier: Operating System :: OS Independent -Classifier: Programming Language :: Python -Classifier: Programming Language :: Python :: 2 -Classifier: Programming Language :: Python :: 2.7 -Classifier: Programming Language :: Python :: 3 -Classifier: Programming Language :: Python :: 3.6 -Classifier: Programming Language :: Python :: 3.7 -Classifier: Programming Language :: Python :: 3.8 -Classifier: Programming Language :: Python :: 3.9 -Classifier: Programming Language :: Python :: 3.10 -Classifier: Programming Language :: Python :: 3.11 -Classifier: Programming Language :: Python :: Implementation :: CPython -Classifier: Programming Language :: Python :: Implementation :: PyPy -Classifier: Topic :: Internet :: WWW/HTTP -Classifier: Topic :: Software Development :: Libraries -Requires-Python: >=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*, !=3.4.*, !=3.5.*, <4 -Description-Content-Type: text/x-rst -License-File: LICENSE.txt -Provides-Extra: brotli -Requires-Dist: brotlicffi (>=0.8.0) ; ((os_name != "nt" or python_version >= "3") and platform_python_implementation != "CPython") and extra == 'brotli' -Requires-Dist: brotli (>=1.0.9) ; ((os_name != "nt" or python_version >= "3") and platform_python_implementation == "CPython") and extra == 'brotli' -Requires-Dist: brotlipy (>=0.6.0) ; (os_name == "nt" and python_version < "3") and extra == 'brotli' -Provides-Extra: secure -Requires-Dist: pyOpenSSL (>=0.14) ; extra == 'secure' -Requires-Dist: cryptography 
(>=1.3.4) ; extra == 'secure' -Requires-Dist: idna (>=2.0.0) ; extra == 'secure' -Requires-Dist: certifi ; extra == 'secure' -Requires-Dist: urllib3-secure-extra ; extra == 'secure' -Requires-Dist: ipaddress ; (python_version == "2.7") and extra == 'secure' -Provides-Extra: socks -Requires-Dist: PySocks (!=1.5.7,<2.0,>=1.5.6) ; extra == 'socks' - - -urllib3 is a powerful, *user-friendly* HTTP client for Python. Much of the -Python ecosystem already uses urllib3 and you should too. -urllib3 brings many critical features that are missing from the Python -standard libraries: - -- Thread safety. -- Connection pooling. -- Client-side SSL/TLS verification. -- File uploads with multipart encoding. -- Helpers for retrying requests and dealing with HTTP redirects. -- Support for gzip, deflate, and brotli encoding. -- Proxy support for HTTP and SOCKS. -- 100% test coverage. - -urllib3 is powerful and easy to use: - -.. code-block:: python - - >>> import urllib3 - >>> http = urllib3.PoolManager() - >>> r = http.request('GET', 'http://httpbin.org/robots.txt') - >>> r.status - 200 - >>> r.data - 'User-agent: *\nDisallow: /deny\n' - - -Installing ----------- - -urllib3 can be installed with `pip `_:: - - $ python -m pip install urllib3 - -Alternatively, you can grab the latest source code from `GitHub `_:: - - $ git clone https://github.com/urllib3/urllib3.git - $ cd urllib3 - $ git checkout 1.26.x - $ pip install . - - -Documentation -------------- - -urllib3 has usage and reference documentation at `urllib3.readthedocs.io `_. - - -Contributing ------------- - -urllib3 happily accepts contributions. Please see our -`contributing documentation `_ -for some tips on getting started. - - -Security Disclosures --------------------- - -To report a security vulnerability, please use the -`Tidelift security contact `_. -Tidelift will coordinate the fix and disclosure with maintainers. - - -Maintainers ------------ - -- `@sethmlarson `__ (Seth M. 
Larson) -- `@pquentin `__ (Quentin Pradet) -- `@theacodes `__ (Thea Flowers) -- `@haikuginger `__ (Jess Shapiro) -- `@lukasa `__ (Cory Benfield) -- `@sigmavirus24 `__ (Ian Stapleton Cordasco) -- `@shazow `__ (Andrey Petrov) - -👋 - - -Sponsorship ------------ - -If your company benefits from this library, please consider `sponsoring its -development `_. - - -For Enterprise --------------- - -.. |tideliftlogo| image:: https://nedbatchelder.com/pix/Tidelift_Logos_RGB_Tidelift_Shorthand_On-White_small.png - :width: 75 - :alt: Tidelift - -.. list-table:: - :widths: 10 100 - - * - |tideliftlogo| - - Professional support for urllib3 is available as part of the `Tidelift - Subscription`_. Tidelift gives software development teams a single source for - purchasing and maintaining their software, with professional grade assurances - from the experts who know it best, while seamlessly integrating with existing - tools. - -.. _Tidelift Subscription: https://tidelift.com/subscription/pkg/pypi-urllib3?utm_source=pypi-urllib3&utm_medium=referral&utm_campaign=readme - - -Changes -======= - -1.26.12 (2022-08-22) --------------------- - -* Deprecated the `urllib3[secure]` extra and the `urllib3.contrib.pyopenssl` module. - Both will be removed in v2.x. See this `GitHub issue `_ - for justification and info on how to migrate. - - -1.26.11 (2022-07-25) --------------------- - -* Fixed an issue where reading more than 2 GiB in a call to ``HTTPResponse.read`` would - raise an ``OverflowError`` on Python 3.9 and earlier. - - -1.26.10 (2022-07-07) --------------------- - -* Removed support for Python 3.5 -* Fixed an issue where a ``ProxyError`` recommending configuring the proxy as HTTP - instead of HTTPS could appear even when an HTTPS proxy wasn't configured. - - -1.26.9 (2022-03-16) -------------------- - -* Changed ``urllib3[brotli]`` extra to favor installing Brotli libraries that are still - receiving updates like ``brotli`` and ``brotlicffi`` instead of ``brotlipy``. 
- This change does not impact behavior of urllib3, only which dependencies are installed. -* Fixed a socket leaking when ``HTTPSConnection.connect()`` raises an exception. -* Fixed ``server_hostname`` being forwarded from ``PoolManager`` to ``HTTPConnectionPool`` - when requesting an HTTP URL. Should only be forwarded when requesting an HTTPS URL. - - -1.26.8 (2022-01-07) -------------------- - -* Added extra message to ``urllib3.exceptions.ProxyError`` when urllib3 detects that - a proxy is configured to use HTTPS but the proxy itself appears to only use HTTP. -* Added a mention of the size of the connection pool when discarding a connection due to the pool being full. -* Added explicit support for Python 3.11. -* Deprecated the ``Retry.MAX_BACKOFF`` class property in favor of ``Retry.DEFAULT_MAX_BACKOFF`` - to better match the rest of the default parameter names. ``Retry.MAX_BACKOFF`` is removed in v2.0. -* Changed location of the vendored ``ssl.match_hostname`` function from ``urllib3.packages.ssl_match_hostname`` - to ``urllib3.util.ssl_match_hostname`` to ensure Python 3.10+ compatibility after being repackaged - by downstream distributors. -* Fixed absolute imports, all imports are now relative. - - -1.26.7 (2021-09-22) -------------------- - -* Fixed a bug with HTTPS hostname verification involving IP addresses and lack - of SNI. (Issue #2400) -* Fixed a bug where IPv6 braces weren't stripped during certificate hostname - matching. (Issue #2240) - - -1.26.6 (2021-06-25) -------------------- - -* Deprecated the ``urllib3.contrib.ntlmpool`` module. urllib3 is not able to support - it properly due to `reasons listed in this issue `_. - If you are a user of this module please leave a comment. -* Changed ``HTTPConnection.request_chunked()`` to not erroneously emit multiple - ``Transfer-Encoding`` headers in the case that one is already specified. -* Fixed typo in deprecation message to recommend ``Retry.DEFAULT_ALLOWED_METHODS``. 
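Several of the entries above track the migration from the deprecated `method_whitelist` retry option to `allowed_methods`. A minimal sketch of the newer spelling (the specific values here are illustrative, not defaults):

```python
from urllib3.util.retry import Retry

# allowed_methods supersedes the deprecated method_whitelist option;
# backoff_factor controls the exponential delay between retries.
retry = Retry(
    total=3,
    allowed_methods=frozenset({"GET", "HEAD"}),
    backoff_factor=0.5,
)

# Retry objects are treated as immutable; new() derives a copy with overrides.
stricter = retry.new(total=1)
```

A `Retry` instance built this way can be passed as the `retries=` argument to `PoolManager.request()`.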
- - -1.26.5 (2021-05-26) -------------------- - -* Fixed deprecation warnings emitted in Python 3.10. -* Updated vendored ``six`` library to 1.16.0. -* Improved performance of URL parser when splitting - the authority component. - - -1.26.4 (2021-03-15) -------------------- - -* Changed behavior of the default ``SSLContext`` when connecting to HTTPS proxy - during HTTPS requests. The default ``SSLContext`` now sets ``check_hostname=True``. - - -1.26.3 (2021-01-26) -------------------- - -* Fixed bytes and string comparison issue with headers (Pull #2141) - -* Changed ``ProxySchemeUnknown`` error message to be - more actionable if the user supplies a proxy URL without - a scheme. (Pull #2107) - - -1.26.2 (2020-11-12) -------------------- - -* Fixed an issue where ``wrap_socket`` and ``CERT_REQUIRED`` wouldn't - be imported properly on Python 2.7.8 and earlier (Pull #2052) - - -1.26.1 (2020-11-11) -------------------- - -* Fixed an issue where two ``User-Agent`` headers would be sent if a - ``User-Agent`` header key is passed as ``bytes`` (Pull #2047) - - -1.26.0 (2020-11-10) -------------------- - -* **NOTE: urllib3 v2.0 will drop support for Python 2**. - `Read more in the v2.0 Roadmap `_. - -* Added support for HTTPS proxies contacting HTTPS servers (Pull #1923, Pull #1806) - -* Deprecated negotiating TLSv1 and TLSv1.1 by default. 
Users that - still wish to use TLS earlier than 1.2 without a deprecation warning - should opt-in explicitly by setting ``ssl_version=ssl.PROTOCOL_TLSv1_1`` (Pull #2002) - **Starting in urllib3 v2.0: Connections that receive a ``DeprecationWarning`` will fail** - -* Deprecated ``Retry`` options ``Retry.DEFAULT_METHOD_WHITELIST``, ``Retry.DEFAULT_REDIRECT_HEADERS_BLACKLIST`` - and ``Retry(method_whitelist=...)`` in favor of ``Retry.DEFAULT_ALLOWED_METHODS``, - ``Retry.DEFAULT_REMOVE_HEADERS_ON_REDIRECT``, and ``Retry(allowed_methods=...)`` - (Pull #2000) **Starting in urllib3 v2.0: Deprecated options will be removed** - -* Added default ``User-Agent`` header to every request (Pull #1750) - -* Added ``urllib3.util.SKIP_HEADER`` for skipping ``User-Agent``, ``Accept-Encoding``, - and ``Host`` headers from being automatically emitted with requests (Pull #2018) - -* Collapse ``transfer-encoding: chunked`` request data and framing into - the same ``socket.send()`` call (Pull #1906) - -* Send ``http/1.1`` ALPN identifier with every TLS handshake by default (Pull #1894) - -* Properly terminate SecureTransport connections when CA verification fails (Pull #1977) - -* Don't emit an ``SNIMissingWarning`` when passing ``server_hostname=None`` - to SecureTransport (Pull #1903) - -* Disabled requesting TLSv1.2 session tickets as they weren't being used by urllib3 (Pull #1970) - -* Suppress ``BrokenPipeError`` when writing request body after the server - has closed the socket (Pull #1524) - -* Wrap ``ssl.SSLError`` that can be raised from reading a socket (e.g. "bad MAC") - into an ``urllib3.exceptions.SSLError`` (Pull #1939) - - -1.25.11 (2020-10-19) --------------------- - -* Fix retry backoff time parsed from ``Retry-After`` header when given - in the HTTP date format. 
The HTTP date was parsed as the local timezone - rather than accounting for the timezone in the HTTP date (typically - UTC) (Pull #1932, Pull #1935, Pull #1938, Pull #1949) - -* Fix issue where an error would be raised when the ``SSLKEYLOGFILE`` - environment variable was set to the empty string. Now ``SSLContext.keylog_file`` - is not set in this situation (Pull #2016) - - -1.25.10 (2020-07-22) --------------------- - -* Added support for ``SSLKEYLOGFILE`` environment variable for - logging TLS session keys with use with programs like - Wireshark for decrypting captured web traffic (Pull #1867) - -* Fixed loading of SecureTransport libraries on macOS Big Sur - due to the new dynamic linker cache (Pull #1905) - -* Collapse chunked request bodies data and framing into one - call to ``send()`` to reduce the number of TCP packets by 2-4x (Pull #1906) - -* Don't insert ``None`` into ``ConnectionPool`` if the pool - was empty when requesting a connection (Pull #1866) - -* Avoid ``hasattr`` call in ``BrotliDecoder.decompress()`` (Pull #1858) - - -1.25.9 (2020-04-16) -------------------- - -* Added ``InvalidProxyConfigurationWarning`` which is raised when - erroneously specifying an HTTPS proxy URL. urllib3 doesn't currently - support connecting to HTTPS proxies but will soon be able to - and we would like users to migrate properly without much breakage. - - See `this GitHub issue `_ - for more information on how to fix your proxy config. 
(Pull #1851) - -* Drain connection after ``PoolManager`` redirect (Pull #1817) - -* Ensure ``load_verify_locations`` raises ``SSLError`` for all backends (Pull #1812) - -* Rename ``VerifiedHTTPSConnection`` to ``HTTPSConnection`` (Pull #1805) - -* Allow the CA certificate data to be passed as a string (Pull #1804) - -* Raise ``ValueError`` if method contains control characters (Pull #1800) - -* Add ``__repr__`` to ``Timeout`` (Pull #1795) - - -1.25.8 (2020-01-20) -------------------- - -* Drop support for EOL Python 3.4 (Pull #1774) - -* Optimize _encode_invalid_chars (Pull #1787) - - -1.25.7 (2019-11-11) -------------------- - -* Preserve ``chunked`` parameter on retries (Pull #1715, Pull #1734) - -* Allow unset ``SERVER_SOFTWARE`` in App Engine (Pull #1704, Issue #1470) - -* Fix issue where URL fragment was sent within the request target. (Pull #1732) - -* Fix issue where an empty query section in a URL would fail to parse. (Pull #1732) - -* Remove TLS 1.3 support in SecureTransport due to Apple removing support (Pull #1703) - - -1.25.6 (2019-09-24) -------------------- - -* Fix issue where tilde (``~``) characters were incorrectly - percent-encoded in the path. (Pull #1692) - - -1.25.5 (2019-09-19) -------------------- - -* Add mitigation for BPO-37428 affecting Python <3.7.4 and OpenSSL 1.1.1+ which - caused certificate verification to be enabled when using ``cert_reqs=CERT_NONE``. - (Issue #1682) - - -1.25.4 (2019-09-19) -------------------- - -* Propagate Retry-After header settings to subsequent retries. (Pull #1607) - -* Fix edge case where Retry-After header was still respected even when - explicitly opted out of. (Pull #1607) - -* Remove dependency on ``rfc3986`` for URL parsing. - -* Fix issue where URLs containing invalid characters within ``Url.auth`` would - raise an exception instead of percent-encoding those characters. 
- -* Add support for ``HTTPResponse.auto_close = False`` which makes HTTP responses - work well with BufferedReaders and other ``io`` module features. (Pull #1652) - -* Percent-encode invalid characters in URL for ``HTTPConnectionPool.request()`` (Pull #1673) - - -1.25.3 (2019-05-23) -------------------- - -* Change ``HTTPSConnection`` to load system CA certificates - when ``ca_certs``, ``ca_cert_dir``, and ``ssl_context`` are - unspecified. (Pull #1608, Issue #1603) - -* Upgrade bundled rfc3986 to v1.3.2. (Pull #1609, Issue #1605) - - -1.25.2 (2019-04-28) -------------------- - -* Change ``is_ipaddress`` to not detect IPvFuture addresses. (Pull #1583) - -* Change ``parse_url`` to percent-encode invalid characters within the - path, query, and target components. (Pull #1586) - - -1.25.1 (2019-04-24) -------------------- - -* Add support for Google's ``Brotli`` package. (Pull #1572, Pull #1579) - -* Upgrade bundled rfc3986 to v1.3.1 (Pull #1578) - - -1.25 (2019-04-22) ------------------ - -* Require and validate certificates by default when using HTTPS (Pull #1507) - -* Upgraded ``urllib3.utils.parse_url()`` to be RFC 3986 compliant. (Pull #1487) - -* Added support for ``key_password`` for ``HTTPSConnectionPool`` to use - encrypted ``key_file`` without creating your own ``SSLContext`` object. (Pull #1489) - -* Add TLSv1.3 support to CPython, pyOpenSSL, and SecureTransport ``SSLContext`` - implementations. (Pull #1496) - -* Switched the default multipart header encoder from RFC 2231 to HTML 5 working draft. (Issue #303, Pull #1492) - -* Fixed issue where OpenSSL would block if an encrypted client private key was - given and no password was given. Instead an ``SSLError`` is raised. (Pull #1489) - -* Added support for Brotli content encoding. It is enabled automatically if - ``brotlipy`` package is installed which can be requested with - ``urllib3[brotli]`` extra. (Pull #1532) - -* Drop ciphers using DSS key exchange from default TLS cipher suites. 
- Improve default ciphers when using SecureTransport. (Pull #1496) - -* Implemented a more efficient ``HTTPResponse.__iter__()`` method. (Issue #1483) - -1.24.3 (2019-05-01) -------------------- - -* Apply fix for CVE-2019-9740. (Pull #1591) - -1.24.2 (2019-04-17) -------------------- - -* Don't load system certificates by default when any other ``ca_certs``, ``ca_certs_dir`` or - ``ssl_context`` parameters are specified. - -* Remove Authorization header regardless of case when redirecting to cross-site. (Issue #1510) - -* Add support for IPv6 addresses in subjectAltName section of certificates. (Issue #1269) - - -1.24.1 (2018-11-02) -------------------- - -* Remove quadratic behavior within ``GzipDecoder.decompress()`` (Issue #1467) - -* Restored functionality of ``ciphers`` parameter for ``create_urllib3_context()``. (Issue #1462) - - -1.24 (2018-10-16) ------------------ - -* Allow key_server_hostname to be specified when initializing a PoolManager to allow custom SNI to be overridden. (Pull #1449) - -* Test against Python 3.7 on AppVeyor. (Pull #1453) - -* Early-out ipv6 checks when running on App Engine. (Pull #1450) - -* Change ambiguous description of backoff_factor (Pull #1436) - -* Add ability to handle multiple Content-Encodings (Issue #1441 and Pull #1442) - -* Skip DNS names that can't be idna-decoded when using pyOpenSSL (Issue #1405). - -* Add a server_hostname parameter to HTTPSConnection which allows for - overriding the SNI hostname sent in the handshake. (Pull #1397) - -* Drop support for EOL Python 2.6 (Pull #1429 and Pull #1430) - -* Fixed bug where responses with header Content-Type: message/* erroneously - raised HeaderParsingError, resulting in a warning being logged. (Pull #1439) - -* Move urllib3 to src/urllib3 (Pull #1409) - - -1.23 (2018-06-04) ------------------ - -* Allow providing a list of headers to strip from requests when redirecting - to a different host. Defaults to the ``Authorization`` header. 
Different - headers can be set via ``Retry.remove_headers_on_redirect``. (Issue #1316) - -* Fix ``util.selectors._fileobj_to_fd`` to accept ``long`` (Issue #1247). - -* Dropped Python 3.3 support. (Pull #1242) - -* Put the connection back in the pool when calling stream() or read_chunked() on - a chunked HEAD response. (Issue #1234) - -* Fixed pyOpenSSL-specific ssl client authentication issue when clients - attempted to auth via certificate + chain (Issue #1060) - -* Add the port to the connectionpool connect print (Pull #1251) - -* Don't use the ``uuid`` module to create multipart data boundaries. (Pull #1380) - -* ``read_chunked()`` on a closed response returns no chunks. (Issue #1088) - -* Add Python 2.6 support to ``contrib.securetransport`` (Pull #1359) - -* Added support for auth info in url for SOCKS proxy (Pull #1363) - - -1.22 (2017-07-20) ------------------ - -* Fixed missing brackets in ``HTTP CONNECT`` when connecting to IPv6 address via - IPv6 proxy. (Issue #1222) - -* Made the connection pool retry on ``SSLError``. The original ``SSLError`` - is available on ``MaxRetryError.reason``. (Issue #1112) - -* Drain and release connection before recursing on retry/redirect. Fixes - deadlocks with a blocking connectionpool. (Issue #1167) - -* Fixed compatibility for cookiejar. (Issue #1229) - -* pyopenssl: Use vendored version of ``six``. (Issue #1231) - - -1.21.1 (2017-05-02) -------------------- - -* Fixed SecureTransport issue that would cause long delays in response body - delivery. (Pull #1154) - -* Fixed regression in 1.21 that threw exceptions when users passed the - ``socket_options`` flag to the ``PoolManager``. (Issue #1165) - -* Fixed regression in 1.21 that threw exceptions when users passed the - ``assert_hostname`` or ``assert_fingerprint`` flag to the ``PoolManager``. - (Pull #1157) - - -1.21 (2017-04-25) ------------------ - -* Improved performance of certain selector system calls on Python 3.5 and - later. 
(Pull #1095) - -* Resolved issue where the PyOpenSSL backend would not wrap SysCallError - exceptions appropriately when sending data. (Pull #1125) - -* Selectors now detects a monkey-patched select module after import for modules - that patch the select module like eventlet, greenlet. (Pull #1128) - -* Reduced memory consumption when streaming zlib-compressed responses - (as opposed to raw deflate streams). (Pull #1129) - -* Connection pools now use the entire request context when constructing the - pool key. (Pull #1016) - -* ``PoolManager.connection_from_*`` methods now accept a new keyword argument, - ``pool_kwargs``, which are merged with the existing ``connection_pool_kw``. - (Pull #1016) - -* Add retry counter for ``status_forcelist``. (Issue #1147) - -* Added ``contrib`` module for using SecureTransport on macOS: - ``urllib3.contrib.securetransport``. (Pull #1122) - -* urllib3 now only normalizes the case of ``http://`` and ``https://`` schemes: - for schemes it does not recognise, it assumes they are case-sensitive and - leaves them unchanged. - (Issue #1080) - - -1.20 (2017-01-19) ------------------ - -* Added support for waiting for I/O using selectors other than select, - improving urllib3's behaviour with large numbers of concurrent connections. - (Pull #1001) - -* Updated the date for the system clock check. (Issue #1005) - -* ConnectionPools now correctly consider hostnames to be case-insensitive. - (Issue #1032) - -* Outdated versions of PyOpenSSL now cause the PyOpenSSL contrib module - to fail when it is injected, rather than at first use. (Pull #1063) - -* Outdated versions of cryptography now cause the PyOpenSSL contrib module - to fail when it is injected, rather than at first use. (Issue #1044) - -* Automatically attempt to rewind a file-like body object when a request is - retried or redirected. (Pull #1039) - -* Fix some bugs that occur when modules incautiously patch the queue module. 
- (Pull #1061) - -* Prevent retries from occurring on read timeouts for which the request method - was not in the method whitelist. (Issue #1059) - -* Changed the PyOpenSSL contrib module to lazily load idna to avoid - unnecessarily bloating the memory of programs that don't need it. (Pull - #1076) - -* Add support for IPv6 literals with zone identifiers. (Pull #1013) - -* Added support for socks5h:// and socks4a:// schemes when working with SOCKS - proxies, and controlled remote DNS appropriately. (Issue #1035) - - -1.19.1 (2016-11-16) -------------------- - -* Fixed AppEngine import that didn't function on Python 3.5. (Pull #1025) - - -1.19 (2016-11-03) ------------------ - -* urllib3 now respects Retry-After headers on 413, 429, and 503 responses when - using the default retry logic. (Pull #955) - -* Remove markers from setup.py to assist ancient setuptools versions. (Issue - #986) - -* Disallow superscripts and other integerish things in URL ports. (Issue #989) - -* Allow urllib3's HTTPResponse.stream() method to continue to work with - non-httplib underlying FPs. (Pull #990) - -* Empty filenames in multipart headers are now emitted as such, rather than - being suppressed. (Issue #1015) - -* Prefer user-supplied Host headers on chunked uploads. (Issue #1009) - - -1.18.1 (2016-10-27) -------------------- - -* CVE-2016-9015. Users who are using urllib3 version 1.17 or 1.18 along with - PyOpenSSL injection and OpenSSL 1.1.0 *must* upgrade to this version. This - release fixes a vulnerability whereby urllib3 in the above configuration - would silently fail to validate TLS certificates due to erroneously setting - invalid flags in OpenSSL's ``SSL_CTX_set_verify`` function. These erroneous - flags do not cause a problem in OpenSSL versions before 1.1.0, which - interprets the presence of any flag as requesting certificate validation. - - There is no PR for this patch, as it was prepared for simultaneous disclosure - and release. 
The master branch received the same fix in Pull #1010. - - -1.18 (2016-09-26) ------------------ - -* Fixed incorrect message for IncompleteRead exception. (Pull #973) - -* Accept ``iPAddress`` subject alternative name fields in TLS certificates. - (Issue #258) - -* Fixed consistency of ``HTTPResponse.closed`` between Python 2 and 3. - (Issue #977) - -* Fixed handling of wildcard certificates when using PyOpenSSL. (Issue #979) - - -1.17 (2016-09-06) ------------------ - -* Accept ``SSLContext`` objects for use in SSL/TLS negotiation. (Issue #835) - -* ConnectionPool debug log now includes scheme, host, and port. (Issue #897) - -* Substantially refactored documentation. (Issue #887) - -* Used URLFetch default timeout on AppEngine, rather than hardcoding our own. - (Issue #858) - -* Normalize the scheme and host in the URL parser (Issue #833) - -* ``HTTPResponse`` contains the last ``Retry`` object, which now also - contains retries history. (Issue #848) - -* Timeout can no longer be set as boolean, and must be greater than zero. - (Pull #924) - -* Removed pyasn1 and ndg-httpsclient from dependencies used for PyOpenSSL. We - now use cryptography and idna, both of which are already dependencies of - PyOpenSSL. (Pull #930) - -* Fixed infinite loop in ``stream`` when amt=None. (Issue #928) - -* Try to use the operating system's certificates when we are using an - ``SSLContext``. (Pull #941) - -* Updated cipher suite list to allow ChaCha20+Poly1305. AES-GCM is preferred to - ChaCha20, but ChaCha20 is then preferred to everything else. (Pull #947) - -* Updated cipher suite list to remove 3DES-based cipher suites. (Pull #958) - -* Removed the cipher suite fallback to allow HIGH ciphers. (Pull #958) - -* Implemented ``length_remaining`` to determine remaining content - to be read. (Pull #949) - -* Implemented ``enforce_content_length`` to enable exceptions when - incomplete data chunks are received. 
(Pull #949) - -* Dropped connection start, dropped connection reset, redirect, forced retry, - and new HTTPS connection log levels to DEBUG, from INFO. (Pull #967) - - -1.16 (2016-06-11) ------------------ - -* Disable IPv6 DNS when IPv6 connections are not possible. (Issue #840) - -* Provide ``key_fn_by_scheme`` pool keying mechanism that can be - overridden. (Issue #830) - -* Normalize scheme and host to lowercase for pool keys, and include - ``source_address``. (Issue #830) - -* Cleaner exception chain in Python 3 for ``_make_request``. - (Issue #861) - -* Fixed installing ``urllib3[socks]`` extra. (Issue #864) - -* Fixed signature of ``ConnectionPool.close`` so it can actually safely be - called by subclasses. (Issue #873) - -* Retain ``release_conn`` state across retries. (Issues #651, #866) - -* Add customizable ``HTTPConnectionPool.ResponseCls``, which defaults to - ``HTTPResponse`` but can be replaced with a subclass. (Issue #879) - - -1.15.1 (2016-04-11) -------------------- - -* Fix packaging to include backports module. (Issue #841) - - -1.15 (2016-04-06) ------------------ - -* Added Retry(raise_on_status=False). (Issue #720) - -* Always use setuptools, no more distutils fallback. (Issue #785) - -* Dropped support for Python 3.2. (Issue #786) - -* Chunked transfer encoding when requesting with ``chunked=True``. - (Issue #790) - -* Fixed regression with IPv6 port parsing. (Issue #801) - -* Append SNIMissingWarning messages to allow users to specify it in - the PYTHONWARNINGS environment variable. (Issue #816) - -* Handle unicode headers in Py2. (Issue #818) - -* Log certificate when there is a hostname mismatch. (Issue #820) - -* Preserve order of request/response headers. (Issue #821) - - -1.14 (2015-12-29) ------------------ - -* contrib: SOCKS proxy support! (Issue #762) - -* Fixed AppEngine handling of transfer-encoding header and bug - in Timeout defaults checking. 
(Issue #763) - - -1.13.1 (2015-12-18) -------------------- - -* Fixed regression in IPv6 + SSL for match_hostname. (Issue #761) - - -1.13 (2015-12-14) ------------------ - -* Fixed ``pip install urllib3[secure]`` on modern pip. (Issue #706) - -* pyopenssl: Fixed SSL3_WRITE_PENDING error. (Issue #717) - -* pyopenssl: Support for TLSv1.1 and TLSv1.2. (Issue #696) - -* Close connections more defensively on exception. (Issue #734) - -* Adjusted ``read_chunked`` to handle gzipped, chunk-encoded bodies without - repeatedly flushing the decoder, to function better on Jython. (Issue #743) - -* Accept ``ca_cert_dir`` for SSL-related PoolManager configuration. (Issue #758) - - -1.12 (2015-09-03) ------------------ - -* Rely on ``six`` for importing ``httplib`` to work around - conflicts with other Python 3 shims. (Issue #688) - -* Add support for directories of certificate authorities, as supported by - OpenSSL. (Issue #701) - -* New exception: ``NewConnectionError``, raised when we fail to establish - a new connection, usually ``ECONNREFUSED`` socket error. - - -1.11 (2015-07-21) ------------------ - -* When ``ca_certs`` is given, ``cert_reqs`` defaults to - ``'CERT_REQUIRED'``. (Issue #650) - -* ``pip install urllib3[secure]`` will install Certifi and - PyOpenSSL as dependencies. (Issue #678) - -* Made ``HTTPHeaderDict`` usable as a ``headers`` input value - (Issues #632, #679) - -* Added `urllib3.contrib.appengine `_ - which has an ``AppEngineManager`` for using ``URLFetch`` in a - Google AppEngine environment. (Issue #664) - -* Dev: Added test suite for AppEngine. (Issue #631) - -* Fix performance regression when using PyOpenSSL. (Issue #626) - -* Passing incorrect scheme (e.g. ``foo://``) will raise - ``ValueError`` instead of ``AssertionError`` (backwards - compatible for now, but please migrate). (Issue #640) - -* Fix pools not getting replenished when an error occurs during a - request using ``release_conn=False``. 
(Issue #644) - -* Fix pool-default headers not applying for url-encoded requests - like GET. (Issue #657) - -* log.warning in Python 3 when headers are skipped due to parsing - errors. (Issue #642) - -* Close and discard connections if an error occurs during read. - (Issue #660) - -* Fix host parsing for IPv6 proxies. (Issue #668) - -* Separate warning type SubjectAltNameWarning, now issued once - per host. (Issue #671) - -* Fix ``httplib.IncompleteRead`` not getting converted to - ``ProtocolError`` when using ``HTTPResponse.stream()`` - (Issue #674) - -1.10.4 (2015-05-03) -------------------- - -* Migrate tests to Tornado 4. (Issue #594) - -* Append default warning configuration rather than overwrite. - (Issue #603) - -* Fix streaming decoding regression. (Issue #595) - -* Fix chunked requests losing state across keep-alive connections. - (Issue #599) - -* Fix hanging when chunked HEAD response has no body. (Issue #605) - - -1.10.3 (2015-04-21) -------------------- - -* Emit ``InsecurePlatformWarning`` when SSLContext object is missing. - (Issue #558) - -* Fix regression of duplicate header keys being discarded. - (Issue #563) - -* ``Response.stream()`` returns a generator for chunked responses. - (Issue #560) - -* Set upper-bound timeout when waiting for a socket in PyOpenSSL. - (Issue #585) - -* Work on platforms without `ssl` module for plain HTTP requests. - (Issue #587) - -* Stop relying on the stdlib's default cipher list. (Issue #588) - - -1.10.2 (2015-02-25) -------------------- - -* Fix file descriptor leakage on retries. (Issue #548) - -* Removed RC4 from default cipher list. (Issue #551) - -* Header performance improvements. (Issue #544) - -* Fix PoolManager not obeying redirect retry settings. (Issue #553) - - -1.10.1 (2015-02-10) -------------------- - -* Pools can be used as context managers. (Issue #545) - -* Don't re-use connections which experienced an SSLError. (Issue #529) - -* Don't fail when gzip decoding an empty stream. 
(Issue #535) - -* Add sha256 support for fingerprint verification. (Issue #540) - -* Fixed handling of header values containing commas. (Issue #533) - - -1.10 (2014-12-14) ------------------ - -* Disabled SSLv3. (Issue #473) - -* Add ``Url.url`` property to return the composed url string. (Issue #394) - -* Fixed PyOpenSSL + gevent ``WantWriteError``. (Issue #412) - -* ``MaxRetryError.reason`` will always be an exception, not string. - (Issue #481) - -* Fixed SSL-related timeouts not being detected as timeouts. (Issue #492) - -* Py3: Use ``ssl.create_default_context()`` when available. (Issue #473) - -* Emit ``InsecureRequestWarning`` for *every* insecure HTTPS request. - (Issue #496) - -* Emit ``SecurityWarning`` when certificate has no ``subjectAltName``. - (Issue #499) - -* Close and discard sockets which experienced SSL-related errors. - (Issue #501) - -* Handle ``body`` param in ``.request(...)``. (Issue #513) - -* Respect timeout with HTTPS proxy. (Issue #505) - -* PyOpenSSL: Handle ZeroReturnError exception. (Issue #520) - - -1.9.1 (2014-09-13) ------------------- - -* Apply socket arguments before binding. (Issue #427) - -* More careful checks if fp-like object is closed. (Issue #435) - -* Fixed packaging issues of some development-related files not - getting included. (Issue #440) - -* Allow performing *only* fingerprint verification. (Issue #444) - -* Emit ``SecurityWarning`` if system clock is waaay off. (Issue #445) - -* Fixed PyOpenSSL compatibility with PyPy. (Issue #450) - -* Fixed ``BrokenPipeError`` and ``ConnectionError`` handling in Py3. - (Issue #443) - - - -1.9 (2014-07-04) ----------------- - -* Shuffled around development-related files. If you're maintaining a distro - package of urllib3, you may need to tweak things. (Issue #415) - -* Unverified HTTPS requests will trigger a warning on the first request. See - our new `security documentation - `_ for details. 
- (Issue #426) - -* New retry logic and ``urllib3.util.retry.Retry`` configuration object. - (Issue #326) - -* All raised exceptions should now be wrapped in a - ``urllib3.exceptions.HTTPException``-extending exception. (Issue #326) - -* All errors during a retry-enabled request should be wrapped in - ``urllib3.exceptions.MaxRetryError``, including timeout-related exceptions - which were previously exempt. Underlying error is accessible from the - ``.reason`` property. (Issue #326) - -* ``urllib3.exceptions.ConnectionError`` renamed to - ``urllib3.exceptions.ProtocolError``. (Issue #326) - -* Errors during response read (such as IncompleteRead) are now wrapped in - ``urllib3.exceptions.ProtocolError``. (Issue #418) - -* Requesting an empty host will raise ``urllib3.exceptions.LocationValueError``. - (Issue #417) - -* Catch read timeouts over SSL connections as - ``urllib3.exceptions.ReadTimeoutError``. (Issue #419) - -* Apply socket arguments before connecting. (Issue #427) - - -1.8.3 (2014-06-23) ------------------- - -* Fix TLS verification when using a proxy in Python 3.4.1. (Issue #385) - -* Add ``disable_cache`` option to ``urllib3.util.make_headers``. (Issue #393) - -* Wrap ``socket.timeout`` exception with - ``urllib3.exceptions.ReadTimeoutError``. (Issue #399) - -* Fixed proxy-related bug where connections were being reused incorrectly. - (Issues #366, #369) - -* Added ``socket_options`` keyword parameter which allows defining - ``setsockopt`` configuration of new sockets. (Issue #397) - -* Removed ``HTTPConnection.tcp_nodelay`` in favor of - ``HTTPConnection.default_socket_options``. (Issue #397) - -* Fixed ``TypeError`` bug in Python 2.6.4. (Issue #411) - - -1.8.2 (2014-04-17) ------------------- - -* Fix ``urllib3.util`` not being included in the package. - - -1.8.1 (2014-04-17) ------------------- - -* Fix AppEngine bug of HTTPS requests going out as HTTP.
(Issue #356) - -* Don't install ``dummyserver`` into ``site-packages`` as it's only needed - for the test suite. (Issue #362) - -* Added support for specifying ``source_address``. (Issue #352) - - -1.8 (2014-03-04) ----------------- - -* Improved url parsing in ``urllib3.util.parse_url`` (properly parse '@' in - username, and blank ports like 'hostname:'). - -* New ``urllib3.connection`` module which contains all the HTTPConnection - objects. - -* Several ``urllib3.util.Timeout``-related fixes. Also changed constructor - signature to a more sensible order. [Backwards incompatible] - (Issues #252, #262, #263) - -* Use ``backports.ssl_match_hostname`` if it's installed. (Issue #274) - -* Added ``.tell()`` method to ``urllib3.response.HTTPResponse`` which - returns the number of bytes read so far. (Issue #277) - -* Support for platforms without threading. (Issue #289) - -* Expand default-port comparison in ``HTTPConnectionPool.is_same_host`` - to allow a pool with no specified port to be considered equal to an - HTTP/HTTPS url with port 80/443 explicitly provided. (Issue #305) - -* Improved default SSL/TLS settings to avoid vulnerabilities. - (Issue #309) - -* Fixed ``urllib3.poolmanager.ProxyManager`` not retrying on connect errors. - (Issue #310) - -* Disable Nagle's Algorithm on the socket for non-proxies. A subset of requests - will send the entire HTTP request ~200 milliseconds faster; however, some of - the resulting TCP packets will be smaller. (Issue #254) - -* Increased maximum number of SubjectAltNames in ``urllib3.contrib.pyopenssl`` - from the default 64 to 1024 in a single certificate. (Issue #318) - -* Headers are now passed and stored as a custom - ``urllib3.collections_.HTTPHeaderDict`` object rather than a plain ``dict``. - (Issue #329, #333) - -* Headers no longer lose their case on Python 3. (Issue #236) - -* ``urllib3.contrib.pyopenssl`` now uses the operating system's default CA - certificates on inject.
(Issue #332) - -* Requests with ``retries=False`` will immediately raise any exceptions without - wrapping them in ``MaxRetryError``. (Issue #348) - -* Fixed open socket leak with SSL-related failures. (Issue #344, #348) - - -1.7.1 (2013-09-25) ------------------- - -* Added granular timeout support with new ``urllib3.util.Timeout`` class. - (Issue #231) - -* Fixed Python 3.4 support. (Issue #238) - - -1.7 (2013-08-14) ----------------- - -* More exceptions are now pickle-able, with tests. (Issue #174) - -* Fixed redirecting with relative URLs in Location header. (Issue #178) - -* Support for relative urls in ``Location: ...`` header. (Issue #179) - -* ``urllib3.response.HTTPResponse`` now inherits from ``io.IOBase`` for bonus - file-like functionality. (Issue #187) - -* Passing ``assert_hostname=False`` when creating a HTTPSConnectionPool will - skip hostname verification for SSL connections. (Issue #194) - -* New method ``urllib3.response.HTTPResponse.stream(...)`` which acts as a - generator wrapped around ``.read(...)``. (Issue #198) - -* IPv6 url parsing enforces brackets around the hostname. (Issue #199) - -* Fixed thread race condition in - ``urllib3.poolmanager.PoolManager.connection_from_host(...)`` (Issue #204) - -* ``ProxyManager`` requests now include non-default port in ``Host: ...`` - header. (Issue #217) - -* Added HTTPS proxy support in ``ProxyManager``. (Issue #170 #139) - -* New ``RequestField`` object can be passed to the ``fields=...`` param which - can specify headers. (Issue #220) - -* Raise ``urllib3.exceptions.ProxyError`` when connecting to proxy fails. - (Issue #221) - -* Use international headers when posting file names. (Issue #119) - -* Improved IPv6 support. (Issue #203) - - -1.6 (2013-04-25) ----------------- - -* Contrib: Optional SNI support for Py2 using PyOpenSSL. (Issue #156) - -* ``ProxyManager`` automatically adds ``Host: ...`` header if not given. - -* Improved SSL-related code. 
``cert_req`` now optionally takes a string like - "REQUIRED" or "NONE"; similarly, ``ssl_version`` takes strings like "SSLv23". - The string values reflect the suffix of the respective constant variable. - (Issue #130) - -* Vendored ``socksipy`` now based on Anorov's fork which handles unexpectedly - closed proxy connections and larger read buffers. (Issue #135) - -* Ensure the connection is closed if no data is received, fixes connection leak - on some platforms. (Issue #133) - -* Added SNI support for SSL/TLS connections on Py32+. (Issue #89) - -* Tests fixed to be compatible with Py26 again. (Issue #125) - -* Added ability to choose SSL version by passing an ``ssl.PROTOCOL_*`` constant - to the ``ssl_version`` parameter of ``HTTPSConnectionPool``. (Issue #109) - -* Allow an explicit content type to be specified when encoding file fields. - (Issue #126) - -* Exceptions are now pickleable, with tests. (Issue #101) - -* Fixed default headers not getting passed in some cases. (Issue #99) - -* Treat "content-encoding" header value as case-insensitive, per RFC 2616 - Section 3.5. (Issue #110) - -* "Connection Refused" SocketErrors will get retried rather than raised. - (Issue #92) - -* Updated vendored ``six``, no longer overrides the global ``six`` module - namespace. (Issue #113) - -* ``urllib3.exceptions.MaxRetryError`` contains a ``reason`` property holding - the exception that prompted the final retry. If ``reason is None`` then it - was due to a redirect. (Issue #92, #114) - -* Fixed ``PoolManager.urlopen()`` not redirecting more than once. - (Issue #149) - -* Don't assume ``Content-Type: text/plain`` for multi-part encoding parameters - that are not files. (Issue #111) - -* Pass `strict` param down to ``httplib.HTTPConnection``. (Issue #122) - -* Added mechanism to verify SSL certificates by fingerprint (md5, sha1) or - against an arbitrary hostname (when connecting by IP or for misconfigured - servers). (Issue #140) - -* Streaming decompression support.
(Issue #159) - - -1.5 (2012-08-02) ----------------- - -* Added ``urllib3.add_stderr_logger()`` for quickly enabling STDERR debug - logging in urllib3. - -* Native full URL parsing (including auth, path, query, fragment) available in - ``urllib3.util.parse_url(url)``. - -* Built-in redirect will switch method to 'GET' if status code is 303. - (Issue #11) - -* ``urllib3.PoolManager`` strips the scheme and host before sending the request - uri. (Issue #8) - -* New ``urllib3.exceptions.DecodeError`` exception for when automatic decoding, - based on the Content-Type header, fails. - -* Fixed bug with pool depletion and leaking connections (Issue #76). Added - explicit connection closing on pool eviction. Added - ``urllib3.PoolManager.clear()``. - -* 99% -> 100% unit test coverage. - - -1.4 (2012-06-16) ----------------- - -* Minor AppEngine-related fixes. - -* Switched from ``mimetools.choose_boundary`` to ``uuid.uuid4()``. - -* Improved url parsing. (Issue #73) - -* IPv6 url support. (Issue #72) - - -1.3 (2012-03-25) ----------------- - -* Removed pre-1.0 deprecated API. - -* Refactored helpers into a ``urllib3.util`` submodule. - -* Fixed multipart encoding to support list-of-tuples for keys with multiple - values. (Issue #48) - -* Fixed multiple Set-Cookie headers in response not getting merged properly in - Python 3. (Issue #53) - -* AppEngine support with Py27. (Issue #61) - -* Minor ``encode_multipart_formdata`` fixes related to Python 3 strings vs - bytes. - - -1.2.2 (2012-02-06) ------------------- - -* Fixed packaging bug of not shipping ``test-requirements.txt``. (Issue #47) - - -1.2.1 (2012-02-05) ------------------- - -* Fixed another bug related to when ``ssl`` module is not available. (Issue #41) - -* Location parsing errors now raise ``urllib3.exceptions.LocationParseError`` - which inherits from ``ValueError``. 
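The 1.5 entry above introduces ``urllib3.util.parse_url``, which is still part of the public API today. A minimal sketch of what it returns, assuming a current urllib3 is importable:

```python
from urllib3.util import parse_url

# parse_url splits a URL into a Url namedtuple without any network I/O.
url = parse_url("https://user:secret@example.com:8443/path?q=1#frag")

assert url.scheme == "https"
assert url.auth == "user:secret"
assert url.host == "example.com"
assert url.port == 8443  # the port is parsed as an int
assert url.path == "/path"
assert url.query == "q=1"
assert url.fragment == "frag"
```

The same ``Url`` object also exposes the composed ``url`` string mentioned in the 1.10 entry.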
- - -1.2 (2012-01-29) ----------------- - -* Added Python 3 support (tested on 3.2.2) - -* Dropped Python 2.5 support (tested on 2.6.7, 2.7.2) - -* Use ``select.poll`` instead of ``select.select`` for platforms that support - it. - -* Use ``Queue.LifoQueue`` instead of ``Queue.Queue`` for more aggressive - connection reusing. Configurable by overriding ``ConnectionPool.QueueCls``. - -* Fixed ``ImportError`` during install when ``ssl`` module is not available. - (Issue #41) - -* Fixed ``PoolManager`` redirects between schemes (such as HTTP -> HTTPS) not - completing properly. (Issue #28, uncovered by Issue #10 in v1.1) - -* Ported ``dummyserver`` to use ``tornado`` instead of ``webob`` + - ``eventlet``. Removed extraneous unsupported dummyserver testing backends. - Added socket-level tests. - -* More tests. Achievement Unlocked: 99% Coverage. - - -1.1 (2012-01-07) ----------------- - -* Refactored ``dummyserver`` to its own root namespace module (used for - testing). - -* Added hostname verification for ``VerifiedHTTPSConnection`` by vendoring in - Py32's ``ssl_match_hostname``. (Issue #25) - -* Fixed cross-host HTTP redirects when using ``PoolManager``. (Issue #10) - -* Fixed ``decode_content`` being ignored when set through ``urlopen``. (Issue - #27) - -* Fixed timeout-related bugs. (Issues #17, #23) - - -1.0.2 (2011-11-04) ------------------- - -* Fixed typo in ``VerifiedHTTPSConnection`` which would only present as a bug if - you're using the object manually. (Thanks pyos) - -* Made RecentlyUsedContainer (and consequently PoolManager) more thread-safe by - wrapping the access log in a mutex. (Thanks @christer) - -* Made RecentlyUsedContainer more dict-like (corrected ``__delitem__`` and - ``__getitem__`` behaviour), with tests. Shouldn't affect core urllib3 code. - - -1.0.1 (2011-10-10) ------------------- - -* Fixed a bug where the same connection would get returned into the pool twice, - causing extraneous "HttpConnectionPool is full" log warnings. 
- - -1.0 (2011-10-08) ----------------- - -* Added ``PoolManager`` with LRU expiration of connections (tested and - documented). -* Added ``ProxyManager`` (needs tests, docs, and confirmation that it works - with HTTPS proxies). -* Added optional partial-read support for responses when - ``preload_content=False``. You can now make requests and just read the headers - without loading the content. -* Made response decoding optional (default on, same as before). -* Added optional explicit boundary string for ``encode_multipart_formdata``. -* Convenience request methods are now inherited from ``RequestMethods``. Old - helpers like ``get_url`` and ``post_url`` should be abandoned in favour of - the new ``request(method, url, ...)``. -* Refactored code to be even more decoupled, reusable, and extendable. -* License header added to ``.py`` files. -* Embiggened the documentation: Lots of Sphinx-friendly docstrings in the code - and docs in ``docs/`` and on https://urllib3.readthedocs.io/. -* Embettered all the things! -* Started writing this file. - - -0.4.1 (2011-07-17) ------------------- - -* Minor bug fixes, code cleanup. - - -0.4 (2011-03-01) ----------------- - -* Better unicode support. -* Added ``VerifiedHTTPSConnection``. -* Added ``NTLMConnectionPool`` in contrib. -* Minor improvements. - - -0.3.1 (2010-07-13) ------------------- - -* Added ``assert_host_name`` optional parameter. Now compatible with proxies. - - -0.3 (2009-12-10) ----------------- - -* Added HTTPS support. -* Minor bug fixes. -* Refactored, broken backwards compatibility with 0.2. -* API to be treated as stable from this version forward. - - -0.2 (2008-11-17) ----------------- - -* Added unit tests. -* Bug fixes. - - -0.1 (2008-11-16) ----------------- - -* First release. 
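Of the 1.0 features listed above, the explicit boundary string for ``encode_multipart_formdata`` is easy to demonstrate in isolation: with a fixed boundary the generated body is deterministic, which helps in tests. A sketch, assuming urllib3 is importable:

```python
from urllib3.filepost import encode_multipart_formdata

# Passing an explicit boundary instead of letting urllib3 choose one
# makes the multipart body reproducible across runs.
body, content_type = encode_multipart_formdata(
    {"field": "value"}, boundary="xXBoundaryXx"
)

assert content_type == "multipart/form-data; boundary=xXBoundaryXx"
assert b'name="field"' in body  # the field header is rendered into the body
assert b"value" in body
```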
diff --git a/infrastructure/sandbox/Data/lambda/urllib3-1.26.12.dist-info/RECORD b/infrastructure/sandbox/Data/lambda/urllib3-1.26.12.dist-info/RECORD deleted file mode 100644 index 4894a4f2a..000000000 --- a/infrastructure/sandbox/Data/lambda/urllib3-1.26.12.dist-info/RECORD +++ /dev/null @@ -1,82 +0,0 @@ -urllib3-1.26.12.dist-info/INSTALLER,sha256=zuuue4knoyJ-UwPPXg8fezS7VCrXJQrAP7zeNuwvFQg,4 -urllib3-1.26.12.dist-info/LICENSE.txt,sha256=w3vxhuJ8-dvpYZ5V7f486nswCRzrPaY8fay-Dm13kHs,1115 -urllib3-1.26.12.dist-info/METADATA,sha256=z8RtNpb9x1iDCVZ86eU_xDg9pmSf4QXBddRTVPUh4bM,47076 -urllib3-1.26.12.dist-info/RECORD,, -urllib3-1.26.12.dist-info/WHEEL,sha256=z9j0xAa_JmUKMpmz72K0ZGALSM_n-wQVmGbleXx2VHg,110 -urllib3-1.26.12.dist-info/top_level.txt,sha256=EMiXL2sKrTcmrMxIHTqdc3ET54pQI2Y072LexFEemvo,8 -urllib3/__init__.py,sha256=iXLcYiJySn0GNbWOOZDDApgBL1JgP44EZ8i1760S8Mc,3333 -urllib3/__pycache__/__init__.cpython-310.pyc,, -urllib3/__pycache__/_collections.cpython-310.pyc,, -urllib3/__pycache__/_version.cpython-310.pyc,, -urllib3/__pycache__/connection.cpython-310.pyc,, -urllib3/__pycache__/connectionpool.cpython-310.pyc,, -urllib3/__pycache__/exceptions.cpython-310.pyc,, -urllib3/__pycache__/fields.cpython-310.pyc,, -urllib3/__pycache__/filepost.cpython-310.pyc,, -urllib3/__pycache__/poolmanager.cpython-310.pyc,, -urllib3/__pycache__/request.cpython-310.pyc,, -urllib3/__pycache__/response.cpython-310.pyc,, -urllib3/_collections.py,sha256=Rp1mVyBgc_UlAcp6M3at1skJBXR5J43NawRTvW2g_XY,10811 -urllib3/_version.py,sha256=GhuGBUT_MtRxHEHDb-LYs5yLPeYWlCwFBPjGZmVJbVg,64 -urllib3/connection.py,sha256=8976wL6sGeVMW0JnXvx5mD00yXu87uQjxtB9_VL8dx8,20070 -urllib3/connectionpool.py,sha256=vEzk1iJEw1qR2vHBo7m3Y98iDfna6rKkUz3AyK5lJKQ,39093 -urllib3/contrib/__init__.py,sha256=47DEQpj8HBSa-_TImW-5JCeuQeRkm5NMpJWZG3hSuFU,0 -urllib3/contrib/__pycache__/__init__.cpython-310.pyc,, -urllib3/contrib/__pycache__/_appengine_environ.cpython-310.pyc,, 
-urllib3/contrib/__pycache__/appengine.cpython-310.pyc,, -urllib3/contrib/__pycache__/ntlmpool.cpython-310.pyc,, -urllib3/contrib/__pycache__/pyopenssl.cpython-310.pyc,, -urllib3/contrib/__pycache__/securetransport.cpython-310.pyc,, -urllib3/contrib/__pycache__/socks.cpython-310.pyc,, -urllib3/contrib/_appengine_environ.py,sha256=bDbyOEhW2CKLJcQqAKAyrEHN-aklsyHFKq6vF8ZFsmk,957 -urllib3/contrib/_securetransport/__init__.py,sha256=47DEQpj8HBSa-_TImW-5JCeuQeRkm5NMpJWZG3hSuFU,0 -urllib3/contrib/_securetransport/__pycache__/__init__.cpython-310.pyc,, -urllib3/contrib/_securetransport/__pycache__/bindings.cpython-310.pyc,, -urllib3/contrib/_securetransport/__pycache__/low_level.cpython-310.pyc,, -urllib3/contrib/_securetransport/bindings.py,sha256=4Xk64qIkPBt09A5q-RIFUuDhNc9mXilVapm7WnYnzRw,17632 -urllib3/contrib/_securetransport/low_level.py,sha256=B2JBB2_NRP02xK6DCa1Pa9IuxrPwxzDzZbixQkb7U9M,13922 -urllib3/contrib/appengine.py,sha256=jz515jZYBDFTnhR4zqfeaCo6JdDgAQqYbqzHK9sDkfw,11010 -urllib3/contrib/ntlmpool.py,sha256=ej9gGvfAb2Gt00lafFp45SIoRz-QwrQ4WChm6gQmAlM,4538 -urllib3/contrib/pyopenssl.py,sha256=YeK9CA7D4MfdaqorAWZ8oGHfKnhHzASSUXa2GIftxsI,17156 -urllib3/contrib/securetransport.py,sha256=QOhVbWrFQTKbmV-vtyG69amekkKVxXkdjk9oymaO0Ag,34416 -urllib3/contrib/socks.py,sha256=aRi9eWXo9ZEb95XUxef4Z21CFlnnjbEiAo9HOseoMt4,7097 -urllib3/exceptions.py,sha256=0Mnno3KHTNfXRfY7638NufOPkUb6mXOm-Lqj-4x2w8A,8217 -urllib3/fields.py,sha256=kvLDCg_JmH1lLjUUEY_FLS8UhY7hBvDPuVETbY8mdrM,8579 -urllib3/filepost.py,sha256=5b_qqgRHVlL7uLtdAYBzBh-GHmU5AfJVt_2N0XS3PeY,2440 -urllib3/packages/__init__.py,sha256=47DEQpj8HBSa-_TImW-5JCeuQeRkm5NMpJWZG3hSuFU,0 -urllib3/packages/__pycache__/__init__.cpython-310.pyc,, -urllib3/packages/__pycache__/six.cpython-310.pyc,, -urllib3/packages/backports/__init__.py,sha256=47DEQpj8HBSa-_TImW-5JCeuQeRkm5NMpJWZG3hSuFU,0 -urllib3/packages/backports/__pycache__/__init__.cpython-310.pyc,, -urllib3/packages/backports/__pycache__/makefile.cpython-310.pyc,, 
-urllib3/packages/backports/makefile.py,sha256=nbzt3i0agPVP07jqqgjhaYjMmuAi_W5E0EywZivVO8E,1417 -urllib3/packages/six.py,sha256=b9LM0wBXv7E7SrbCjAm4wwN-hrH-iNxv18LgWNMMKPo,34665 -urllib3/poolmanager.py,sha256=0KOOJECoeLYVjUHvv-0h4Oq3FFQQ2yb-Fnjkbj8gJO0,19786 -urllib3/request.py,sha256=ZFSIqX0C6WizixecChZ3_okyu7BEv0lZu1VT0s6h4SM,5985 -urllib3/response.py,sha256=B0MM0o8p1i8IQ7QShnRZJzU-8mHCn2Aw3mEhCoE3994,30229 -urllib3/util/__init__.py,sha256=JEmSmmqqLyaw8P51gUImZh8Gwg9i1zSe-DoqAitn2nc,1155 -urllib3/util/__pycache__/__init__.cpython-310.pyc,, -urllib3/util/__pycache__/connection.cpython-310.pyc,, -urllib3/util/__pycache__/proxy.cpython-310.pyc,, -urllib3/util/__pycache__/queue.cpython-310.pyc,, -urllib3/util/__pycache__/request.cpython-310.pyc,, -urllib3/util/__pycache__/response.cpython-310.pyc,, -urllib3/util/__pycache__/retry.cpython-310.pyc,, -urllib3/util/__pycache__/ssl_.cpython-310.pyc,, -urllib3/util/__pycache__/ssl_match_hostname.cpython-310.pyc,, -urllib3/util/__pycache__/ssltransport.cpython-310.pyc,, -urllib3/util/__pycache__/timeout.cpython-310.pyc,, -urllib3/util/__pycache__/url.cpython-310.pyc,, -urllib3/util/__pycache__/wait.cpython-310.pyc,, -urllib3/util/connection.py,sha256=5Lx2B1PW29KxBn2T0xkN1CBgRBa3gGVJBKoQoRogEVk,4901 -urllib3/util/proxy.py,sha256=zUvPPCJrp6dOF0N4GAVbOcl6o-4uXKSrGiTkkr5vUS4,1605 -urllib3/util/queue.py,sha256=nRgX8_eX-_VkvxoX096QWoz8Ps0QHUAExILCY_7PncM,498 -urllib3/util/request.py,sha256=fWiAaa8pwdLLIqoTLBxCC2e4ed80muzKU3e3HWWTzFQ,4225 -urllib3/util/response.py,sha256=GJpg3Egi9qaJXRwBh5wv-MNuRWan5BIu40oReoxWP28,3510 -urllib3/util/retry.py,sha256=iESg2PvViNdXBRY4MpL4h0kqwOOkHkxmLn1kkhFHPU8,22001 -urllib3/util/ssl_.py,sha256=c0sYiSC6272r6uPkxQpo5rYPP9QC1eR6oI7004gYqZo,17165 -urllib3/util/ssl_match_hostname.py,sha256=Ir4cZVEjmAk8gUAIHWSi7wtOO83UCYABY2xFD1Ql_WA,5758 -urllib3/util/ssltransport.py,sha256=NA-u5rMTrDFDFC8QzRKUEKMG0561hOD4qBTr3Z4pv6E,6895 -urllib3/util/timeout.py,sha256=QSbBUNOB9yh6AnDn61SrLQ0hg5oz0I9-uXEG91AJuIg,10003 
-urllib3/util/url.py,sha256=m8crWKyNGjnxmqw0FZ4CpzHVIjv8DornwW22IJIOq-g,14270 -urllib3/util/wait.py,sha256=fOX0_faozG2P7iVojQoE1mbydweNyTcm-hXEfFrTtLI,5403 diff --git a/infrastructure/sandbox/Data/lambda/urllib3-1.26.12.dist-info/WHEEL b/infrastructure/sandbox/Data/lambda/urllib3-1.26.12.dist-info/WHEEL deleted file mode 100644 index 0b18a2811..000000000 --- a/infrastructure/sandbox/Data/lambda/urllib3-1.26.12.dist-info/WHEEL +++ /dev/null @@ -1,6 +0,0 @@ -Wheel-Version: 1.0 -Generator: bdist_wheel (0.37.1) -Root-Is-Purelib: true -Tag: py2-none-any -Tag: py3-none-any - diff --git a/infrastructure/sandbox/Data/lambda/urllib3-1.26.12.dist-info/top_level.txt b/infrastructure/sandbox/Data/lambda/urllib3-1.26.12.dist-info/top_level.txt deleted file mode 100644 index a42590beb..000000000 --- a/infrastructure/sandbox/Data/lambda/urllib3-1.26.12.dist-info/top_level.txt +++ /dev/null @@ -1 +0,0 @@ -urllib3 diff --git a/infrastructure/sandbox/Data/lambda/urllib3/__init__.py b/infrastructure/sandbox/Data/lambda/urllib3/__init__.py deleted file mode 100644 index c6fa38212..000000000 --- a/infrastructure/sandbox/Data/lambda/urllib3/__init__.py +++ /dev/null @@ -1,102 +0,0 @@ -""" -Python HTTP library with thread-safe connection pooling, file post support, user friendly, and more -""" -from __future__ import absolute_import - -# Set default logging handler to avoid "No handler found" warnings. -import logging -import warnings -from logging import NullHandler - -from . 
import exceptions -from ._version import __version__ -from .connectionpool import HTTPConnectionPool, HTTPSConnectionPool, connection_from_url -from .filepost import encode_multipart_formdata -from .poolmanager import PoolManager, ProxyManager, proxy_from_url -from .response import HTTPResponse -from .util.request import make_headers -from .util.retry import Retry -from .util.timeout import Timeout -from .util.url import get_host - -# === NOTE TO REPACKAGERS AND VENDORS === -# Please delete this block, this logic is only -# for urllib3 being distributed via PyPI. -# See: https://github.com/urllib3/urllib3/issues/2680 -try: - import urllib3_secure_extra # type: ignore # noqa: F401 -except ImportError: - pass -else: - warnings.warn( - "'urllib3[secure]' extra is deprecated and will be removed " - "in a future release of urllib3 2.x. Read more in this issue: " - "https://github.com/urllib3/urllib3/issues/2680", - category=DeprecationWarning, - stacklevel=2, - ) - -__author__ = "Andrey Petrov (andrey.petrov@shazow.net)" -__license__ = "MIT" -__version__ = __version__ - -__all__ = ( - "HTTPConnectionPool", - "HTTPSConnectionPool", - "PoolManager", - "ProxyManager", - "HTTPResponse", - "Retry", - "Timeout", - "add_stderr_logger", - "connection_from_url", - "disable_warnings", - "encode_multipart_formdata", - "get_host", - "make_headers", - "proxy_from_url", -) - -logging.getLogger(__name__).addHandler(NullHandler()) - - -def add_stderr_logger(level=logging.DEBUG): - """ - Helper for quickly adding a StreamHandler to the logger. Useful for - debugging. - - Returns the handler after adding it. - """ - # This method needs to be in this __init__.py to get the __name__ correct - # even if urllib3 is vendored within another package. 
- logger = logging.getLogger(__name__) - handler = logging.StreamHandler() - handler.setFormatter(logging.Formatter("%(asctime)s %(levelname)s %(message)s")) - logger.addHandler(handler) - logger.setLevel(level) - logger.debug("Added a stderr logging handler to logger: %s", __name__) - return handler - - -# ... Clean up. -del NullHandler - - -# All warning filters *must* be appended unless you're really certain that they -# shouldn't be: otherwise, it's very hard for users to use most Python -# mechanisms to silence them. -# SecurityWarning's always go off by default. -warnings.simplefilter("always", exceptions.SecurityWarning, append=True) -# SubjectAltNameWarning's should go off once per host -warnings.simplefilter("default", exceptions.SubjectAltNameWarning, append=True) -# InsecurePlatformWarning's don't vary between requests, so we keep it default. -warnings.simplefilter("default", exceptions.InsecurePlatformWarning, append=True) -# SNIMissingWarnings should go off only once. -warnings.simplefilter("default", exceptions.SNIMissingWarning, append=True) - - -def disable_warnings(category=exceptions.HTTPWarning): - """ - Helper for quickly disabling all urllib3 warnings. 
- """ - warnings.simplefilter("ignore", category) diff --git a/infrastructure/sandbox/Data/lambda/urllib3/__pycache__/__init__.cpython-310.pyc b/infrastructure/sandbox/Data/lambda/urllib3/__pycache__/__init__.cpython-310.pyc deleted file mode 100644 index 2294068a0..000000000 Binary files a/infrastructure/sandbox/Data/lambda/urllib3/__pycache__/__init__.cpython-310.pyc and /dev/null differ diff --git a/infrastructure/sandbox/Data/lambda/urllib3/__pycache__/_collections.cpython-310.pyc b/infrastructure/sandbox/Data/lambda/urllib3/__pycache__/_collections.cpython-310.pyc deleted file mode 100644 index a027477ac..000000000 Binary files a/infrastructure/sandbox/Data/lambda/urllib3/__pycache__/_collections.cpython-310.pyc and /dev/null differ diff --git a/infrastructure/sandbox/Data/lambda/urllib3/__pycache__/_version.cpython-310.pyc b/infrastructure/sandbox/Data/lambda/urllib3/__pycache__/_version.cpython-310.pyc deleted file mode 100644 index 4c0c0d840..000000000 Binary files a/infrastructure/sandbox/Data/lambda/urllib3/__pycache__/_version.cpython-310.pyc and /dev/null differ diff --git a/infrastructure/sandbox/Data/lambda/urllib3/__pycache__/connection.cpython-310.pyc b/infrastructure/sandbox/Data/lambda/urllib3/__pycache__/connection.cpython-310.pyc deleted file mode 100644 index 5c53ff2e7..000000000 Binary files a/infrastructure/sandbox/Data/lambda/urllib3/__pycache__/connection.cpython-310.pyc and /dev/null differ diff --git a/infrastructure/sandbox/Data/lambda/urllib3/__pycache__/connectionpool.cpython-310.pyc b/infrastructure/sandbox/Data/lambda/urllib3/__pycache__/connectionpool.cpython-310.pyc deleted file mode 100644 index ea3e012ee..000000000 Binary files a/infrastructure/sandbox/Data/lambda/urllib3/__pycache__/connectionpool.cpython-310.pyc and /dev/null differ diff --git a/infrastructure/sandbox/Data/lambda/urllib3/__pycache__/exceptions.cpython-310.pyc b/infrastructure/sandbox/Data/lambda/urllib3/__pycache__/exceptions.cpython-310.pyc deleted file mode 
100644 index bb7e49383..000000000 Binary files a/infrastructure/sandbox/Data/lambda/urllib3/__pycache__/exceptions.cpython-310.pyc and /dev/null differ diff --git a/infrastructure/sandbox/Data/lambda/urllib3/__pycache__/fields.cpython-310.pyc b/infrastructure/sandbox/Data/lambda/urllib3/__pycache__/fields.cpython-310.pyc deleted file mode 100644 index 9646b4242..000000000 Binary files a/infrastructure/sandbox/Data/lambda/urllib3/__pycache__/fields.cpython-310.pyc and /dev/null differ diff --git a/infrastructure/sandbox/Data/lambda/urllib3/__pycache__/filepost.cpython-310.pyc b/infrastructure/sandbox/Data/lambda/urllib3/__pycache__/filepost.cpython-310.pyc deleted file mode 100644 index 36641a4fa..000000000 Binary files a/infrastructure/sandbox/Data/lambda/urllib3/__pycache__/filepost.cpython-310.pyc and /dev/null differ diff --git a/infrastructure/sandbox/Data/lambda/urllib3/__pycache__/poolmanager.cpython-310.pyc b/infrastructure/sandbox/Data/lambda/urllib3/__pycache__/poolmanager.cpython-310.pyc deleted file mode 100644 index 3b57fd7de..000000000 Binary files a/infrastructure/sandbox/Data/lambda/urllib3/__pycache__/poolmanager.cpython-310.pyc and /dev/null differ diff --git a/infrastructure/sandbox/Data/lambda/urllib3/__pycache__/request.cpython-310.pyc b/infrastructure/sandbox/Data/lambda/urllib3/__pycache__/request.cpython-310.pyc deleted file mode 100644 index f0bdc5288..000000000 Binary files a/infrastructure/sandbox/Data/lambda/urllib3/__pycache__/request.cpython-310.pyc and /dev/null differ diff --git a/infrastructure/sandbox/Data/lambda/urllib3/__pycache__/response.cpython-310.pyc b/infrastructure/sandbox/Data/lambda/urllib3/__pycache__/response.cpython-310.pyc deleted file mode 100644 index c39ce9951..000000000 Binary files a/infrastructure/sandbox/Data/lambda/urllib3/__pycache__/response.cpython-310.pyc and /dev/null differ diff --git a/infrastructure/sandbox/Data/lambda/urllib3/_collections.py b/infrastructure/sandbox/Data/lambda/urllib3/_collections.py 
deleted file mode 100644 index da9857e98..000000000 --- a/infrastructure/sandbox/Data/lambda/urllib3/_collections.py +++ /dev/null @@ -1,337 +0,0 @@ -from __future__ import absolute_import - -try: - from collections.abc import Mapping, MutableMapping -except ImportError: - from collections import Mapping, MutableMapping -try: - from threading import RLock -except ImportError: # Platform-specific: No threads available - - class RLock: - def __enter__(self): - pass - - def __exit__(self, exc_type, exc_value, traceback): - pass - - -from collections import OrderedDict - -from .exceptions import InvalidHeader -from .packages import six -from .packages.six import iterkeys, itervalues - -__all__ = ["RecentlyUsedContainer", "HTTPHeaderDict"] - - -_Null = object() - - -class RecentlyUsedContainer(MutableMapping): - """ - Provides a thread-safe dict-like container which maintains up to - ``maxsize`` keys while throwing away the least-recently-used keys beyond - ``maxsize``. - - :param maxsize: - Maximum number of recent elements to retain. - - :param dispose_func: - Every time an item is evicted from the container, - ``dispose_func(value)`` is called. Callback which will get called - """ - - ContainerCls = OrderedDict - - def __init__(self, maxsize=10, dispose_func=None): - self._maxsize = maxsize - self.dispose_func = dispose_func - - self._container = self.ContainerCls() - self.lock = RLock() - - def __getitem__(self, key): - # Re-insert the item, moving it to the end of the eviction line. - with self.lock: - item = self._container.pop(key) - self._container[key] = item - return item - - def __setitem__(self, key, value): - evicted_value = _Null - with self.lock: - # Possibly evict the existing value of 'key' - evicted_value = self._container.get(key, _Null) - self._container[key] = value - - # If we didn't evict an existing value, we might have to evict the - # least recently used item from the beginning of the container. 
- if len(self._container) > self._maxsize: - _key, evicted_value = self._container.popitem(last=False) - - if self.dispose_func and evicted_value is not _Null: - self.dispose_func(evicted_value) - - def __delitem__(self, key): - with self.lock: - value = self._container.pop(key) - - if self.dispose_func: - self.dispose_func(value) - - def __len__(self): - with self.lock: - return len(self._container) - - def __iter__(self): - raise NotImplementedError( - "Iteration over this class is unlikely to be threadsafe." - ) - - def clear(self): - with self.lock: - # Copy pointers to all values, then wipe the mapping - values = list(itervalues(self._container)) - self._container.clear() - - if self.dispose_func: - for value in values: - self.dispose_func(value) - - def keys(self): - with self.lock: - return list(iterkeys(self._container)) - - -class HTTPHeaderDict(MutableMapping): - """ - :param headers: - An iterable of field-value pairs. Must not contain multiple field names - when compared case-insensitively. - - :param kwargs: - Additional field-value pairs to pass in to ``dict.update``. - - A ``dict`` like container for storing HTTP Headers. - - Field names are stored and compared case-insensitively in compliance with - RFC 7230. Iteration provides the first case-sensitive key seen for each - case-insensitive pair. - - Using ``__setitem__`` syntax overwrites fields that compare equal - case-insensitively in order to maintain ``dict``'s api. For fields that - compare equal, instead create a new ``HTTPHeaderDict`` and use ``.add`` - in a loop. - - If multiple fields that are equal case-insensitively are passed to the - constructor or ``.update``, the behavior is undefined and some will be - lost. 
- - >>> headers = HTTPHeaderDict() - >>> headers.add('Set-Cookie', 'foo=bar') - >>> headers.add('set-cookie', 'baz=quxx') - >>> headers['content-length'] = '7' - >>> headers['SET-cookie'] - 'foo=bar, baz=quxx' - >>> headers['Content-Length'] - '7' - """ - - def __init__(self, headers=None, **kwargs): - super(HTTPHeaderDict, self).__init__() - self._container = OrderedDict() - if headers is not None: - if isinstance(headers, HTTPHeaderDict): - self._copy_from(headers) - else: - self.extend(headers) - if kwargs: - self.extend(kwargs) - - def __setitem__(self, key, val): - self._container[key.lower()] = [key, val] - return self._container[key.lower()] - - def __getitem__(self, key): - val = self._container[key.lower()] - return ", ".join(val[1:]) - - def __delitem__(self, key): - del self._container[key.lower()] - - def __contains__(self, key): - return key.lower() in self._container - - def __eq__(self, other): - if not isinstance(other, Mapping) and not hasattr(other, "keys"): - return False - if not isinstance(other, type(self)): - other = type(self)(other) - return dict((k.lower(), v) for k, v in self.itermerged()) == dict( - (k.lower(), v) for k, v in other.itermerged() - ) - - def __ne__(self, other): - return not self.__eq__(other) - - if six.PY2: # Python 2 - iterkeys = MutableMapping.iterkeys - itervalues = MutableMapping.itervalues - - __marker = object() - - def __len__(self): - return len(self._container) - - def __iter__(self): - # Only provide the originally cased names - for vals in self._container.values(): - yield vals[0] - - def pop(self, key, default=__marker): - """D.pop(k[,d]) -> v, remove specified key and return the corresponding value. - If key is not found, d is returned if given, otherwise KeyError is raised. - """ - # Using the MutableMapping function directly fails due to the private marker. - # Using ordinary dict.pop would expose the internal structures. - # So let's reinvent the wheel. 
- try: - value = self[key] - except KeyError: - if default is self.__marker: - raise - return default - else: - del self[key] - return value - - def discard(self, key): - try: - del self[key] - except KeyError: - pass - - def add(self, key, val): - """Adds a (name, value) pair, doesn't overwrite the value if it already - exists. - - >>> headers = HTTPHeaderDict(foo='bar') - >>> headers.add('Foo', 'baz') - >>> headers['foo'] - 'bar, baz' - """ - key_lower = key.lower() - new_vals = [key, val] - # Keep the common case aka no item present as fast as possible - vals = self._container.setdefault(key_lower, new_vals) - if new_vals is not vals: - vals.append(val) - - def extend(self, *args, **kwargs): - """Generic import function for any type of header-like object. - Adapted version of MutableMapping.update in order to insert items - with self.add instead of self.__setitem__ - """ - if len(args) > 1: - raise TypeError( - "extend() takes at most 1 positional " - "arguments ({0} given)".format(len(args)) - ) - other = args[0] if len(args) >= 1 else () - - if isinstance(other, HTTPHeaderDict): - for key, val in other.iteritems(): - self.add(key, val) - elif isinstance(other, Mapping): - for key in other: - self.add(key, other[key]) - elif hasattr(other, "keys"): - for key in other.keys(): - self.add(key, other[key]) - else: - for key, value in other: - self.add(key, value) - - for key, value in kwargs.items(): - self.add(key, value) - - def getlist(self, key, default=__marker): - """Returns a list of all the values for the named field. 
Returns an - empty list if the key doesn't exist.""" - try: - vals = self._container[key.lower()] - except KeyError: - if default is self.__marker: - return [] - return default - else: - return vals[1:] - - # Backwards compatibility for httplib - getheaders = getlist - getallmatchingheaders = getlist - iget = getlist - - # Backwards compatibility for http.cookiejar - get_all = getlist - - def __repr__(self): - return "%s(%s)" % (type(self).__name__, dict(self.itermerged())) - - def _copy_from(self, other): - for key in other: - val = other.getlist(key) - if isinstance(val, list): - # Don't need to convert tuples - val = list(val) - self._container[key.lower()] = [key] + val - - def copy(self): - clone = type(self)() - clone._copy_from(self) - return clone - - def iteritems(self): - """Iterate over all header lines, including duplicate ones.""" - for key in self: - vals = self._container[key.lower()] - for val in vals[1:]: - yield vals[0], val - - def itermerged(self): - """Iterate over all headers, merging duplicate ones together.""" - for key in self: - val = self._container[key.lower()] - yield val[0], ", ".join(val[1:]) - - def items(self): - return list(self.iteritems()) - - @classmethod - def from_httplib(cls, message): # Python 2 - """Read headers from a Python 2 httplib message object.""" - # python2.7 does not expose a proper API for exporting multiheaders - # efficiently. This function re-reads raw lines from the message - # object and extracts the multiheaders properly. - obs_fold_continued_leaders = (" ", "\t") - headers = [] - - for line in message.headers: - if line.startswith(obs_fold_continued_leaders): - if not headers: - # We received a header line that starts with OWS as described - # in RFC-7230 S3.2.4. This indicates a multiline header, but - # there exists no previous header to which we can attach it. 
- raise InvalidHeader( - "Header continuation with no previous header: %s" % line - ) - else: - key, value = headers[-1] - headers[-1] = (key, value + " " + line.strip()) - continue - - key, value = line.split(":", 1) - headers.append((key, value.strip())) - - return cls(headers) diff --git a/infrastructure/sandbox/Data/lambda/urllib3/_version.py b/infrastructure/sandbox/Data/lambda/urllib3/_version.py deleted file mode 100644 index 6fbc84b30..000000000 --- a/infrastructure/sandbox/Data/lambda/urllib3/_version.py +++ /dev/null @@ -1,2 +0,0 @@ -# This file is protected via CODEOWNERS -__version__ = "1.26.12" diff --git a/infrastructure/sandbox/Data/lambda/urllib3/connection.py b/infrastructure/sandbox/Data/lambda/urllib3/connection.py deleted file mode 100644 index 10fb36c4e..000000000 --- a/infrastructure/sandbox/Data/lambda/urllib3/connection.py +++ /dev/null @@ -1,567 +0,0 @@ -from __future__ import absolute_import - -import datetime -import logging -import os -import re -import socket -import warnings -from socket import error as SocketError -from socket import timeout as SocketTimeout - -from .packages import six -from .packages.six.moves.http_client import HTTPConnection as _HTTPConnection -from .packages.six.moves.http_client import HTTPException # noqa: F401 -from .util.proxy import create_proxy_ssl_context - -try: # Compiled with SSL? - import ssl - - BaseSSLError = ssl.SSLError -except (ImportError, AttributeError): # Platform-specific: No SSL. - ssl = None - - class BaseSSLError(BaseException): - pass - - -try: - # Python 3: not a no-op, we're adding this to the namespace so it can be imported. - ConnectionError = ConnectionError -except NameError: - # Python 2 - class ConnectionError(Exception): - pass - - -try: # Python 3: - # Not a no-op, we're adding this to the namespace so it can be imported. 
- BrokenPipeError = BrokenPipeError -except NameError: # Python 2: - - class BrokenPipeError(Exception): - pass - - -from ._collections import HTTPHeaderDict # noqa (historical, removed in v2) -from ._version import __version__ -from .exceptions import ( - ConnectTimeoutError, - NewConnectionError, - SubjectAltNameWarning, - SystemTimeWarning, -) -from .util import SKIP_HEADER, SKIPPABLE_HEADERS, connection -from .util.ssl_ import ( - assert_fingerprint, - create_urllib3_context, - is_ipaddress, - resolve_cert_reqs, - resolve_ssl_version, - ssl_wrap_socket, -) -from .util.ssl_match_hostname import CertificateError, match_hostname - -log = logging.getLogger(__name__) - -port_by_scheme = {"http": 80, "https": 443} - -# When it comes time to update this value as a part of regular maintenance -# (ie test_recent_date is failing) update it to ~6 months before the current date. -RECENT_DATE = datetime.date(2022, 1, 1) - -_CONTAINS_CONTROL_CHAR_RE = re.compile(r"[^-!#$%&'*+.^_`|~0-9a-zA-Z]") - - -class HTTPConnection(_HTTPConnection, object): - """ - Based on :class:`http.client.HTTPConnection` but provides an extra constructor - backwards-compatibility layer between older and newer Pythons. - - Additional keyword parameters are used to configure attributes of the connection. - Accepted parameters include: - - - ``strict``: See the documentation on :class:`urllib3.connectionpool.HTTPConnectionPool` - - ``source_address``: Set the source address for the current connection. - - ``socket_options``: Set specific options on the underlying socket. If not specified, then - defaults are loaded from ``HTTPConnection.default_socket_options`` which includes disabling - Nagle's algorithm (sets TCP_NODELAY to 1) unless the connection is behind a proxy. - - For example, if you wish to enable TCP Keep Alive in addition to the defaults, - you might pass: - - .. 
code-block:: python - - HTTPConnection.default_socket_options + [ - (socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1), - ] - - Or you may want to disable the defaults by passing an empty list (e.g., ``[]``). - """ - - default_port = port_by_scheme["http"] - - #: Disable Nagle's algorithm by default. - #: ``[(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)]`` - default_socket_options = [(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)] - - #: Whether this connection verifies the host's certificate. - is_verified = False - - #: Whether this proxy connection (if used) verifies the proxy host's - #: certificate. - proxy_is_verified = None - - def __init__(self, *args, **kw): - if not six.PY2: - kw.pop("strict", None) - - # Pre-set source_address. - self.source_address = kw.get("source_address") - - #: The socket options provided by the user. If no options are - #: provided, we use the default options. - self.socket_options = kw.pop("socket_options", self.default_socket_options) - - # Proxy options provided by the user. - self.proxy = kw.pop("proxy", None) - self.proxy_config = kw.pop("proxy_config", None) - - _HTTPConnection.__init__(self, *args, **kw) - - @property - def host(self): - """ - Getter method to remove any trailing dots that indicate the hostname is an FQDN. - - In general, SSL certificates don't include the trailing dot indicating a - fully-qualified domain name, and thus, they don't validate properly when - checked against a domain name that includes the dot. In addition, some - servers may not expect to receive the trailing dot when provided. - - However, the hostname with trailing dot is critical to DNS resolution; doing a - lookup with the trailing dot will properly only resolve the appropriate FQDN, - whereas a lookup without a trailing dot will search the system's search domain - list. 
Thus, it's important to keep the original host around for use only in - those cases where it's appropriate (i.e., when doing DNS lookup to establish the - actual TCP connection across which we're going to send HTTP requests). - """ - return self._dns_host.rstrip(".") - - @host.setter - def host(self, value): - """ - Setter for the `host` property. - - We assume that only urllib3 uses the _dns_host attribute; httplib itself - only uses `host`, and it seems reasonable that other libraries follow suit. - """ - self._dns_host = value - - def _new_conn(self): - """Establish a socket connection and set nodelay settings on it. - - :return: New socket connection. - """ - extra_kw = {} - if self.source_address: - extra_kw["source_address"] = self.source_address - - if self.socket_options: - extra_kw["socket_options"] = self.socket_options - - try: - conn = connection.create_connection( - (self._dns_host, self.port), self.timeout, **extra_kw - ) - - except SocketTimeout: - raise ConnectTimeoutError( - self, - "Connection to %s timed out. (connect timeout=%s)" - % (self.host, self.timeout), - ) - - except SocketError as e: - raise NewConnectionError( - self, "Failed to establish a new connection: %s" % e - ) - - return conn - - def _is_using_tunnel(self): - # Google App Engine's httplib does not define _tunnel_host - return getattr(self, "_tunnel_host", None) - - def _prepare_conn(self, conn): - self.sock = conn - if self._is_using_tunnel(): - # TODO: Fix tunnel so it doesn't depend on self.sock state. - self._tunnel() - # Mark this connection as not reusable - self.auto_open = 0 - - def connect(self): - conn = self._new_conn() - self._prepare_conn(conn) - - def putrequest(self, method, url, *args, **kwargs): - """ """ - # Empty docstring because the indentation of CPython's implementation - # is broken but we don't want this method in our documentation. 
- match = _CONTAINS_CONTROL_CHAR_RE.search(method) - if match: - raise ValueError( - "Method cannot contain non-token characters %r (found at least %r)" - % (method, match.group()) - ) - - return _HTTPConnection.putrequest(self, method, url, *args, **kwargs) - - def putheader(self, header, *values): - """ """ - if not any(isinstance(v, str) and v == SKIP_HEADER for v in values): - _HTTPConnection.putheader(self, header, *values) - elif six.ensure_str(header.lower()) not in SKIPPABLE_HEADERS: - raise ValueError( - "urllib3.util.SKIP_HEADER only supports '%s'" - % ("', '".join(map(str.title, sorted(SKIPPABLE_HEADERS))),) - ) - - def request(self, method, url, body=None, headers=None): - if headers is None: - headers = {} - else: - # Avoid modifying the headers passed into .request() - headers = headers.copy() - if "user-agent" not in (six.ensure_str(k.lower()) for k in headers): - headers["User-Agent"] = _get_default_user_agent() - super(HTTPConnection, self).request(method, url, body=body, headers=headers) - - def request_chunked(self, method, url, body=None, headers=None): - """ - Alternative to the common request method, which sends the - body with chunked encoding and not as one block - """ - headers = headers or {} - header_keys = set([six.ensure_str(k.lower()) for k in headers]) - skip_accept_encoding = "accept-encoding" in header_keys - skip_host = "host" in header_keys - self.putrequest( - method, url, skip_accept_encoding=skip_accept_encoding, skip_host=skip_host - ) - if "user-agent" not in header_keys: - self.putheader("User-Agent", _get_default_user_agent()) - for header, value in headers.items(): - self.putheader(header, value) - if "transfer-encoding" not in header_keys: - self.putheader("Transfer-Encoding", "chunked") - self.endheaders() - - if body is not None: - stringish_types = six.string_types + (bytes,) - if isinstance(body, stringish_types): - body = (body,) - for chunk in body: - if not chunk: - continue - if not isinstance(chunk, bytes): - 
chunk = chunk.encode("utf8") - len_str = hex(len(chunk))[2:] - to_send = bytearray(len_str.encode()) - to_send += b"\r\n" - to_send += chunk - to_send += b"\r\n" - self.send(to_send) - - # After the if clause, to always have a closed body - self.send(b"0\r\n\r\n") - - -class HTTPSConnection(HTTPConnection): - """ - Many of the parameters to this constructor are passed to the underlying SSL - socket by means of :py:func:`urllib3.util.ssl_wrap_socket`. - """ - - default_port = port_by_scheme["https"] - - cert_reqs = None - ca_certs = None - ca_cert_dir = None - ca_cert_data = None - ssl_version = None - assert_fingerprint = None - tls_in_tls_required = False - - def __init__( - self, - host, - port=None, - key_file=None, - cert_file=None, - key_password=None, - strict=None, - timeout=socket._GLOBAL_DEFAULT_TIMEOUT, - ssl_context=None, - server_hostname=None, - **kw - ): - - HTTPConnection.__init__(self, host, port, strict=strict, timeout=timeout, **kw) - - self.key_file = key_file - self.cert_file = cert_file - self.key_password = key_password - self.ssl_context = ssl_context - self.server_hostname = server_hostname - - # Required property for Google AppEngine 1.9.0 which otherwise causes - # HTTPS requests to go out as HTTP. (See Issue #356) - self._protocol = "https" - - def set_cert( - self, - key_file=None, - cert_file=None, - cert_reqs=None, - key_password=None, - ca_certs=None, - assert_hostname=None, - assert_fingerprint=None, - ca_cert_dir=None, - ca_cert_data=None, - ): - """ - This method should only be called once, before the connection is used. - """ - # If cert_reqs is not provided we'll assume CERT_REQUIRED unless we also - # have an SSLContext object in which case we'll use its verify_mode. 
- if cert_reqs is None: - if self.ssl_context is not None: - cert_reqs = self.ssl_context.verify_mode - else: - cert_reqs = resolve_cert_reqs(None) - - self.key_file = key_file - self.cert_file = cert_file - self.cert_reqs = cert_reqs - self.key_password = key_password - self.assert_hostname = assert_hostname - self.assert_fingerprint = assert_fingerprint - self.ca_certs = ca_certs and os.path.expanduser(ca_certs) - self.ca_cert_dir = ca_cert_dir and os.path.expanduser(ca_cert_dir) - self.ca_cert_data = ca_cert_data - - def connect(self): - # Add certificate verification - self.sock = conn = self._new_conn() - hostname = self.host - tls_in_tls = False - - if self._is_using_tunnel(): - if self.tls_in_tls_required: - self.sock = conn = self._connect_tls_proxy(hostname, conn) - tls_in_tls = True - - # Calls self._set_hostport(), so self.host is - # self._tunnel_host below. - self._tunnel() - # Mark this connection as not reusable - self.auto_open = 0 - - # Override the host with the one we're requesting data from. - hostname = self._tunnel_host - - server_hostname = hostname - if self.server_hostname is not None: - server_hostname = self.server_hostname - - is_time_off = datetime.date.today() < RECENT_DATE - if is_time_off: - warnings.warn( - ( - "System time is way off (before {0}). This will probably " - "lead to SSL verification errors" - ).format(RECENT_DATE), - SystemTimeWarning, - ) - - # Wrap socket using verification with the root certs in - # trusted_root_certs - default_ssl_context = False - if self.ssl_context is None: - default_ssl_context = True - self.ssl_context = create_urllib3_context( - ssl_version=resolve_ssl_version(self.ssl_version), - cert_reqs=resolve_cert_reqs(self.cert_reqs), - ) - - context = self.ssl_context - context.verify_mode = resolve_cert_reqs(self.cert_reqs) - - # Try to load OS default certs if none are given. 
- # Works well on Windows (requires Python3.4+) - if ( - not self.ca_certs - and not self.ca_cert_dir - and not self.ca_cert_data - and default_ssl_context - and hasattr(context, "load_default_certs") - ): - context.load_default_certs() - - self.sock = ssl_wrap_socket( - sock=conn, - keyfile=self.key_file, - certfile=self.cert_file, - key_password=self.key_password, - ca_certs=self.ca_certs, - ca_cert_dir=self.ca_cert_dir, - ca_cert_data=self.ca_cert_data, - server_hostname=server_hostname, - ssl_context=context, - tls_in_tls=tls_in_tls, - ) - - # If we're using all defaults and the connection - # is TLSv1 or TLSv1.1 we throw a DeprecationWarning - # for the host. - if ( - default_ssl_context - and self.ssl_version is None - and hasattr(self.sock, "version") - and self.sock.version() in {"TLSv1", "TLSv1.1"} - ): - warnings.warn( - "Negotiating TLSv1/TLSv1.1 by default is deprecated " - "and will be disabled in urllib3 v2.0.0. Connecting to " - "'%s' with '%s' can be enabled by explicitly opting-in " - "with 'ssl_version'" % (self.host, self.sock.version()), - DeprecationWarning, - ) - - if self.assert_fingerprint: - assert_fingerprint( - self.sock.getpeercert(binary_form=True), self.assert_fingerprint - ) - elif ( - context.verify_mode != ssl.CERT_NONE - and not getattr(context, "check_hostname", False) - and self.assert_hostname is not False - ): - # While urllib3 attempts to always turn off hostname matching from - # the TLS library, this cannot always be done. So we check whether - # the TLS Library still thinks it's matching hostnames. - cert = self.sock.getpeercert() - if not cert.get("subjectAltName", ()): - warnings.warn( - ( - "Certificate for {0} has no `subjectAltName`, falling back to check for a " - "`commonName` for now. This feature is being removed by major browsers and " - "deprecated by RFC 2818. 
(See https://github.com/urllib3/urllib3/issues/497 " - "for details.)".format(hostname) - ), - SubjectAltNameWarning, - ) - _match_hostname(cert, self.assert_hostname or server_hostname) - - self.is_verified = ( - context.verify_mode == ssl.CERT_REQUIRED - or self.assert_fingerprint is not None - ) - - def _connect_tls_proxy(self, hostname, conn): - """ - Establish a TLS connection to the proxy using the provided SSL context. - """ - proxy_config = self.proxy_config - ssl_context = proxy_config.ssl_context - if ssl_context: - # If the user provided a proxy context, we assume CA and client - # certificates have already been set - return ssl_wrap_socket( - sock=conn, - server_hostname=hostname, - ssl_context=ssl_context, - ) - - ssl_context = create_proxy_ssl_context( - self.ssl_version, - self.cert_reqs, - self.ca_certs, - self.ca_cert_dir, - self.ca_cert_data, - ) - - # If no cert was provided, use only the default options for server - # certificate validation - socket = ssl_wrap_socket( - sock=conn, - ca_certs=self.ca_certs, - ca_cert_dir=self.ca_cert_dir, - ca_cert_data=self.ca_cert_data, - server_hostname=hostname, - ssl_context=ssl_context, - ) - - if ssl_context.verify_mode != ssl.CERT_NONE and not getattr( - ssl_context, "check_hostname", False - ): - # While urllib3 attempts to always turn off hostname matching from - # the TLS library, this cannot always be done. So we check whether - # the TLS Library still thinks it's matching hostnames. - cert = socket.getpeercert() - if not cert.get("subjectAltName", ()): - warnings.warn( - ( - "Certificate for {0} has no `subjectAltName`, falling back to check for a " - "`commonName` for now. This feature is being removed by major browsers and " - "deprecated by RFC 2818. 
(See https://github.com/urllib3/urllib3/issues/497 " - "for details.)".format(hostname) - ), - SubjectAltNameWarning, - ) - _match_hostname(cert, hostname) - - self.proxy_is_verified = ssl_context.verify_mode == ssl.CERT_REQUIRED - return socket - - -def _match_hostname(cert, asserted_hostname): - # Our upstream implementation of ssl.match_hostname() - # only applies this normalization to IP addresses so it doesn't - # match DNS SANs so we do the same thing! - stripped_hostname = asserted_hostname.strip("u[]") - if is_ipaddress(stripped_hostname): - asserted_hostname = stripped_hostname - - try: - match_hostname(cert, asserted_hostname) - except CertificateError as e: - log.warning( - "Certificate did not match expected hostname: %s. Certificate: %s", - asserted_hostname, - cert, - ) - # Add cert to exception and reraise so client code can inspect - # the cert when catching the exception, if they want to - e._peer_cert = cert - raise - - -def _get_default_user_agent(): - return "python-urllib3/%s" % __version__ - - -class DummyConnection(object): - """Used to detect a failed ConnectionCls import.""" - - pass - - -if not ssl: - HTTPSConnection = DummyConnection # noqa: F811 - - -VerifiedHTTPSConnection = HTTPSConnection diff --git a/infrastructure/sandbox/Data/lambda/urllib3/connectionpool.py b/infrastructure/sandbox/Data/lambda/urllib3/connectionpool.py deleted file mode 100644 index 96339e90a..000000000 --- a/infrastructure/sandbox/Data/lambda/urllib3/connectionpool.py +++ /dev/null @@ -1,1110 +0,0 @@ -from __future__ import absolute_import - -import errno -import logging -import re -import socket -import sys -import warnings -from socket import error as SocketError -from socket import timeout as SocketTimeout - -from .connection import ( - BaseSSLError, - BrokenPipeError, - DummyConnection, - HTTPConnection, - HTTPException, - HTTPSConnection, - VerifiedHTTPSConnection, - port_by_scheme, -) -from .exceptions import ( - ClosedPoolError, - EmptyPoolError, - 
HeaderParsingError, - HostChangedError, - InsecureRequestWarning, - LocationValueError, - MaxRetryError, - NewConnectionError, - ProtocolError, - ProxyError, - ReadTimeoutError, - SSLError, - TimeoutError, -) -from .packages import six -from .packages.six.moves import queue -from .request import RequestMethods -from .response import HTTPResponse -from .util.connection import is_connection_dropped -from .util.proxy import connection_requires_http_tunnel -from .util.queue import LifoQueue -from .util.request import set_file_position -from .util.response import assert_header_parsing -from .util.retry import Retry -from .util.ssl_match_hostname import CertificateError -from .util.timeout import Timeout -from .util.url import Url, _encode_target -from .util.url import _normalize_host as normalize_host -from .util.url import get_host, parse_url - -xrange = six.moves.xrange - -log = logging.getLogger(__name__) - -_Default = object() - - -# Pool objects -class ConnectionPool(object): - """ - Base class for all connection pools, such as - :class:`.HTTPConnectionPool` and :class:`.HTTPSConnectionPool`. - - .. note:: - ConnectionPool.urlopen() does not normalize or percent-encode target URIs - which is useful if your target server doesn't support percent-encoded - target URIs. - """ - - scheme = None - QueueCls = LifoQueue - - def __init__(self, host, port=None): - if not host: - raise LocationValueError("No host specified.") - - self.host = _normalize_host(host, scheme=self.scheme) - self._proxy_host = host.lower() - self.port = port - - def __str__(self): - return "%s(host=%r, port=%r)" % (type(self).__name__, self.host, self.port) - - def __enter__(self): - return self - - def __exit__(self, exc_type, exc_val, exc_tb): - self.close() - # Return False to re-raise any potential exceptions - return False - - def close(self): - """ - Close all pooled connections and disable the pool. 
- """ - pass - - -# This is taken from http://hg.python.org/cpython/file/7aaba721ebc0/Lib/socket.py#l252 -_blocking_errnos = {errno.EAGAIN, errno.EWOULDBLOCK} - - -class HTTPConnectionPool(ConnectionPool, RequestMethods): - """ - Thread-safe connection pool for one host. - - :param host: - Host used for this HTTP Connection (e.g. "localhost"), passed into - :class:`http.client.HTTPConnection`. - - :param port: - Port used for this HTTP Connection (None is equivalent to 80), passed - into :class:`http.client.HTTPConnection`. - - :param strict: - Causes BadStatusLine to be raised if the status line can't be parsed - as a valid HTTP/1.0 or 1.1 status line, passed into - :class:`http.client.HTTPConnection`. - - .. note:: - Only works in Python 2. This parameter is ignored in Python 3. - - :param timeout: - Socket timeout in seconds for each individual connection. This can - be a float or integer, which sets the timeout for the HTTP request, - or an instance of :class:`urllib3.util.Timeout` which gives you more - fine-grained control over request timeouts. After the constructor has - been parsed, this is always a `urllib3.util.Timeout` object. - - :param maxsize: - Number of connections to save that can be reused. More than 1 is useful - in multithreaded situations. If ``block`` is set to False, more - connections will be created but they will not be saved once they've - been used. - - :param block: - If set to True, no more than ``maxsize`` connections will be used at - a time. When no free connections are available, the call will block - until a connection has been released. This is a useful side effect for - particular multithreaded situations where one does not want to use more - than maxsize connections per host to prevent flooding. - - :param headers: - Headers to include with all requests, unless other headers are given - explicitly. - - :param retries: - Retry configuration to use by default with requests in this pool. 
- - :param _proxy: - Parsed proxy URL, should not be used directly, instead, see - :class:`urllib3.ProxyManager` - - :param _proxy_headers: - A dictionary with proxy headers, should not be used directly, - instead, see :class:`urllib3.ProxyManager` - - :param \\**conn_kw: - Additional parameters are used to create fresh :class:`urllib3.connection.HTTPConnection`, - :class:`urllib3.connection.HTTPSConnection` instances. - """ - - scheme = "http" - ConnectionCls = HTTPConnection - ResponseCls = HTTPResponse - - def __init__( - self, - host, - port=None, - strict=False, - timeout=Timeout.DEFAULT_TIMEOUT, - maxsize=1, - block=False, - headers=None, - retries=None, - _proxy=None, - _proxy_headers=None, - _proxy_config=None, - **conn_kw - ): - ConnectionPool.__init__(self, host, port) - RequestMethods.__init__(self, headers) - - self.strict = strict - - if not isinstance(timeout, Timeout): - timeout = Timeout.from_float(timeout) - - if retries is None: - retries = Retry.DEFAULT - - self.timeout = timeout - self.retries = retries - - self.pool = self.QueueCls(maxsize) - self.block = block - - self.proxy = _proxy - self.proxy_headers = _proxy_headers or {} - self.proxy_config = _proxy_config - - # Fill the queue up so that doing get() on it will block properly - for _ in xrange(maxsize): - self.pool.put(None) - - # These are mostly for testing and debugging purposes. - self.num_connections = 0 - self.num_requests = 0 - self.conn_kw = conn_kw - - if self.proxy: - # Enable Nagle's algorithm for proxies, to avoid packet fragmentation. - # We cannot know if the user has added default socket options, so we cannot replace the - # list. - self.conn_kw.setdefault("socket_options", []) - - self.conn_kw["proxy"] = self.proxy - self.conn_kw["proxy_config"] = self.proxy_config - - def _new_conn(self): - """ - Return a fresh :class:`HTTPConnection`. 
- """ - self.num_connections += 1 - log.debug( - "Starting new HTTP connection (%d): %s:%s", - self.num_connections, - self.host, - self.port or "80", - ) - - conn = self.ConnectionCls( - host=self.host, - port=self.port, - timeout=self.timeout.connect_timeout, - strict=self.strict, - **self.conn_kw - ) - return conn - - def _get_conn(self, timeout=None): - """ - Get a connection. Will return a pooled connection if one is available. - - If no connections are available and :prop:`.block` is ``False``, then a - fresh connection is returned. - - :param timeout: - Seconds to wait before giving up and raising - :class:`urllib3.exceptions.EmptyPoolError` if the pool is empty and - :prop:`.block` is ``True``. - """ - conn = None - try: - conn = self.pool.get(block=self.block, timeout=timeout) - - except AttributeError: # self.pool is None - raise ClosedPoolError(self, "Pool is closed.") - - except queue.Empty: - if self.block: - raise EmptyPoolError( - self, - "Pool reached maximum size and no more connections are allowed.", - ) - pass # Oh well, we'll create a new connection then - - # If this is a persistent connection, check if it got disconnected - if conn and is_connection_dropped(conn): - log.debug("Resetting dropped connection: %s", self.host) - conn.close() - if getattr(conn, "auto_open", 1) == 0: - # This is a proxied connection that has been mutated by - # http.client._tunnel() and cannot be reused (since it would - # attempt to bypass the proxy) - conn = None - - return conn or self._new_conn() - - def _put_conn(self, conn): - """ - Put a connection back into the pool. - - :param conn: - Connection object for the current host and port as returned by - :meth:`._new_conn` or :meth:`._get_conn`. - - If the pool is already full, the connection is closed and discarded - because we exceeded maxsize. If connections are discarded frequently, - then maxsize should be increased. - - If the pool is closed, then the connection will be closed and discarded. 
- """ - try: - self.pool.put(conn, block=False) - return # Everything is dandy, done. - except AttributeError: - # self.pool is None. - pass - except queue.Full: - # This should never happen if self.block == True - log.warning( - "Connection pool is full, discarding connection: %s. Connection pool size: %s", - self.host, - self.pool.qsize(), - ) - # Connection never got put back into the pool, close it. - if conn: - conn.close() - - def _validate_conn(self, conn): - """ - Called right before a request is made, after the socket is created. - """ - pass - - def _prepare_proxy(self, conn): - # Nothing to do for HTTP connections. - pass - - def _get_timeout(self, timeout): - """Helper that always returns a :class:`urllib3.util.Timeout`""" - if timeout is _Default: - return self.timeout.clone() - - if isinstance(timeout, Timeout): - return timeout.clone() - else: - # User passed us an int/float. This is for backwards compatibility, - # can be removed later - return Timeout.from_float(timeout) - - def _raise_timeout(self, err, url, timeout_value): - """Is the error actually a timeout? Will raise a ReadTimeout or pass""" - - if isinstance(err, SocketTimeout): - raise ReadTimeoutError( - self, url, "Read timed out. (read timeout=%s)" % timeout_value - ) - - # See the above comment about EAGAIN in Python 3. In Python 2 we have - # to specifically catch it and throw the timeout error - if hasattr(err, "errno") and err.errno in _blocking_errnos: - raise ReadTimeoutError( - self, url, "Read timed out. (read timeout=%s)" % timeout_value - ) - - # Catch possible read timeouts thrown as SSL errors. If not the - # case, rethrow the original. We need to do this because of: - # http://bugs.python.org/issue10272 - if "timed out" in str(err) or "did not complete (read)" in str( - err - ): # Python < 2.7.4 - raise ReadTimeoutError( - self, url, "Read timed out. 
(read timeout=%s)" % timeout_value - ) - - def _make_request( - self, conn, method, url, timeout=_Default, chunked=False, **httplib_request_kw - ): - """ - Perform a request on a given urllib connection object taken from our - pool. - - :param conn: - a connection from one of our connection pools - - :param timeout: - Socket timeout in seconds for the request. This can be a - float or integer, which will set the same timeout value for - the socket connect and the socket read, or an instance of - :class:`urllib3.util.Timeout`, which gives you more fine-grained - control over your timeouts. - """ - self.num_requests += 1 - - timeout_obj = self._get_timeout(timeout) - timeout_obj.start_connect() - conn.timeout = timeout_obj.connect_timeout - - # Trigger any extra validation we need to do. - try: - self._validate_conn(conn) - except (SocketTimeout, BaseSSLError) as e: - # Py2 raises this as a BaseSSLError, Py3 raises it as socket timeout. - self._raise_timeout(err=e, url=url, timeout_value=conn.timeout) - raise - - # conn.request() calls http.client.*.request, not the method in - # urllib3.request. It also calls makefile (recv) on the socket. - try: - if chunked: - conn.request_chunked(method, url, **httplib_request_kw) - else: - conn.request(method, url, **httplib_request_kw) - - # We are swallowing BrokenPipeError (errno.EPIPE) since the server is - # legitimately able to close the connection after sending a valid response. - # With this behaviour, the received response is still readable. 
- except BrokenPipeError: - # Python 3 - pass - except IOError as e: - # Python 2 and macOS/Linux - # EPIPE and ESHUTDOWN are BrokenPipeError on Python 2, and EPROTOTYPE is needed on macOS - # https://erickt.github.io/blog/2014/11/19/adventures-in-debugging-a-potential-osx-kernel-bug/ - if e.errno not in { - errno.EPIPE, - errno.ESHUTDOWN, - errno.EPROTOTYPE, - }: - raise - - # Reset the timeout for the recv() on the socket - read_timeout = timeout_obj.read_timeout - - # App Engine doesn't have a sock attr - if getattr(conn, "sock", None): - # In Python 3 socket.py will catch EAGAIN and return None when you - # try and read into the file pointer created by http.client, which - # instead raises a BadStatusLine exception. Instead of catching - # the exception and assuming all BadStatusLine exceptions are read - # timeouts, check for a zero timeout before making the request. - if read_timeout == 0: - raise ReadTimeoutError( - self, url, "Read timed out. (read timeout=%s)" % read_timeout - ) - if read_timeout is Timeout.DEFAULT_TIMEOUT: - conn.sock.settimeout(socket.getdefaulttimeout()) - else: # None or a value - conn.sock.settimeout(read_timeout) - - # Receive the response from the server - try: - try: - # Python 2.7, use buffering of HTTP responses - httplib_response = conn.getresponse(buffering=True) - except TypeError: - # Python 3 - try: - httplib_response = conn.getresponse() - except BaseException as e: - # Remove the TypeError from the exception chain in - # Python 3 (including for exceptions like SystemExit). - # Otherwise it looks like a bug in the code. - six.raise_from(e, None) - except (SocketTimeout, BaseSSLError, SocketError) as e: - self._raise_timeout(err=e, url=url, timeout_value=read_timeout) - raise - - # AppEngine doesn't have a version attr. 
- http_version = getattr(conn, "_http_vsn_str", "HTTP/?") - log.debug( - '%s://%s:%s "%s %s %s" %s %s', - self.scheme, - self.host, - self.port, - method, - url, - http_version, - httplib_response.status, - httplib_response.length, - ) - - try: - assert_header_parsing(httplib_response.msg) - except (HeaderParsingError, TypeError) as hpe: # Platform-specific: Python 3 - log.warning( - "Failed to parse headers (url=%s): %s", - self._absolute_url(url), - hpe, - exc_info=True, - ) - - return httplib_response - - def _absolute_url(self, path): - return Url(scheme=self.scheme, host=self.host, port=self.port, path=path).url - - def close(self): - """ - Close all pooled connections and disable the pool. - """ - if self.pool is None: - return - # Disable access to the pool - old_pool, self.pool = self.pool, None - - try: - while True: - conn = old_pool.get(block=False) - if conn: - conn.close() - - except queue.Empty: - pass # Done. - - def is_same_host(self, url): - """ - Check if the given ``url`` is a member of the same host as this - connection pool. - """ - if url.startswith("/"): - return True - - # TODO: Add optional support for socket.gethostbyname checking. - scheme, host, port = get_host(url) - if host is not None: - host = _normalize_host(host, scheme=scheme) - - # Use explicit default port for comparison when none is given - if self.port and not port: - port = port_by_scheme.get(scheme) - elif not self.port and port == port_by_scheme.get(scheme): - port = None - - return (scheme, host, port) == (self.scheme, self.host, self.port) - - def urlopen( - self, - method, - url, - body=None, - headers=None, - retries=None, - redirect=True, - assert_same_host=True, - timeout=_Default, - pool_timeout=None, - release_conn=None, - chunked=False, - body_pos=None, - **response_kw - ): - """ - Get a connection from the pool and perform an HTTP request. This is the - lowest level call for making a request, so you'll need to specify all - the raw details. - - .. 
note:: - - More commonly, it's appropriate to use a convenience method provided - by :class:`.RequestMethods`, such as :meth:`request`. - - .. note:: - - `release_conn` will only behave as expected if - `preload_content=False` because we want to make - `preload_content=False` the default behaviour someday soon without - breaking backwards compatibility. - - :param method: - HTTP request method (such as GET, POST, PUT, etc.) - - :param url: - The URL to perform the request on. - - :param body: - Data to send in the request body, either :class:`str`, :class:`bytes`, - an iterable of :class:`str`/:class:`bytes`, or a file-like object. - - :param headers: - Dictionary of custom headers to send, such as User-Agent, - If-None-Match, etc. If None, pool headers are used. If provided, - these headers completely replace any pool-specific headers. - - :param retries: - Configure the number of retries to allow before raising a - :class:`~urllib3.exceptions.MaxRetryError` exception. - - Pass ``None`` to retry until you receive a response. Pass a - :class:`~urllib3.util.retry.Retry` object for fine-grained control - over different types of retries. - Pass an integer number to retry connection errors that many times, - but no other types of errors. Pass zero to never retry. - - If ``False``, then retries are disabled and any exception is raised - immediately. Also, instead of raising a MaxRetryError on redirects, - the redirect response will be returned. - - :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. - - :param redirect: - If True, automatically handle redirects (status codes 301, 302, - 303, 307, 308). Each redirect counts as a retry. Disabling retries - will disable redirect, too. - - :param assert_same_host: - If ``True``, will make sure that the host of the pool requests is - consistent else will raise HostChangedError. When ``False``, you can - use the pool on an HTTP proxy and request foreign hosts. 
- - :param timeout: - If specified, overrides the default timeout for this one - request. It may be a float (in seconds) or an instance of - :class:`urllib3.util.Timeout`. - - :param pool_timeout: - If set and the pool is set to block=True, then this method will - block for ``pool_timeout`` seconds and raise EmptyPoolError if no - connection is available within the time period. - - :param release_conn: - If False, then the urlopen call will not release the connection - back into the pool once a response is received (but will release if - you read the entire contents of the response such as when - `preload_content=True`). This is useful if you're not preloading - the response's content immediately. You will need to call - ``r.release_conn()`` on the response ``r`` to return the connection - back into the pool. If None, it takes the value of - ``response_kw.get('preload_content', True)``. - - :param chunked: - If True, urllib3 will send the body using chunked transfer - encoding. Otherwise, urllib3 will send the body using the standard - content-length form. Defaults to False. - - :param int body_pos: - Position to seek to in file-like body in the event of a retry or - redirect. Typically this won't need to be set because urllib3 will - auto-populate the value when needed. 
- - :param \\**response_kw: - Additional parameters are passed to - :meth:`urllib3.response.HTTPResponse.from_httplib` - """ - - parsed_url = parse_url(url) - destination_scheme = parsed_url.scheme - - if headers is None: - headers = self.headers - - if not isinstance(retries, Retry): - retries = Retry.from_int(retries, redirect=redirect, default=self.retries) - - if release_conn is None: - release_conn = response_kw.get("preload_content", True) - - # Check host - if assert_same_host and not self.is_same_host(url): - raise HostChangedError(self, url, retries) - - # Ensure that the URL we're connecting to is properly encoded - if url.startswith("/"): - url = six.ensure_str(_encode_target(url)) - else: - url = six.ensure_str(parsed_url.url) - - conn = None - - # Track whether `conn` needs to be released before - # returning/raising/recursing. Update this variable if necessary, and - # leave `release_conn` constant throughout the function. That way, if - # the function recurses, the original value of `release_conn` will be - # passed down into the recursive call, and its value will be respected. - # - # See issue #651 [1] for details. - # - # [1] - release_this_conn = release_conn - - http_tunnel_required = connection_requires_http_tunnel( - self.proxy, self.proxy_config, destination_scheme - ) - - # Merge the proxy headers. Only done when not using HTTP CONNECT. We - # have to copy the headers dict so we can safely change it without those - # changes being reflected in anyone else's copy. - if not http_tunnel_required: - headers = headers.copy() - headers.update(self.proxy_headers) - - # Must keep the exception bound to a separate variable or else Python 3 - # complains about UnboundLocalError. - err = None - - # Keep track of whether we cleanly exited the except block. This - # ensures we do proper cleanup in finally. - clean_exit = False - - # Rewind body position, if needed. Record current position - # for future rewinds in the event of a redirect/retry. 
- body_pos = set_file_position(body, body_pos) - - try: - # Request a connection from the queue. - timeout_obj = self._get_timeout(timeout) - conn = self._get_conn(timeout=pool_timeout) - - conn.timeout = timeout_obj.connect_timeout - - is_new_proxy_conn = self.proxy is not None and not getattr( - conn, "sock", None - ) - if is_new_proxy_conn and http_tunnel_required: - self._prepare_proxy(conn) - - # Make the request on the httplib connection object. - httplib_response = self._make_request( - conn, - method, - url, - timeout=timeout_obj, - body=body, - headers=headers, - chunked=chunked, - ) - - # If we're going to release the connection in ``finally:``, then - # the response doesn't need to know about the connection. Otherwise - # it will also try to release it and we'll have a double-release - # mess. - response_conn = conn if not release_conn else None - - # Pass method to Response for length checking - response_kw["request_method"] = method - - # Import httplib's response into our own wrapper object - response = self.ResponseCls.from_httplib( - httplib_response, - pool=self, - connection=response_conn, - retries=retries, - **response_kw - ) - - # Everything went great! - clean_exit = True - - except EmptyPoolError: - # Didn't get a connection from the pool, no need to clean up - clean_exit = True - release_this_conn = False - raise - - except ( - TimeoutError, - HTTPException, - SocketError, - ProtocolError, - BaseSSLError, - SSLError, - CertificateError, - ) as e: - # Discard the connection for these exceptions. It will be - # replaced during the next _get_conn() call. - clean_exit = False - - def _is_ssl_error_message_from_http_proxy(ssl_error): - # We're trying to detect the message 'WRONG_VERSION_NUMBER' but - # SSLErrors are kinda all over the place when it comes to the message, - # so we try to cover our bases here! 
- message = " ".join(re.split("[^a-z]", str(ssl_error).lower())) - return ( - "wrong version number" in message or "unknown protocol" in message - ) - - # Try to detect a common user error with proxies which is to - # set an HTTP proxy to be HTTPS when it should be 'http://' - # (ie {'http': 'http://proxy', 'https': 'https://proxy'}) - # Instead we add a nice error message and point to a URL. - if ( - isinstance(e, BaseSSLError) - and self.proxy - and _is_ssl_error_message_from_http_proxy(e) - and conn.proxy - and conn.proxy.scheme == "https" - ): - e = ProxyError( - "Your proxy appears to only use HTTP and not HTTPS, " - "try changing your proxy URL to be HTTP. See: " - "https://urllib3.readthedocs.io/en/1.26.x/advanced-usage.html" - "#https-proxy-error-http-proxy", - SSLError(e), - ) - elif isinstance(e, (BaseSSLError, CertificateError)): - e = SSLError(e) - elif isinstance(e, (SocketError, NewConnectionError)) and self.proxy: - e = ProxyError("Cannot connect to proxy.", e) - elif isinstance(e, (SocketError, HTTPException)): - e = ProtocolError("Connection aborted.", e) - - retries = retries.increment( - method, url, error=e, _pool=self, _stacktrace=sys.exc_info()[2] - ) - retries.sleep() - - # Keep track of the error for the retry warning. - err = e - - finally: - if not clean_exit: - # We hit some kind of exception, handled or otherwise. We need - # to throw the connection away unless explicitly told not to. - # Close the connection, set the variable to None, and make sure - # we put the None back in the pool to avoid leaking it. - conn = conn and conn.close() - release_this_conn = True - - if release_this_conn: - # Put the connection back to be reused. If the connection is - # expired then it will be None, which will get replaced with a - # fresh connection during _get_conn. 
- self._put_conn(conn) - - if not conn: - # Try again - log.warning( - "Retrying (%r) after connection broken by '%r': %s", retries, err, url - ) - return self.urlopen( - method, - url, - body, - headers, - retries, - redirect, - assert_same_host, - timeout=timeout, - pool_timeout=pool_timeout, - release_conn=release_conn, - chunked=chunked, - body_pos=body_pos, - **response_kw - ) - - # Handle redirect? - redirect_location = redirect and response.get_redirect_location() - if redirect_location: - if response.status == 303: - method = "GET" - - try: - retries = retries.increment(method, url, response=response, _pool=self) - except MaxRetryError: - if retries.raise_on_redirect: - response.drain_conn() - raise - return response - - response.drain_conn() - retries.sleep_for_retry(response) - log.debug("Redirecting %s -> %s", url, redirect_location) - return self.urlopen( - method, - redirect_location, - body, - headers, - retries=retries, - redirect=redirect, - assert_same_host=assert_same_host, - timeout=timeout, - pool_timeout=pool_timeout, - release_conn=release_conn, - chunked=chunked, - body_pos=body_pos, - **response_kw - ) - - # Check if we should retry the HTTP response. - has_retry_after = bool(response.getheader("Retry-After")) - if retries.is_retry(method, response.status, has_retry_after): - try: - retries = retries.increment(method, url, response=response, _pool=self) - except MaxRetryError: - if retries.raise_on_status: - response.drain_conn() - raise - return response - - response.drain_conn() - retries.sleep(response) - log.debug("Retry: %s", url) - return self.urlopen( - method, - url, - body, - headers, - retries=retries, - redirect=redirect, - assert_same_host=assert_same_host, - timeout=timeout, - pool_timeout=pool_timeout, - release_conn=release_conn, - chunked=chunked, - body_pos=body_pos, - **response_kw - ) - - return response - - -class HTTPSConnectionPool(HTTPConnectionPool): - """ - Same as :class:`.HTTPConnectionPool`, but HTTPS. 
- - :class:`.HTTPSConnection` uses one of ``assert_fingerprint``, - ``assert_hostname`` and ``host`` in this order to verify connections. - If ``assert_hostname`` is False, no verification is done. - - The ``key_file``, ``cert_file``, ``cert_reqs``, ``ca_certs``, - ``ca_cert_dir``, ``ssl_version``, ``key_password`` are only used if :mod:`ssl` - is available and are fed into :meth:`urllib3.util.ssl_wrap_socket` to upgrade - the connection socket into an SSL socket. - """ - - scheme = "https" - ConnectionCls = HTTPSConnection - - def __init__( - self, - host, - port=None, - strict=False, - timeout=Timeout.DEFAULT_TIMEOUT, - maxsize=1, - block=False, - headers=None, - retries=None, - _proxy=None, - _proxy_headers=None, - key_file=None, - cert_file=None, - cert_reqs=None, - key_password=None, - ca_certs=None, - ssl_version=None, - assert_hostname=None, - assert_fingerprint=None, - ca_cert_dir=None, - **conn_kw - ): - - HTTPConnectionPool.__init__( - self, - host, - port, - strict, - timeout, - maxsize, - block, - headers, - retries, - _proxy, - _proxy_headers, - **conn_kw - ) - - self.key_file = key_file - self.cert_file = cert_file - self.cert_reqs = cert_reqs - self.key_password = key_password - self.ca_certs = ca_certs - self.ca_cert_dir = ca_cert_dir - self.ssl_version = ssl_version - self.assert_hostname = assert_hostname - self.assert_fingerprint = assert_fingerprint - - def _prepare_conn(self, conn): - """ - Prepare the ``connection`` for :meth:`urllib3.util.ssl_wrap_socket` - and establish the tunnel if proxy is used. 
- """ - - if isinstance(conn, VerifiedHTTPSConnection): - conn.set_cert( - key_file=self.key_file, - key_password=self.key_password, - cert_file=self.cert_file, - cert_reqs=self.cert_reqs, - ca_certs=self.ca_certs, - ca_cert_dir=self.ca_cert_dir, - assert_hostname=self.assert_hostname, - assert_fingerprint=self.assert_fingerprint, - ) - conn.ssl_version = self.ssl_version - return conn - - def _prepare_proxy(self, conn): - """ - Establishes a tunnel connection through HTTP CONNECT. - - Tunnel connection is established early because otherwise httplib would - improperly set Host: header to proxy's IP:port. - """ - - conn.set_tunnel(self._proxy_host, self.port, self.proxy_headers) - - if self.proxy.scheme == "https": - conn.tls_in_tls_required = True - - conn.connect() - - def _new_conn(self): - """ - Return a fresh :class:`http.client.HTTPSConnection`. - """ - self.num_connections += 1 - log.debug( - "Starting new HTTPS connection (%d): %s:%s", - self.num_connections, - self.host, - self.port or "443", - ) - - if not self.ConnectionCls or self.ConnectionCls is DummyConnection: - raise SSLError( - "Can't connect to HTTPS URL because the SSL module is not available." - ) - - actual_host = self.host - actual_port = self.port - if self.proxy is not None: - actual_host = self.proxy.host - actual_port = self.proxy.port - - conn = self.ConnectionCls( - host=actual_host, - port=actual_port, - timeout=self.timeout.connect_timeout, - strict=self.strict, - cert_file=self.cert_file, - key_file=self.key_file, - key_password=self.key_password, - **self.conn_kw - ) - - return self._prepare_conn(conn) - - def _validate_conn(self, conn): - """ - Called right before a request is made, after the socket is created. - """ - super(HTTPSConnectionPool, self)._validate_conn(conn) - - # Force connect early to allow us to validate the connection. 
- if not getattr(conn, "sock", None): # AppEngine might not have `.sock` - conn.connect() - - if not conn.is_verified: - warnings.warn( - ( - "Unverified HTTPS request is being made to host '%s'. " - "Adding certificate verification is strongly advised. See: " - "https://urllib3.readthedocs.io/en/1.26.x/advanced-usage.html" - "#ssl-warnings" % conn.host - ), - InsecureRequestWarning, - ) - - if getattr(conn, "proxy_is_verified", None) is False: - warnings.warn( - ( - "Unverified HTTPS connection done to an HTTPS proxy. " - "Adding certificate verification is strongly advised. See: " - "https://urllib3.readthedocs.io/en/1.26.x/advanced-usage.html" - "#ssl-warnings" - ), - InsecureRequestWarning, - ) - - -def connection_from_url(url, **kw): - """ - Given a url, return an :class:`.ConnectionPool` instance of its host. - - This is a shortcut for not having to parse out the scheme, host, and port - of the url before creating an :class:`.ConnectionPool` instance. - - :param url: - Absolute URL string that must include the scheme. Port is optional. - - :param \\**kw: - Passes additional parameters to the constructor of the appropriate - :class:`.ConnectionPool`. Useful for specifying things like - timeout, maxsize, headers, etc. - - Example:: - - >>> conn = connection_from_url('http://google.com/') - >>> r = conn.request('GET', '/') - """ - scheme, host, port = get_host(url) - port = port or port_by_scheme.get(scheme, 80) - if scheme == "https": - return HTTPSConnectionPool(host, port=port, **kw) - else: - return HTTPConnectionPool(host, port=port, **kw) - - -def _normalize_host(host, scheme): - """ - Normalize hosts for comparisons and use with sockets. - """ - - host = normalize_host(host, scheme) - - # httplib doesn't like it when we include brackets in IPv6 addresses - # Specifically, if we include brackets but also pass the port then - # httplib crazily doubles up the square brackets on the Host header. 
- # Instead, we need to make sure we never pass ``None`` as the port. - # However, for backward compatibility reasons we can't actually - # *assert* that. See http://bugs.python.org/issue28539 - if host.startswith("[") and host.endswith("]"): - host = host[1:-1] - return host diff --git a/infrastructure/sandbox/Data/lambda/urllib3/contrib/__init__.py b/infrastructure/sandbox/Data/lambda/urllib3/contrib/__init__.py deleted file mode 100644 index e69de29bb..000000000 diff --git a/infrastructure/sandbox/Data/lambda/urllib3/contrib/__pycache__/__init__.cpython-310.pyc b/infrastructure/sandbox/Data/lambda/urllib3/contrib/__pycache__/__init__.cpython-310.pyc deleted file mode 100644 index 7f61e4c6c..000000000 Binary files a/infrastructure/sandbox/Data/lambda/urllib3/contrib/__pycache__/__init__.cpython-310.pyc and /dev/null differ diff --git a/infrastructure/sandbox/Data/lambda/urllib3/contrib/__pycache__/_appengine_environ.cpython-310.pyc b/infrastructure/sandbox/Data/lambda/urllib3/contrib/__pycache__/_appengine_environ.cpython-310.pyc deleted file mode 100644 index 45f9fe3a4..000000000 Binary files a/infrastructure/sandbox/Data/lambda/urllib3/contrib/__pycache__/_appengine_environ.cpython-310.pyc and /dev/null differ diff --git a/infrastructure/sandbox/Data/lambda/urllib3/contrib/__pycache__/appengine.cpython-310.pyc b/infrastructure/sandbox/Data/lambda/urllib3/contrib/__pycache__/appengine.cpython-310.pyc deleted file mode 100644 index 490e6d6f1..000000000 Binary files a/infrastructure/sandbox/Data/lambda/urllib3/contrib/__pycache__/appengine.cpython-310.pyc and /dev/null differ diff --git a/infrastructure/sandbox/Data/lambda/urllib3/contrib/__pycache__/ntlmpool.cpython-310.pyc b/infrastructure/sandbox/Data/lambda/urllib3/contrib/__pycache__/ntlmpool.cpython-310.pyc deleted file mode 100644 index fad14827b..000000000 Binary files a/infrastructure/sandbox/Data/lambda/urllib3/contrib/__pycache__/ntlmpool.cpython-310.pyc and /dev/null differ diff --git 
a/infrastructure/sandbox/Data/lambda/urllib3/contrib/__pycache__/pyopenssl.cpython-310.pyc b/infrastructure/sandbox/Data/lambda/urllib3/contrib/__pycache__/pyopenssl.cpython-310.pyc deleted file mode 100644 index 7e4c0bac2..000000000 Binary files a/infrastructure/sandbox/Data/lambda/urllib3/contrib/__pycache__/pyopenssl.cpython-310.pyc and /dev/null differ diff --git a/infrastructure/sandbox/Data/lambda/urllib3/contrib/__pycache__/securetransport.cpython-310.pyc b/infrastructure/sandbox/Data/lambda/urllib3/contrib/__pycache__/securetransport.cpython-310.pyc deleted file mode 100644 index 1c3658de4..000000000 Binary files a/infrastructure/sandbox/Data/lambda/urllib3/contrib/__pycache__/securetransport.cpython-310.pyc and /dev/null differ diff --git a/infrastructure/sandbox/Data/lambda/urllib3/contrib/__pycache__/socks.cpython-310.pyc b/infrastructure/sandbox/Data/lambda/urllib3/contrib/__pycache__/socks.cpython-310.pyc deleted file mode 100644 index d41737e17..000000000 Binary files a/infrastructure/sandbox/Data/lambda/urllib3/contrib/__pycache__/socks.cpython-310.pyc and /dev/null differ diff --git a/infrastructure/sandbox/Data/lambda/urllib3/contrib/_appengine_environ.py b/infrastructure/sandbox/Data/lambda/urllib3/contrib/_appengine_environ.py deleted file mode 100644 index 8765b907d..000000000 --- a/infrastructure/sandbox/Data/lambda/urllib3/contrib/_appengine_environ.py +++ /dev/null @@ -1,36 +0,0 @@ -""" -This module provides means to detect the App Engine environment. -""" - -import os - - -def is_appengine(): - return is_local_appengine() or is_prod_appengine() - - -def is_appengine_sandbox(): - """Reports if the app is running in the first generation sandbox. - - The second generation runtimes are technically still in a sandbox, but it - is much less restrictive, so generally you shouldn't need to check for it. 
- see https://cloud.google.com/appengine/docs/standard/runtimes - """ - return is_appengine() and os.environ["APPENGINE_RUNTIME"] == "python27" - - -def is_local_appengine(): - return "APPENGINE_RUNTIME" in os.environ and os.environ.get( - "SERVER_SOFTWARE", "" - ).startswith("Development/") - - -def is_prod_appengine(): - return "APPENGINE_RUNTIME" in os.environ and os.environ.get( - "SERVER_SOFTWARE", "" - ).startswith("Google App Engine/") - - -def is_prod_appengine_mvms(): - """Deprecated.""" - return False diff --git a/infrastructure/sandbox/Data/lambda/urllib3/contrib/_securetransport/__init__.py b/infrastructure/sandbox/Data/lambda/urllib3/contrib/_securetransport/__init__.py deleted file mode 100644 index e69de29bb..000000000 diff --git a/infrastructure/sandbox/Data/lambda/urllib3/contrib/_securetransport/__pycache__/__init__.cpython-310.pyc b/infrastructure/sandbox/Data/lambda/urllib3/contrib/_securetransport/__pycache__/__init__.cpython-310.pyc deleted file mode 100644 index 440bc2833..000000000 Binary files a/infrastructure/sandbox/Data/lambda/urllib3/contrib/_securetransport/__pycache__/__init__.cpython-310.pyc and /dev/null differ diff --git a/infrastructure/sandbox/Data/lambda/urllib3/contrib/_securetransport/__pycache__/bindings.cpython-310.pyc b/infrastructure/sandbox/Data/lambda/urllib3/contrib/_securetransport/__pycache__/bindings.cpython-310.pyc deleted file mode 100644 index 7c4cc482a..000000000 Binary files a/infrastructure/sandbox/Data/lambda/urllib3/contrib/_securetransport/__pycache__/bindings.cpython-310.pyc and /dev/null differ diff --git a/infrastructure/sandbox/Data/lambda/urllib3/contrib/_securetransport/__pycache__/low_level.cpython-310.pyc b/infrastructure/sandbox/Data/lambda/urllib3/contrib/_securetransport/__pycache__/low_level.cpython-310.pyc deleted file mode 100644 index 6730e1f56..000000000 Binary files a/infrastructure/sandbox/Data/lambda/urllib3/contrib/_securetransport/__pycache__/low_level.cpython-310.pyc and /dev/null 
differ diff --git a/infrastructure/sandbox/Data/lambda/urllib3/contrib/_securetransport/bindings.py b/infrastructure/sandbox/Data/lambda/urllib3/contrib/_securetransport/bindings.py deleted file mode 100644 index 264d564db..000000000 --- a/infrastructure/sandbox/Data/lambda/urllib3/contrib/_securetransport/bindings.py +++ /dev/null @@ -1,519 +0,0 @@ -""" -This module uses ctypes to bind a whole bunch of functions and constants from -SecureTransport. The goal here is to provide the low-level API to -SecureTransport. These are essentially the C-level functions and constants, and -they're pretty gross to work with. - -This code is a bastardised version of the code found in Will Bond's oscrypto -library. An enormous debt is owed to him for blazing this trail for us. For -that reason, this code should be considered to be covered both by urllib3's -license and by oscrypto's: - - Copyright (c) 2015-2016 Will Bond - - Permission is hereby granted, free of charge, to any person obtaining a - copy of this software and associated documentation files (the "Software"), - to deal in the Software without restriction, including without limitation - the rights to use, copy, modify, merge, publish, distribute, sublicense, - and/or sell copies of the Software, and to permit persons to whom the - Software is furnished to do so, subject to the following conditions: - - The above copyright notice and this permission notice shall be included in - all copies or substantial portions of the Software. - - THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR - IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, - FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE - AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER - LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING - FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER - DEALINGS IN THE SOFTWARE. 
-""" -from __future__ import absolute_import - -import platform -from ctypes import ( - CDLL, - CFUNCTYPE, - POINTER, - c_bool, - c_byte, - c_char_p, - c_int32, - c_long, - c_size_t, - c_uint32, - c_ulong, - c_void_p, -) -from ctypes.util import find_library - -from ...packages.six import raise_from - -if platform.system() != "Darwin": - raise ImportError("Only macOS is supported") - -version = platform.mac_ver()[0] -version_info = tuple(map(int, version.split("."))) -if version_info < (10, 8): - raise OSError( - "Only OS X 10.8 and newer are supported, not %s.%s" - % (version_info[0], version_info[1]) - ) - - -def load_cdll(name, macos10_16_path): - """Loads a CDLL by name, falling back to known path on 10.16+""" - try: - # Big Sur is technically 11 but we use 10.16 due to the Big Sur - # beta being labeled as 10.16. - if version_info >= (10, 16): - path = macos10_16_path - else: - path = find_library(name) - if not path: - raise OSError # Caught and reraised as 'ImportError' - return CDLL(path, use_errno=True) - except OSError: - raise_from(ImportError("The library %s failed to load" % name), None) - - -Security = load_cdll( - "Security", "/System/Library/Frameworks/Security.framework/Security" -) -CoreFoundation = load_cdll( - "CoreFoundation", - "/System/Library/Frameworks/CoreFoundation.framework/CoreFoundation", -) - - -Boolean = c_bool -CFIndex = c_long -CFStringEncoding = c_uint32 -CFData = c_void_p -CFString = c_void_p -CFArray = c_void_p -CFMutableArray = c_void_p -CFDictionary = c_void_p -CFError = c_void_p -CFType = c_void_p -CFTypeID = c_ulong - -CFTypeRef = POINTER(CFType) -CFAllocatorRef = c_void_p - -OSStatus = c_int32 - -CFDataRef = POINTER(CFData) -CFStringRef = POINTER(CFString) -CFArrayRef = POINTER(CFArray) -CFMutableArrayRef = POINTER(CFMutableArray) -CFDictionaryRef = POINTER(CFDictionary) -CFArrayCallBacks = c_void_p -CFDictionaryKeyCallBacks = c_void_p -CFDictionaryValueCallBacks = c_void_p - -SecCertificateRef = POINTER(c_void_p) 
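The deleted `load_cdll` helper above gates on the macOS version tuple: on 10.16+ (Big Sur betas reported 10.16 rather than 11, and `find_library` fails inside the relocated dyld shared cache) it uses a hard-coded framework path, otherwise it asks `ctypes.util.find_library`. A minimal standalone sketch of just that path-selection logic, with the actual `CDLL` load omitted and `version_info` passed in explicitly for illustration:

```python
from ctypes.util import find_library

def pick_library_path(name, macos10_16_path, version_info):
    # On Big Sur and later (reported as 10.16 in betas), find_library()
    # cannot locate system frameworks, so fall back to a known path.
    if version_info >= (10, 16):
        return macos10_16_path
    path = find_library(name)
    if not path:
        raise OSError("library %s not found" % name)
    return path
```

Tuple comparison does the right thing here: `(11, 0) >= (10, 16)` is true because the major versions are compared first, which is why the deleted code parses `platform.mac_ver()` into a tuple of ints rather than comparing strings.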
-SecExternalFormat = c_uint32 -SecExternalItemType = c_uint32 -SecIdentityRef = POINTER(c_void_p) -SecItemImportExportFlags = c_uint32 -SecItemImportExportKeyParameters = c_void_p -SecKeychainRef = POINTER(c_void_p) -SSLProtocol = c_uint32 -SSLCipherSuite = c_uint32 -SSLContextRef = POINTER(c_void_p) -SecTrustRef = POINTER(c_void_p) -SSLConnectionRef = c_uint32 -SecTrustResultType = c_uint32 -SecTrustOptionFlags = c_uint32 -SSLProtocolSide = c_uint32 -SSLConnectionType = c_uint32 -SSLSessionOption = c_uint32 - - -try: - Security.SecItemImport.argtypes = [ - CFDataRef, - CFStringRef, - POINTER(SecExternalFormat), - POINTER(SecExternalItemType), - SecItemImportExportFlags, - POINTER(SecItemImportExportKeyParameters), - SecKeychainRef, - POINTER(CFArrayRef), - ] - Security.SecItemImport.restype = OSStatus - - Security.SecCertificateGetTypeID.argtypes = [] - Security.SecCertificateGetTypeID.restype = CFTypeID - - Security.SecIdentityGetTypeID.argtypes = [] - Security.SecIdentityGetTypeID.restype = CFTypeID - - Security.SecKeyGetTypeID.argtypes = [] - Security.SecKeyGetTypeID.restype = CFTypeID - - Security.SecCertificateCreateWithData.argtypes = [CFAllocatorRef, CFDataRef] - Security.SecCertificateCreateWithData.restype = SecCertificateRef - - Security.SecCertificateCopyData.argtypes = [SecCertificateRef] - Security.SecCertificateCopyData.restype = CFDataRef - - Security.SecCopyErrorMessageString.argtypes = [OSStatus, c_void_p] - Security.SecCopyErrorMessageString.restype = CFStringRef - - Security.SecIdentityCreateWithCertificate.argtypes = [ - CFTypeRef, - SecCertificateRef, - POINTER(SecIdentityRef), - ] - Security.SecIdentityCreateWithCertificate.restype = OSStatus - - Security.SecKeychainCreate.argtypes = [ - c_char_p, - c_uint32, - c_void_p, - Boolean, - c_void_p, - POINTER(SecKeychainRef), - ] - Security.SecKeychainCreate.restype = OSStatus - - Security.SecKeychainDelete.argtypes = [SecKeychainRef] - Security.SecKeychainDelete.restype = OSStatus - - 
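The long runs of `argtypes`/`restype` assignments in this deleted file follow the standard ctypes prototyping pattern: without declared prototypes, ctypes assumes C `int` for everything, which silently truncates pointers and sizes. A self-contained illustration of the same pattern against libc's `strlen` (chosen here only because it is universally available; the Security.framework calls above work identically):

```python
import ctypes
import ctypes.util

# Declaring a prototype makes ctypes marshal the bytes argument as
# char* and return the full size_t, instead of defaulting to int.
libc = ctypes.CDLL(ctypes.util.find_library("c"), use_errno=True)
libc.strlen.argtypes = [ctypes.c_char_p]
libc.strlen.restype = ctypes.c_size_t
```

Note that a typo in an attribute name (e.g. `argtype` for `argtypes`) is not an error at assignment time, it just leaves the function unprototyped, which is one reason this style of binding is audited carefully.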
Security.SecPKCS12Import.argtypes = [ - CFDataRef, - CFDictionaryRef, - POINTER(CFArrayRef), - ] - Security.SecPKCS12Import.restype = OSStatus - - SSLReadFunc = CFUNCTYPE(OSStatus, SSLConnectionRef, c_void_p, POINTER(c_size_t)) - SSLWriteFunc = CFUNCTYPE( - OSStatus, SSLConnectionRef, POINTER(c_byte), POINTER(c_size_t) - ) - - Security.SSLSetIOFuncs.argtypes = [SSLContextRef, SSLReadFunc, SSLWriteFunc] - Security.SSLSetIOFuncs.restype = OSStatus - - Security.SSLSetPeerID.argtypes = [SSLContextRef, c_char_p, c_size_t] - Security.SSLSetPeerID.restype = OSStatus - - Security.SSLSetCertificate.argtypes = [SSLContextRef, CFArrayRef] - Security.SSLSetCertificate.restype = OSStatus - - Security.SSLSetCertificateAuthorities.argtypes = [SSLContextRef, CFTypeRef, Boolean] - Security.SSLSetCertificateAuthorities.restype = OSStatus - - Security.SSLSetConnection.argtypes = [SSLContextRef, SSLConnectionRef] - Security.SSLSetConnection.restype = OSStatus - - Security.SSLSetPeerDomainName.argtypes = [SSLContextRef, c_char_p, c_size_t] - Security.SSLSetPeerDomainName.restype = OSStatus - - Security.SSLHandshake.argtypes = [SSLContextRef] - Security.SSLHandshake.restype = OSStatus - - Security.SSLRead.argtypes = [SSLContextRef, c_char_p, c_size_t, POINTER(c_size_t)] - Security.SSLRead.restype = OSStatus - - Security.SSLWrite.argtypes = [SSLContextRef, c_char_p, c_size_t, POINTER(c_size_t)] - Security.SSLWrite.restype = OSStatus - - Security.SSLClose.argtypes = [SSLContextRef] - Security.SSLClose.restype = OSStatus - - Security.SSLGetNumberSupportedCiphers.argtypes = [SSLContextRef, POINTER(c_size_t)] - Security.SSLGetNumberSupportedCiphers.restype = OSStatus - - Security.SSLGetSupportedCiphers.argtypes = [ - SSLContextRef, - POINTER(SSLCipherSuite), - POINTER(c_size_t), - ] - Security.SSLGetSupportedCiphers.restype = OSStatus - - Security.SSLSetEnabledCiphers.argtypes = [ - SSLContextRef, - POINTER(SSLCipherSuite), - c_size_t, - ] - Security.SSLSetEnabledCiphers.restype = OSStatus - 
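The `SSLReadFunc`/`SSLWriteFunc` prototypes defined just above use `CFUNCTYPE` to turn Python functions into C-callable pointers; this is how SecureTransport hands I/O back to urllib3's socket code. The mechanism in isolation, with a toy signature:

```python
import ctypes

# CFUNCTYPE(restype, *argtypes) builds a callback factory; decorating a
# Python function with it produces an object callable from C code (and
# from Python, which is how we exercise it here).
IntBinOp = ctypes.CFUNCTYPE(ctypes.c_int, ctypes.c_int, ctypes.c_int)

@IntBinOp
def c_add(a, b):
    # Arguments arrive already converted to Python ints.
    return a + b
```

In the deleted bindings the callback objects are also stashed on the `Security` module object (`Security.SSLReadFunc = SSLReadFunc`) so they are not garbage-collected while C code still holds the pointer, a classic ctypes lifetime concern.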
- Security.SSLGetNumberEnabledCiphers.argtype = [SSLContextRef, POINTER(c_size_t)] - Security.SSLGetNumberEnabledCiphers.restype = OSStatus - - Security.SSLGetEnabledCiphers.argtypes = [ - SSLContextRef, - POINTER(SSLCipherSuite), - POINTER(c_size_t), - ] - Security.SSLGetEnabledCiphers.restype = OSStatus - - Security.SSLGetNegotiatedCipher.argtypes = [SSLContextRef, POINTER(SSLCipherSuite)] - Security.SSLGetNegotiatedCipher.restype = OSStatus - - Security.SSLGetNegotiatedProtocolVersion.argtypes = [ - SSLContextRef, - POINTER(SSLProtocol), - ] - Security.SSLGetNegotiatedProtocolVersion.restype = OSStatus - - Security.SSLCopyPeerTrust.argtypes = [SSLContextRef, POINTER(SecTrustRef)] - Security.SSLCopyPeerTrust.restype = OSStatus - - Security.SecTrustSetAnchorCertificates.argtypes = [SecTrustRef, CFArrayRef] - Security.SecTrustSetAnchorCertificates.restype = OSStatus - - Security.SecTrustSetAnchorCertificatesOnly.argstypes = [SecTrustRef, Boolean] - Security.SecTrustSetAnchorCertificatesOnly.restype = OSStatus - - Security.SecTrustEvaluate.argtypes = [SecTrustRef, POINTER(SecTrustResultType)] - Security.SecTrustEvaluate.restype = OSStatus - - Security.SecTrustGetCertificateCount.argtypes = [SecTrustRef] - Security.SecTrustGetCertificateCount.restype = CFIndex - - Security.SecTrustGetCertificateAtIndex.argtypes = [SecTrustRef, CFIndex] - Security.SecTrustGetCertificateAtIndex.restype = SecCertificateRef - - Security.SSLCreateContext.argtypes = [ - CFAllocatorRef, - SSLProtocolSide, - SSLConnectionType, - ] - Security.SSLCreateContext.restype = SSLContextRef - - Security.SSLSetSessionOption.argtypes = [SSLContextRef, SSLSessionOption, Boolean] - Security.SSLSetSessionOption.restype = OSStatus - - Security.SSLSetProtocolVersionMin.argtypes = [SSLContextRef, SSLProtocol] - Security.SSLSetProtocolVersionMin.restype = OSStatus - - Security.SSLSetProtocolVersionMax.argtypes = [SSLContextRef, SSLProtocol] - Security.SSLSetProtocolVersionMax.restype = OSStatus - - try: - 
Security.SSLSetALPNProtocols.argtypes = [SSLContextRef, CFArrayRef] - Security.SSLSetALPNProtocols.restype = OSStatus - except AttributeError: - # Supported only in 10.12+ - pass - - Security.SecCopyErrorMessageString.argtypes = [OSStatus, c_void_p] - Security.SecCopyErrorMessageString.restype = CFStringRef - - Security.SSLReadFunc = SSLReadFunc - Security.SSLWriteFunc = SSLWriteFunc - Security.SSLContextRef = SSLContextRef - Security.SSLProtocol = SSLProtocol - Security.SSLCipherSuite = SSLCipherSuite - Security.SecIdentityRef = SecIdentityRef - Security.SecKeychainRef = SecKeychainRef - Security.SecTrustRef = SecTrustRef - Security.SecTrustResultType = SecTrustResultType - Security.SecExternalFormat = SecExternalFormat - Security.OSStatus = OSStatus - - Security.kSecImportExportPassphrase = CFStringRef.in_dll( - Security, "kSecImportExportPassphrase" - ) - Security.kSecImportItemIdentity = CFStringRef.in_dll( - Security, "kSecImportItemIdentity" - ) - - # CoreFoundation time! - CoreFoundation.CFRetain.argtypes = [CFTypeRef] - CoreFoundation.CFRetain.restype = CFTypeRef - - CoreFoundation.CFRelease.argtypes = [CFTypeRef] - CoreFoundation.CFRelease.restype = None - - CoreFoundation.CFGetTypeID.argtypes = [CFTypeRef] - CoreFoundation.CFGetTypeID.restype = CFTypeID - - CoreFoundation.CFStringCreateWithCString.argtypes = [ - CFAllocatorRef, - c_char_p, - CFStringEncoding, - ] - CoreFoundation.CFStringCreateWithCString.restype = CFStringRef - - CoreFoundation.CFStringGetCStringPtr.argtypes = [CFStringRef, CFStringEncoding] - CoreFoundation.CFStringGetCStringPtr.restype = c_char_p - - CoreFoundation.CFStringGetCString.argtypes = [ - CFStringRef, - c_char_p, - CFIndex, - CFStringEncoding, - ] - CoreFoundation.CFStringGetCString.restype = c_bool - - CoreFoundation.CFDataCreate.argtypes = [CFAllocatorRef, c_char_p, CFIndex] - CoreFoundation.CFDataCreate.restype = CFDataRef - - CoreFoundation.CFDataGetLength.argtypes = [CFDataRef] - CoreFoundation.CFDataGetLength.restype = 
CFIndex - - CoreFoundation.CFDataGetBytePtr.argtypes = [CFDataRef] - CoreFoundation.CFDataGetBytePtr.restype = c_void_p - - CoreFoundation.CFDictionaryCreate.argtypes = [ - CFAllocatorRef, - POINTER(CFTypeRef), - POINTER(CFTypeRef), - CFIndex, - CFDictionaryKeyCallBacks, - CFDictionaryValueCallBacks, - ] - CoreFoundation.CFDictionaryCreate.restype = CFDictionaryRef - - CoreFoundation.CFDictionaryGetValue.argtypes = [CFDictionaryRef, CFTypeRef] - CoreFoundation.CFDictionaryGetValue.restype = CFTypeRef - - CoreFoundation.CFArrayCreate.argtypes = [ - CFAllocatorRef, - POINTER(CFTypeRef), - CFIndex, - CFArrayCallBacks, - ] - CoreFoundation.CFArrayCreate.restype = CFArrayRef - - CoreFoundation.CFArrayCreateMutable.argtypes = [ - CFAllocatorRef, - CFIndex, - CFArrayCallBacks, - ] - CoreFoundation.CFArrayCreateMutable.restype = CFMutableArrayRef - - CoreFoundation.CFArrayAppendValue.argtypes = [CFMutableArrayRef, c_void_p] - CoreFoundation.CFArrayAppendValue.restype = None - - CoreFoundation.CFArrayGetCount.argtypes = [CFArrayRef] - CoreFoundation.CFArrayGetCount.restype = CFIndex - - CoreFoundation.CFArrayGetValueAtIndex.argtypes = [CFArrayRef, CFIndex] - CoreFoundation.CFArrayGetValueAtIndex.restype = c_void_p - - CoreFoundation.kCFAllocatorDefault = CFAllocatorRef.in_dll( - CoreFoundation, "kCFAllocatorDefault" - ) - CoreFoundation.kCFTypeArrayCallBacks = c_void_p.in_dll( - CoreFoundation, "kCFTypeArrayCallBacks" - ) - CoreFoundation.kCFTypeDictionaryKeyCallBacks = c_void_p.in_dll( - CoreFoundation, "kCFTypeDictionaryKeyCallBacks" - ) - CoreFoundation.kCFTypeDictionaryValueCallBacks = c_void_p.in_dll( - CoreFoundation, "kCFTypeDictionaryValueCallBacks" - ) - - CoreFoundation.CFTypeRef = CFTypeRef - CoreFoundation.CFArrayRef = CFArrayRef - CoreFoundation.CFStringRef = CFStringRef - CoreFoundation.CFDictionaryRef = CFDictionaryRef - -except (AttributeError): - raise ImportError("Error initializing ctypes") - - -class CFConst(object): - """ - A class object that acts as 
essentially a namespace for CoreFoundation - constants. - """ - - kCFStringEncodingUTF8 = CFStringEncoding(0x08000100) - - -class SecurityConst(object): - """ - A class object that acts as essentially a namespace for Security constants. - """ - - kSSLSessionOptionBreakOnServerAuth = 0 - - kSSLProtocol2 = 1 - kSSLProtocol3 = 2 - kTLSProtocol1 = 4 - kTLSProtocol11 = 7 - kTLSProtocol12 = 8 - # SecureTransport does not support TLS 1.3 even if there's a constant for it - kTLSProtocol13 = 10 - kTLSProtocolMaxSupported = 999 - - kSSLClientSide = 1 - kSSLStreamType = 0 - - kSecFormatPEMSequence = 10 - - kSecTrustResultInvalid = 0 - kSecTrustResultProceed = 1 - # This gap is present on purpose: this was kSecTrustResultConfirm, which - # is deprecated. - kSecTrustResultDeny = 3 - kSecTrustResultUnspecified = 4 - kSecTrustResultRecoverableTrustFailure = 5 - kSecTrustResultFatalTrustFailure = 6 - kSecTrustResultOtherError = 7 - - errSSLProtocol = -9800 - errSSLWouldBlock = -9803 - errSSLClosedGraceful = -9805 - errSSLClosedNoNotify = -9816 - errSSLClosedAbort = -9806 - - errSSLXCertChainInvalid = -9807 - errSSLCrypto = -9809 - errSSLInternal = -9810 - errSSLCertExpired = -9814 - errSSLCertNotYetValid = -9815 - errSSLUnknownRootCert = -9812 - errSSLNoRootCert = -9813 - errSSLHostNameMismatch = -9843 - errSSLPeerHandshakeFail = -9824 - errSSLPeerUserCancelled = -9839 - errSSLWeakPeerEphemeralDHKey = -9850 - errSSLServerAuthCompleted = -9841 - errSSLRecordOverflow = -9847 - - errSecVerifyFailed = -67808 - errSecNoTrustSettings = -25263 - errSecItemNotFound = -25300 - errSecInvalidTrustSettings = -25262 - - # Cipher suites. We only pick the ones our default cipher string allows. 
- # Source: https://developer.apple.com/documentation/security/1550981-ssl_cipher_suite_values - TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384 = 0xC02C - TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384 = 0xC030 - TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256 = 0xC02B - TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256 = 0xC02F - TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256 = 0xCCA9 - TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256 = 0xCCA8 - TLS_DHE_RSA_WITH_AES_256_GCM_SHA384 = 0x009F - TLS_DHE_RSA_WITH_AES_128_GCM_SHA256 = 0x009E - TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA384 = 0xC024 - TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384 = 0xC028 - TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA = 0xC00A - TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA = 0xC014 - TLS_DHE_RSA_WITH_AES_256_CBC_SHA256 = 0x006B - TLS_DHE_RSA_WITH_AES_256_CBC_SHA = 0x0039 - TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256 = 0xC023 - TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256 = 0xC027 - TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA = 0xC009 - TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA = 0xC013 - TLS_DHE_RSA_WITH_AES_128_CBC_SHA256 = 0x0067 - TLS_DHE_RSA_WITH_AES_128_CBC_SHA = 0x0033 - TLS_RSA_WITH_AES_256_GCM_SHA384 = 0x009D - TLS_RSA_WITH_AES_128_GCM_SHA256 = 0x009C - TLS_RSA_WITH_AES_256_CBC_SHA256 = 0x003D - TLS_RSA_WITH_AES_128_CBC_SHA256 = 0x003C - TLS_RSA_WITH_AES_256_CBC_SHA = 0x0035 - TLS_RSA_WITH_AES_128_CBC_SHA = 0x002F - TLS_AES_128_GCM_SHA256 = 0x1301 - TLS_AES_256_GCM_SHA384 = 0x1302 - TLS_AES_128_CCM_8_SHA256 = 0x1305 - TLS_AES_128_CCM_SHA256 = 0x1304 diff --git a/infrastructure/sandbox/Data/lambda/urllib3/contrib/_securetransport/low_level.py b/infrastructure/sandbox/Data/lambda/urllib3/contrib/_securetransport/low_level.py deleted file mode 100644 index fa0b245d2..000000000 --- a/infrastructure/sandbox/Data/lambda/urllib3/contrib/_securetransport/low_level.py +++ /dev/null @@ -1,397 +0,0 @@ -""" -Low-level helpers for the SecureTransport bindings. - -These are Python functions that are not directly related to the high-level APIs -but are necessary to get them to work. 
They include a whole bunch of low-level -CoreFoundation messing about and memory management. The concerns in this module -are almost entirely about trying to avoid memory leaks and providing -appropriate and useful assistance to the higher-level code. -""" -import base64 -import ctypes -import itertools -import os -import re -import ssl -import struct -import tempfile - -from .bindings import CFConst, CoreFoundation, Security - -# This regular expression is used to grab PEM data out of a PEM bundle. -_PEM_CERTS_RE = re.compile( - b"-----BEGIN CERTIFICATE-----\n(.*?)\n-----END CERTIFICATE-----", re.DOTALL -) - - -def _cf_data_from_bytes(bytestring): - """ - Given a bytestring, create a CFData object from it. This CFData object must - be CFReleased by the caller. - """ - return CoreFoundation.CFDataCreate( - CoreFoundation.kCFAllocatorDefault, bytestring, len(bytestring) - ) - - -def _cf_dictionary_from_tuples(tuples): - """ - Given a list of Python tuples, create an associated CFDictionary. - """ - dictionary_size = len(tuples) - - # We need to get the dictionary keys and values out in the same order. - keys = (t[0] for t in tuples) - values = (t[1] for t in tuples) - cf_keys = (CoreFoundation.CFTypeRef * dictionary_size)(*keys) - cf_values = (CoreFoundation.CFTypeRef * dictionary_size)(*values) - - return CoreFoundation.CFDictionaryCreate( - CoreFoundation.kCFAllocatorDefault, - cf_keys, - cf_values, - dictionary_size, - CoreFoundation.kCFTypeDictionaryKeyCallBacks, - CoreFoundation.kCFTypeDictionaryValueCallBacks, - ) - - -def _cfstr(py_bstr): - """ - Given a Python binary data, create a CFString. - The string must be CFReleased by the caller. - """ - c_str = ctypes.c_char_p(py_bstr) - cf_str = CoreFoundation.CFStringCreateWithCString( - CoreFoundation.kCFAllocatorDefault, - c_str, - CFConst.kCFStringEncodingUTF8, - ) - return cf_str - - -def _create_cfstring_array(lst): - """ - Given a list of Python binary data, create an associated CFMutableArray. 
- The array must be CFReleased by the caller. - - Raises an ssl.SSLError on failure. - """ - cf_arr = None - try: - cf_arr = CoreFoundation.CFArrayCreateMutable( - CoreFoundation.kCFAllocatorDefault, - 0, - ctypes.byref(CoreFoundation.kCFTypeArrayCallBacks), - ) - if not cf_arr: - raise MemoryError("Unable to allocate memory!") - for item in lst: - cf_str = _cfstr(item) - if not cf_str: - raise MemoryError("Unable to allocate memory!") - try: - CoreFoundation.CFArrayAppendValue(cf_arr, cf_str) - finally: - CoreFoundation.CFRelease(cf_str) - except BaseException as e: - if cf_arr: - CoreFoundation.CFRelease(cf_arr) - raise ssl.SSLError("Unable to allocate array: %s" % (e,)) - return cf_arr - - -def _cf_string_to_unicode(value): - """ - Creates a Unicode string from a CFString object. Used entirely for error - reporting. - - Yes, it annoys me quite a lot that this function is this complex. - """ - value_as_void_p = ctypes.cast(value, ctypes.POINTER(ctypes.c_void_p)) - - string = CoreFoundation.CFStringGetCStringPtr( - value_as_void_p, CFConst.kCFStringEncodingUTF8 - ) - if string is None: - buffer = ctypes.create_string_buffer(1024) - result = CoreFoundation.CFStringGetCString( - value_as_void_p, buffer, 1024, CFConst.kCFStringEncodingUTF8 - ) - if not result: - raise OSError("Error copying C string from CFStringRef") - string = buffer.value - if string is not None: - string = string.decode("utf-8") - return string - - -def _assert_no_error(error, exception_class=None): - """ - Checks the return code and throws an exception if there is an error to - report - """ - if error == 0: - return - - cf_error_string = Security.SecCopyErrorMessageString(error, None) - output = _cf_string_to_unicode(cf_error_string) - CoreFoundation.CFRelease(cf_error_string) - - if output is None or output == u"": - output = u"OSStatus %s" % error - - if exception_class is None: - exception_class = ssl.SSLError - - raise exception_class(output) - - -def _cert_array_from_pem(pem_bundle): - """ 
- Given a bundle of certs in PEM format, turns them into a CFArray of certs - that can be used to validate a cert chain. - """ - # Normalize the PEM bundle's line endings. - pem_bundle = pem_bundle.replace(b"\r\n", b"\n") - - der_certs = [ - base64.b64decode(match.group(1)) for match in _PEM_CERTS_RE.finditer(pem_bundle) - ] - if not der_certs: - raise ssl.SSLError("No root certificates specified") - - cert_array = CoreFoundation.CFArrayCreateMutable( - CoreFoundation.kCFAllocatorDefault, - 0, - ctypes.byref(CoreFoundation.kCFTypeArrayCallBacks), - ) - if not cert_array: - raise ssl.SSLError("Unable to allocate memory!") - - try: - for der_bytes in der_certs: - certdata = _cf_data_from_bytes(der_bytes) - if not certdata: - raise ssl.SSLError("Unable to allocate memory!") - cert = Security.SecCertificateCreateWithData( - CoreFoundation.kCFAllocatorDefault, certdata - ) - CoreFoundation.CFRelease(certdata) - if not cert: - raise ssl.SSLError("Unable to build cert object!") - - CoreFoundation.CFArrayAppendValue(cert_array, cert) - CoreFoundation.CFRelease(cert) - except Exception: - # We need to free the array before the exception bubbles further. - # We only want to do that if an error occurs: otherwise, the caller - # should free. - CoreFoundation.CFRelease(cert_array) - raise - - return cert_array - - -def _is_cert(item): - """ - Returns True if a given CFTypeRef is a certificate. - """ - expected = Security.SecCertificateGetTypeID() - return CoreFoundation.CFGetTypeID(item) == expected - - -def _is_identity(item): - """ - Returns True if a given CFTypeRef is an identity. - """ - expected = Security.SecIdentityGetTypeID() - return CoreFoundation.CFGetTypeID(item) == expected - - -def _temporary_keychain(): - """ - This function creates a temporary Mac keychain that we can use to work with - credentials. This keychain uses a one-time password and a temporary file to - store the data. We expect to have one keychain per socket. 
The returned - SecKeychainRef must be freed by the caller, including calling - SecKeychainDelete. - - Returns a tuple of the SecKeychainRef and the path to the temporary - directory that contains it. - """ - # Unfortunately, SecKeychainCreate requires a path to a keychain. This - # means we cannot use mkstemp to use a generic temporary file. Instead, - # we're going to create a temporary directory and a filename to use there. - # This filename will be 8 random bytes expanded into base64. We also need - # some random bytes to password-protect the keychain we're creating, so we - # ask for 40 random bytes. - random_bytes = os.urandom(40) - filename = base64.b16encode(random_bytes[:8]).decode("utf-8") - password = base64.b16encode(random_bytes[8:]) # Must be valid UTF-8 - tempdirectory = tempfile.mkdtemp() - - keychain_path = os.path.join(tempdirectory, filename).encode("utf-8") - - # We now want to create the keychain itself. - keychain = Security.SecKeychainRef() - status = Security.SecKeychainCreate( - keychain_path, len(password), password, False, None, ctypes.byref(keychain) - ) - _assert_no_error(status) - - # Having created the keychain, we want to pass it off to the caller. - return keychain, tempdirectory - - -def _load_items_from_file(keychain, path): - """ - Given a single file, loads all the trust objects from it into arrays and - the keychain. - Returns a tuple of lists: the first list is a list of identities, the - second a list of certs. 
- """ - certificates = [] - identities = [] - result_array = None - - with open(path, "rb") as f: - raw_filedata = f.read() - - try: - filedata = CoreFoundation.CFDataCreate( - CoreFoundation.kCFAllocatorDefault, raw_filedata, len(raw_filedata) - ) - result_array = CoreFoundation.CFArrayRef() - result = Security.SecItemImport( - filedata, # cert data - None, # Filename, leaving it out for now - None, # What the type of the file is, we don't care - None, # what's in the file, we don't care - 0, # import flags - None, # key params, can include passphrase in the future - keychain, # The keychain to insert into - ctypes.byref(result_array), # Results - ) - _assert_no_error(result) - - # A CFArray is not very useful to us as an intermediary - # representation, so we are going to extract the objects we want - # and then free the array. We don't need to keep hold of keys: the - # keychain already has them! - result_count = CoreFoundation.CFArrayGetCount(result_array) - for index in range(result_count): - item = CoreFoundation.CFArrayGetValueAtIndex(result_array, index) - item = ctypes.cast(item, CoreFoundation.CFTypeRef) - - if _is_cert(item): - CoreFoundation.CFRetain(item) - certificates.append(item) - elif _is_identity(item): - CoreFoundation.CFRetain(item) - identities.append(item) - finally: - if result_array: - CoreFoundation.CFRelease(result_array) - - CoreFoundation.CFRelease(filedata) - - return (identities, certificates) - - -def _load_client_cert_chain(keychain, *paths): - """ - Load certificates and maybe keys from a number of files. Has the end goal - of returning a CFArray containing one SecIdentityRef, and then zero or more - SecCertificateRef objects, suitable for use as a client certificate trust - chain. - """ - # Ok, the strategy. - # - # This relies on knowing that macOS will not give you a SecIdentityRef - # unless you have imported a key into a keychain. 
This is a somewhat - # artificial limitation of macOS (for example, it doesn't necessarily - # affect iOS), but there is nothing inside Security.framework that lets you - # get a SecIdentityRef without having a key in a keychain. - # - # So the policy here is we take all the files and iterate them in order. - # Each one will use SecItemImport to have one or more objects loaded from - # it. We will also point at a keychain that macOS can use to work with the - # private key. - # - # Once we have all the objects, we'll check what we actually have. If we - # already have a SecIdentityRef in hand, fab: we'll use that. Otherwise, - # we'll take the first certificate (which we assume to be our leaf) and - # ask the keychain to give us a SecIdentityRef with that cert's associated - # key. - # - # We'll then return a CFArray containing the trust chain: one - # SecIdentityRef and then zero-or-more SecCertificateRef objects. The - # responsibility for freeing this CFArray will be with the caller. This - # CFArray must remain alive for the entire connection, so in practice it - # will be stored with a single SSLSocket, along with the reference to the - # keychain. - certificates = [] - identities = [] - - # Filter out bad paths. - paths = (path for path in paths if path) - - try: - for file_path in paths: - new_identities, new_certs = _load_items_from_file(keychain, file_path) - identities.extend(new_identities) - certificates.extend(new_certs) - - # Ok, we have everything. The question is: do we have an identity? If - # not, we want to grab one from the first cert we have. - if not identities: - new_identity = Security.SecIdentityRef() - status = Security.SecIdentityCreateWithCertificate( - keychain, certificates[0], ctypes.byref(new_identity) - ) - _assert_no_error(status) - identities.append(new_identity) - - # We now want to release the original certificate, as we no longer - # need it. 
- CoreFoundation.CFRelease(certificates.pop(0)) - - # We now need to build a new CFArray that holds the trust chain. - trust_chain = CoreFoundation.CFArrayCreateMutable( - CoreFoundation.kCFAllocatorDefault, - 0, - ctypes.byref(CoreFoundation.kCFTypeArrayCallBacks), - ) - for item in itertools.chain(identities, certificates): - # ArrayAppendValue does a CFRetain on the item. That's fine, - # because the finally block will release our other refs to them. - CoreFoundation.CFArrayAppendValue(trust_chain, item) - - return trust_chain - finally: - for obj in itertools.chain(identities, certificates): - CoreFoundation.CFRelease(obj) - - -TLS_PROTOCOL_VERSIONS = { - "SSLv2": (0, 2), - "SSLv3": (3, 0), - "TLSv1": (3, 1), - "TLSv1.1": (3, 2), - "TLSv1.2": (3, 3), -} - - -def _build_tls_unknown_ca_alert(version): - """ - Builds a TLS alert record for an unknown CA. - """ - ver_maj, ver_min = TLS_PROTOCOL_VERSIONS[version] - severity_fatal = 0x02 - description_unknown_ca = 0x30 - msg = struct.pack(">BB", severity_fatal, description_unknown_ca) - msg_len = len(msg) - record_type_alert = 0x15 - record = struct.pack(">BBBH", record_type_alert, ver_maj, ver_min, msg_len) + msg - return record diff --git a/infrastructure/sandbox/Data/lambda/urllib3/contrib/appengine.py b/infrastructure/sandbox/Data/lambda/urllib3/contrib/appengine.py deleted file mode 100644 index f91bdd6e7..000000000 --- a/infrastructure/sandbox/Data/lambda/urllib3/contrib/appengine.py +++ /dev/null @@ -1,314 +0,0 @@ -""" -This module provides a pool manager that uses Google App Engine's -`URLFetch Service `_. 
- -Example usage:: - - from urllib3 import PoolManager - from urllib3.contrib.appengine import AppEngineManager, is_appengine_sandbox - - if is_appengine_sandbox(): - # AppEngineManager uses AppEngine's URLFetch API behind the scenes - http = AppEngineManager() - else: - # PoolManager uses a socket-level API behind the scenes - http = PoolManager() - - r = http.request('GET', 'https://google.com/') - -There are `limitations `_ to the URLFetch service and it may not be -the best choice for your application. There are three options for using -urllib3 on Google App Engine: - -1. You can use :class:`AppEngineManager` with URLFetch. URLFetch is - cost-effective in many circumstances as long as your usage is within the - limitations. -2. You can use a normal :class:`~urllib3.PoolManager` by enabling sockets. - Sockets also have `limitations and restrictions - `_ and have a lower free quota than URLFetch. - To use sockets, be sure to specify the following in your ``app.yaml``:: - - env_variables: - GAE_USE_SOCKETS_HTTPLIB : 'true' - -3. If you are using `App Engine Flexible -`_, you can use the standard -:class:`PoolManager` without any configuration or special environment variables. -""" - -from __future__ import absolute_import - -import io -import logging -import warnings - -from ..exceptions import ( - HTTPError, - HTTPWarning, - MaxRetryError, - ProtocolError, - SSLError, - TimeoutError, -) -from ..packages.six.moves.urllib.parse import urljoin -from ..request import RequestMethods -from ..response import HTTPResponse -from ..util.retry import Retry -from ..util.timeout import Timeout -from . import _appengine_environ - -try: - from google.appengine.api import urlfetch -except ImportError: - urlfetch = None - - -log = logging.getLogger(__name__) - - -class AppEnginePlatformWarning(HTTPWarning): - pass - - -class AppEnginePlatformError(HTTPError): - pass - - -class AppEngineManager(RequestMethods): - """ - Connection manager for Google App Engine sandbox applications. 
- - This manager uses the URLFetch service directly instead of using the - emulated httplib, and is subject to URLFetch limitations as described in - the App Engine documentation `here - `_. - - Notably it will raise an :class:`AppEnginePlatformError` if: - * URLFetch is not available. - * If you attempt to use this on App Engine Flexible, as full socket - support is available. - * If a request size is more than 10 megabytes. - * If a response size is more than 32 megabytes. - * If you use an unsupported request method such as OPTIONS. - - Beyond those cases, it will raise normal urllib3 errors. - """ - - def __init__( - self, - headers=None, - retries=None, - validate_certificate=True, - urlfetch_retries=True, - ): - if not urlfetch: - raise AppEnginePlatformError( - "URLFetch is not available in this environment." - ) - - warnings.warn( - "urllib3 is using URLFetch on Google App Engine sandbox instead " - "of sockets. To use sockets directly instead of URLFetch see " - "https://urllib3.readthedocs.io/en/1.26.x/reference/urllib3.contrib.html.", - AppEnginePlatformWarning, - ) - - RequestMethods.__init__(self, headers) - self.validate_certificate = validate_certificate - self.urlfetch_retries = urlfetch_retries - - self.retries = retries or Retry.DEFAULT - - def __enter__(self): - return self - - def __exit__(self, exc_type, exc_val, exc_tb): - # Return False to re-raise any potential exceptions - return False - - def urlopen( - self, - method, - url, - body=None, - headers=None, - retries=None, - redirect=True, - timeout=Timeout.DEFAULT_TIMEOUT, - **response_kw - ): - - retries = self._get_retries(retries, redirect) - - try: - follow_redirects = redirect and retries.redirect != 0 and retries.total - response = urlfetch.fetch( - url, - payload=body, - method=method, - headers=headers or {}, - allow_truncated=False, - follow_redirects=self.urlfetch_retries and follow_redirects, - deadline=self._get_absolute_timeout(timeout), - 
validate_certificate=self.validate_certificate, - ) - except urlfetch.DeadlineExceededError as e: - raise TimeoutError(self, e) - - except urlfetch.InvalidURLError as e: - if "too large" in str(e): - raise AppEnginePlatformError( - "URLFetch request too large, URLFetch only " - "supports requests up to 10mb in size.", - e, - ) - raise ProtocolError(e) - - except urlfetch.DownloadError as e: - if "Too many redirects" in str(e): - raise MaxRetryError(self, url, reason=e) - raise ProtocolError(e) - - except urlfetch.ResponseTooLargeError as e: - raise AppEnginePlatformError( - "URLFetch response too large, URLFetch only supports " - "responses up to 32mb in size.", - e, - ) - - except urlfetch.SSLCertificateError as e: - raise SSLError(e) - - except urlfetch.InvalidMethodError as e: - raise AppEnginePlatformError( - "URLFetch does not support method: %s" % method, e - ) - - http_response = self._urlfetch_response_to_http_response( - response, retries=retries, **response_kw - ) - - # Handle redirect? - redirect_location = redirect and http_response.get_redirect_location() - if redirect_location: - # Check for redirect response - if self.urlfetch_retries and retries.raise_on_redirect: - raise MaxRetryError(self, url, "too many redirects") - else: - if http_response.status == 303: - method = "GET" - - try: - retries = retries.increment( - method, url, response=http_response, _pool=self - ) - except MaxRetryError: - if retries.raise_on_redirect: - raise MaxRetryError(self, url, "too many redirects") - return http_response - - retries.sleep_for_retry(http_response) - log.debug("Redirecting %s -> %s", url, redirect_location) - redirect_url = urljoin(url, redirect_location) - return self.urlopen( - method, - redirect_url, - body, - headers, - retries=retries, - redirect=redirect, - timeout=timeout, - **response_kw - ) - - # Check if we should retry the HTTP response. 
- has_retry_after = bool(http_response.getheader("Retry-After")) - if retries.is_retry(method, http_response.status, has_retry_after): - retries = retries.increment(method, url, response=http_response, _pool=self) - log.debug("Retry: %s", url) - retries.sleep(http_response) - return self.urlopen( - method, - url, - body=body, - headers=headers, - retries=retries, - redirect=redirect, - timeout=timeout, - **response_kw - ) - - return http_response - - def _urlfetch_response_to_http_response(self, urlfetch_resp, **response_kw): - - if is_prod_appengine(): - # Production GAE handles deflate encoding automatically, but does - # not remove the encoding header. - content_encoding = urlfetch_resp.headers.get("content-encoding") - - if content_encoding == "deflate": - del urlfetch_resp.headers["content-encoding"] - - transfer_encoding = urlfetch_resp.headers.get("transfer-encoding") - # We have a full response's content, - # so let's make sure we don't report ourselves as chunked data. - if transfer_encoding == "chunked": - encodings = transfer_encoding.split(",") - encodings.remove("chunked") - urlfetch_resp.headers["transfer-encoding"] = ",".join(encodings) - - original_response = HTTPResponse( - # In order for decoding to work, we must present the content as - # a file-like object. - body=io.BytesIO(urlfetch_resp.content), - msg=urlfetch_resp.header_msg, - headers=urlfetch_resp.headers, - status=urlfetch_resp.status_code, - **response_kw - ) - - return HTTPResponse( - body=io.BytesIO(urlfetch_resp.content), - headers=urlfetch_resp.headers, - status=urlfetch_resp.status_code, - original_response=original_response, - **response_kw - ) - - def _get_absolute_timeout(self, timeout): - if timeout is Timeout.DEFAULT_TIMEOUT: - return None # Defer to URLFetch's default. 
- if isinstance(timeout, Timeout): - if timeout._read is not None or timeout._connect is not None: - warnings.warn( - "URLFetch does not support granular timeout settings, " - "reverting to total or default URLFetch timeout.", - AppEnginePlatformWarning, - ) - return timeout.total - return timeout - - def _get_retries(self, retries, redirect): - if not isinstance(retries, Retry): - retries = Retry.from_int(retries, redirect=redirect, default=self.retries) - - if retries.connect or retries.read or retries.redirect: - warnings.warn( - "URLFetch only supports total retries and does not " - "recognize connect, read, or redirect retry parameters.", - AppEnginePlatformWarning, - ) - - return retries - - -# Alias methods from _appengine_environ to maintain public API interface. - -is_appengine = _appengine_environ.is_appengine -is_appengine_sandbox = _appengine_environ.is_appengine_sandbox -is_local_appengine = _appengine_environ.is_local_appengine -is_prod_appengine = _appengine_environ.is_prod_appengine -is_prod_appengine_mvms = _appengine_environ.is_prod_appengine_mvms diff --git a/infrastructure/sandbox/Data/lambda/urllib3/contrib/ntlmpool.py b/infrastructure/sandbox/Data/lambda/urllib3/contrib/ntlmpool.py deleted file mode 100644 index 41a8fd174..000000000 --- a/infrastructure/sandbox/Data/lambda/urllib3/contrib/ntlmpool.py +++ /dev/null @@ -1,130 +0,0 @@ -""" -NTLM authenticating pool, contributed by erikcederstran - -Issue #10, see: http://code.google.com/p/urllib3/issues/detail?id=10 -""" -from __future__ import absolute_import - -import warnings -from logging import getLogger - -from ntlm import ntlm - -from .. import HTTPSConnectionPool -from ..packages.six.moves.http_client import HTTPSConnection - -warnings.warn( - "The 'urllib3.contrib.ntlmpool' module is deprecated and will be removed " - "in urllib3 v2.0 release, urllib3 is not able to support it properly due " - "to reasons listed in issue: https://github.com/urllib3/urllib3/issues/2282. 
" - "If you are a user of this module please comment in the mentioned issue.", - DeprecationWarning, -) - -log = getLogger(__name__) - - -class NTLMConnectionPool(HTTPSConnectionPool): - """ - Implements an NTLM authentication version of an urllib3 connection pool - """ - - scheme = "https" - - def __init__(self, user, pw, authurl, *args, **kwargs): - """ - authurl is a random URL on the server that is protected by NTLM. - user is the Windows user, probably in the DOMAIN\\username format. - pw is the password for the user. - """ - super(NTLMConnectionPool, self).__init__(*args, **kwargs) - self.authurl = authurl - self.rawuser = user - user_parts = user.split("\\", 1) - self.domain = user_parts[0].upper() - self.user = user_parts[1] - self.pw = pw - - def _new_conn(self): - # Performs the NTLM handshake that secures the connection. The socket - # must be kept open while requests are performed. - self.num_connections += 1 - log.debug( - "Starting NTLM HTTPS connection no. %d: https://%s%s", - self.num_connections, - self.host, - self.authurl, - ) - - headers = {"Connection": "Keep-Alive"} - req_header = "Authorization" - resp_header = "www-authenticate" - - conn = HTTPSConnection(host=self.host, port=self.port) - - # Send negotiation message - headers[req_header] = "NTLM %s" % ntlm.create_NTLM_NEGOTIATE_MESSAGE( - self.rawuser - ) - log.debug("Request headers: %s", headers) - conn.request("GET", self.authurl, None, headers) - res = conn.getresponse() - reshdr = dict(res.getheaders()) - log.debug("Response status: %s %s", res.status, res.reason) - log.debug("Response headers: %s", reshdr) - log.debug("Response data: %s [...]", res.read(100)) - - # Remove the reference to the socket, so that it can not be closed by - # the response object (we want to keep the socket open) - res.fp = None - - # Server should respond with a challenge message - auth_header_values = reshdr[resp_header].split(", ") - auth_header_value = None - for s in auth_header_values: - if s[:5] == 
"NTLM ": - auth_header_value = s[5:] - if auth_header_value is None: - raise Exception( - "Unexpected %s response header: %s" % (resp_header, reshdr[resp_header]) - ) - - # Send authentication message - ServerChallenge, NegotiateFlags = ntlm.parse_NTLM_CHALLENGE_MESSAGE( - auth_header_value - ) - auth_msg = ntlm.create_NTLM_AUTHENTICATE_MESSAGE( - ServerChallenge, self.user, self.domain, self.pw, NegotiateFlags - ) - headers[req_header] = "NTLM %s" % auth_msg - log.debug("Request headers: %s", headers) - conn.request("GET", self.authurl, None, headers) - res = conn.getresponse() - log.debug("Response status: %s %s", res.status, res.reason) - log.debug("Response headers: %s", dict(res.getheaders())) - log.debug("Response data: %s [...]", res.read()[:100]) - if res.status != 200: - if res.status == 401: - raise Exception("Server rejected request: wrong username or password") - raise Exception("Wrong server response: %s %s" % (res.status, res.reason)) - - res.fp = None - log.debug("Connection established") - return conn - - def urlopen( - self, - method, - url, - body=None, - headers=None, - retries=3, - redirect=True, - assert_same_host=True, - ): - if headers is None: - headers = {} - headers["Connection"] = "Keep-Alive" - return super(NTLMConnectionPool, self).urlopen( - method, url, body, headers, retries, redirect, assert_same_host - ) diff --git a/infrastructure/sandbox/Data/lambda/urllib3/contrib/pyopenssl.py b/infrastructure/sandbox/Data/lambda/urllib3/contrib/pyopenssl.py deleted file mode 100644 index 50a07d596..000000000 --- a/infrastructure/sandbox/Data/lambda/urllib3/contrib/pyopenssl.py +++ /dev/null @@ -1,519 +0,0 @@ -""" -TLS with SNI_-support for Python 2. Follow these instructions if you would -like to verify TLS certificates in Python 2. Note, the default libraries do -*not* do certificate checking; you need to do additional work to validate -certificates yourself. 
- -This needs the following packages installed: - -* `pyOpenSSL`_ (tested with 16.0.0) -* `cryptography`_ (minimum 1.3.4, from pyopenssl) -* `idna`_ (minimum 2.0, from cryptography) - -However, pyopenssl depends on cryptography, which depends on idna, so while we -use all three directly here we end up having relatively few packages required. - -You can install them with the following command: - -.. code-block:: bash - - $ python -m pip install pyopenssl cryptography idna - -To activate certificate checking, call -:func:`~urllib3.contrib.pyopenssl.inject_into_urllib3` from your Python code -before you begin making HTTP requests. This can be done in a ``sitecustomize`` -module, or at any other time before your application begins using ``urllib3``, -like this: - -.. code-block:: python - - try: - import urllib3.contrib.pyopenssl - urllib3.contrib.pyopenssl.inject_into_urllib3() - except ImportError: - pass - -Now you can use :mod:`urllib3` as you normally would, and it will support SNI -when the required modules are installed. - -Activating this module also has the positive side effect of disabling SSL/TLS -compression in Python 2 (see `CRIME attack`_). - -.. _sni: https://en.wikipedia.org/wiki/Server_Name_Indication -.. _crime attack: https://en.wikipedia.org/wiki/CRIME_(security_exploit) -.. _pyopenssl: https://www.pyopenssl.org -.. _cryptography: https://cryptography.io -.. 
_idna: https://github.com/kjd/idna -""" -from __future__ import absolute_import - -import OpenSSL.SSL -from cryptography import x509 -from cryptography.hazmat.backends.openssl import backend as openssl_backend -from cryptography.hazmat.backends.openssl.x509 import _Certificate - -try: - from cryptography.x509 import UnsupportedExtension -except ImportError: - # UnsupportedExtension is gone in cryptography >= 2.1.0 - class UnsupportedExtension(Exception): - pass - - -from io import BytesIO -from socket import error as SocketError -from socket import timeout - -try: # Platform-specific: Python 2 - from socket import _fileobject -except ImportError: # Platform-specific: Python 3 - _fileobject = None - from ..packages.backports.makefile import backport_makefile - -import logging -import ssl -import sys -import warnings - -from .. import util -from ..packages import six -from ..util.ssl_ import PROTOCOL_TLS_CLIENT - -warnings.warn( - "'urllib3.contrib.pyopenssl' module is deprecated and will be removed " - "in a future release of urllib3 2.x. Read more in this issue: " - "https://github.com/urllib3/urllib3/issues/2680", - category=DeprecationWarning, - stacklevel=2, -) - -__all__ = ["inject_into_urllib3", "extract_from_urllib3"] - -# SNI always works. -HAS_SNI = True - -# Map from urllib3 to PyOpenSSL compatible parameter-values. 
-_openssl_versions = { - util.PROTOCOL_TLS: OpenSSL.SSL.SSLv23_METHOD, - PROTOCOL_TLS_CLIENT: OpenSSL.SSL.SSLv23_METHOD, - ssl.PROTOCOL_TLSv1: OpenSSL.SSL.TLSv1_METHOD, -} - -if hasattr(ssl, "PROTOCOL_SSLv3") and hasattr(OpenSSL.SSL, "SSLv3_METHOD"): - _openssl_versions[ssl.PROTOCOL_SSLv3] = OpenSSL.SSL.SSLv3_METHOD - -if hasattr(ssl, "PROTOCOL_TLSv1_1") and hasattr(OpenSSL.SSL, "TLSv1_1_METHOD"): - _openssl_versions[ssl.PROTOCOL_TLSv1_1] = OpenSSL.SSL.TLSv1_1_METHOD - -if hasattr(ssl, "PROTOCOL_TLSv1_2") and hasattr(OpenSSL.SSL, "TLSv1_2_METHOD"): - _openssl_versions[ssl.PROTOCOL_TLSv1_2] = OpenSSL.SSL.TLSv1_2_METHOD - - -_stdlib_to_openssl_verify = { - ssl.CERT_NONE: OpenSSL.SSL.VERIFY_NONE, - ssl.CERT_OPTIONAL: OpenSSL.SSL.VERIFY_PEER, - ssl.CERT_REQUIRED: OpenSSL.SSL.VERIFY_PEER - + OpenSSL.SSL.VERIFY_FAIL_IF_NO_PEER_CERT, -} -_openssl_to_stdlib_verify = dict((v, k) for k, v in _stdlib_to_openssl_verify.items()) - -# OpenSSL will only write 16K at a time -SSL_WRITE_BLOCKSIZE = 16384 - -orig_util_HAS_SNI = util.HAS_SNI -orig_util_SSLContext = util.ssl_.SSLContext - - -log = logging.getLogger(__name__) - - -def inject_into_urllib3(): - "Monkey-patch urllib3 with PyOpenSSL-backed SSL-support." - - _validate_dependencies_met() - - util.SSLContext = PyOpenSSLContext - util.ssl_.SSLContext = PyOpenSSLContext - util.HAS_SNI = HAS_SNI - util.ssl_.HAS_SNI = HAS_SNI - util.IS_PYOPENSSL = True - util.ssl_.IS_PYOPENSSL = True - - -def extract_from_urllib3(): - "Undo monkey-patching by :func:`inject_into_urllib3`." - - util.SSLContext = orig_util_SSLContext - util.ssl_.SSLContext = orig_util_SSLContext - util.HAS_SNI = orig_util_HAS_SNI - util.ssl_.HAS_SNI = orig_util_HAS_SNI - util.IS_PYOPENSSL = False - util.ssl_.IS_PYOPENSSL = False - - -def _validate_dependencies_met(): - """ - Verifies that PyOpenSSL's package-level dependencies have been met. - Throws `ImportError` if they are not met. 
- """ - # Method added in `cryptography==1.1`; not available in older versions - from cryptography.x509.extensions import Extensions - - if getattr(Extensions, "get_extension_for_class", None) is None: - raise ImportError( - "'cryptography' module missing required functionality. " - "Try upgrading to v1.3.4 or newer." - ) - - # pyOpenSSL 0.14 and above use cryptography for OpenSSL bindings. The _x509 - # attribute is only present on those versions. - from OpenSSL.crypto import X509 - - x509 = X509() - if getattr(x509, "_x509", None) is None: - raise ImportError( - "'pyOpenSSL' module missing required functionality. " - "Try upgrading to v0.14 or newer." - ) - - -def _dnsname_to_stdlib(name): - """ - Converts a dNSName SubjectAlternativeName field to the form used by the - standard library on the given Python version. - - Cryptography produces a dNSName as a unicode string that was idna-decoded - from ASCII bytes. We need to idna-encode that string to get it back, and - then on Python 3 we also need to convert to unicode via UTF-8 (the stdlib - uses PyUnicode_FromStringAndSize on it, which decodes via UTF-8). - - If the name cannot be idna-encoded then we return None signalling that - the name given should be skipped. - """ - - def idna_encode(name): - """ - Borrowed wholesale from the Python Cryptography Project. It turns out - that we can't just safely call `idna.encode`: it can explode for - wildcard names. This avoids that problem. - """ - import idna - - try: - for prefix in [u"*.", u"."]: - if name.startswith(prefix): - name = name[len(prefix) :] - return prefix.encode("ascii") + idna.encode(name) - return idna.encode(name) - except idna.core.IDNAError: - return None - - # Don't send IPv6 addresses through the IDNA encoder. 
- if ":" in name: - return name - - name = idna_encode(name) - if name is None: - return None - elif sys.version_info >= (3, 0): - name = name.decode("utf-8") - return name - - -def get_subj_alt_name(peer_cert): - """ - Given an PyOpenSSL certificate, provides all the subject alternative names. - """ - # Pass the cert to cryptography, which has much better APIs for this. - if hasattr(peer_cert, "to_cryptography"): - cert = peer_cert.to_cryptography() - else: - # This is technically using private APIs, but should work across all - # relevant versions before PyOpenSSL got a proper API for this. - cert = _Certificate(openssl_backend, peer_cert._x509) - - # We want to find the SAN extension. Ask Cryptography to locate it (it's - # faster than looping in Python) - try: - ext = cert.extensions.get_extension_for_class(x509.SubjectAlternativeName).value - except x509.ExtensionNotFound: - # No such extension, return the empty list. - return [] - except ( - x509.DuplicateExtension, - UnsupportedExtension, - x509.UnsupportedGeneralNameType, - UnicodeError, - ) as e: - # A problem has been found with the quality of the certificate. Assume - # no SAN field is present. - log.warning( - "A problem was encountered with the certificate that prevented " - "urllib3 from finding the SubjectAlternativeName field. This can " - "affect certificate validation. The error was %s", - e, - ) - return [] - - # We want to return dNSName and iPAddress fields. We need to cast the IPs - # back to strings because the match_hostname function wants them as - # strings. - # Sadly the DNS names need to be idna encoded and then, on Python 3, UTF-8 - # decoded. This is pretty frustrating, but that's what the standard library - # does with certificates, and so we need to attempt to do the same. - # We also want to skip over names which cannot be idna encoded. 
- names = [ - ("DNS", name) - for name in map(_dnsname_to_stdlib, ext.get_values_for_type(x509.DNSName)) - if name is not None - ] - names.extend( - ("IP Address", str(name)) for name in ext.get_values_for_type(x509.IPAddress) - ) - - return names - - -class WrappedSocket(object): - """API-compatibility wrapper for Python OpenSSL's Connection-class. - - Note: _makefile_refs, _drop() and _reuse() are needed for the garbage - collector of pypy. - """ - - def __init__(self, connection, socket, suppress_ragged_eofs=True): - self.connection = connection - self.socket = socket - self.suppress_ragged_eofs = suppress_ragged_eofs - self._makefile_refs = 0 - self._closed = False - - def fileno(self): - return self.socket.fileno() - - # Copy-pasted from Python 3.5 source code - def _decref_socketios(self): - if self._makefile_refs > 0: - self._makefile_refs -= 1 - if self._closed: - self.close() - - def recv(self, *args, **kwargs): - try: - data = self.connection.recv(*args, **kwargs) - except OpenSSL.SSL.SysCallError as e: - if self.suppress_ragged_eofs and e.args == (-1, "Unexpected EOF"): - return b"" - else: - raise SocketError(str(e)) - except OpenSSL.SSL.ZeroReturnError: - if self.connection.get_shutdown() == OpenSSL.SSL.RECEIVED_SHUTDOWN: - return b"" - else: - raise - except OpenSSL.SSL.WantReadError: - if not util.wait_for_read(self.socket, self.socket.gettimeout()): - raise timeout("The read operation timed out") - else: - return self.recv(*args, **kwargs) - - # TLS 1.3 post-handshake authentication - except OpenSSL.SSL.Error as e: - raise ssl.SSLError("read error: %r" % e) - else: - return data - - def recv_into(self, *args, **kwargs): - try: - return self.connection.recv_into(*args, **kwargs) - except OpenSSL.SSL.SysCallError as e: - if self.suppress_ragged_eofs and e.args == (-1, "Unexpected EOF"): - return 0 - else: - raise SocketError(str(e)) - except OpenSSL.SSL.ZeroReturnError: - if self.connection.get_shutdown() == OpenSSL.SSL.RECEIVED_SHUTDOWN: - return 0 - 
else: - raise - except OpenSSL.SSL.WantReadError: - if not util.wait_for_read(self.socket, self.socket.gettimeout()): - raise timeout("The read operation timed out") - else: - return self.recv_into(*args, **kwargs) - - # TLS 1.3 post-handshake authentication - except OpenSSL.SSL.Error as e: - raise ssl.SSLError("read error: %r" % e) - - def settimeout(self, timeout): - return self.socket.settimeout(timeout) - - def _send_until_done(self, data): - while True: - try: - return self.connection.send(data) - except OpenSSL.SSL.WantWriteError: - if not util.wait_for_write(self.socket, self.socket.gettimeout()): - raise timeout() - continue - except OpenSSL.SSL.SysCallError as e: - raise SocketError(str(e)) - - def sendall(self, data): - total_sent = 0 - while total_sent < len(data): - sent = self._send_until_done( - data[total_sent : total_sent + SSL_WRITE_BLOCKSIZE] - ) - total_sent += sent - - def shutdown(self): - # FIXME rethrow compatible exceptions should we ever use this - self.connection.shutdown() - - def close(self): - if self._makefile_refs < 1: - try: - self._closed = True - return self.connection.close() - except OpenSSL.SSL.Error: - return - else: - self._makefile_refs -= 1 - - def getpeercert(self, binary_form=False): - x509 = self.connection.get_peer_certificate() - - if not x509: - return x509 - - if binary_form: - return OpenSSL.crypto.dump_certificate(OpenSSL.crypto.FILETYPE_ASN1, x509) - - return { - "subject": ((("commonName", x509.get_subject().CN),),), - "subjectAltName": get_subj_alt_name(x509), - } - - def version(self): - return self.connection.get_protocol_version_name() - - def _reuse(self): - self._makefile_refs += 1 - - def _drop(self): - if self._makefile_refs < 1: - self.close() - else: - self._makefile_refs -= 1 - - -if _fileobject: # Platform-specific: Python 2 - - def makefile(self, mode, bufsize=-1): - self._makefile_refs += 1 - return _fileobject(self, mode, bufsize, close=True) - -else: # Platform-specific: Python 3 - makefile = 
backport_makefile - -WrappedSocket.makefile = makefile - - -class PyOpenSSLContext(object): - """ - I am a wrapper class for the PyOpenSSL ``Context`` object. I am responsible - for translating the interface of the standard library ``SSLContext`` object - to calls into PyOpenSSL. - """ - - def __init__(self, protocol): - self.protocol = _openssl_versions[protocol] - self._ctx = OpenSSL.SSL.Context(self.protocol) - self._options = 0 - self.check_hostname = False - - @property - def options(self): - return self._options - - @options.setter - def options(self, value): - self._options = value - self._ctx.set_options(value) - - @property - def verify_mode(self): - return _openssl_to_stdlib_verify[self._ctx.get_verify_mode()] - - @verify_mode.setter - def verify_mode(self, value): - self._ctx.set_verify(_stdlib_to_openssl_verify[value], _verify_callback) - - def set_default_verify_paths(self): - self._ctx.set_default_verify_paths() - - def set_ciphers(self, ciphers): - if isinstance(ciphers, six.text_type): - ciphers = ciphers.encode("utf-8") - self._ctx.set_cipher_list(ciphers) - - def load_verify_locations(self, cafile=None, capath=None, cadata=None): - if cafile is not None: - cafile = cafile.encode("utf-8") - if capath is not None: - capath = capath.encode("utf-8") - try: - self._ctx.load_verify_locations(cafile, capath) - if cadata is not None: - self._ctx.load_verify_locations(BytesIO(cadata)) - except OpenSSL.SSL.Error as e: - raise ssl.SSLError("unable to load trusted certificates: %r" % e) - - def load_cert_chain(self, certfile, keyfile=None, password=None): - self._ctx.use_certificate_chain_file(certfile) - if password is not None: - if not isinstance(password, six.binary_type): - password = password.encode("utf-8") - self._ctx.set_passwd_cb(lambda *_: password) - self._ctx.use_privatekey_file(keyfile or certfile) - - def set_alpn_protocols(self, protocols): - protocols = [six.ensure_binary(p) for p in protocols] - return self._ctx.set_alpn_protos(protocols) - 
- def wrap_socket( - self, - sock, - server_side=False, - do_handshake_on_connect=True, - suppress_ragged_eofs=True, - server_hostname=None, - ): - cnx = OpenSSL.SSL.Connection(self._ctx, sock) - - if isinstance(server_hostname, six.text_type): # Platform-specific: Python 3 - server_hostname = server_hostname.encode("utf-8") - - if server_hostname is not None: - cnx.set_tlsext_host_name(server_hostname) - - cnx.set_connect_state() - - while True: - try: - cnx.do_handshake() - except OpenSSL.SSL.WantReadError: - if not util.wait_for_read(sock, sock.gettimeout()): - raise timeout("select timed out") - continue - except OpenSSL.SSL.Error as e: - raise ssl.SSLError("bad handshake: %r" % e) - break - - return WrappedSocket(cnx, sock) - - -def _verify_callback(cnx, x509, err_no, err_depth, return_code): - return err_no == 0 diff --git a/infrastructure/sandbox/Data/lambda/urllib3/contrib/securetransport.py b/infrastructure/sandbox/Data/lambda/urllib3/contrib/securetransport.py deleted file mode 100644 index 6c46a3b9f..000000000 --- a/infrastructure/sandbox/Data/lambda/urllib3/contrib/securetransport.py +++ /dev/null @@ -1,921 +0,0 @@ -""" -SecureTransport support for urllib3 via ctypes. - -This makes platform-native TLS available to urllib3 users on macOS without the -use of a compiler. This is an important feature because the Python Package -Index is moving to become a TLSv1.2-or-higher server, and the default OpenSSL -that ships with macOS is not capable of doing TLSv1.2. The only way to resolve -this is to give macOS users an alternative solution to the problem, and that -solution is to use SecureTransport. - -We use ctypes here because this solution must not require a compiler. That's -because pip is not allowed to require a compiler either. - -This is not intended to be a seriously long-term solution to this problem. -The hope is that PEP 543 will eventually solve this issue for us, at which -point we can retire this contrib module.
But in the short term, we need to -solve the impending tire fire that is Python on Mac without this kind of -contrib module. So...here we are. - -To use this module, simply import and inject it:: - - import urllib3.contrib.securetransport - urllib3.contrib.securetransport.inject_into_urllib3() - -Happy TLSing! - -This code is a bastardised version of the code found in Will Bond's oscrypto -library. An enormous debt is owed to him for blazing this trail for us. For -that reason, this code should be considered to be covered both by urllib3's -license and by oscrypto's: - -.. code-block:: - - Copyright (c) 2015-2016 Will Bond - - Permission is hereby granted, free of charge, to any person obtaining a - copy of this software and associated documentation files (the "Software"), - to deal in the Software without restriction, including without limitation - the rights to use, copy, modify, merge, publish, distribute, sublicense, - and/or sell copies of the Software, and to permit persons to whom the - Software is furnished to do so, subject to the following conditions: - - The above copyright notice and this permission notice shall be included in - all copies or substantial portions of the Software. - - THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR - IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, - FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE - AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER - LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING - FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER - DEALINGS IN THE SOFTWARE. -""" -from __future__ import absolute_import - -import contextlib -import ctypes -import errno -import os.path -import shutil -import socket -import ssl -import struct -import threading -import weakref - -import six - -from .. 
import util -from ..util.ssl_ import PROTOCOL_TLS_CLIENT -from ._securetransport.bindings import CoreFoundation, Security, SecurityConst -from ._securetransport.low_level import ( - _assert_no_error, - _build_tls_unknown_ca_alert, - _cert_array_from_pem, - _create_cfstring_array, - _load_client_cert_chain, - _temporary_keychain, -) - -try: # Platform-specific: Python 2 - from socket import _fileobject -except ImportError: # Platform-specific: Python 3 - _fileobject = None - from ..packages.backports.makefile import backport_makefile - -__all__ = ["inject_into_urllib3", "extract_from_urllib3"] - -# SNI always works -HAS_SNI = True - -orig_util_HAS_SNI = util.HAS_SNI -orig_util_SSLContext = util.ssl_.SSLContext - -# This dictionary is used by the read callback to obtain a handle to the -# calling wrapped socket. This is a pretty silly approach, but for now it'll -# do. I feel like I should be able to smuggle a handle to the wrapped socket -# directly in the SSLConnectionRef, but for now this approach will work I -# guess. -# -# We need to lock around this structure for inserts, but we don't do it for -# reads/writes in the callbacks. The reasoning here goes as follows: -# -# 1. It is not possible to call into the callbacks before the dictionary is -# populated, so once in the callback the id must be in the dictionary. -# 2. The callbacks don't mutate the dictionary, they only read from it, and -# so cannot conflict with any of the insertions. -# -# This is good: if we had to lock in the callbacks we'd drastically slow down -# the performance of this code. -_connection_refs = weakref.WeakValueDictionary() -_connection_ref_lock = threading.Lock() - -# Limit writes to 16kB. This is OpenSSL's limit, but we'll cargo-cult it over -# for no better reason than we need *a* limit, and this one is right there. -SSL_WRITE_BLOCKSIZE = 16384 - -# This is our equivalent of util.ssl_.DEFAULT_CIPHERS, but expanded out to -# individual cipher suites. 
We need to do this because this is how -# SecureTransport wants them. -CIPHER_SUITES = [ - SecurityConst.TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384, - SecurityConst.TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256, - SecurityConst.TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384, - SecurityConst.TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256, - SecurityConst.TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256, - SecurityConst.TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256, - SecurityConst.TLS_DHE_RSA_WITH_AES_256_GCM_SHA384, - SecurityConst.TLS_DHE_RSA_WITH_AES_128_GCM_SHA256, - SecurityConst.TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA384, - SecurityConst.TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA, - SecurityConst.TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256, - SecurityConst.TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA, - SecurityConst.TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384, - SecurityConst.TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA, - SecurityConst.TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256, - SecurityConst.TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA, - SecurityConst.TLS_DHE_RSA_WITH_AES_256_CBC_SHA256, - SecurityConst.TLS_DHE_RSA_WITH_AES_256_CBC_SHA, - SecurityConst.TLS_DHE_RSA_WITH_AES_128_CBC_SHA256, - SecurityConst.TLS_DHE_RSA_WITH_AES_128_CBC_SHA, - SecurityConst.TLS_AES_256_GCM_SHA384, - SecurityConst.TLS_AES_128_GCM_SHA256, - SecurityConst.TLS_RSA_WITH_AES_256_GCM_SHA384, - SecurityConst.TLS_RSA_WITH_AES_128_GCM_SHA256, - SecurityConst.TLS_AES_128_CCM_8_SHA256, - SecurityConst.TLS_AES_128_CCM_SHA256, - SecurityConst.TLS_RSA_WITH_AES_256_CBC_SHA256, - SecurityConst.TLS_RSA_WITH_AES_128_CBC_SHA256, - SecurityConst.TLS_RSA_WITH_AES_256_CBC_SHA, - SecurityConst.TLS_RSA_WITH_AES_128_CBC_SHA, -] - -# Basically this is simple: for PROTOCOL_SSLv23 we turn it into a low of -# TLSv1 and a high of TLSv1.2. For everything else, we pin to that version. 
-# TLSv1 to 1.2 are supported on macOS 10.8+ -_protocol_to_min_max = { - util.PROTOCOL_TLS: (SecurityConst.kTLSProtocol1, SecurityConst.kTLSProtocol12), - PROTOCOL_TLS_CLIENT: (SecurityConst.kTLSProtocol1, SecurityConst.kTLSProtocol12), -} - -if hasattr(ssl, "PROTOCOL_SSLv2"): - _protocol_to_min_max[ssl.PROTOCOL_SSLv2] = ( - SecurityConst.kSSLProtocol2, - SecurityConst.kSSLProtocol2, - ) -if hasattr(ssl, "PROTOCOL_SSLv3"): - _protocol_to_min_max[ssl.PROTOCOL_SSLv3] = ( - SecurityConst.kSSLProtocol3, - SecurityConst.kSSLProtocol3, - ) -if hasattr(ssl, "PROTOCOL_TLSv1"): - _protocol_to_min_max[ssl.PROTOCOL_TLSv1] = ( - SecurityConst.kTLSProtocol1, - SecurityConst.kTLSProtocol1, - ) -if hasattr(ssl, "PROTOCOL_TLSv1_1"): - _protocol_to_min_max[ssl.PROTOCOL_TLSv1_1] = ( - SecurityConst.kTLSProtocol11, - SecurityConst.kTLSProtocol11, - ) -if hasattr(ssl, "PROTOCOL_TLSv1_2"): - _protocol_to_min_max[ssl.PROTOCOL_TLSv1_2] = ( - SecurityConst.kTLSProtocol12, - SecurityConst.kTLSProtocol12, - ) - - -def inject_into_urllib3(): - """ - Monkey-patch urllib3 with SecureTransport-backed SSL-support. - """ - util.SSLContext = SecureTransportContext - util.ssl_.SSLContext = SecureTransportContext - util.HAS_SNI = HAS_SNI - util.ssl_.HAS_SNI = HAS_SNI - util.IS_SECURETRANSPORT = True - util.ssl_.IS_SECURETRANSPORT = True - - -def extract_from_urllib3(): - """ - Undo monkey-patching by :func:`inject_into_urllib3`. - """ - util.SSLContext = orig_util_SSLContext - util.ssl_.SSLContext = orig_util_SSLContext - util.HAS_SNI = orig_util_HAS_SNI - util.ssl_.HAS_SNI = orig_util_HAS_SNI - util.IS_SECURETRANSPORT = False - util.ssl_.IS_SECURETRANSPORT = False - - -def _read_callback(connection_id, data_buffer, data_length_pointer): - """ - SecureTransport read callback. This is called by ST to request that data - be returned from the socket. 
- """ - wrapped_socket = None - try: - wrapped_socket = _connection_refs.get(connection_id) - if wrapped_socket is None: - return SecurityConst.errSSLInternal - base_socket = wrapped_socket.socket - - requested_length = data_length_pointer[0] - - timeout = wrapped_socket.gettimeout() - error = None - read_count = 0 - - try: - while read_count < requested_length: - if timeout is None or timeout >= 0: - if not util.wait_for_read(base_socket, timeout): - raise socket.error(errno.EAGAIN, "timed out") - - remaining = requested_length - read_count - buffer = (ctypes.c_char * remaining).from_address( - data_buffer + read_count - ) - chunk_size = base_socket.recv_into(buffer, remaining) - read_count += chunk_size - if not chunk_size: - if not read_count: - return SecurityConst.errSSLClosedGraceful - break - except (socket.error) as e: - error = e.errno - - if error is not None and error != errno.EAGAIN: - data_length_pointer[0] = read_count - if error == errno.ECONNRESET or error == errno.EPIPE: - return SecurityConst.errSSLClosedAbort - raise - - data_length_pointer[0] = read_count - - if read_count != requested_length: - return SecurityConst.errSSLWouldBlock - - return 0 - except Exception as e: - if wrapped_socket is not None: - wrapped_socket._exception = e - return SecurityConst.errSSLInternal - - -def _write_callback(connection_id, data_buffer, data_length_pointer): - """ - SecureTransport write callback. This is called by ST to request that data - actually be sent on the network. 
- """ - wrapped_socket = None - try: - wrapped_socket = _connection_refs.get(connection_id) - if wrapped_socket is None: - return SecurityConst.errSSLInternal - base_socket = wrapped_socket.socket - - bytes_to_write = data_length_pointer[0] - data = ctypes.string_at(data_buffer, bytes_to_write) - - timeout = wrapped_socket.gettimeout() - error = None - sent = 0 - - try: - while sent < bytes_to_write: - if timeout is None or timeout >= 0: - if not util.wait_for_write(base_socket, timeout): - raise socket.error(errno.EAGAIN, "timed out") - chunk_sent = base_socket.send(data) - sent += chunk_sent - - # This has some needless copying here, but I'm not sure there's - # much value in optimising this data path. - data = data[chunk_sent:] - except (socket.error) as e: - error = e.errno - - if error is not None and error != errno.EAGAIN: - data_length_pointer[0] = sent - if error == errno.ECONNRESET or error == errno.EPIPE: - return SecurityConst.errSSLClosedAbort - raise - - data_length_pointer[0] = sent - - if sent != bytes_to_write: - return SecurityConst.errSSLWouldBlock - - return 0 - except Exception as e: - if wrapped_socket is not None: - wrapped_socket._exception = e - return SecurityConst.errSSLInternal - - -# We need to keep these two objects references alive: if they get GC'd while -# in use then SecureTransport could attempt to call a function that is in freed -# memory. That would be...uh...bad. Yeah, that's the word. Bad. -_read_callback_pointer = Security.SSLReadFunc(_read_callback) -_write_callback_pointer = Security.SSLWriteFunc(_write_callback) - - -class WrappedSocket(object): - """ - API-compatibility wrapper for Python's OpenSSL wrapped socket object. - - Note: _makefile_refs, _drop(), and _reuse() are needed for the garbage - collector of PyPy. 
- """ - - def __init__(self, socket): - self.socket = socket - self.context = None - self._makefile_refs = 0 - self._closed = False - self._exception = None - self._keychain = None - self._keychain_dir = None - self._client_cert_chain = None - - # We save off the previously-configured timeout and then set it to - # zero. This is done because we use select and friends to handle the - # timeouts, but if we leave the timeout set on the lower socket then - # Python will "kindly" call select on that socket again for us. Avoid - # that by forcing the timeout to zero. - self._timeout = self.socket.gettimeout() - self.socket.settimeout(0) - - @contextlib.contextmanager - def _raise_on_error(self): - """ - A context manager that can be used to wrap calls that do I/O from - SecureTransport. If any of the I/O callbacks hit an exception, this - context manager will correctly propagate the exception after the fact. - This avoids silently swallowing those exceptions. - - It also correctly forces the socket closed. - """ - self._exception = None - - # We explicitly don't catch around this yield because in the unlikely - # event that an exception was hit in the block we don't want to swallow - # it. - yield - if self._exception is not None: - exception, self._exception = self._exception, None - self.close() - raise exception - - def _set_ciphers(self): - """ - Sets up the allowed ciphers. By default this matches the set in - util.ssl_.DEFAULT_CIPHERS, at least as supported by macOS. This is done - custom and doesn't allow changing at this time, mostly because parsing - OpenSSL cipher strings is going to be a freaking nightmare. - """ - ciphers = (Security.SSLCipherSuite * len(CIPHER_SUITES))(*CIPHER_SUITES) - result = Security.SSLSetEnabledCiphers( - self.context, ciphers, len(CIPHER_SUITES) - ) - _assert_no_error(result) - - def _set_alpn_protocols(self, protocols): - """ - Sets up the ALPN protocols on the context. 
- """ - if not protocols: - return - protocols_arr = _create_cfstring_array(protocols) - try: - result = Security.SSLSetALPNProtocols(self.context, protocols_arr) - _assert_no_error(result) - finally: - CoreFoundation.CFRelease(protocols_arr) - - def _custom_validate(self, verify, trust_bundle): - """ - Called when we have set custom validation. We do this in two cases: - first, when cert validation is entirely disabled; and second, when - using a custom trust DB. - Raises an SSLError if the connection is not trusted. - """ - # If we disabled cert validation, just say: cool. - if not verify: - return - - successes = ( - SecurityConst.kSecTrustResultUnspecified, - SecurityConst.kSecTrustResultProceed, - ) - try: - trust_result = self._evaluate_trust(trust_bundle) - if trust_result in successes: - return - reason = "error code: %d" % (trust_result,) - except Exception as e: - # Do not trust on error - reason = "exception: %r" % (e,) - - # SecureTransport does not send an alert nor shuts down the connection. - rec = _build_tls_unknown_ca_alert(self.version()) - self.socket.sendall(rec) - # close the connection immediately - # l_onoff = 1, activate linger - # l_linger = 0, linger for 0 seoncds - opts = struct.pack("ii", 1, 0) - self.socket.setsockopt(socket.SOL_SOCKET, socket.SO_LINGER, opts) - self.close() - raise ssl.SSLError("certificate verify failed, %s" % reason) - - def _evaluate_trust(self, trust_bundle): - # We want data in memory, so load it up. - if os.path.isfile(trust_bundle): - with open(trust_bundle, "rb") as f: - trust_bundle = f.read() - - cert_array = None - trust = Security.SecTrustRef() - - try: - # Get a CFArray that contains the certs we want. - cert_array = _cert_array_from_pem(trust_bundle) - - # Ok, now the hard part. We want to get the SecTrustRef that ST has - # created for this connection, shove our CAs into it, tell ST to - # ignore everything else it knows, and then ask if it can build a - # chain. This is a buuuunch of code. 
- result = Security.SSLCopyPeerTrust(self.context, ctypes.byref(trust)) - _assert_no_error(result) - if not trust: - raise ssl.SSLError("Failed to copy trust reference") - - result = Security.SecTrustSetAnchorCertificates(trust, cert_array) - _assert_no_error(result) - - result = Security.SecTrustSetAnchorCertificatesOnly(trust, True) - _assert_no_error(result) - - trust_result = Security.SecTrustResultType() - result = Security.SecTrustEvaluate(trust, ctypes.byref(trust_result)) - _assert_no_error(result) - finally: - if trust: - CoreFoundation.CFRelease(trust) - - if cert_array is not None: - CoreFoundation.CFRelease(cert_array) - - return trust_result.value - - def handshake( - self, - server_hostname, - verify, - trust_bundle, - min_version, - max_version, - client_cert, - client_key, - client_key_passphrase, - alpn_protocols, - ): - """ - Actually performs the TLS handshake. This is run automatically by - wrapped socket, and shouldn't be needed in user code. - """ - # First, we do the initial bits of connection setup. We need to create - # a context, set its I/O funcs, and set the connection reference. - self.context = Security.SSLCreateContext( - None, SecurityConst.kSSLClientSide, SecurityConst.kSSLStreamType - ) - result = Security.SSLSetIOFuncs( - self.context, _read_callback_pointer, _write_callback_pointer - ) - _assert_no_error(result) - - # Here we need to compute the handle to use. We do this by taking the - # id of self modulo 2**31 - 1. If this is already in the dictionary, we - # just keep incrementing by one until we find a free space. - with _connection_ref_lock: - handle = id(self) % 2147483647 - while handle in _connection_refs: - handle = (handle + 1) % 2147483647 - _connection_refs[handle] = self - - result = Security.SSLSetConnection(self.context, handle) - _assert_no_error(result) - - # If we have a server hostname, we should set that too. 
- if server_hostname: - if not isinstance(server_hostname, bytes): - server_hostname = server_hostname.encode("utf-8") - - result = Security.SSLSetPeerDomainName( - self.context, server_hostname, len(server_hostname) - ) - _assert_no_error(result) - - # Setup the ciphers. - self._set_ciphers() - - # Setup the ALPN protocols. - self._set_alpn_protocols(alpn_protocols) - - # Set the minimum and maximum TLS versions. - result = Security.SSLSetProtocolVersionMin(self.context, min_version) - _assert_no_error(result) - - result = Security.SSLSetProtocolVersionMax(self.context, max_version) - _assert_no_error(result) - - # If there's a trust DB, we need to use it. We do that by telling - # SecureTransport to break on server auth. We also do that if we don't - # want to validate the certs at all: we just won't actually do any - # authing in that case. - if not verify or trust_bundle is not None: - result = Security.SSLSetSessionOption( - self.context, SecurityConst.kSSLSessionOptionBreakOnServerAuth, True - ) - _assert_no_error(result) - - # If there's a client cert, we need to use it. 
- if client_cert: - self._keychain, self._keychain_dir = _temporary_keychain() - self._client_cert_chain = _load_client_cert_chain( - self._keychain, client_cert, client_key - ) - result = Security.SSLSetCertificate(self.context, self._client_cert_chain) - _assert_no_error(result) - - while True: - with self._raise_on_error(): - result = Security.SSLHandshake(self.context) - - if result == SecurityConst.errSSLWouldBlock: - raise socket.timeout("handshake timed out") - elif result == SecurityConst.errSSLServerAuthCompleted: - self._custom_validate(verify, trust_bundle) - continue - else: - _assert_no_error(result) - break - - def fileno(self): - return self.socket.fileno() - - # Copy-pasted from Python 3.5 source code - def _decref_socketios(self): - if self._makefile_refs > 0: - self._makefile_refs -= 1 - if self._closed: - self.close() - - def recv(self, bufsiz): - buffer = ctypes.create_string_buffer(bufsiz) - bytes_read = self.recv_into(buffer, bufsiz) - data = buffer[:bytes_read] - return data - - def recv_into(self, buffer, nbytes=None): - # Read short on EOF. - if self._closed: - return 0 - - if nbytes is None: - nbytes = len(buffer) - - buffer = (ctypes.c_char * nbytes).from_buffer(buffer) - processed_bytes = ctypes.c_size_t(0) - - with self._raise_on_error(): - result = Security.SSLRead( - self.context, buffer, nbytes, ctypes.byref(processed_bytes) - ) - - # There are some result codes that we want to treat as "not always - # errors". Specifically, those are errSSLWouldBlock, - # errSSLClosedGraceful, and errSSLClosedNoNotify. - if result == SecurityConst.errSSLWouldBlock: - # If we didn't process any bytes, then this was just a time out. - # However, we can get errSSLWouldBlock in situations when we *did* - # read some data, and in those cases we should just read "short" - # and return. - if processed_bytes.value == 0: - # Timed out, no data read. 
- raise socket.timeout("recv timed out") - elif result in ( - SecurityConst.errSSLClosedGraceful, - SecurityConst.errSSLClosedNoNotify, - ): - # The remote peer has closed this connection. We should do so as - # well. Note that we don't actually return here because in - # principle this could actually be fired along with return data. - # It's unlikely though. - self.close() - else: - _assert_no_error(result) - - # Ok, we read and probably succeeded. We should return whatever data - # was actually read. - return processed_bytes.value - - def settimeout(self, timeout): - self._timeout = timeout - - def gettimeout(self): - return self._timeout - - def send(self, data): - processed_bytes = ctypes.c_size_t(0) - - with self._raise_on_error(): - result = Security.SSLWrite( - self.context, data, len(data), ctypes.byref(processed_bytes) - ) - - if result == SecurityConst.errSSLWouldBlock and processed_bytes.value == 0: - # Timed out - raise socket.timeout("send timed out") - else: - _assert_no_error(result) - - # We sent, and probably succeeded. Tell them how much we sent. - return processed_bytes.value - - def sendall(self, data): - total_sent = 0 - while total_sent < len(data): - sent = self.send(data[total_sent : total_sent + SSL_WRITE_BLOCKSIZE]) - total_sent += sent - - def shutdown(self): - with self._raise_on_error(): - Security.SSLClose(self.context) - - def close(self): - # TODO: should I do clean shutdown here? Do I have to? 
- if self._makefile_refs < 1: - self._closed = True - if self.context: - CoreFoundation.CFRelease(self.context) - self.context = None - if self._client_cert_chain: - CoreFoundation.CFRelease(self._client_cert_chain) - self._client_cert_chain = None - if self._keychain: - Security.SecKeychainDelete(self._keychain) - CoreFoundation.CFRelease(self._keychain) - shutil.rmtree(self._keychain_dir) - self._keychain = self._keychain_dir = None - return self.socket.close() - else: - self._makefile_refs -= 1 - - def getpeercert(self, binary_form=False): - # Urgh, annoying. - # - # Here's how we do this: - # - # 1. Call SSLCopyPeerTrust to get hold of the trust object for this - # connection. - # 2. Call SecTrustGetCertificateAtIndex for index 0 to get the leaf. - # 3. To get the CN, call SecCertificateCopyCommonName and process that - # string so that it's of the appropriate type. - # 4. To get the SAN, we need to do something a bit more complex: - # a. Call SecCertificateCopyValues to get the data, requesting - # kSecOIDSubjectAltName. - # b. Mess about with this dictionary to try to get the SANs out. - # - # This is gross. Really gross. It's going to be a few hundred LoC extra - # just to repeat something that SecureTransport can *already do*. So my - # operating assumption at this time is that what we want to do is - # instead to just flag to urllib3 that it shouldn't do its own hostname - # validation when using SecureTransport. - if not binary_form: - raise ValueError("SecureTransport only supports dumping binary certs") - trust = Security.SecTrustRef() - certdata = None - der_bytes = None - - try: - # Grab the trust store. - result = Security.SSLCopyPeerTrust(self.context, ctypes.byref(trust)) - _assert_no_error(result) - if not trust: - # Probably we haven't done the handshake yet. No biggie. - return None - - cert_count = Security.SecTrustGetCertificateCount(trust) - if not cert_count: - # Also a case that might happen if we haven't handshaked. - # Handshook? 
Handshaken? - return None - - leaf = Security.SecTrustGetCertificateAtIndex(trust, 0) - assert leaf - - # Ok, now we want the DER bytes. - certdata = Security.SecCertificateCopyData(leaf) - assert certdata - - data_length = CoreFoundation.CFDataGetLength(certdata) - data_buffer = CoreFoundation.CFDataGetBytePtr(certdata) - der_bytes = ctypes.string_at(data_buffer, data_length) - finally: - if certdata: - CoreFoundation.CFRelease(certdata) - if trust: - CoreFoundation.CFRelease(trust) - - return der_bytes - - def version(self): - protocol = Security.SSLProtocol() - result = Security.SSLGetNegotiatedProtocolVersion( - self.context, ctypes.byref(protocol) - ) - _assert_no_error(result) - if protocol.value == SecurityConst.kTLSProtocol13: - raise ssl.SSLError("SecureTransport does not support TLS 1.3") - elif protocol.value == SecurityConst.kTLSProtocol12: - return "TLSv1.2" - elif protocol.value == SecurityConst.kTLSProtocol11: - return "TLSv1.1" - elif protocol.value == SecurityConst.kTLSProtocol1: - return "TLSv1" - elif protocol.value == SecurityConst.kSSLProtocol3: - return "SSLv3" - elif protocol.value == SecurityConst.kSSLProtocol2: - return "SSLv2" - else: - raise ssl.SSLError("Unknown TLS version: %r" % protocol) - - def _reuse(self): - self._makefile_refs += 1 - - def _drop(self): - if self._makefile_refs < 1: - self.close() - else: - self._makefile_refs -= 1 - - -if _fileobject: # Platform-specific: Python 2 - - def makefile(self, mode, bufsize=-1): - self._makefile_refs += 1 - return _fileobject(self, mode, bufsize, close=True) - -else: # Platform-specific: Python 3 - - def makefile(self, mode="r", buffering=None, *args, **kwargs): - # We disable buffering with SecureTransport because it conflicts with - # the buffering that ST does internally (see issue #1153 for more). 
- buffering = 0 - return backport_makefile(self, mode, buffering, *args, **kwargs) - - -WrappedSocket.makefile = makefile - - -class SecureTransportContext(object): - """ - I am a wrapper class for the SecureTransport library, to translate the - interface of the standard library ``SSLContext`` object to calls into - SecureTransport. - """ - - def __init__(self, protocol): - self._min_version, self._max_version = _protocol_to_min_max[protocol] - self._options = 0 - self._verify = False - self._trust_bundle = None - self._client_cert = None - self._client_key = None - self._client_key_passphrase = None - self._alpn_protocols = None - - @property - def check_hostname(self): - """ - SecureTransport cannot have its hostname checking disabled. For more, - see the comment on getpeercert() in this file. - """ - return True - - @check_hostname.setter - def check_hostname(self, value): - """ - SecureTransport cannot have its hostname checking disabled. For more, - see the comment on getpeercert() in this file. - """ - pass - - @property - def options(self): - # TODO: Well, crap. - # - # So this is the bit of the code that is the most likely to cause us - # trouble. Essentially we need to enumerate all of the SSL options that - # users might want to use and try to see if we can sensibly translate - # them, or whether we should just ignore them. - return self._options - - @options.setter - def options(self, value): - # TODO: Update in line with above. - self._options = value - - @property - def verify_mode(self): - return ssl.CERT_REQUIRED if self._verify else ssl.CERT_NONE - - @verify_mode.setter - def verify_mode(self, value): - self._verify = True if value == ssl.CERT_REQUIRED else False - - def set_default_verify_paths(self): - # So, this has to do something a bit weird. Specifically, what it does - # is nothing. - # - # This means that, if we had previously had load_verify_locations - # called, this does not undo that. 
We need to do that because it turns - # out that the rest of the urllib3 code will attempt to load the - # default verify paths if it hasn't been told about any paths, even if - # the context itself was sometime earlier. We resolve that by just - # ignoring it. - pass - - def load_default_certs(self): - return self.set_default_verify_paths() - - def set_ciphers(self, ciphers): - # For now, we just require the default cipher string. - if ciphers != util.ssl_.DEFAULT_CIPHERS: - raise ValueError("SecureTransport doesn't support custom cipher strings") - - def load_verify_locations(self, cafile=None, capath=None, cadata=None): - # OK, we only really support cadata and cafile. - if capath is not None: - raise ValueError("SecureTransport does not support cert directories") - - # Raise if cafile does not exist. - if cafile is not None: - with open(cafile): - pass - - self._trust_bundle = cafile or cadata - - def load_cert_chain(self, certfile, keyfile=None, password=None): - self._client_cert = certfile - self._client_key = keyfile - self._client_cert_passphrase = password - - def set_alpn_protocols(self, protocols): - """ - Sets the ALPN protocols that will later be set on the context. - - Raises a NotImplementedError if ALPN is not supported. - """ - if not hasattr(Security, "SSLSetALPNProtocols"): - raise NotImplementedError( - "SecureTransport supports ALPN only in macOS 10.12+" - ) - self._alpn_protocols = [six.ensure_binary(p) for p in protocols] - - def wrap_socket( - self, - sock, - server_side=False, - do_handshake_on_connect=True, - suppress_ragged_eofs=True, - server_hostname=None, - ): - # So, what do we do here? Firstly, we assert some properties. This is a - # stripped down shim, so there is some functionality we don't support. - # See PEP 543 for the real deal. - assert not server_side - assert do_handshake_on_connect - assert suppress_ragged_eofs - - # Ok, we're good to go. 
Now we want to create the wrapped socket object - # and store it in the appropriate place. - wrapped_socket = WrappedSocket(sock) - - # Now we can handshake - wrapped_socket.handshake( - server_hostname, - self._verify, - self._trust_bundle, - self._min_version, - self._max_version, - self._client_cert, - self._client_key, - self._client_key_passphrase, - self._alpn_protocols, - ) - return wrapped_socket diff --git a/infrastructure/sandbox/Data/lambda/urllib3/contrib/socks.py b/infrastructure/sandbox/Data/lambda/urllib3/contrib/socks.py deleted file mode 100644 index c326e80dd..000000000 --- a/infrastructure/sandbox/Data/lambda/urllib3/contrib/socks.py +++ /dev/null @@ -1,216 +0,0 @@ -# -*- coding: utf-8 -*- -""" -This module contains provisional support for SOCKS proxies from within -urllib3. This module supports SOCKS4, SOCKS4A (an extension of SOCKS4), and -SOCKS5. To enable its functionality, either install PySocks or install this -module with the ``socks`` extra. - -The SOCKS implementation supports the full range of urllib3 features. It also -supports the following SOCKS features: - -- SOCKS4A (``proxy_url='socks4a://...``) -- SOCKS4 (``proxy_url='socks4://...``) -- SOCKS5 with remote DNS (``proxy_url='socks5h://...``) -- SOCKS5 with local DNS (``proxy_url='socks5://...``) -- Usernames and passwords for the SOCKS proxy - -.. note:: - It is recommended to use ``socks5h://`` or ``socks4a://`` schemes in - your ``proxy_url`` to ensure that DNS resolution is done from the remote - server instead of client-side when connecting to a domain name. - -SOCKS4 supports IPv4 and domain names with the SOCKS4A extension. SOCKS5 -supports IPv4, IPv6, and domain names. - -When connecting to a SOCKS4 proxy the ``username`` portion of the ``proxy_url`` -will be sent as the ``userid`` section of the SOCKS request: - -.. 
code-block:: python - - proxy_url="socks4a://@proxy-host" - -When connecting to a SOCKS5 proxy the ``username`` and ``password`` portion -of the ``proxy_url`` will be sent as the username/password to authenticate -with the proxy: - -.. code-block:: python - - proxy_url="socks5h://:@proxy-host" - -""" -from __future__ import absolute_import - -try: - import socks -except ImportError: - import warnings - - from ..exceptions import DependencyWarning - - warnings.warn( - ( - "SOCKS support in urllib3 requires the installation of optional " - "dependencies: specifically, PySocks. For more information, see " - "https://urllib3.readthedocs.io/en/1.26.x/contrib.html#socks-proxies" - ), - DependencyWarning, - ) - raise - -from socket import error as SocketError -from socket import timeout as SocketTimeout - -from ..connection import HTTPConnection, HTTPSConnection -from ..connectionpool import HTTPConnectionPool, HTTPSConnectionPool -from ..exceptions import ConnectTimeoutError, NewConnectionError -from ..poolmanager import PoolManager -from ..util.url import parse_url - -try: - import ssl -except ImportError: - ssl = None - - -class SOCKSConnection(HTTPConnection): - """ - A plain-text HTTP connection that connects via a SOCKS proxy. - """ - - def __init__(self, *args, **kwargs): - self._socks_options = kwargs.pop("_socks_options") - super(SOCKSConnection, self).__init__(*args, **kwargs) - - def _new_conn(self): - """ - Establish a new connection via the SOCKS proxy. 
- """ - extra_kw = {} - if self.source_address: - extra_kw["source_address"] = self.source_address - - if self.socket_options: - extra_kw["socket_options"] = self.socket_options - - try: - conn = socks.create_connection( - (self.host, self.port), - proxy_type=self._socks_options["socks_version"], - proxy_addr=self._socks_options["proxy_host"], - proxy_port=self._socks_options["proxy_port"], - proxy_username=self._socks_options["username"], - proxy_password=self._socks_options["password"], - proxy_rdns=self._socks_options["rdns"], - timeout=self.timeout, - **extra_kw - ) - - except SocketTimeout: - raise ConnectTimeoutError( - self, - "Connection to %s timed out. (connect timeout=%s)" - % (self.host, self.timeout), - ) - - except socks.ProxyError as e: - # This is fragile as hell, but it seems to be the only way to raise - # useful errors here. - if e.socket_err: - error = e.socket_err - if isinstance(error, SocketTimeout): - raise ConnectTimeoutError( - self, - "Connection to %s timed out. (connect timeout=%s)" - % (self.host, self.timeout), - ) - else: - raise NewConnectionError( - self, "Failed to establish a new connection: %s" % error - ) - else: - raise NewConnectionError( - self, "Failed to establish a new connection: %s" % e - ) - - except SocketError as e: # Defensive: PySocks should catch all these. - raise NewConnectionError( - self, "Failed to establish a new connection: %s" % e - ) - - return conn - - -# We don't need to duplicate the Verified/Unverified distinction from -# urllib3/connection.py here because the HTTPSConnection will already have been -# correctly set to either the Verified or Unverified form by that module. This -# means the SOCKSHTTPSConnection will automatically be the correct type. 
-class SOCKSHTTPSConnection(SOCKSConnection, HTTPSConnection): - pass - - -class SOCKSHTTPConnectionPool(HTTPConnectionPool): - ConnectionCls = SOCKSConnection - - -class SOCKSHTTPSConnectionPool(HTTPSConnectionPool): - ConnectionCls = SOCKSHTTPSConnection - - -class SOCKSProxyManager(PoolManager): - """ - A version of the urllib3 ProxyManager that routes connections via the - defined SOCKS proxy. - """ - - pool_classes_by_scheme = { - "http": SOCKSHTTPConnectionPool, - "https": SOCKSHTTPSConnectionPool, - } - - def __init__( - self, - proxy_url, - username=None, - password=None, - num_pools=10, - headers=None, - **connection_pool_kw - ): - parsed = parse_url(proxy_url) - - if username is None and password is None and parsed.auth is not None: - split = parsed.auth.split(":") - if len(split) == 2: - username, password = split - if parsed.scheme == "socks5": - socks_version = socks.PROXY_TYPE_SOCKS5 - rdns = False - elif parsed.scheme == "socks5h": - socks_version = socks.PROXY_TYPE_SOCKS5 - rdns = True - elif parsed.scheme == "socks4": - socks_version = socks.PROXY_TYPE_SOCKS4 - rdns = False - elif parsed.scheme == "socks4a": - socks_version = socks.PROXY_TYPE_SOCKS4 - rdns = True - else: - raise ValueError("Unable to determine SOCKS version from %s" % proxy_url) - - self.proxy_url = proxy_url - - socks_options = { - "socks_version": socks_version, - "proxy_host": parsed.host, - "proxy_port": parsed.port, - "username": username, - "password": password, - "rdns": rdns, - } - connection_pool_kw["_socks_options"] = socks_options - - super(SOCKSProxyManager, self).__init__( - num_pools, headers, **connection_pool_kw - ) - - self.pool_classes_by_scheme = SOCKSProxyManager.pool_classes_by_scheme diff --git a/infrastructure/sandbox/Data/lambda/urllib3/exceptions.py b/infrastructure/sandbox/Data/lambda/urllib3/exceptions.py deleted file mode 100644 index cba6f3f56..000000000 --- a/infrastructure/sandbox/Data/lambda/urllib3/exceptions.py +++ /dev/null @@ -1,323 +0,0 @@ -from 
__future__ import absolute_import - -from .packages.six.moves.http_client import IncompleteRead as httplib_IncompleteRead - -# Base Exceptions - - -class HTTPError(Exception): - """Base exception used by this module.""" - - pass - - -class HTTPWarning(Warning): - """Base warning used by this module.""" - - pass - - -class PoolError(HTTPError): - """Base exception for errors caused within a pool.""" - - def __init__(self, pool, message): - self.pool = pool - HTTPError.__init__(self, "%s: %s" % (pool, message)) - - def __reduce__(self): - # For pickling purposes. - return self.__class__, (None, None) - - -class RequestError(PoolError): - """Base exception for PoolErrors that have associated URLs.""" - - def __init__(self, pool, url, message): - self.url = url - PoolError.__init__(self, pool, message) - - def __reduce__(self): - # For pickling purposes. - return self.__class__, (None, self.url, None) - - -class SSLError(HTTPError): - """Raised when SSL certificate fails in an HTTPS connection.""" - - pass - - -class ProxyError(HTTPError): - """Raised when the connection to a proxy fails.""" - - def __init__(self, message, error, *args): - super(ProxyError, self).__init__(message, error, *args) - self.original_error = error - - -class DecodeError(HTTPError): - """Raised when automatic decoding based on Content-Type fails.""" - - pass - - -class ProtocolError(HTTPError): - """Raised when something unexpected happens mid-request/response.""" - - pass - - -#: Renamed to ProtocolError but aliased for backwards compatibility. -ConnectionError = ProtocolError - - -# Leaf Exceptions - - -class MaxRetryError(RequestError): - """Raised when the maximum number of retries is exceeded. 
- - :param pool: The connection pool - :type pool: :class:`~urllib3.connectionpool.HTTPConnectionPool` - :param string url: The requested Url - :param exceptions.Exception reason: The underlying error - - """ - - def __init__(self, pool, url, reason=None): - self.reason = reason - - message = "Max retries exceeded with url: %s (Caused by %r)" % (url, reason) - - RequestError.__init__(self, pool, url, message) - - -class HostChangedError(RequestError): - """Raised when an existing pool gets a request for a foreign host.""" - - def __init__(self, pool, url, retries=3): - message = "Tried to open a foreign host with url: %s" % url - RequestError.__init__(self, pool, url, message) - self.retries = retries - - -class TimeoutStateError(HTTPError): - """Raised when passing an invalid state to a timeout""" - - pass - - -class TimeoutError(HTTPError): - """Raised when a socket timeout error occurs. - - Catching this error will catch both :exc:`ReadTimeoutErrors - ` and :exc:`ConnectTimeoutErrors `. - """ - - pass - - -class ReadTimeoutError(TimeoutError, RequestError): - """Raised when a socket timeout occurs while receiving data from a server""" - - pass - - -# This timeout error does not have a URL attached and needs to inherit from the -# base HTTPError -class ConnectTimeoutError(TimeoutError): - """Raised when a socket timeout occurs while connecting to a server""" - - pass - - -class NewConnectionError(ConnectTimeoutError, PoolError): - """Raised when we fail to establish a new connection. 
Usually ECONNREFUSED.""" - - pass - - -class EmptyPoolError(PoolError): - """Raised when a pool runs out of connections and no more are allowed.""" - - pass - - -class ClosedPoolError(PoolError): - """Raised when a request enters a pool after the pool has been closed.""" - - pass - - -class LocationValueError(ValueError, HTTPError): - """Raised when there is something wrong with a given URL input.""" - - pass - - -class LocationParseError(LocationValueError): - """Raised when get_host or similar fails to parse the URL input.""" - - def __init__(self, location): - message = "Failed to parse: %s" % location - HTTPError.__init__(self, message) - - self.location = location - - -class URLSchemeUnknown(LocationValueError): - """Raised when a URL input has an unsupported scheme.""" - - def __init__(self, scheme): - message = "Not supported URL scheme %s" % scheme - super(URLSchemeUnknown, self).__init__(message) - - self.scheme = scheme - - -class ResponseError(HTTPError): - """Used as a container for an error reason supplied in a MaxRetryError.""" - - GENERIC_ERROR = "too many error responses" - SPECIFIC_ERROR = "too many {status_code} error responses" - - -class SecurityWarning(HTTPWarning): - """Warned when performing security reducing actions""" - - pass - - -class SubjectAltNameWarning(SecurityWarning): - """Warned when connecting to a host with a certificate missing a SAN.""" - - pass - - -class InsecureRequestWarning(SecurityWarning): - """Warned when making an unverified HTTPS request.""" - - pass - - -class SystemTimeWarning(SecurityWarning): - """Warned when system time is suspected to be wrong""" - - pass - - -class InsecurePlatformWarning(SecurityWarning): - """Warned when certain TLS/SSL configuration is not available on a platform.""" - - pass - - -class SNIMissingWarning(HTTPWarning): - """Warned when making a HTTPS request without SNI available.""" - - pass - - -class DependencyWarning(HTTPWarning): - """ - Warned when an attempt is made to import a module 
with missing optional - dependencies. - """ - - pass - - -class ResponseNotChunked(ProtocolError, ValueError): - """Response needs to be chunked in order to read it as chunks.""" - - pass - - -class BodyNotHttplibCompatible(HTTPError): - """ - Body should be :class:`http.client.HTTPResponse` like - (have an fp attribute which returns raw chunks) for read_chunked(). - """ - - pass - - -class IncompleteRead(HTTPError, httplib_IncompleteRead): - """ - Response length doesn't match expected Content-Length - - Subclass of :class:`http.client.IncompleteRead` to allow int value - for ``partial`` to avoid creating large objects on streamed reads. - """ - - def __init__(self, partial, expected): - super(IncompleteRead, self).__init__(partial, expected) - - def __repr__(self): - return "IncompleteRead(%i bytes read, %i more expected)" % ( - self.partial, - self.expected, - ) - - -class InvalidChunkLength(HTTPError, httplib_IncompleteRead): - """Invalid chunk length in a chunked response.""" - - def __init__(self, response, length): - super(InvalidChunkLength, self).__init__( - response.tell(), response.length_remaining - ) - self.response = response - self.length = length - - def __repr__(self): - return "InvalidChunkLength(got length %r, %i bytes read)" % ( - self.length, - self.partial, - ) - - -class InvalidHeader(HTTPError): - """The header provided was somehow invalid.""" - - pass - - -class ProxySchemeUnknown(AssertionError, URLSchemeUnknown): - """ProxyManager does not support the supplied scheme""" - - # TODO(t-8ch): Stop inheriting from AssertionError in v2.0. - - def __init__(self, scheme): - # 'localhost' is here because our URL parser parses - # localhost:8080 -> scheme=localhost, remove if we fix this. 
- if scheme == "localhost": - scheme = None - if scheme is None: - message = "Proxy URL had no scheme, should start with http:// or https://" - else: - message = ( - "Proxy URL had unsupported scheme %s, should use http:// or https://" - % scheme - ) - super(ProxySchemeUnknown, self).__init__(message) - - -class ProxySchemeUnsupported(ValueError): - """Fetching HTTPS resources through HTTPS proxies is unsupported""" - - pass - - -class HeaderParsingError(HTTPError): - """Raised by assert_header_parsing, but we convert it to a log.warning statement.""" - - def __init__(self, defects, unparsed_data): - message = "%s, unparsed data: %r" % (defects or "Unknown", unparsed_data) - super(HeaderParsingError, self).__init__(message) - - -class UnrewindableBodyError(HTTPError): - """urllib3 encountered an error when trying to rewind a body""" - - pass diff --git a/infrastructure/sandbox/Data/lambda/urllib3/fields.py b/infrastructure/sandbox/Data/lambda/urllib3/fields.py deleted file mode 100644 index 9d630f491..000000000 --- a/infrastructure/sandbox/Data/lambda/urllib3/fields.py +++ /dev/null @@ -1,274 +0,0 @@ -from __future__ import absolute_import - -import email.utils -import mimetypes -import re - -from .packages import six - - -def guess_content_type(filename, default="application/octet-stream"): - """ - Guess the "Content-Type" of a file. - - :param filename: - The filename to guess the "Content-Type" of using :mod:`mimetypes`. - :param default: - If no "Content-Type" can be guessed, default to `default`. - """ - if filename: - return mimetypes.guess_type(filename)[0] or default - return default - - -def format_header_param_rfc2231(name, value): - """ - Helper function to format and quote a single header parameter using the - strategy defined in RFC 2231. - - Particularly useful for header parameters which might contain - non-ASCII values, like file names. This follows - `RFC 2388 Section 4.4 `_. 
- - :param name: - The name of the parameter, a string expected to be ASCII only. - :param value: - The value of the parameter, provided as ``bytes`` or `str``. - :ret: - An RFC-2231-formatted unicode string. - """ - if isinstance(value, six.binary_type): - value = value.decode("utf-8") - - if not any(ch in value for ch in '"\\\r\n'): - result = u'%s="%s"' % (name, value) - try: - result.encode("ascii") - except (UnicodeEncodeError, UnicodeDecodeError): - pass - else: - return result - - if six.PY2: # Python 2: - value = value.encode("utf-8") - - # encode_rfc2231 accepts an encoded string and returns an ascii-encoded - # string in Python 2 but accepts and returns unicode strings in Python 3 - value = email.utils.encode_rfc2231(value, "utf-8") - value = "%s*=%s" % (name, value) - - if six.PY2: # Python 2: - value = value.decode("utf-8") - - return value - - -_HTML5_REPLACEMENTS = { - u"\u0022": u"%22", - # Replace "\" with "\\". - u"\u005C": u"\u005C\u005C", -} - -# All control characters from 0x00 to 0x1F *except* 0x1B. -_HTML5_REPLACEMENTS.update( - { - six.unichr(cc): u"%{:02X}".format(cc) - for cc in range(0x00, 0x1F + 1) - if cc not in (0x1B,) - } -) - - -def _replace_multiple(value, needles_and_replacements): - def replacer(match): - return needles_and_replacements[match.group(0)] - - pattern = re.compile( - r"|".join([re.escape(needle) for needle in needles_and_replacements.keys()]) - ) - - result = pattern.sub(replacer, value) - - return result - - -def format_header_param_html5(name, value): - """ - Helper function to format and quote a single header parameter using the - HTML5 strategy. - - Particularly useful for header parameters which might contain - non-ASCII values, like file names. This follows the `HTML5 Working Draft - Section 4.10.22.7`_ and matches the behavior of curl and modern browsers. - - .. 
_HTML5 Working Draft Section 4.10.22.7: - https://w3c.github.io/html/sec-forms.html#multipart-form-data - - :param name: - The name of the parameter, a string expected to be ASCII only. - :param value: - The value of the parameter, provided as ``bytes`` or `str``. - :ret: - A unicode string, stripped of troublesome characters. - """ - if isinstance(value, six.binary_type): - value = value.decode("utf-8") - - value = _replace_multiple(value, _HTML5_REPLACEMENTS) - - return u'%s="%s"' % (name, value) - - -# For backwards-compatibility. -format_header_param = format_header_param_html5 - - -class RequestField(object): - """ - A data container for request body parameters. - - :param name: - The name of this request field. Must be unicode. - :param data: - The data/value body. - :param filename: - An optional filename of the request field. Must be unicode. - :param headers: - An optional dict-like object of headers to initially use for the field. - :param header_formatter: - An optional callable that is used to encode and format the headers. By - default, this is :func:`format_header_param_html5`. - """ - - def __init__( - self, - name, - data, - filename=None, - headers=None, - header_formatter=format_header_param_html5, - ): - self._name = name - self._filename = filename - self.data = data - self.headers = {} - if headers: - self.headers = dict(headers) - self.header_formatter = header_formatter - - @classmethod - def from_tuples(cls, fieldname, value, header_formatter=format_header_param_html5): - """ - A :class:`~urllib3.fields.RequestField` factory from old-style tuple parameters. - - Supports constructing :class:`~urllib3.fields.RequestField` from - parameter of key/value strings AND key/filetuple. A filetuple is a - (filename, data, MIME type) tuple where the MIME type is optional. 
- For example:: - - 'foo': 'bar', - 'fakefile': ('foofile.txt', 'contents of foofile'), - 'realfile': ('barfile.txt', open('realfile').read()), - 'typedfile': ('bazfile.bin', open('bazfile').read(), 'image/jpeg'), - 'nonamefile': 'contents of nonamefile field', - - Field names and filenames must be unicode. - """ - if isinstance(value, tuple): - if len(value) == 3: - filename, data, content_type = value - else: - filename, data = value - content_type = guess_content_type(filename) - else: - filename = None - content_type = None - data = value - - request_param = cls( - fieldname, data, filename=filename, header_formatter=header_formatter - ) - request_param.make_multipart(content_type=content_type) - - return request_param - - def _render_part(self, name, value): - """ - Overridable helper function to format a single header parameter. By - default, this calls ``self.header_formatter``. - - :param name: - The name of the parameter, a string expected to be ASCII only. - :param value: - The value of the parameter, provided as a unicode string. - """ - - return self.header_formatter(name, value) - - def _render_parts(self, header_parts): - """ - Helper function to format and quote a single header. - - Useful for single headers that are composed of multiple items. E.g., - 'Content-Disposition' fields. - - :param header_parts: - A sequence of (k, v) tuples or a :class:`dict` of (k, v) to format - as `k1="v1"; k2="v2"; ...`. - """ - parts = [] - iterable = header_parts - if isinstance(header_parts, dict): - iterable = header_parts.items() - - for name, value in iterable: - if value is not None: - parts.append(self._render_part(name, value)) - - return u"; ".join(parts) - - def render_headers(self): - """ - Renders the headers for this request field. 
- """ - lines = [] - - sort_keys = ["Content-Disposition", "Content-Type", "Content-Location"] - for sort_key in sort_keys: - if self.headers.get(sort_key, False): - lines.append(u"%s: %s" % (sort_key, self.headers[sort_key])) - - for header_name, header_value in self.headers.items(): - if header_name not in sort_keys: - if header_value: - lines.append(u"%s: %s" % (header_name, header_value)) - - lines.append(u"\r\n") - return u"\r\n".join(lines) - - def make_multipart( - self, content_disposition=None, content_type=None, content_location=None - ): - """ - Makes this request field into a multipart request field. - - This method overrides "Content-Disposition", "Content-Type" and - "Content-Location" headers to the request parameter. - - :param content_type: - The 'Content-Type' of the request body. - :param content_location: - The 'Content-Location' of the request body. - - """ - self.headers["Content-Disposition"] = content_disposition or u"form-data" - self.headers["Content-Disposition"] += u"; ".join( - [ - u"", - self._render_parts( - ((u"name", self._name), (u"filename", self._filename)) - ), - ] - ) - self.headers["Content-Type"] = content_type - self.headers["Content-Location"] = content_location diff --git a/infrastructure/sandbox/Data/lambda/urllib3/filepost.py b/infrastructure/sandbox/Data/lambda/urllib3/filepost.py deleted file mode 100644 index 36c9252c6..000000000 --- a/infrastructure/sandbox/Data/lambda/urllib3/filepost.py +++ /dev/null @@ -1,98 +0,0 @@ -from __future__ import absolute_import - -import binascii -import codecs -import os -from io import BytesIO - -from .fields import RequestField -from .packages import six -from .packages.six import b - -writer = codecs.lookup("utf-8")[3] - - -def choose_boundary(): - """ - Our embarrassingly-simple replacement for mimetools.choose_boundary. 
- """ - boundary = binascii.hexlify(os.urandom(16)) - if not six.PY2: - boundary = boundary.decode("ascii") - return boundary - - -def iter_field_objects(fields): - """ - Iterate over fields. - - Supports list of (k, v) tuples and dicts, and lists of - :class:`~urllib3.fields.RequestField`. - - """ - if isinstance(fields, dict): - i = six.iteritems(fields) - else: - i = iter(fields) - - for field in i: - if isinstance(field, RequestField): - yield field - else: - yield RequestField.from_tuples(*field) - - -def iter_fields(fields): - """ - .. deprecated:: 1.6 - - Iterate over fields. - - The addition of :class:`~urllib3.fields.RequestField` makes this function - obsolete. Instead, use :func:`iter_field_objects`, which returns - :class:`~urllib3.fields.RequestField` objects. - - Supports list of (k, v) tuples and dicts. - """ - if isinstance(fields, dict): - return ((k, v) for k, v in six.iteritems(fields)) - - return ((k, v) for k, v in fields) - - -def encode_multipart_formdata(fields, boundary=None): - """ - Encode a dictionary of ``fields`` using the multipart/form-data MIME format. - - :param fields: - Dictionary of fields or list of (key, :class:`~urllib3.fields.RequestField`). - - :param boundary: - If not specified, then a random boundary will be generated using - :func:`urllib3.filepost.choose_boundary`. 
- """ - body = BytesIO() - if boundary is None: - boundary = choose_boundary() - - for field in iter_field_objects(fields): - body.write(b("--%s\r\n" % (boundary))) - - writer(body).write(field.render_headers()) - data = field.data - - if isinstance(data, int): - data = str(data) # Backwards compatibility - - if isinstance(data, six.text_type): - writer(body).write(data) - else: - body.write(data) - - body.write(b"\r\n") - - body.write(b("--%s--\r\n" % (boundary))) - - content_type = str("multipart/form-data; boundary=%s" % boundary) - - return body.getvalue(), content_type diff --git a/infrastructure/sandbox/Data/lambda/urllib3/packages/__init__.py b/infrastructure/sandbox/Data/lambda/urllib3/packages/__init__.py deleted file mode 100644 index e69de29bb..000000000 diff --git a/infrastructure/sandbox/Data/lambda/urllib3/packages/__pycache__/__init__.cpython-310.pyc b/infrastructure/sandbox/Data/lambda/urllib3/packages/__pycache__/__init__.cpython-310.pyc deleted file mode 100644 index d02336eea..000000000 Binary files a/infrastructure/sandbox/Data/lambda/urllib3/packages/__pycache__/__init__.cpython-310.pyc and /dev/null differ diff --git a/infrastructure/sandbox/Data/lambda/urllib3/packages/__pycache__/six.cpython-310.pyc b/infrastructure/sandbox/Data/lambda/urllib3/packages/__pycache__/six.cpython-310.pyc deleted file mode 100644 index 70cf2ca7c..000000000 Binary files a/infrastructure/sandbox/Data/lambda/urllib3/packages/__pycache__/six.cpython-310.pyc and /dev/null differ diff --git a/infrastructure/sandbox/Data/lambda/urllib3/packages/backports/__init__.py b/infrastructure/sandbox/Data/lambda/urllib3/packages/backports/__init__.py deleted file mode 100644 index e69de29bb..000000000 diff --git a/infrastructure/sandbox/Data/lambda/urllib3/packages/backports/__pycache__/__init__.cpython-310.pyc b/infrastructure/sandbox/Data/lambda/urllib3/packages/backports/__pycache__/__init__.cpython-310.pyc deleted file mode 100644 index 263e8050d..000000000 Binary files 
a/infrastructure/sandbox/Data/lambda/urllib3/packages/backports/__pycache__/__init__.cpython-310.pyc and /dev/null differ diff --git a/infrastructure/sandbox/Data/lambda/urllib3/packages/backports/__pycache__/makefile.cpython-310.pyc b/infrastructure/sandbox/Data/lambda/urllib3/packages/backports/__pycache__/makefile.cpython-310.pyc deleted file mode 100644 index 2331bbcf4..000000000 Binary files a/infrastructure/sandbox/Data/lambda/urllib3/packages/backports/__pycache__/makefile.cpython-310.pyc and /dev/null differ diff --git a/infrastructure/sandbox/Data/lambda/urllib3/packages/backports/makefile.py b/infrastructure/sandbox/Data/lambda/urllib3/packages/backports/makefile.py deleted file mode 100644 index b8fb2154b..000000000 --- a/infrastructure/sandbox/Data/lambda/urllib3/packages/backports/makefile.py +++ /dev/null @@ -1,51 +0,0 @@ -# -*- coding: utf-8 -*- -""" -backports.makefile -~~~~~~~~~~~~~~~~~~ - -Backports the Python 3 ``socket.makefile`` method for use with anything that -wants to create a "fake" socket object. -""" -import io -from socket import SocketIO - - -def backport_makefile( - self, mode="r", buffering=None, encoding=None, errors=None, newline=None -): - """ - Backport of ``socket.makefile`` from Python 3.5. 
- """ - if not set(mode) <= {"r", "w", "b"}: - raise ValueError("invalid mode %r (only r, w, b allowed)" % (mode,)) - writing = "w" in mode - reading = "r" in mode or not writing - assert reading or writing - binary = "b" in mode - rawmode = "" - if reading: - rawmode += "r" - if writing: - rawmode += "w" - raw = SocketIO(self, rawmode) - self._makefile_refs += 1 - if buffering is None: - buffering = -1 - if buffering < 0: - buffering = io.DEFAULT_BUFFER_SIZE - if buffering == 0: - if not binary: - raise ValueError("unbuffered streams must be binary") - return raw - if reading and writing: - buffer = io.BufferedRWPair(raw, raw, buffering) - elif reading: - buffer = io.BufferedReader(raw, buffering) - else: - assert writing - buffer = io.BufferedWriter(raw, buffering) - if binary: - return buffer - text = io.TextIOWrapper(buffer, encoding, errors, newline) - text.mode = mode - return text diff --git a/infrastructure/sandbox/Data/lambda/urllib3/packages/six.py b/infrastructure/sandbox/Data/lambda/urllib3/packages/six.py deleted file mode 100644 index f099a3dcd..000000000 --- a/infrastructure/sandbox/Data/lambda/urllib3/packages/six.py +++ /dev/null @@ -1,1076 +0,0 @@ -# Copyright (c) 2010-2020 Benjamin Peterson -# -# Permission is hereby granted, free of charge, to any person obtaining a copy -# of this software and associated documentation files (the "Software"), to deal -# in the Software without restriction, including without limitation the rights -# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell -# copies of the Software, and to permit persons to whom the Software is -# furnished to do so, subject to the following conditions: -# -# The above copyright notice and this permission notice shall be included in all -# copies or substantial portions of the Software. 
-# -# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR -# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, -# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE -# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER -# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, -# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE -# SOFTWARE. - -"""Utilities for writing code that runs on Python 2 and 3""" - -from __future__ import absolute_import - -import functools -import itertools -import operator -import sys -import types - -__author__ = "Benjamin Peterson " -__version__ = "1.16.0" - - -# Useful for very coarse version differentiation. -PY2 = sys.version_info[0] == 2 -PY3 = sys.version_info[0] == 3 -PY34 = sys.version_info[0:2] >= (3, 4) - -if PY3: - string_types = (str,) - integer_types = (int,) - class_types = (type,) - text_type = str - binary_type = bytes - - MAXSIZE = sys.maxsize -else: - string_types = (basestring,) - integer_types = (int, long) - class_types = (type, types.ClassType) - text_type = unicode - binary_type = str - - if sys.platform.startswith("java"): - # Jython always uses 32 bits. - MAXSIZE = int((1 << 31) - 1) - else: - # It's possible to have sizeof(long) != sizeof(Py_ssize_t). 
- class X(object): - def __len__(self): - return 1 << 31 - - try: - len(X()) - except OverflowError: - # 32-bit - MAXSIZE = int((1 << 31) - 1) - else: - # 64-bit - MAXSIZE = int((1 << 63) - 1) - del X - -if PY34: - from importlib.util import spec_from_loader -else: - spec_from_loader = None - - -def _add_doc(func, doc): - """Add documentation to a function.""" - func.__doc__ = doc - - -def _import_module(name): - """Import module, returning the module after the last dot.""" - __import__(name) - return sys.modules[name] - - -class _LazyDescr(object): - def __init__(self, name): - self.name = name - - def __get__(self, obj, tp): - result = self._resolve() - setattr(obj, self.name, result) # Invokes __set__. - try: - # This is a bit ugly, but it avoids running this again by - # removing this descriptor. - delattr(obj.__class__, self.name) - except AttributeError: - pass - return result - - -class MovedModule(_LazyDescr): - def __init__(self, name, old, new=None): - super(MovedModule, self).__init__(name) - if PY3: - if new is None: - new = name - self.mod = new - else: - self.mod = old - - def _resolve(self): - return _import_module(self.mod) - - def __getattr__(self, attr): - _module = self._resolve() - value = getattr(_module, attr) - setattr(self, attr, value) - return value - - -class _LazyModule(types.ModuleType): - def __init__(self, name): - super(_LazyModule, self).__init__(name) - self.__doc__ = self.__class__.__doc__ - - def __dir__(self): - attrs = ["__doc__", "__name__"] - attrs += [attr.name for attr in self._moved_attributes] - return attrs - - # Subclasses should override this - _moved_attributes = [] - - -class MovedAttribute(_LazyDescr): - def __init__(self, name, old_mod, new_mod, old_attr=None, new_attr=None): - super(MovedAttribute, self).__init__(name) - if PY3: - if new_mod is None: - new_mod = name - self.mod = new_mod - if new_attr is None: - if old_attr is None: - new_attr = name - else: - new_attr = old_attr - self.attr = new_attr - else: - 
self.mod = old_mod - if old_attr is None: - old_attr = name - self.attr = old_attr - - def _resolve(self): - module = _import_module(self.mod) - return getattr(module, self.attr) - - -class _SixMetaPathImporter(object): - - """ - A meta path importer to import six.moves and its submodules. - - This class implements a PEP302 finder and loader. It should be compatible - with Python 2.5 and all existing versions of Python3 - """ - - def __init__(self, six_module_name): - self.name = six_module_name - self.known_modules = {} - - def _add_module(self, mod, *fullnames): - for fullname in fullnames: - self.known_modules[self.name + "." + fullname] = mod - - def _get_module(self, fullname): - return self.known_modules[self.name + "." + fullname] - - def find_module(self, fullname, path=None): - if fullname in self.known_modules: - return self - return None - - def find_spec(self, fullname, path, target=None): - if fullname in self.known_modules: - return spec_from_loader(fullname, self) - return None - - def __get_module(self, fullname): - try: - return self.known_modules[fullname] - except KeyError: - raise ImportError("This loader does not know module " + fullname) - - def load_module(self, fullname): - try: - # in case of a reload - return sys.modules[fullname] - except KeyError: - pass - mod = self.__get_module(fullname) - if isinstance(mod, MovedModule): - mod = mod._resolve() - else: - mod.__loader__ = self - sys.modules[fullname] = mod - return mod - - def is_package(self, fullname): - """ - Return true, if the named module is a package. 
- - We need this method to get correct spec objects with - Python 3.4 (see PEP451) - """ - return hasattr(self.__get_module(fullname), "__path__") - - def get_code(self, fullname): - """Return None - - Required, if is_package is implemented""" - self.__get_module(fullname) # eventually raises ImportError - return None - - get_source = get_code # same as get_code - - def create_module(self, spec): - return self.load_module(spec.name) - - def exec_module(self, module): - pass - - -_importer = _SixMetaPathImporter(__name__) - - -class _MovedItems(_LazyModule): - - """Lazy loading of moved objects""" - - __path__ = [] # mark as package - - -_moved_attributes = [ - MovedAttribute("cStringIO", "cStringIO", "io", "StringIO"), - MovedAttribute("filter", "itertools", "builtins", "ifilter", "filter"), - MovedAttribute( - "filterfalse", "itertools", "itertools", "ifilterfalse", "filterfalse" - ), - MovedAttribute("input", "__builtin__", "builtins", "raw_input", "input"), - MovedAttribute("intern", "__builtin__", "sys"), - MovedAttribute("map", "itertools", "builtins", "imap", "map"), - MovedAttribute("getcwd", "os", "os", "getcwdu", "getcwd"), - MovedAttribute("getcwdb", "os", "os", "getcwd", "getcwdb"), - MovedAttribute("getoutput", "commands", "subprocess"), - MovedAttribute("range", "__builtin__", "builtins", "xrange", "range"), - MovedAttribute( - "reload_module", "__builtin__", "importlib" if PY34 else "imp", "reload" - ), - MovedAttribute("reduce", "__builtin__", "functools"), - MovedAttribute("shlex_quote", "pipes", "shlex", "quote"), - MovedAttribute("StringIO", "StringIO", "io"), - MovedAttribute("UserDict", "UserDict", "collections"), - MovedAttribute("UserList", "UserList", "collections"), - MovedAttribute("UserString", "UserString", "collections"), - MovedAttribute("xrange", "__builtin__", "builtins", "xrange", "range"), - MovedAttribute("zip", "itertools", "builtins", "izip", "zip"), - MovedAttribute( - "zip_longest", "itertools", "itertools", "izip_longest", 
"zip_longest" - ), - MovedModule("builtins", "__builtin__"), - MovedModule("configparser", "ConfigParser"), - MovedModule( - "collections_abc", - "collections", - "collections.abc" if sys.version_info >= (3, 3) else "collections", - ), - MovedModule("copyreg", "copy_reg"), - MovedModule("dbm_gnu", "gdbm", "dbm.gnu"), - MovedModule("dbm_ndbm", "dbm", "dbm.ndbm"), - MovedModule( - "_dummy_thread", - "dummy_thread", - "_dummy_thread" if sys.version_info < (3, 9) else "_thread", - ), - MovedModule("http_cookiejar", "cookielib", "http.cookiejar"), - MovedModule("http_cookies", "Cookie", "http.cookies"), - MovedModule("html_entities", "htmlentitydefs", "html.entities"), - MovedModule("html_parser", "HTMLParser", "html.parser"), - MovedModule("http_client", "httplib", "http.client"), - MovedModule("email_mime_base", "email.MIMEBase", "email.mime.base"), - MovedModule("email_mime_image", "email.MIMEImage", "email.mime.image"), - MovedModule("email_mime_multipart", "email.MIMEMultipart", "email.mime.multipart"), - MovedModule( - "email_mime_nonmultipart", "email.MIMENonMultipart", "email.mime.nonmultipart" - ), - MovedModule("email_mime_text", "email.MIMEText", "email.mime.text"), - MovedModule("BaseHTTPServer", "BaseHTTPServer", "http.server"), - MovedModule("CGIHTTPServer", "CGIHTTPServer", "http.server"), - MovedModule("SimpleHTTPServer", "SimpleHTTPServer", "http.server"), - MovedModule("cPickle", "cPickle", "pickle"), - MovedModule("queue", "Queue"), - MovedModule("reprlib", "repr"), - MovedModule("socketserver", "SocketServer"), - MovedModule("_thread", "thread", "_thread"), - MovedModule("tkinter", "Tkinter"), - MovedModule("tkinter_dialog", "Dialog", "tkinter.dialog"), - MovedModule("tkinter_filedialog", "FileDialog", "tkinter.filedialog"), - MovedModule("tkinter_scrolledtext", "ScrolledText", "tkinter.scrolledtext"), - MovedModule("tkinter_simpledialog", "SimpleDialog", "tkinter.simpledialog"), - MovedModule("tkinter_tix", "Tix", "tkinter.tix"), - 
MovedModule("tkinter_ttk", "ttk", "tkinter.ttk"), - MovedModule("tkinter_constants", "Tkconstants", "tkinter.constants"), - MovedModule("tkinter_dnd", "Tkdnd", "tkinter.dnd"), - MovedModule("tkinter_colorchooser", "tkColorChooser", "tkinter.colorchooser"), - MovedModule("tkinter_commondialog", "tkCommonDialog", "tkinter.commondialog"), - MovedModule("tkinter_tkfiledialog", "tkFileDialog", "tkinter.filedialog"), - MovedModule("tkinter_font", "tkFont", "tkinter.font"), - MovedModule("tkinter_messagebox", "tkMessageBox", "tkinter.messagebox"), - MovedModule("tkinter_tksimpledialog", "tkSimpleDialog", "tkinter.simpledialog"), - MovedModule("urllib_parse", __name__ + ".moves.urllib_parse", "urllib.parse"), - MovedModule("urllib_error", __name__ + ".moves.urllib_error", "urllib.error"), - MovedModule("urllib", __name__ + ".moves.urllib", __name__ + ".moves.urllib"), - MovedModule("urllib_robotparser", "robotparser", "urllib.robotparser"), - MovedModule("xmlrpc_client", "xmlrpclib", "xmlrpc.client"), - MovedModule("xmlrpc_server", "SimpleXMLRPCServer", "xmlrpc.server"), -] -# Add windows specific modules. -if sys.platform == "win32": - _moved_attributes += [ - MovedModule("winreg", "_winreg"), - ] - -for attr in _moved_attributes: - setattr(_MovedItems, attr.name, attr) - if isinstance(attr, MovedModule): - _importer._add_module(attr, "moves." 
+ attr.name) -del attr - -_MovedItems._moved_attributes = _moved_attributes - -moves = _MovedItems(__name__ + ".moves") -_importer._add_module(moves, "moves") - - -class Module_six_moves_urllib_parse(_LazyModule): - - """Lazy loading of moved objects in six.moves.urllib_parse""" - - -_urllib_parse_moved_attributes = [ - MovedAttribute("ParseResult", "urlparse", "urllib.parse"), - MovedAttribute("SplitResult", "urlparse", "urllib.parse"), - MovedAttribute("parse_qs", "urlparse", "urllib.parse"), - MovedAttribute("parse_qsl", "urlparse", "urllib.parse"), - MovedAttribute("urldefrag", "urlparse", "urllib.parse"), - MovedAttribute("urljoin", "urlparse", "urllib.parse"), - MovedAttribute("urlparse", "urlparse", "urllib.parse"), - MovedAttribute("urlsplit", "urlparse", "urllib.parse"), - MovedAttribute("urlunparse", "urlparse", "urllib.parse"), - MovedAttribute("urlunsplit", "urlparse", "urllib.parse"), - MovedAttribute("quote", "urllib", "urllib.parse"), - MovedAttribute("quote_plus", "urllib", "urllib.parse"), - MovedAttribute("unquote", "urllib", "urllib.parse"), - MovedAttribute("unquote_plus", "urllib", "urllib.parse"), - MovedAttribute( - "unquote_to_bytes", "urllib", "urllib.parse", "unquote", "unquote_to_bytes" - ), - MovedAttribute("urlencode", "urllib", "urllib.parse"), - MovedAttribute("splitquery", "urllib", "urllib.parse"), - MovedAttribute("splittag", "urllib", "urllib.parse"), - MovedAttribute("splituser", "urllib", "urllib.parse"), - MovedAttribute("splitvalue", "urllib", "urllib.parse"), - MovedAttribute("uses_fragment", "urlparse", "urllib.parse"), - MovedAttribute("uses_netloc", "urlparse", "urllib.parse"), - MovedAttribute("uses_params", "urlparse", "urllib.parse"), - MovedAttribute("uses_query", "urlparse", "urllib.parse"), - MovedAttribute("uses_relative", "urlparse", "urllib.parse"), -] -for attr in _urllib_parse_moved_attributes: - setattr(Module_six_moves_urllib_parse, attr.name, attr) -del attr - -Module_six_moves_urllib_parse._moved_attributes 
= _urllib_parse_moved_attributes - -_importer._add_module( - Module_six_moves_urllib_parse(__name__ + ".moves.urllib_parse"), - "moves.urllib_parse", - "moves.urllib.parse", -) - - -class Module_six_moves_urllib_error(_LazyModule): - - """Lazy loading of moved objects in six.moves.urllib_error""" - - -_urllib_error_moved_attributes = [ - MovedAttribute("URLError", "urllib2", "urllib.error"), - MovedAttribute("HTTPError", "urllib2", "urllib.error"), - MovedAttribute("ContentTooShortError", "urllib", "urllib.error"), -] -for attr in _urllib_error_moved_attributes: - setattr(Module_six_moves_urllib_error, attr.name, attr) -del attr - -Module_six_moves_urllib_error._moved_attributes = _urllib_error_moved_attributes - -_importer._add_module( - Module_six_moves_urllib_error(__name__ + ".moves.urllib.error"), - "moves.urllib_error", - "moves.urllib.error", -) - - -class Module_six_moves_urllib_request(_LazyModule): - - """Lazy loading of moved objects in six.moves.urllib_request""" - - -_urllib_request_moved_attributes = [ - MovedAttribute("urlopen", "urllib2", "urllib.request"), - MovedAttribute("install_opener", "urllib2", "urllib.request"), - MovedAttribute("build_opener", "urllib2", "urllib.request"), - MovedAttribute("pathname2url", "urllib", "urllib.request"), - MovedAttribute("url2pathname", "urllib", "urllib.request"), - MovedAttribute("getproxies", "urllib", "urllib.request"), - MovedAttribute("Request", "urllib2", "urllib.request"), - MovedAttribute("OpenerDirector", "urllib2", "urllib.request"), - MovedAttribute("HTTPDefaultErrorHandler", "urllib2", "urllib.request"), - MovedAttribute("HTTPRedirectHandler", "urllib2", "urllib.request"), - MovedAttribute("HTTPCookieProcessor", "urllib2", "urllib.request"), - MovedAttribute("ProxyHandler", "urllib2", "urllib.request"), - MovedAttribute("BaseHandler", "urllib2", "urllib.request"), - MovedAttribute("HTTPPasswordMgr", "urllib2", "urllib.request"), - MovedAttribute("HTTPPasswordMgrWithDefaultRealm", "urllib2", 
"urllib.request"), - MovedAttribute("AbstractBasicAuthHandler", "urllib2", "urllib.request"), - MovedAttribute("HTTPBasicAuthHandler", "urllib2", "urllib.request"), - MovedAttribute("ProxyBasicAuthHandler", "urllib2", "urllib.request"), - MovedAttribute("AbstractDigestAuthHandler", "urllib2", "urllib.request"), - MovedAttribute("HTTPDigestAuthHandler", "urllib2", "urllib.request"), - MovedAttribute("ProxyDigestAuthHandler", "urllib2", "urllib.request"), - MovedAttribute("HTTPHandler", "urllib2", "urllib.request"), - MovedAttribute("HTTPSHandler", "urllib2", "urllib.request"), - MovedAttribute("FileHandler", "urllib2", "urllib.request"), - MovedAttribute("FTPHandler", "urllib2", "urllib.request"), - MovedAttribute("CacheFTPHandler", "urllib2", "urllib.request"), - MovedAttribute("UnknownHandler", "urllib2", "urllib.request"), - MovedAttribute("HTTPErrorProcessor", "urllib2", "urllib.request"), - MovedAttribute("urlretrieve", "urllib", "urllib.request"), - MovedAttribute("urlcleanup", "urllib", "urllib.request"), - MovedAttribute("URLopener", "urllib", "urllib.request"), - MovedAttribute("FancyURLopener", "urllib", "urllib.request"), - MovedAttribute("proxy_bypass", "urllib", "urllib.request"), - MovedAttribute("parse_http_list", "urllib2", "urllib.request"), - MovedAttribute("parse_keqv_list", "urllib2", "urllib.request"), -] -for attr in _urllib_request_moved_attributes: - setattr(Module_six_moves_urllib_request, attr.name, attr) -del attr - -Module_six_moves_urllib_request._moved_attributes = _urllib_request_moved_attributes - -_importer._add_module( - Module_six_moves_urllib_request(__name__ + ".moves.urllib.request"), - "moves.urllib_request", - "moves.urllib.request", -) - - -class Module_six_moves_urllib_response(_LazyModule): - - """Lazy loading of moved objects in six.moves.urllib_response""" - - -_urllib_response_moved_attributes = [ - MovedAttribute("addbase", "urllib", "urllib.response"), - MovedAttribute("addclosehook", "urllib", "urllib.response"), - 
MovedAttribute("addinfo", "urllib", "urllib.response"), - MovedAttribute("addinfourl", "urllib", "urllib.response"), -] -for attr in _urllib_response_moved_attributes: - setattr(Module_six_moves_urllib_response, attr.name, attr) -del attr - -Module_six_moves_urllib_response._moved_attributes = _urllib_response_moved_attributes - -_importer._add_module( - Module_six_moves_urllib_response(__name__ + ".moves.urllib.response"), - "moves.urllib_response", - "moves.urllib.response", -) - - -class Module_six_moves_urllib_robotparser(_LazyModule): - - """Lazy loading of moved objects in six.moves.urllib_robotparser""" - - -_urllib_robotparser_moved_attributes = [ - MovedAttribute("RobotFileParser", "robotparser", "urllib.robotparser"), -] -for attr in _urllib_robotparser_moved_attributes: - setattr(Module_six_moves_urllib_robotparser, attr.name, attr) -del attr - -Module_six_moves_urllib_robotparser._moved_attributes = ( - _urllib_robotparser_moved_attributes -) - -_importer._add_module( - Module_six_moves_urllib_robotparser(__name__ + ".moves.urllib.robotparser"), - "moves.urllib_robotparser", - "moves.urllib.robotparser", -) - - -class Module_six_moves_urllib(types.ModuleType): - - """Create a six.moves.urllib namespace that resembles the Python 3 namespace""" - - __path__ = [] # mark as package - parse = _importer._get_module("moves.urllib_parse") - error = _importer._get_module("moves.urllib_error") - request = _importer._get_module("moves.urllib_request") - response = _importer._get_module("moves.urllib_response") - robotparser = _importer._get_module("moves.urllib_robotparser") - - def __dir__(self): - return ["parse", "error", "request", "response", "robotparser"] - - -_importer._add_module( - Module_six_moves_urllib(__name__ + ".moves.urllib"), "moves.urllib" -) - - -def add_move(move): - """Add an item to six.moves.""" - setattr(_MovedItems, move.name, move) - - -def remove_move(name): - """Remove item from six.moves.""" - try: - delattr(_MovedItems, name) - 
except AttributeError: - try: - del moves.__dict__[name] - except KeyError: - raise AttributeError("no such move, %r" % (name,)) - - -if PY3: - _meth_func = "__func__" - _meth_self = "__self__" - - _func_closure = "__closure__" - _func_code = "__code__" - _func_defaults = "__defaults__" - _func_globals = "__globals__" -else: - _meth_func = "im_func" - _meth_self = "im_self" - - _func_closure = "func_closure" - _func_code = "func_code" - _func_defaults = "func_defaults" - _func_globals = "func_globals" - - -try: - advance_iterator = next -except NameError: - - def advance_iterator(it): - return it.next() - - -next = advance_iterator - - -try: - callable = callable -except NameError: - - def callable(obj): - return any("__call__" in klass.__dict__ for klass in type(obj).__mro__) - - -if PY3: - - def get_unbound_function(unbound): - return unbound - - create_bound_method = types.MethodType - - def create_unbound_method(func, cls): - return func - - Iterator = object -else: - - def get_unbound_function(unbound): - return unbound.im_func - - def create_bound_method(func, obj): - return types.MethodType(func, obj, obj.__class__) - - def create_unbound_method(func, cls): - return types.MethodType(func, None, cls) - - class Iterator(object): - def next(self): - return type(self).__next__(self) - - callable = callable -_add_doc( - get_unbound_function, """Get the function out of a possibly unbound function""" -) - - -get_method_function = operator.attrgetter(_meth_func) -get_method_self = operator.attrgetter(_meth_self) -get_function_closure = operator.attrgetter(_func_closure) -get_function_code = operator.attrgetter(_func_code) -get_function_defaults = operator.attrgetter(_func_defaults) -get_function_globals = operator.attrgetter(_func_globals) - - -if PY3: - - def iterkeys(d, **kw): - return iter(d.keys(**kw)) - - def itervalues(d, **kw): - return iter(d.values(**kw)) - - def iteritems(d, **kw): - return iter(d.items(**kw)) - - def iterlists(d, **kw): - return 
iter(d.lists(**kw)) - - viewkeys = operator.methodcaller("keys") - - viewvalues = operator.methodcaller("values") - - viewitems = operator.methodcaller("items") -else: - - def iterkeys(d, **kw): - return d.iterkeys(**kw) - - def itervalues(d, **kw): - return d.itervalues(**kw) - - def iteritems(d, **kw): - return d.iteritems(**kw) - - def iterlists(d, **kw): - return d.iterlists(**kw) - - viewkeys = operator.methodcaller("viewkeys") - - viewvalues = operator.methodcaller("viewvalues") - - viewitems = operator.methodcaller("viewitems") - -_add_doc(iterkeys, "Return an iterator over the keys of a dictionary.") -_add_doc(itervalues, "Return an iterator over the values of a dictionary.") -_add_doc(iteritems, "Return an iterator over the (key, value) pairs of a dictionary.") -_add_doc( - iterlists, "Return an iterator over the (key, [values]) pairs of a dictionary." -) - - -if PY3: - - def b(s): - return s.encode("latin-1") - - def u(s): - return s - - unichr = chr - import struct - - int2byte = struct.Struct(">B").pack - del struct - byte2int = operator.itemgetter(0) - indexbytes = operator.getitem - iterbytes = iter - import io - - StringIO = io.StringIO - BytesIO = io.BytesIO - del io - _assertCountEqual = "assertCountEqual" - if sys.version_info[1] <= 1: - _assertRaisesRegex = "assertRaisesRegexp" - _assertRegex = "assertRegexpMatches" - _assertNotRegex = "assertNotRegexpMatches" - else: - _assertRaisesRegex = "assertRaisesRegex" - _assertRegex = "assertRegex" - _assertNotRegex = "assertNotRegex" -else: - - def b(s): - return s - - # Workaround for standalone backslash - - def u(s): - return unicode(s.replace(r"\\", r"\\\\"), "unicode_escape") - - unichr = unichr - int2byte = chr - - def byte2int(bs): - return ord(bs[0]) - - def indexbytes(buf, i): - return ord(buf[i]) - - iterbytes = functools.partial(itertools.imap, ord) - import StringIO - - StringIO = BytesIO = StringIO.StringIO - _assertCountEqual = "assertItemsEqual" - _assertRaisesRegex = "assertRaisesRegexp" 
- _assertRegex = "assertRegexpMatches" - _assertNotRegex = "assertNotRegexpMatches" -_add_doc(b, """Byte literal""") -_add_doc(u, """Text literal""") - - -def assertCountEqual(self, *args, **kwargs): - return getattr(self, _assertCountEqual)(*args, **kwargs) - - -def assertRaisesRegex(self, *args, **kwargs): - return getattr(self, _assertRaisesRegex)(*args, **kwargs) - - -def assertRegex(self, *args, **kwargs): - return getattr(self, _assertRegex)(*args, **kwargs) - - -def assertNotRegex(self, *args, **kwargs): - return getattr(self, _assertNotRegex)(*args, **kwargs) - - -if PY3: - exec_ = getattr(moves.builtins, "exec") - - def reraise(tp, value, tb=None): - try: - if value is None: - value = tp() - if value.__traceback__ is not tb: - raise value.with_traceback(tb) - raise value - finally: - value = None - tb = None - -else: - - def exec_(_code_, _globs_=None, _locs_=None): - """Execute code in a namespace.""" - if _globs_ is None: - frame = sys._getframe(1) - _globs_ = frame.f_globals - if _locs_ is None: - _locs_ = frame.f_locals - del frame - elif _locs_ is None: - _locs_ = _globs_ - exec ("""exec _code_ in _globs_, _locs_""") - - exec_( - """def reraise(tp, value, tb=None): - try: - raise tp, value, tb - finally: - tb = None -""" - ) - - -if sys.version_info[:2] > (3,): - exec_( - """def raise_from(value, from_value): - try: - raise value from from_value - finally: - value = None -""" - ) -else: - - def raise_from(value, from_value): - raise value - - -print_ = getattr(moves.builtins, "print", None) -if print_ is None: - - def print_(*args, **kwargs): - """The new-style print function for Python 2.4 and 2.5.""" - fp = kwargs.pop("file", sys.stdout) - if fp is None: - return - - def write(data): - if not isinstance(data, basestring): - data = str(data) - # If the file has an encoding, encode unicode with it. 
- if ( - isinstance(fp, file) - and isinstance(data, unicode) - and fp.encoding is not None - ): - errors = getattr(fp, "errors", None) - if errors is None: - errors = "strict" - data = data.encode(fp.encoding, errors) - fp.write(data) - - want_unicode = False - sep = kwargs.pop("sep", None) - if sep is not None: - if isinstance(sep, unicode): - want_unicode = True - elif not isinstance(sep, str): - raise TypeError("sep must be None or a string") - end = kwargs.pop("end", None) - if end is not None: - if isinstance(end, unicode): - want_unicode = True - elif not isinstance(end, str): - raise TypeError("end must be None or a string") - if kwargs: - raise TypeError("invalid keyword arguments to print()") - if not want_unicode: - for arg in args: - if isinstance(arg, unicode): - want_unicode = True - break - if want_unicode: - newline = unicode("\n") - space = unicode(" ") - else: - newline = "\n" - space = " " - if sep is None: - sep = space - if end is None: - end = newline - for i, arg in enumerate(args): - if i: - write(sep) - write(arg) - write(end) - - -if sys.version_info[:2] < (3, 3): - _print = print_ - - def print_(*args, **kwargs): - fp = kwargs.get("file", sys.stdout) - flush = kwargs.pop("flush", False) - _print(*args, **kwargs) - if flush and fp is not None: - fp.flush() - - -_add_doc(reraise, """Reraise an exception.""") - -if sys.version_info[0:2] < (3, 4): - # This does exactly the same what the :func:`py3:functools.update_wrapper` - # function does on Python versions after 3.2. It sets the ``__wrapped__`` - # attribute on ``wrapper`` object and it doesn't raise an error if any of - # the attributes mentioned in ``assigned`` and ``updated`` are missing on - # ``wrapped`` object. 
- def _update_wrapper( - wrapper, - wrapped, - assigned=functools.WRAPPER_ASSIGNMENTS, - updated=functools.WRAPPER_UPDATES, - ): - for attr in assigned: - try: - value = getattr(wrapped, attr) - except AttributeError: - continue - else: - setattr(wrapper, attr, value) - for attr in updated: - getattr(wrapper, attr).update(getattr(wrapped, attr, {})) - wrapper.__wrapped__ = wrapped - return wrapper - - _update_wrapper.__doc__ = functools.update_wrapper.__doc__ - - def wraps( - wrapped, - assigned=functools.WRAPPER_ASSIGNMENTS, - updated=functools.WRAPPER_UPDATES, - ): - return functools.partial( - _update_wrapper, wrapped=wrapped, assigned=assigned, updated=updated - ) - - wraps.__doc__ = functools.wraps.__doc__ - -else: - wraps = functools.wraps - - -def with_metaclass(meta, *bases): - """Create a base class with a metaclass.""" - # This requires a bit of explanation: the basic idea is to make a dummy - # metaclass for one level of class instantiation that replaces itself with - # the actual metaclass. - class metaclass(type): - def __new__(cls, name, this_bases, d): - if sys.version_info[:2] >= (3, 7): - # This version introduced PEP 560 that requires a bit - # of extra care (we mimic what is done by __build_class__). 
- resolved_bases = types.resolve_bases(bases) - if resolved_bases is not bases: - d["__orig_bases__"] = bases - else: - resolved_bases = bases - return meta(name, resolved_bases, d) - - @classmethod - def __prepare__(cls, name, this_bases): - return meta.__prepare__(name, bases) - - return type.__new__(metaclass, "temporary_class", (), {}) - - -def add_metaclass(metaclass): - """Class decorator for creating a class with a metaclass.""" - - def wrapper(cls): - orig_vars = cls.__dict__.copy() - slots = orig_vars.get("__slots__") - if slots is not None: - if isinstance(slots, str): - slots = [slots] - for slots_var in slots: - orig_vars.pop(slots_var) - orig_vars.pop("__dict__", None) - orig_vars.pop("__weakref__", None) - if hasattr(cls, "__qualname__"): - orig_vars["__qualname__"] = cls.__qualname__ - return metaclass(cls.__name__, cls.__bases__, orig_vars) - - return wrapper - - -def ensure_binary(s, encoding="utf-8", errors="strict"): - """Coerce **s** to six.binary_type. - - For Python 2: - - `unicode` -> encoded to `str` - - `str` -> `str` - - For Python 3: - - `str` -> encoded to `bytes` - - `bytes` -> `bytes` - """ - if isinstance(s, binary_type): - return s - if isinstance(s, text_type): - return s.encode(encoding, errors) - raise TypeError("not expecting type '%s'" % type(s)) - - -def ensure_str(s, encoding="utf-8", errors="strict"): - """Coerce *s* to `str`. - - For Python 2: - - `unicode` -> encoded to `str` - - `str` -> `str` - - For Python 3: - - `str` -> `str` - - `bytes` -> decoded to `str` - """ - # Optimization: Fast return for the common case. - if type(s) is str: - return s - if PY2 and isinstance(s, text_type): - return s.encode(encoding, errors) - elif PY3 and isinstance(s, binary_type): - return s.decode(encoding, errors) - elif not isinstance(s, (text_type, binary_type)): - raise TypeError("not expecting type '%s'" % type(s)) - return s - - -def ensure_text(s, encoding="utf-8", errors="strict"): - """Coerce *s* to six.text_type. 
- - For Python 2: - - `unicode` -> `unicode` - - `str` -> `unicode` - - For Python 3: - - `str` -> `str` - - `bytes` -> decoded to `str` - """ - if isinstance(s, binary_type): - return s.decode(encoding, errors) - elif isinstance(s, text_type): - return s - else: - raise TypeError("not expecting type '%s'" % type(s)) - - -def python_2_unicode_compatible(klass): - """ - A class decorator that defines __unicode__ and __str__ methods under Python 2. - Under Python 3 it does nothing. - - To support Python 2 and 3 with a single code base, define a __str__ method - returning text and apply this decorator to the class. - """ - if PY2: - if "__str__" not in klass.__dict__: - raise ValueError( - "@python_2_unicode_compatible cannot be applied " - "to %s because it doesn't define __str__()." % klass.__name__ - ) - klass.__unicode__ = klass.__str__ - klass.__str__ = lambda self: self.__unicode__().encode("utf-8") - return klass - - -# Complete the moves implementation. -# This code is at the end of this module to speed up module loading. -# Turn this module into a package. -__path__ = [] # required for PEP 302 and PEP 451 -__package__ = __name__ # see PEP 366 @ReservedAssignment -if globals().get("__spec__") is not None: - __spec__.submodule_search_locations = [] # PEP 451 @UndefinedVariable -# Remove other six meta path importers, since they cause problems. This can -# happen if six is removed from sys.modules and then reloaded. (Setuptools does -# this for some reason.) -if sys.meta_path: - for i, importer in enumerate(sys.meta_path): - # Here's some real nastiness: Another "instance" of the six module might - # be floating around. Therefore, we can't use isinstance() to check for - # the six meta path importer, since the other six instance will have - # inserted an importer with different class. 
- if ( - type(importer).__name__ == "_SixMetaPathImporter" - and importer.name == __name__ - ): - del sys.meta_path[i] - break - del i, importer -# Finally, add the importer to the meta path import hook. -sys.meta_path.append(_importer) diff --git a/infrastructure/sandbox/Data/lambda/urllib3/poolmanager.py b/infrastructure/sandbox/Data/lambda/urllib3/poolmanager.py deleted file mode 100644 index ca4ec3411..000000000 --- a/infrastructure/sandbox/Data/lambda/urllib3/poolmanager.py +++ /dev/null @@ -1,537 +0,0 @@ -from __future__ import absolute_import - -import collections -import functools -import logging - -from ._collections import RecentlyUsedContainer -from .connectionpool import HTTPConnectionPool, HTTPSConnectionPool, port_by_scheme -from .exceptions import ( - LocationValueError, - MaxRetryError, - ProxySchemeUnknown, - ProxySchemeUnsupported, - URLSchemeUnknown, -) -from .packages import six -from .packages.six.moves.urllib.parse import urljoin -from .request import RequestMethods -from .util.proxy import connection_requires_http_tunnel -from .util.retry import Retry -from .util.url import parse_url - -__all__ = ["PoolManager", "ProxyManager", "proxy_from_url"] - - -log = logging.getLogger(__name__) - -SSL_KEYWORDS = ( - "key_file", - "cert_file", - "cert_reqs", - "ca_certs", - "ssl_version", - "ca_cert_dir", - "ssl_context", - "key_password", - "server_hostname", -) - -# All known keyword arguments that could be provided to the pool manager, its -# pools, or the underlying connections. This is used to construct a pool key. 
-_key_fields = ( - "key_scheme", # str - "key_host", # str - "key_port", # int - "key_timeout", # int or float or Timeout - "key_retries", # int or Retry - "key_strict", # bool - "key_block", # bool - "key_source_address", # str - "key_key_file", # str - "key_key_password", # str - "key_cert_file", # str - "key_cert_reqs", # str - "key_ca_certs", # str - "key_ssl_version", # str - "key_ca_cert_dir", # str - "key_ssl_context", # instance of ssl.SSLContext or urllib3.util.ssl_.SSLContext - "key_maxsize", # int - "key_headers", # dict - "key__proxy", # parsed proxy url - "key__proxy_headers", # dict - "key__proxy_config", # class - "key_socket_options", # list of (level (int), optname (int), value (int or str)) tuples - "key__socks_options", # dict - "key_assert_hostname", # bool or string - "key_assert_fingerprint", # str - "key_server_hostname", # str -) - -#: The namedtuple class used to construct keys for the connection pool. -#: All custom key schemes should include the fields in this key at a minimum. -PoolKey = collections.namedtuple("PoolKey", _key_fields) - -_proxy_config_fields = ("ssl_context", "use_forwarding_for_https") -ProxyConfig = collections.namedtuple("ProxyConfig", _proxy_config_fields) - - -def _default_key_normalizer(key_class, request_context): - """ - Create a pool key out of a request context dictionary. - - According to RFC 3986, both the scheme and host are case-insensitive. - Therefore, this function normalizes both before constructing the pool - key for an HTTPS request. If you wish to change this behaviour, provide - alternate callables to ``key_fn_by_scheme``. - - :param key_class: - The class to use when constructing the key. This should be a namedtuple - with the ``scheme`` and ``host`` keys at a minimum. - :type key_class: namedtuple - :param request_context: - A dictionary-like object that contain the context for a request. - :type request_context: dict - - :return: A namedtuple that can be used as a connection pool key. 
- :rtype: PoolKey - """ - # Since we mutate the dictionary, make a copy first - context = request_context.copy() - context["scheme"] = context["scheme"].lower() - context["host"] = context["host"].lower() - - # These are both dictionaries and need to be transformed into frozensets - for key in ("headers", "_proxy_headers", "_socks_options"): - if key in context and context[key] is not None: - context[key] = frozenset(context[key].items()) - - # The socket_options key may be a list and needs to be transformed into a - # tuple. - socket_opts = context.get("socket_options") - if socket_opts is not None: - context["socket_options"] = tuple(socket_opts) - - # Map the kwargs to the names in the namedtuple - this is necessary since - # namedtuples can't have fields starting with '_'. - for key in list(context.keys()): - context["key_" + key] = context.pop(key) - - # Default to ``None`` for keys missing from the context - for field in key_class._fields: - if field not in context: - context[field] = None - - return key_class(**context) - - -#: A dictionary that maps a scheme to a callable that creates a pool key. -#: This can be used to alter the way pool keys are constructed, if desired. -#: Each PoolManager makes a copy of this dictionary so they can be configured -#: globally here, or individually on the instance. -key_fn_by_scheme = { - "http": functools.partial(_default_key_normalizer, PoolKey), - "https": functools.partial(_default_key_normalizer, PoolKey), -} - -pool_classes_by_scheme = {"http": HTTPConnectionPool, "https": HTTPSConnectionPool} - - -class PoolManager(RequestMethods): - """ - Allows for arbitrary requests while transparently keeping track of - necessary connection pools for you. - - :param num_pools: - Number of connection pools to cache before discarding the least - recently used pool. - - :param headers: - Headers to include with all requests, unless other headers are given - explicitly. 
- - :param \\**connection_pool_kw: - Additional parameters are used to create fresh - :class:`urllib3.connectionpool.ConnectionPool` instances. - - Example:: - - >>> manager = PoolManager(num_pools=2) - >>> r = manager.request('GET', 'http://google.com/') - >>> r = manager.request('GET', 'http://google.com/mail') - >>> r = manager.request('GET', 'http://yahoo.com/') - >>> len(manager.pools) - 2 - - """ - - proxy = None - proxy_config = None - - def __init__(self, num_pools=10, headers=None, **connection_pool_kw): - RequestMethods.__init__(self, headers) - self.connection_pool_kw = connection_pool_kw - self.pools = RecentlyUsedContainer(num_pools, dispose_func=lambda p: p.close()) - - # Locally set the pool classes and keys so other PoolManagers can - # override them. - self.pool_classes_by_scheme = pool_classes_by_scheme - self.key_fn_by_scheme = key_fn_by_scheme.copy() - - def __enter__(self): - return self - - def __exit__(self, exc_type, exc_val, exc_tb): - self.clear() - # Return False to re-raise any potential exceptions - return False - - def _new_pool(self, scheme, host, port, request_context=None): - """ - Create a new :class:`urllib3.connectionpool.ConnectionPool` based on host, port, scheme, and - any additional pool keyword arguments. - - If ``request_context`` is provided, it is provided as keyword arguments - to the pool class used. This method is used to actually create the - connection pools handed out by :meth:`connection_from_url` and - companion methods. It is intended to be overridden for customization. - """ - pool_cls = self.pool_classes_by_scheme[scheme] - if request_context is None: - request_context = self.connection_pool_kw.copy() - - # Although the context has everything necessary to create the pool, - # this function has historically only used the scheme, host, and port - # in the positional args. When an API change is acceptable these can - # be removed. 
- for key in ("scheme", "host", "port"): - request_context.pop(key, None) - - if scheme == "http": - for kw in SSL_KEYWORDS: - request_context.pop(kw, None) - - return pool_cls(host, port, **request_context) - - def clear(self): - """ - Empty our store of pools and direct them all to close. - - This will not affect in-flight connections, but they will not be - re-used after completion. - """ - self.pools.clear() - - def connection_from_host(self, host, port=None, scheme="http", pool_kwargs=None): - """ - Get a :class:`urllib3.connectionpool.ConnectionPool` based on the host, port, and scheme. - - If ``port`` isn't given, it will be derived from the ``scheme`` using - ``urllib3.connectionpool.port_by_scheme``. If ``pool_kwargs`` is - provided, it is merged with the instance's ``connection_pool_kw`` - variable and used to create the new connection pool, if one is - needed. - """ - - if not host: - raise LocationValueError("No host specified.") - - request_context = self._merge_pool_kwargs(pool_kwargs) - request_context["scheme"] = scheme or "http" - if not port: - port = port_by_scheme.get(request_context["scheme"].lower(), 80) - request_context["port"] = port - request_context["host"] = host - - return self.connection_from_context(request_context) - - def connection_from_context(self, request_context): - """ - Get a :class:`urllib3.connectionpool.ConnectionPool` based on the request context. - - ``request_context`` must at least contain the ``scheme`` key and its - value must be a key in ``key_fn_by_scheme`` instance variable. 
- """ - scheme = request_context["scheme"].lower() - pool_key_constructor = self.key_fn_by_scheme.get(scheme) - if not pool_key_constructor: - raise URLSchemeUnknown(scheme) - pool_key = pool_key_constructor(request_context) - - return self.connection_from_pool_key(pool_key, request_context=request_context) - - def connection_from_pool_key(self, pool_key, request_context=None): - """ - Get a :class:`urllib3.connectionpool.ConnectionPool` based on the provided pool key. - - ``pool_key`` should be a namedtuple that only contains immutable - objects. At a minimum it must have the ``scheme``, ``host``, and - ``port`` fields. - """ - with self.pools.lock: - # If the scheme, host, or port doesn't match existing open - # connections, open a new ConnectionPool. - pool = self.pools.get(pool_key) - if pool: - return pool - - # Make a fresh ConnectionPool of the desired type - scheme = request_context["scheme"] - host = request_context["host"] - port = request_context["port"] - pool = self._new_pool(scheme, host, port, request_context=request_context) - self.pools[pool_key] = pool - - return pool - - def connection_from_url(self, url, pool_kwargs=None): - """ - Similar to :func:`urllib3.connectionpool.connection_from_url`. - - If ``pool_kwargs`` is not provided and a new pool needs to be - constructed, ``self.connection_pool_kw`` is used to initialize - the :class:`urllib3.connectionpool.ConnectionPool`. If ``pool_kwargs`` - is provided, it is used instead. Note that if a new pool does not - need to be created for the request, the provided ``pool_kwargs`` are - not used. - """ - u = parse_url(url) - return self.connection_from_host( - u.host, port=u.port, scheme=u.scheme, pool_kwargs=pool_kwargs - ) - - def _merge_pool_kwargs(self, override): - """ - Merge a dictionary of override values for self.connection_pool_kw. - - This does not modify self.connection_pool_kw and returns a new dict. 
- Any keys in the override dictionary with a value of ``None`` are - removed from the merged dictionary. - """ - base_pool_kwargs = self.connection_pool_kw.copy() - if override: - for key, value in override.items(): - if value is None: - try: - del base_pool_kwargs[key] - except KeyError: - pass - else: - base_pool_kwargs[key] = value - return base_pool_kwargs - - def _proxy_requires_url_absolute_form(self, parsed_url): - """ - Indicates if the proxy requires the complete destination URL in the - request. Normally this is only needed when not using an HTTP CONNECT - tunnel. - """ - if self.proxy is None: - return False - - return not connection_requires_http_tunnel( - self.proxy, self.proxy_config, parsed_url.scheme - ) - - def _validate_proxy_scheme_url_selection(self, url_scheme): - """ - Validates that were not attempting to do TLS in TLS connections on - Python2 or with unsupported SSL implementations. - """ - if self.proxy is None or url_scheme != "https": - return - - if self.proxy.scheme != "https": - return - - if six.PY2 and not self.proxy_config.use_forwarding_for_https: - raise ProxySchemeUnsupported( - "Contacting HTTPS destinations through HTTPS proxies " - "'via CONNECT tunnels' is not supported in Python 2" - ) - - def urlopen(self, method, url, redirect=True, **kw): - """ - Same as :meth:`urllib3.HTTPConnectionPool.urlopen` - with custom cross-host redirect logic and only sends the request-uri - portion of the ``url``. - - The given ``url`` parameter must be absolute, such that an appropriate - :class:`urllib3.connectionpool.ConnectionPool` can be chosen for it. 
- """ - u = parse_url(url) - self._validate_proxy_scheme_url_selection(u.scheme) - - conn = self.connection_from_host(u.host, port=u.port, scheme=u.scheme) - - kw["assert_same_host"] = False - kw["redirect"] = False - - if "headers" not in kw: - kw["headers"] = self.headers.copy() - - if self._proxy_requires_url_absolute_form(u): - response = conn.urlopen(method, url, **kw) - else: - response = conn.urlopen(method, u.request_uri, **kw) - - redirect_location = redirect and response.get_redirect_location() - if not redirect_location: - return response - - # Support relative URLs for redirecting. - redirect_location = urljoin(url, redirect_location) - - # RFC 7231, Section 6.4.4 - if response.status == 303: - method = "GET" - - retries = kw.get("retries") - if not isinstance(retries, Retry): - retries = Retry.from_int(retries, redirect=redirect) - - # Strip headers marked as unsafe to forward to the redirected location. - # Check remove_headers_on_redirect to avoid a potential network call within - # conn.is_same_host() which may use socket.gethostbyname() in the future. - if retries.remove_headers_on_redirect and not conn.is_same_host( - redirect_location - ): - headers = list(six.iterkeys(kw["headers"])) - for header in headers: - if header.lower() in retries.remove_headers_on_redirect: - kw["headers"].pop(header, None) - - try: - retries = retries.increment(method, url, response=response, _pool=conn) - except MaxRetryError: - if retries.raise_on_redirect: - response.drain_conn() - raise - return response - - kw["retries"] = retries - kw["redirect"] = redirect - - log.info("Redirecting %s -> %s", url, redirect_location) - - response.drain_conn() - return self.urlopen(method, redirect_location, **kw) - - -class ProxyManager(PoolManager): - """ - Behaves just like :class:`PoolManager`, but sends all requests through - the defined proxy, using the CONNECT method for HTTPS URLs. - - :param proxy_url: - The URL of the proxy to be used. 
- - :param proxy_headers: - A dictionary containing headers that will be sent to the proxy. In case - of HTTP they are being sent with each request, while in the - HTTPS/CONNECT case they are sent only once. Could be used for proxy - authentication. - - :param proxy_ssl_context: - The proxy SSL context is used to establish the TLS connection to the - proxy when using HTTPS proxies. - - :param use_forwarding_for_https: - (Defaults to False) If set to True will forward requests to the HTTPS - proxy to be made on behalf of the client instead of creating a TLS - tunnel via the CONNECT method. **Enabling this flag means that request - and response headers and content will be visible from the HTTPS proxy** - whereas tunneling keeps request and response headers and content - private. IP address, target hostname, SNI, and port are always visible - to an HTTPS proxy even when this flag is disabled. - - Example: - >>> proxy = urllib3.ProxyManager('http://localhost:3128/') - >>> r1 = proxy.request('GET', 'http://google.com/') - >>> r2 = proxy.request('GET', 'http://httpbin.org/') - >>> len(proxy.pools) - 1 - >>> r3 = proxy.request('GET', 'https://httpbin.org/') - >>> r4 = proxy.request('GET', 'https://twitter.com/') - >>> len(proxy.pools) - 3 - - """ - - def __init__( - self, - proxy_url, - num_pools=10, - headers=None, - proxy_headers=None, - proxy_ssl_context=None, - use_forwarding_for_https=False, - **connection_pool_kw - ): - - if isinstance(proxy_url, HTTPConnectionPool): - proxy_url = "%s://%s:%i" % ( - proxy_url.scheme, - proxy_url.host, - proxy_url.port, - ) - proxy = parse_url(proxy_url) - - if proxy.scheme not in ("http", "https"): - raise ProxySchemeUnknown(proxy.scheme) - - if not proxy.port: - port = port_by_scheme.get(proxy.scheme, 80) - proxy = proxy._replace(port=port) - - self.proxy = proxy - self.proxy_headers = proxy_headers or {} - self.proxy_ssl_context = proxy_ssl_context - self.proxy_config = ProxyConfig(proxy_ssl_context, use_forwarding_for_https) - - 
connection_pool_kw["_proxy"] = self.proxy - connection_pool_kw["_proxy_headers"] = self.proxy_headers - connection_pool_kw["_proxy_config"] = self.proxy_config - - super(ProxyManager, self).__init__(num_pools, headers, **connection_pool_kw) - - def connection_from_host(self, host, port=None, scheme="http", pool_kwargs=None): - if scheme == "https": - return super(ProxyManager, self).connection_from_host( - host, port, scheme, pool_kwargs=pool_kwargs - ) - - return super(ProxyManager, self).connection_from_host( - self.proxy.host, self.proxy.port, self.proxy.scheme, pool_kwargs=pool_kwargs - ) - - def _set_proxy_headers(self, url, headers=None): - """ - Sets headers needed by proxies: specifically, the Accept and Host - headers. Only sets headers not provided by the user. - """ - headers_ = {"Accept": "*/*"} - - netloc = parse_url(url).netloc - if netloc: - headers_["Host"] = netloc - - if headers: - headers_.update(headers) - return headers_ - - def urlopen(self, method, url, redirect=True, **kw): - "Same as HTTP(S)ConnectionPool.urlopen, ``url`` must be absolute." - u = parse_url(url) - if not connection_requires_http_tunnel(self.proxy, self.proxy_config, u.scheme): - # For connections using HTTP CONNECT, httplib sets the necessary - # headers on the CONNECT to the proxy. If we're not using CONNECT, - # we'll definitely need to set 'Host' at the very least. 
- headers = kw.get("headers", self.headers) - kw["headers"] = self._set_proxy_headers(url, headers) - - return super(ProxyManager, self).urlopen(method, url, redirect=redirect, **kw) - - -def proxy_from_url(url, **kw): - return ProxyManager(proxy_url=url, **kw) diff --git a/infrastructure/sandbox/Data/lambda/urllib3/request.py b/infrastructure/sandbox/Data/lambda/urllib3/request.py deleted file mode 100644 index 398386a5b..000000000 --- a/infrastructure/sandbox/Data/lambda/urllib3/request.py +++ /dev/null @@ -1,170 +0,0 @@ -from __future__ import absolute_import - -from .filepost import encode_multipart_formdata -from .packages.six.moves.urllib.parse import urlencode - -__all__ = ["RequestMethods"] - - -class RequestMethods(object): - """ - Convenience mixin for classes who implement a :meth:`urlopen` method, such - as :class:`urllib3.HTTPConnectionPool` and - :class:`urllib3.PoolManager`. - - Provides behavior for making common types of HTTP request methods and - decides which type of request field encoding to use. - - Specifically, - - :meth:`.request_encode_url` is for sending requests whose fields are - encoded in the URL (such as GET, HEAD, DELETE). - - :meth:`.request_encode_body` is for sending requests whose fields are - encoded in the *body* of the request using multipart or www-form-urlencoded - (such as for POST, PUT, PATCH). - - :meth:`.request` is for making any kind of request, it will look up the - appropriate encoding format and use one of the above two methods to make - the request. - - Initializer parameters: - - :param headers: - Headers to include with all requests, unless other headers are given - explicitly. 
- """ - - _encode_url_methods = {"DELETE", "GET", "HEAD", "OPTIONS"} - - def __init__(self, headers=None): - self.headers = headers or {} - - def urlopen( - self, - method, - url, - body=None, - headers=None, - encode_multipart=True, - multipart_boundary=None, - **kw - ): # Abstract - raise NotImplementedError( - "Classes extending RequestMethods must implement " - "their own ``urlopen`` method." - ) - - def request(self, method, url, fields=None, headers=None, **urlopen_kw): - """ - Make a request using :meth:`urlopen` with the appropriate encoding of - ``fields`` based on the ``method`` used. - - This is a convenience method that requires the least amount of manual - effort. It can be used in most situations, while still having the - option to drop down to more specific methods when necessary, such as - :meth:`request_encode_url`, :meth:`request_encode_body`, - or even the lowest level :meth:`urlopen`. - """ - method = method.upper() - - urlopen_kw["request_url"] = url - - if method in self._encode_url_methods: - return self.request_encode_url( - method, url, fields=fields, headers=headers, **urlopen_kw - ) - else: - return self.request_encode_body( - method, url, fields=fields, headers=headers, **urlopen_kw - ) - - def request_encode_url(self, method, url, fields=None, headers=None, **urlopen_kw): - """ - Make a request using :meth:`urlopen` with the ``fields`` encoded in - the url. This is useful for request methods like GET, HEAD, DELETE, etc. - """ - if headers is None: - headers = self.headers - - extra_kw = {"headers": headers} - extra_kw.update(urlopen_kw) - - if fields: - url += "?" + urlencode(fields) - - return self.urlopen(method, url, **extra_kw) - - def request_encode_body( - self, - method, - url, - fields=None, - headers=None, - encode_multipart=True, - multipart_boundary=None, - **urlopen_kw - ): - """ - Make a request using :meth:`urlopen` with the ``fields`` encoded in - the body. This is useful for request methods like POST, PUT, PATCH, etc. 
- - When ``encode_multipart=True`` (default), then - :func:`urllib3.encode_multipart_formdata` is used to encode - the payload with the appropriate content type. Otherwise - :func:`urllib.parse.urlencode` is used with the - 'application/x-www-form-urlencoded' content type. - - Multipart encoding must be used when posting files, and it's reasonably - safe to use it in other times too. However, it may break request - signing, such as with OAuth. - - Supports an optional ``fields`` parameter of key/value strings AND - key/filetuple. A filetuple is a (filename, data, MIME type) tuple where - the MIME type is optional. For example:: - - fields = { - 'foo': 'bar', - 'fakefile': ('foofile.txt', 'contents of foofile'), - 'realfile': ('barfile.txt', open('realfile').read()), - 'typedfile': ('bazfile.bin', open('bazfile').read(), - 'image/jpeg'), - 'nonamefile': 'contents of nonamefile field', - } - - When uploading a file, providing a filename (the first parameter of the - tuple) is optional but recommended to best mimic behavior of browsers. - - Note that if ``headers`` are supplied, the 'Content-Type' header will - be overwritten because it depends on the dynamic random boundary string - which is used to compose the body of the request. The random boundary - string can be explicitly set with the ``multipart_boundary`` parameter. - """ - if headers is None: - headers = self.headers - - extra_kw = {"headers": {}} - - if fields: - if "body" in urlopen_kw: - raise TypeError( - "request got values for both 'fields' and 'body', can only specify one." 
- ) - - if encode_multipart: - body, content_type = encode_multipart_formdata( - fields, boundary=multipart_boundary - ) - else: - body, content_type = ( - urlencode(fields), - "application/x-www-form-urlencoded", - ) - - extra_kw["body"] = body - extra_kw["headers"] = {"Content-Type": content_type} - - extra_kw["headers"].update(headers) - extra_kw.update(urlopen_kw) - - return self.urlopen(method, url, **extra_kw) diff --git a/infrastructure/sandbox/Data/lambda/urllib3/response.py b/infrastructure/sandbox/Data/lambda/urllib3/response.py deleted file mode 100644 index 01f08eee8..000000000 --- a/infrastructure/sandbox/Data/lambda/urllib3/response.py +++ /dev/null @@ -1,872 +0,0 @@ -from __future__ import absolute_import - -import io -import logging -import sys -import zlib -from contextlib import contextmanager -from socket import error as SocketError -from socket import timeout as SocketTimeout - -try: - try: - import brotlicffi as brotli - except ImportError: - import brotli -except ImportError: - brotli = None - -from . 
import util -from ._collections import HTTPHeaderDict -from .connection import BaseSSLError, HTTPException -from .exceptions import ( - BodyNotHttplibCompatible, - DecodeError, - HTTPError, - IncompleteRead, - InvalidChunkLength, - InvalidHeader, - ProtocolError, - ReadTimeoutError, - ResponseNotChunked, - SSLError, -) -from .packages import six -from .util.response import is_fp_closed, is_response_to_head - -log = logging.getLogger(__name__) - - -class DeflateDecoder(object): - def __init__(self): - self._first_try = True - self._data = b"" - self._obj = zlib.decompressobj() - - def __getattr__(self, name): - return getattr(self._obj, name) - - def decompress(self, data): - if not data: - return data - - if not self._first_try: - return self._obj.decompress(data) - - self._data += data - try: - decompressed = self._obj.decompress(data) - if decompressed: - self._first_try = False - self._data = None - return decompressed - except zlib.error: - self._first_try = False - self._obj = zlib.decompressobj(-zlib.MAX_WBITS) - try: - return self.decompress(self._data) - finally: - self._data = None - - -class GzipDecoderState(object): - - FIRST_MEMBER = 0 - OTHER_MEMBERS = 1 - SWALLOW_DATA = 2 - - -class GzipDecoder(object): - def __init__(self): - self._obj = zlib.decompressobj(16 + zlib.MAX_WBITS) - self._state = GzipDecoderState.FIRST_MEMBER - - def __getattr__(self, name): - return getattr(self._obj, name) - - def decompress(self, data): - ret = bytearray() - if self._state == GzipDecoderState.SWALLOW_DATA or not data: - return bytes(ret) - while True: - try: - ret += self._obj.decompress(data) - except zlib.error: - previous_state = self._state - # Ignore data after the first error - self._state = GzipDecoderState.SWALLOW_DATA - if previous_state == GzipDecoderState.OTHER_MEMBERS: - # Allow trailing garbage acceptable in other gzip clients - return bytes(ret) - raise - data = self._obj.unused_data - if not data: - return bytes(ret) - self._state = 
GzipDecoderState.OTHER_MEMBERS - self._obj = zlib.decompressobj(16 + zlib.MAX_WBITS) - - -if brotli is not None: - - class BrotliDecoder(object): - # Supports both 'brotlipy' and 'Brotli' packages - # since they share an import name. The top branches - # are for 'brotlipy' and bottom branches for 'Brotli' - def __init__(self): - self._obj = brotli.Decompressor() - if hasattr(self._obj, "decompress"): - self.decompress = self._obj.decompress - else: - self.decompress = self._obj.process - - def flush(self): - if hasattr(self._obj, "flush"): - return self._obj.flush() - return b"" - - -class MultiDecoder(object): - """ - From RFC7231: - If one or more encodings have been applied to a representation, the - sender that applied the encodings MUST generate a Content-Encoding - header field that lists the content codings in the order in which - they were applied. - """ - - def __init__(self, modes): - self._decoders = [_get_decoder(m.strip()) for m in modes.split(",")] - - def flush(self): - return self._decoders[0].flush() - - def decompress(self, data): - for d in reversed(self._decoders): - data = d.decompress(data) - return data - - -def _get_decoder(mode): - if "," in mode: - return MultiDecoder(mode) - - if mode == "gzip": - return GzipDecoder() - - if brotli is not None and mode == "br": - return BrotliDecoder() - - return DeflateDecoder() - - -class HTTPResponse(io.IOBase): - """ - HTTP Response container. - - Backwards-compatible with :class:`http.client.HTTPResponse` but the response ``body`` is - loaded and decoded on-demand when the ``data`` property is accessed. This - class is also compatible with the Python standard library's :mod:`io` - module, and can hence be treated as a readable object in the context of that - framework. - - Extra parameters for behaviour not present in :class:`http.client.HTTPResponse`: - - :param preload_content: - If True, the response's body will be preloaded during construction. 
- - :param decode_content: - If True, will attempt to decode the body based on the - 'content-encoding' header. - - :param original_response: - When this HTTPResponse wrapper is generated from an :class:`http.client.HTTPResponse` - object, it's convenient to include the original for debug purposes. It's - otherwise unused. - - :param retries: - The retries contains the last :class:`~urllib3.util.retry.Retry` that - was used during the request. - - :param enforce_content_length: - Enforce content length checking. Body returned by server must match - value of Content-Length header, if present. Otherwise, raise error. - """ - - CONTENT_DECODERS = ["gzip", "deflate"] - if brotli is not None: - CONTENT_DECODERS += ["br"] - REDIRECT_STATUSES = [301, 302, 303, 307, 308] - - def __init__( - self, - body="", - headers=None, - status=0, - version=0, - reason=None, - strict=0, - preload_content=True, - decode_content=True, - original_response=None, - pool=None, - connection=None, - msg=None, - retries=None, - enforce_content_length=False, - request_method=None, - request_url=None, - auto_close=True, - ): - - if isinstance(headers, HTTPHeaderDict): - self.headers = headers - else: - self.headers = HTTPHeaderDict(headers) - self.status = status - self.version = version - self.reason = reason - self.strict = strict - self.decode_content = decode_content - self.retries = retries - self.enforce_content_length = enforce_content_length - self.auto_close = auto_close - - self._decoder = None - self._body = None - self._fp = None - self._original_response = original_response - self._fp_bytes_read = 0 - self.msg = msg - self._request_url = request_url - - if body and isinstance(body, (six.string_types, bytes)): - self._body = body - - self._pool = pool - self._connection = connection - - if hasattr(body, "read"): - self._fp = body - - # Are we using the chunked-style of transfer encoding? 
- self.chunked = False - self.chunk_left = None - tr_enc = self.headers.get("transfer-encoding", "").lower() - # Don't incur the penalty of creating a list and then discarding it - encodings = (enc.strip() for enc in tr_enc.split(",")) - if "chunked" in encodings: - self.chunked = True - - # Determine length of response - self.length_remaining = self._init_length(request_method) - - # If requested, preload the body. - if preload_content and not self._body: - self._body = self.read(decode_content=decode_content) - - def get_redirect_location(self): - """ - Should we redirect and where to? - - :returns: Truthy redirect location string if we got a redirect status - code and valid location. ``None`` if redirect status and no - location. ``False`` if not a redirect status code. - """ - if self.status in self.REDIRECT_STATUSES: - return self.headers.get("location") - - return False - - def release_conn(self): - if not self._pool or not self._connection: - return - - self._pool._put_conn(self._connection) - self._connection = None - - def drain_conn(self): - """ - Read and discard any remaining HTTP response data in the response connection. - - Unread data in the HTTPResponse connection blocks the connection from being released back to the pool. - """ - try: - self.read() - except (HTTPError, SocketError, BaseSSLError, HTTPException): - pass - - @property - def data(self): - # For backwards-compat with earlier urllib3 0.4 and earlier. - if self._body: - return self._body - - if self._fp: - return self.read(cache_content=True) - - @property - def connection(self): - return self._connection - - def isclosed(self): - return is_fp_closed(self._fp) - - def tell(self): - """ - Obtain the number of bytes pulled over the wire so far. May differ from - the amount of content returned by :meth:``urllib3.response.HTTPResponse.read`` - if bytes are encoded on the wire (e.g, compressed). 
- """ - return self._fp_bytes_read - - def _init_length(self, request_method): - """ - Set initial length value for Response content if available. - """ - length = self.headers.get("content-length") - - if length is not None: - if self.chunked: - # This Response will fail with an IncompleteRead if it can't be - # received as chunked. This method falls back to attempt reading - # the response before raising an exception. - log.warning( - "Received response with both Content-Length and " - "Transfer-Encoding set. This is expressly forbidden " - "by RFC 7230 sec 3.3.2. Ignoring Content-Length and " - "attempting to process response as Transfer-Encoding: " - "chunked." - ) - return None - - try: - # RFC 7230 section 3.3.2 specifies multiple content lengths can - # be sent in a single Content-Length header - # (e.g. Content-Length: 42, 42). This line ensures the values - # are all valid ints and that as long as the `set` length is 1, - # all values are the same. Otherwise, the header is invalid. - lengths = set([int(val) for val in length.split(",")]) - if len(lengths) > 1: - raise InvalidHeader( - "Content-Length contained multiple " - "unmatching values (%s)" % length - ) - length = lengths.pop() - except ValueError: - length = None - else: - if length < 0: - length = None - - # Convert status to int for comparison - # In some cases, httplib returns a status of "_UNKNOWN" - try: - status = int(self.status) - except ValueError: - status = 0 - - # Check for responses that shouldn't include a body - if status in (204, 304) or 100 <= status < 200 or request_method == "HEAD": - length = 0 - - return length - - def _init_decoder(self): - """ - Set-up the _decoder attribute if necessary. 
- """ - # Note: content-encoding value should be case-insensitive, per RFC 7230 - # Section 3.2 - content_encoding = self.headers.get("content-encoding", "").lower() - if self._decoder is None: - if content_encoding in self.CONTENT_DECODERS: - self._decoder = _get_decoder(content_encoding) - elif "," in content_encoding: - encodings = [ - e.strip() - for e in content_encoding.split(",") - if e.strip() in self.CONTENT_DECODERS - ] - if len(encodings): - self._decoder = _get_decoder(content_encoding) - - DECODER_ERROR_CLASSES = (IOError, zlib.error) - if brotli is not None: - DECODER_ERROR_CLASSES += (brotli.error,) - - def _decode(self, data, decode_content, flush_decoder): - """ - Decode the data passed in and potentially flush the decoder. - """ - if not decode_content: - return data - - try: - if self._decoder: - data = self._decoder.decompress(data) - except self.DECODER_ERROR_CLASSES as e: - content_encoding = self.headers.get("content-encoding", "").lower() - raise DecodeError( - "Received response with content-encoding: %s, but " - "failed to decode it." % content_encoding, - e, - ) - if flush_decoder: - data += self._flush_decoder() - - return data - - def _flush_decoder(self): - """ - Flushes the decoder. Should only be called if the decoder is actually - being used. - """ - if self._decoder: - buf = self._decoder.decompress(b"") - return buf + self._decoder.flush() - - return b"" - - @contextmanager - def _error_catcher(self): - """ - Catch low-level python exceptions, instead re-raising urllib3 - variants, so that low-level exceptions are not leaked in the - high-level api. - - On exit, release the connection back to the pool. - """ - clean_exit = False - - try: - try: - yield - - except SocketTimeout: - # FIXME: Ideally we'd like to include the url in the ReadTimeoutError but - # there is yet no clean way to get at it from this context. 
- raise ReadTimeoutError(self._pool, None, "Read timed out.") - - except BaseSSLError as e: - # FIXME: Is there a better way to differentiate between SSLErrors? - if "read operation timed out" not in str(e): - # SSL errors related to framing/MAC get wrapped and reraised here - raise SSLError(e) - - raise ReadTimeoutError(self._pool, None, "Read timed out.") - - except (HTTPException, SocketError) as e: - # This includes IncompleteRead. - raise ProtocolError("Connection broken: %r" % e, e) - - # If no exception is thrown, we should avoid cleaning up - # unnecessarily. - clean_exit = True - finally: - # If we didn't terminate cleanly, we need to throw away our - # connection. - if not clean_exit: - # The response may not be closed but we're not going to use it - # anymore so close it now to ensure that the connection is - # released back to the pool. - if self._original_response: - self._original_response.close() - - # Closing the response may not actually be sufficient to close - # everything, so if we have a hold of the connection close that - # too. - if self._connection: - self._connection.close() - - # If we hold the original response but it's closed now, we should - # return the connection back to the pool. - if self._original_response and self._original_response.isclosed(): - self.release_conn() - - def _fp_read(self, amt): - """ - Read a response with the thought that reading the number of bytes - larger than can fit in a 32-bit int at a time via SSL in some - known cases leads to an overflow error that has to be prevented - if `amt` or `self.length_remaining` indicate that a problem may - happen. - - The known cases: - * 3.8 <= CPython < 3.9.7 because of a bug - https://github.com/urllib3/urllib3/issues/2513#issuecomment-1152559900. - * urllib3 injected with pyOpenSSL-backed SSL-support. - * CPython < 3.10 only when `amt` does not fit 32-bit int. 
-        """
-        assert self._fp
-        c_int_max = 2 ** 31 - 1
-        if (
-            (
-                (amt and amt > c_int_max)
-                or (self.length_remaining and self.length_remaining > c_int_max)
-            )
-            and not util.IS_SECURETRANSPORT
-            and (util.IS_PYOPENSSL or sys.version_info < (3, 10))
-        ):
-            buffer = io.BytesIO()
-            # Besides `max_chunk_amt` being a maximum chunk size, it
-            # affects memory overhead of reading a response by this
-            # method in CPython.
-            # `c_int_max` equal to 2 GiB - 1 byte is the actual maximum
-            # chunk size that does not lead to an overflow error, but
-            # 256 MiB is a compromise.
-            max_chunk_amt = 2 ** 28
-            while amt is None or amt != 0:
-                if amt is not None:
-                    chunk_amt = min(amt, max_chunk_amt)
-                    amt -= chunk_amt
-                else:
-                    chunk_amt = max_chunk_amt
-                data = self._fp.read(chunk_amt)
-                if not data:
-                    break
-                buffer.write(data)
-                del data  # to reduce peak memory usage by `max_chunk_amt`.
-            return buffer.getvalue()
-        else:
-            # StringIO doesn't like amt=None
-            return self._fp.read(amt) if amt is not None else self._fp.read()
-
-    def read(self, amt=None, decode_content=None, cache_content=False):
-        """
-        Similar to :meth:`http.client.HTTPResponse.read`, but with two additional
-        parameters: ``decode_content`` and ``cache_content``.
-
-        :param amt:
-            How much of the content to read. If specified, caching is skipped
-            because it doesn't make sense to cache partial content as the full
-            response.
-
-        :param decode_content:
-            If True, will attempt to decode the body based on the
-            'content-encoding' header.
-
-        :param cache_content:
-            If True, will save the returned data such that the same result is
-            returned despite of the state of the underlying file object. This
-            is useful if you want the ``.data`` property to continue working
-            after having ``.read()`` the file object. (Overridden if ``amt`` is
-            set.)
-        """
-        self._init_decoder()
-        if decode_content is None:
-            decode_content = self.decode_content
-
-        if self._fp is None:
-            return
-
-        flush_decoder = False
-        fp_closed = getattr(self._fp, "closed", False)
-
-        with self._error_catcher():
-            data = self._fp_read(amt) if not fp_closed else b""
-            if amt is None:
-                flush_decoder = True
-            else:
-                cache_content = False
-                if (
-                    amt != 0 and not data
-                ):  # Platform-specific: Buggy versions of Python.
-                    # Close the connection when no data is returned
-                    #
-                    # This is redundant to what httplib/http.client _should_
-                    # already do. However, versions of python released before
-                    # December 15, 2012 (http://bugs.python.org/issue16298) do
-                    # not properly close the connection in all cases. There is
-                    # no harm in redundantly calling close.
-                    self._fp.close()
-                    flush_decoder = True
-                    if self.enforce_content_length and self.length_remaining not in (
-                        0,
-                        None,
-                    ):
-                        # This is an edge case that httplib failed to cover due
-                        # to concerns of backward compatibility. We're
-                        # addressing it here to make sure IncompleteRead is
-                        # raised during streaming, so all calls with incorrect
-                        # Content-Length are caught.
-                        raise IncompleteRead(self._fp_bytes_read, self.length_remaining)
-
-        if data:
-            self._fp_bytes_read += len(data)
-            if self.length_remaining is not None:
-                self.length_remaining -= len(data)
-
-            data = self._decode(data, decode_content, flush_decoder)
-
-            if cache_content:
-                self._body = data
-
-        return data
-
-    def stream(self, amt=2 ** 16, decode_content=None):
-        """
-        A generator wrapper for the read() method. A call will block until
-        ``amt`` bytes have been read from the connection or until the
-        connection is closed.
-
-        :param amt:
-            How much of the content to read. The generator will return up to
-            much data per iteration, but may return less. This is particularly
-            likely when using compressed data. However, the empty string will
-            never be returned.
-
-        :param decode_content:
-            If True, will attempt to decode the body based on the
-            'content-encoding' header.
-        """
-        if self.chunked and self.supports_chunked_reads():
-            for line in self.read_chunked(amt, decode_content=decode_content):
-                yield line
-        else:
-            while not is_fp_closed(self._fp):
-                data = self.read(amt=amt, decode_content=decode_content)
-
-                if data:
-                    yield data
-
-    @classmethod
-    def from_httplib(ResponseCls, r, **response_kw):
-        """
-        Given an :class:`http.client.HTTPResponse` instance ``r``, return a
-        corresponding :class:`urllib3.response.HTTPResponse` object.
-
-        Remaining parameters are passed to the HTTPResponse constructor, along
-        with ``original_response=r``.
-        """
-        headers = r.msg
-
-        if not isinstance(headers, HTTPHeaderDict):
-            if six.PY2:
-                # Python 2.7
-                headers = HTTPHeaderDict.from_httplib(headers)
-            else:
-                headers = HTTPHeaderDict(headers.items())
-
-        # HTTPResponse objects in Python 3 don't have a .strict attribute
-        strict = getattr(r, "strict", 0)
-        resp = ResponseCls(
-            body=r,
-            headers=headers,
-            status=r.status,
-            version=r.version,
-            reason=r.reason,
-            strict=strict,
-            original_response=r,
-            **response_kw
-        )
-        return resp
-
-    # Backwards-compatibility methods for http.client.HTTPResponse
-    def getheaders(self):
-        return self.headers
-
-    def getheader(self, name, default=None):
-        return self.headers.get(name, default)
-
-    # Backwards compatibility for http.cookiejar
-    def info(self):
-        return self.headers
-
-    # Overrides from io.IOBase
-    def close(self):
-        if not self.closed:
-            self._fp.close()
-
-        if self._connection:
-            self._connection.close()
-
-        if not self.auto_close:
-            io.IOBase.close(self)
-
-    @property
-    def closed(self):
-        if not self.auto_close:
-            return io.IOBase.closed.__get__(self)
-        elif self._fp is None:
-            return True
-        elif hasattr(self._fp, "isclosed"):
-            return self._fp.isclosed()
-        elif hasattr(self._fp, "closed"):
-            return self._fp.closed
-        else:
-            return True
-
-    def fileno(self):
-        if self._fp is None:
-            raise IOError("HTTPResponse has no file to get a fileno from")
-        elif hasattr(self._fp, "fileno"):
-            return self._fp.fileno()
-        else:
-            raise IOError(
-                "The file-like object this HTTPResponse is wrapped "
-                "around has no file descriptor"
-            )
-
-    def flush(self):
-        if (
-            self._fp is not None
-            and hasattr(self._fp, "flush")
-            and not getattr(self._fp, "closed", False)
-        ):
-            return self._fp.flush()
-
-    def readable(self):
-        # This method is required for `io` module compatibility.
-        return True
-
-    def readinto(self, b):
-        # This method is required for `io` module compatibility.
-        temp = self.read(len(b))
-        if len(temp) == 0:
-            return 0
-        else:
-            b[: len(temp)] = temp
-            return len(temp)
-
-    def supports_chunked_reads(self):
-        """
-        Checks if the underlying file-like object looks like a
-        :class:`http.client.HTTPResponse` object. We do this by testing for
-        the fp attribute. If it is present we assume it returns raw chunks as
-        processed by read_chunked().
-        """
-        return hasattr(self._fp, "fp")
-
-    def _update_chunk_length(self):
-        # First, we'll figure out length of a chunk and then
-        # we'll try to read it from socket.
-        if self.chunk_left is not None:
-            return
-        line = self._fp.fp.readline()
-        line = line.split(b";", 1)[0]
-        try:
-            self.chunk_left = int(line, 16)
-        except ValueError:
-            # Invalid chunked protocol response, abort.
-            self.close()
-            raise InvalidChunkLength(self, line)
-
-    def _handle_chunk(self, amt):
-        returned_chunk = None
-        if amt is None:
-            chunk = self._fp._safe_read(self.chunk_left)
-            returned_chunk = chunk
-            self._fp._safe_read(2)  # Toss the CRLF at the end of the chunk.
-            self.chunk_left = None
-        elif amt < self.chunk_left:
-            value = self._fp._safe_read(amt)
-            self.chunk_left = self.chunk_left - amt
-            returned_chunk = value
-        elif amt == self.chunk_left:
-            value = self._fp._safe_read(amt)
-            self._fp._safe_read(2)  # Toss the CRLF at the end of the chunk.
-            self.chunk_left = None
-            returned_chunk = value
-        else:  # amt > self.chunk_left
-            returned_chunk = self._fp._safe_read(self.chunk_left)
-            self._fp._safe_read(2)  # Toss the CRLF at the end of the chunk.
-            self.chunk_left = None
-        return returned_chunk
-
-    def read_chunked(self, amt=None, decode_content=None):
-        """
-        Similar to :meth:`HTTPResponse.read`, but with an additional
-        parameter: ``decode_content``.
-
-        :param amt:
-            How much of the content to read. If specified, caching is skipped
-            because it doesn't make sense to cache partial content as the full
-            response.
-
-        :param decode_content:
-            If True, will attempt to decode the body based on the
-            'content-encoding' header.
-        """
-        self._init_decoder()
-        # FIXME: Rewrite this method and make it a class with a better structured logic.
-        if not self.chunked:
-            raise ResponseNotChunked(
-                "Response is not chunked. "
-                "Header 'transfer-encoding: chunked' is missing."
-            )
-        if not self.supports_chunked_reads():
-            raise BodyNotHttplibCompatible(
-                "Body should be http.client.HTTPResponse like. "
-                "It should have have an fp attribute which returns raw chunks."
-            )
-
-        with self._error_catcher():
-            # Don't bother reading the body of a HEAD request.
-            if self._original_response and is_response_to_head(self._original_response):
-                self._original_response.close()
-                return
-
-            # If a response is already read and closed
-            # then return immediately.
-            if self._fp.fp is None:
-                return
-
-            while True:
-                self._update_chunk_length()
-                if self.chunk_left == 0:
-                    break
-                chunk = self._handle_chunk(amt)
-                decoded = self._decode(
-                    chunk, decode_content=decode_content, flush_decoder=False
-                )
-                if decoded:
-                    yield decoded
-
-            if decode_content:
-                # On CPython and PyPy, we should never need to flush the
-                # decoder. However, on Jython we *might* need to, so
-                # lets defensively do it anyway.
-                decoded = self._flush_decoder()
-                if decoded:  # Platform-specific: Jython.
-                    yield decoded
-
-            # Chunk content ends with \r\n: discard it.
-            while True:
-                line = self._fp.fp.readline()
-                if not line:
-                    # Some sites may not end with '\r\n'.
-                    break
-                if line == b"\r\n":
-                    break
-
-            # We read everything; close the "file".
-            if self._original_response:
-                self._original_response.close()
-
-    def geturl(self):
-        """
-        Returns the URL that was the source of this response.
-        If the request that generated this response redirected, this method
-        will return the final redirect location.
-        """
-        if self.retries is not None and len(self.retries.history):
-            return self.retries.history[-1].redirect_location
-        else:
-            return self._request_url
-
-    def __iter__(self):
-        buffer = []
-        for chunk in self.stream(decode_content=True):
-            if b"\n" in chunk:
-                chunk = chunk.split(b"\n")
-                yield b"".join(buffer) + chunk[0] + b"\n"
-                for x in chunk[1:-1]:
-                    yield x + b"\n"
-                if chunk[-1]:
-                    buffer = [chunk[-1]]
-                else:
-                    buffer = []
-            else:
-                buffer.append(chunk)
-        if buffer:
-            yield b"".join(buffer)
diff --git a/infrastructure/sandbox/Data/lambda/urllib3/util/__init__.py b/infrastructure/sandbox/Data/lambda/urllib3/util/__init__.py
deleted file mode 100644
index 4547fc522..000000000
--- a/infrastructure/sandbox/Data/lambda/urllib3/util/__init__.py
+++ /dev/null
@@ -1,49 +0,0 @@
-from __future__ import absolute_import
-
-# For backwards compatibility, provide imports that used to be here.
-from .connection import is_connection_dropped
-from .request import SKIP_HEADER, SKIPPABLE_HEADERS, make_headers
-from .response import is_fp_closed
-from .retry import Retry
-from .ssl_ import (
-    ALPN_PROTOCOLS,
-    HAS_SNI,
-    IS_PYOPENSSL,
-    IS_SECURETRANSPORT,
-    PROTOCOL_TLS,
-    SSLContext,
-    assert_fingerprint,
-    resolve_cert_reqs,
-    resolve_ssl_version,
-    ssl_wrap_socket,
-)
-from .timeout import Timeout, current_time
-from .url import Url, get_host, parse_url, split_first
-from .wait import wait_for_read, wait_for_write
-
-__all__ = (
-    "HAS_SNI",
-    "IS_PYOPENSSL",
-    "IS_SECURETRANSPORT",
-    "SSLContext",
-    "PROTOCOL_TLS",
-    "ALPN_PROTOCOLS",
-    "Retry",
-    "Timeout",
-    "Url",
-    "assert_fingerprint",
-    "current_time",
-    "is_connection_dropped",
-    "is_fp_closed",
-    "get_host",
-    "parse_url",
-    "make_headers",
-    "resolve_cert_reqs",
-    "resolve_ssl_version",
-    "split_first",
-    "ssl_wrap_socket",
-    "wait_for_read",
-    "wait_for_write",
-    "SKIP_HEADER",
-    "SKIPPABLE_HEADERS",
-)
diff --git a/infrastructure/sandbox/Data/lambda/urllib3/util/__pycache__/__init__.cpython-310.pyc b/infrastructure/sandbox/Data/lambda/urllib3/util/__pycache__/__init__.cpython-310.pyc
deleted file mode 100644
index 73e9ad45b..000000000
Binary files a/infrastructure/sandbox/Data/lambda/urllib3/util/__pycache__/__init__.cpython-310.pyc and /dev/null differ
diff --git a/infrastructure/sandbox/Data/lambda/urllib3/util/__pycache__/connection.cpython-310.pyc b/infrastructure/sandbox/Data/lambda/urllib3/util/__pycache__/connection.cpython-310.pyc
deleted file mode 100644
index 53aec6090..000000000
Binary files a/infrastructure/sandbox/Data/lambda/urllib3/util/__pycache__/connection.cpython-310.pyc and /dev/null differ
diff --git a/infrastructure/sandbox/Data/lambda/urllib3/util/__pycache__/proxy.cpython-310.pyc b/infrastructure/sandbox/Data/lambda/urllib3/util/__pycache__/proxy.cpython-310.pyc
deleted file mode 100644
index e5c6d58e7..000000000
Binary files a/infrastructure/sandbox/Data/lambda/urllib3/util/__pycache__/proxy.cpython-310.pyc and /dev/null differ
diff --git a/infrastructure/sandbox/Data/lambda/urllib3/util/__pycache__/queue.cpython-310.pyc b/infrastructure/sandbox/Data/lambda/urllib3/util/__pycache__/queue.cpython-310.pyc
deleted file mode 100644
index 9f0dbd4d7..000000000
Binary files a/infrastructure/sandbox/Data/lambda/urllib3/util/__pycache__/queue.cpython-310.pyc and /dev/null differ
diff --git a/infrastructure/sandbox/Data/lambda/urllib3/util/__pycache__/request.cpython-310.pyc b/infrastructure/sandbox/Data/lambda/urllib3/util/__pycache__/request.cpython-310.pyc
deleted file mode 100644
index b7574bc9e..000000000
Binary files a/infrastructure/sandbox/Data/lambda/urllib3/util/__pycache__/request.cpython-310.pyc and /dev/null differ
diff --git a/infrastructure/sandbox/Data/lambda/urllib3/util/__pycache__/response.cpython-310.pyc b/infrastructure/sandbox/Data/lambda/urllib3/util/__pycache__/response.cpython-310.pyc
deleted file mode 100644
index bb523b1a9..000000000
Binary files a/infrastructure/sandbox/Data/lambda/urllib3/util/__pycache__/response.cpython-310.pyc and /dev/null differ
diff --git a/infrastructure/sandbox/Data/lambda/urllib3/util/__pycache__/retry.cpython-310.pyc b/infrastructure/sandbox/Data/lambda/urllib3/util/__pycache__/retry.cpython-310.pyc
deleted file mode 100644
index 6c6c65891..000000000
Binary files a/infrastructure/sandbox/Data/lambda/urllib3/util/__pycache__/retry.cpython-310.pyc and /dev/null differ
diff --git a/infrastructure/sandbox/Data/lambda/urllib3/util/__pycache__/ssl_.cpython-310.pyc b/infrastructure/sandbox/Data/lambda/urllib3/util/__pycache__/ssl_.cpython-310.pyc
deleted file mode 100644
index c34660caa..000000000
Binary files a/infrastructure/sandbox/Data/lambda/urllib3/util/__pycache__/ssl_.cpython-310.pyc and /dev/null differ
diff --git a/infrastructure/sandbox/Data/lambda/urllib3/util/__pycache__/ssl_match_hostname.cpython-310.pyc b/infrastructure/sandbox/Data/lambda/urllib3/util/__pycache__/ssl_match_hostname.cpython-310.pyc
deleted file mode 100644
index 94c1829e6..000000000
Binary files a/infrastructure/sandbox/Data/lambda/urllib3/util/__pycache__/ssl_match_hostname.cpython-310.pyc and /dev/null differ
diff --git a/infrastructure/sandbox/Data/lambda/urllib3/util/__pycache__/ssltransport.cpython-310.pyc b/infrastructure/sandbox/Data/lambda/urllib3/util/__pycache__/ssltransport.cpython-310.pyc
deleted file mode 100644
index 1e1791e97..000000000
Binary files a/infrastructure/sandbox/Data/lambda/urllib3/util/__pycache__/ssltransport.cpython-310.pyc and /dev/null differ
diff --git a/infrastructure/sandbox/Data/lambda/urllib3/util/__pycache__/timeout.cpython-310.pyc b/infrastructure/sandbox/Data/lambda/urllib3/util/__pycache__/timeout.cpython-310.pyc
deleted file mode 100644
index 2cfbfaeda..000000000
Binary files a/infrastructure/sandbox/Data/lambda/urllib3/util/__pycache__/timeout.cpython-310.pyc and /dev/null differ
diff --git a/infrastructure/sandbox/Data/lambda/urllib3/util/__pycache__/url.cpython-310.pyc b/infrastructure/sandbox/Data/lambda/urllib3/util/__pycache__/url.cpython-310.pyc
deleted file mode 100644
index 7d379af54..000000000
Binary files a/infrastructure/sandbox/Data/lambda/urllib3/util/__pycache__/url.cpython-310.pyc and /dev/null differ
diff --git a/infrastructure/sandbox/Data/lambda/urllib3/util/__pycache__/wait.cpython-310.pyc b/infrastructure/sandbox/Data/lambda/urllib3/util/__pycache__/wait.cpython-310.pyc
deleted file mode 100644
index 1c7806579..000000000
Binary files a/infrastructure/sandbox/Data/lambda/urllib3/util/__pycache__/wait.cpython-310.pyc and /dev/null differ
diff --git a/infrastructure/sandbox/Data/lambda/urllib3/util/connection.py b/infrastructure/sandbox/Data/lambda/urllib3/util/connection.py
deleted file mode 100644
index 6af1138f2..000000000
--- a/infrastructure/sandbox/Data/lambda/urllib3/util/connection.py
+++ /dev/null
@@ -1,149 +0,0 @@
-from __future__ import absolute_import
-
-import socket
-
-from ..contrib import _appengine_environ
-from ..exceptions import LocationParseError
-from ..packages import six
-from .wait import NoWayToWaitForSocketError, wait_for_read
-
-
-def is_connection_dropped(conn):  # Platform-specific
-    """
-    Returns True if the connection is dropped and should be closed.
-
-    :param conn:
-        :class:`http.client.HTTPConnection` object.
-
-    Note: For platforms like AppEngine, this will always return ``False`` to
-    let the platform handle connection recycling transparently for us.
-    """
-    sock = getattr(conn, "sock", False)
-    if sock is False:  # Platform-specific: AppEngine
-        return False
-    if sock is None:  # Connection already closed (such as by httplib).
-        return True
-    try:
-        # Returns True if readable, which here means it's been dropped
-        return wait_for_read(sock, timeout=0.0)
-    except NoWayToWaitForSocketError:  # Platform-specific: AppEngine
-        return False
-
-
-# This function is copied from socket.py in the Python 2.7 standard
-# library test suite. Added to its signature is only `socket_options`.
-# One additional modification is that we avoid binding to IPv6 servers
-# discovered in DNS if the system doesn't have IPv6 functionality.
-def create_connection(
-    address,
-    timeout=socket._GLOBAL_DEFAULT_TIMEOUT,
-    source_address=None,
-    socket_options=None,
-):
-    """Connect to *address* and return the socket object.
-
-    Convenience function. Connect to *address* (a 2-tuple ``(host,
-    port)``) and return the socket object. Passing the optional
-    *timeout* parameter will set the timeout on the socket instance
-    before attempting to connect. If no *timeout* is supplied, the
-    global default timeout setting returned by :func:`socket.getdefaulttimeout`
-    is used. If *source_address* is set it must be a tuple of (host, port)
-    for the socket to bind as a source address before making the connection.
-    An host of '' or port 0 tells the OS to use the default.
-    """
-
-    host, port = address
-    if host.startswith("["):
-        host = host.strip("[]")
-    err = None
-
-    # Using the value from allowed_gai_family() in the context of getaddrinfo lets
-    # us select whether to work with IPv4 DNS records, IPv6 records, or both.
-    # The original create_connection function always returns all records.
-    family = allowed_gai_family()
-
-    try:
-        host.encode("idna")
-    except UnicodeError:
-        return six.raise_from(
-            LocationParseError(u"'%s', label empty or too long" % host), None
-        )
-
-    for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM):
-        af, socktype, proto, canonname, sa = res
-        sock = None
-        try:
-            sock = socket.socket(af, socktype, proto)
-
-            # If provided, set socket level options before connecting.
-            _set_socket_options(sock, socket_options)
-
-            if timeout is not socket._GLOBAL_DEFAULT_TIMEOUT:
-                sock.settimeout(timeout)
-            if source_address:
-                sock.bind(source_address)
-            sock.connect(sa)
-            return sock
-
-        except socket.error as e:
-            err = e
-            if sock is not None:
-                sock.close()
-                sock = None
-
-    if err is not None:
-        raise err
-
-    raise socket.error("getaddrinfo returns an empty list")
-
-
-def _set_socket_options(sock, options):
-    if options is None:
-        return
-
-    for opt in options:
-        sock.setsockopt(*opt)
-
-
-def allowed_gai_family():
-    """This function is designed to work in the context of
-    getaddrinfo, where family=socket.AF_UNSPEC is the default and
-    will perform a DNS search for both IPv6 and IPv4 records."""
-
-    family = socket.AF_INET
-    if HAS_IPV6:
-        family = socket.AF_UNSPEC
-    return family
-
-
-def _has_ipv6(host):
-    """Returns True if the system can bind an IPv6 address."""
-    sock = None
-    has_ipv6 = False
-
-    # App Engine doesn't support IPV6 sockets and actually has a quota on the
-    # number of sockets that can be used, so just early out here instead of
-    # creating a socket needlessly.
-    # See https://github.com/urllib3/urllib3/issues/1446
-    if _appengine_environ.is_appengine_sandbox():
-        return False
-
-    if socket.has_ipv6:
-        # has_ipv6 returns true if cPython was compiled with IPv6 support.
-        # It does not tell us if the system has IPv6 support enabled. To
-        # determine that we must bind to an IPv6 address.
-        # https://github.com/urllib3/urllib3/pull/611
-        # https://bugs.python.org/issue658327
-        try:
-            sock = socket.socket(socket.AF_INET6)
-            sock.bind((host, 0))
-            has_ipv6 = True
-        except Exception:
-            pass
-
-    if sock:
-        sock.close()
-    return has_ipv6
-
-
-HAS_IPV6 = _has_ipv6("::1")
diff --git a/infrastructure/sandbox/Data/lambda/urllib3/util/proxy.py b/infrastructure/sandbox/Data/lambda/urllib3/util/proxy.py
deleted file mode 100644
index 2199cc7b7..000000000
--- a/infrastructure/sandbox/Data/lambda/urllib3/util/proxy.py
+++ /dev/null
@@ -1,57 +0,0 @@
-from .ssl_ import create_urllib3_context, resolve_cert_reqs, resolve_ssl_version
-
-
-def connection_requires_http_tunnel(
-    proxy_url=None, proxy_config=None, destination_scheme=None
-):
-    """
-    Returns True if the connection requires an HTTP CONNECT through the proxy.
-
-    :param URL proxy_url:
-        URL of the proxy.
-    :param ProxyConfig proxy_config:
-        Proxy configuration from poolmanager.py
-    :param str destination_scheme:
-        The scheme of the destination. (i.e https, http, etc)
-    """
-    # If we're not using a proxy, no way to use a tunnel.
-    if proxy_url is None:
-        return False
-
-    # HTTP destinations never require tunneling, we always forward.
-    if destination_scheme == "http":
-        return False
-
-    # Support for forwarding with HTTPS proxies and HTTPS destinations.
-    if (
-        proxy_url.scheme == "https"
-        and proxy_config
-        and proxy_config.use_forwarding_for_https
-    ):
-        return False
-
-    # Otherwise always use a tunnel.
-    return True
-
-
-def create_proxy_ssl_context(
-    ssl_version, cert_reqs, ca_certs=None, ca_cert_dir=None, ca_cert_data=None
-):
-    """
-    Generates a default proxy ssl context if one hasn't been provided by the
-    user.
-    """
-    ssl_context = create_urllib3_context(
-        ssl_version=resolve_ssl_version(ssl_version),
-        cert_reqs=resolve_cert_reqs(cert_reqs),
-    )
-
-    if (
-        not ca_certs
-        and not ca_cert_dir
-        and not ca_cert_data
-        and hasattr(ssl_context, "load_default_certs")
-    ):
-        ssl_context.load_default_certs()
-
-    return ssl_context
diff --git a/infrastructure/sandbox/Data/lambda/urllib3/util/queue.py b/infrastructure/sandbox/Data/lambda/urllib3/util/queue.py
deleted file mode 100644
index 41784104e..000000000
--- a/infrastructure/sandbox/Data/lambda/urllib3/util/queue.py
+++ /dev/null
@@ -1,22 +0,0 @@
-import collections
-
-from ..packages import six
-from ..packages.six.moves import queue
-
-if six.PY2:
-    # Queue is imported for side effects on MS Windows. See issue #229.
-    import Queue as _unused_module_Queue  # noqa: F401
-
-
-class LifoQueue(queue.Queue):
-    def _init(self, _):
-        self.queue = collections.deque()
-
-    def _qsize(self, len=len):
-        return len(self.queue)
-
-    def _put(self, item):
-        self.queue.append(item)
-
-    def _get(self):
-        return self.queue.pop()
diff --git a/infrastructure/sandbox/Data/lambda/urllib3/util/request.py b/infrastructure/sandbox/Data/lambda/urllib3/util/request.py
deleted file mode 100644
index b574b081e..000000000
--- a/infrastructure/sandbox/Data/lambda/urllib3/util/request.py
+++ /dev/null
@@ -1,146 +0,0 @@
-from __future__ import absolute_import
-
-from base64 import b64encode
-
-from ..exceptions import UnrewindableBodyError
-from ..packages.six import b, integer_types
-
-# Pass as a value within ``headers`` to skip
-# emitting some HTTP headers that are added automatically.
-# The only headers that are supported are ``Accept-Encoding``,
-# ``Host``, and ``User-Agent``.
-SKIP_HEADER = "@@@SKIP_HEADER@@@"
-SKIPPABLE_HEADERS = frozenset(["accept-encoding", "host", "user-agent"])
-
-ACCEPT_ENCODING = "gzip,deflate"
-try:
-    try:
-        import brotlicffi as _unused_module_brotli  # noqa: F401
-    except ImportError:
-        import brotli as _unused_module_brotli  # noqa: F401
-except ImportError:
-    pass
-else:
-    ACCEPT_ENCODING += ",br"
-
-_FAILEDTELL = object()
-
-
-def make_headers(
-    keep_alive=None,
-    accept_encoding=None,
-    user_agent=None,
-    basic_auth=None,
-    proxy_basic_auth=None,
-    disable_cache=None,
-):
-    """
-    Shortcuts for generating request headers.
-
-    :param keep_alive:
-        If ``True``, adds 'connection: keep-alive' header.
-
-    :param accept_encoding:
-        Can be a boolean, list, or string.
-        ``True`` translates to 'gzip,deflate'.
-        List will get joined by comma.
-        String will be used as provided.
-
-    :param user_agent:
-        String representing the user-agent you want, such as
-        "python-urllib3/0.6"
-
-    :param basic_auth:
-        Colon-separated username:password string for 'authorization: basic ...'
-        auth header.
-
-    :param proxy_basic_auth:
-        Colon-separated username:password string for 'proxy-authorization: basic ...'
-        auth header.
-
-    :param disable_cache:
-        If ``True``, adds 'cache-control: no-cache' header.
-
-    Example::
-
-        >>> make_headers(keep_alive=True, user_agent="Batman/1.0")
-        {'connection': 'keep-alive', 'user-agent': 'Batman/1.0'}
-        >>> make_headers(accept_encoding=True)
-        {'accept-encoding': 'gzip,deflate'}
-    """
-    headers = {}
-    if accept_encoding:
-        if isinstance(accept_encoding, str):
-            pass
-        elif isinstance(accept_encoding, list):
-            accept_encoding = ",".join(accept_encoding)
-        else:
-            accept_encoding = ACCEPT_ENCODING
-        headers["accept-encoding"] = accept_encoding
-
-    if user_agent:
-        headers["user-agent"] = user_agent
-
-    if keep_alive:
-        headers["connection"] = "keep-alive"
-
-    if basic_auth:
-        headers["authorization"] = "Basic " + b64encode(b(basic_auth)).decode("utf-8")
-
-    if proxy_basic_auth:
-        headers["proxy-authorization"] = "Basic " + b64encode(
-            b(proxy_basic_auth)
-        ).decode("utf-8")
-
-    if disable_cache:
-        headers["cache-control"] = "no-cache"
-
-    return headers
-
-
-def set_file_position(body, pos):
-    """
-    If a position is provided, move file to that point.
-    Otherwise, we'll attempt to record a position for future use.
-    """
-    if pos is not None:
-        rewind_body(body, pos)
-    elif getattr(body, "tell", None) is not None:
-        try:
-            pos = body.tell()
-        except (IOError, OSError):
-            # This differentiates from None, allowing us to catch
-            # a failed `tell()` later when trying to rewind the body.
-            pos = _FAILEDTELL
-
-    return pos
-
-
-def rewind_body(body, body_pos):
-    """
-    Attempt to rewind body to a certain position.
-    Primarily used for request redirects and retries.
-
-    :param body:
-        File-like object that supports seek.
-
-    :param int pos:
-        Position to seek to in file.
-    """
-    body_seek = getattr(body, "seek", None)
-    if body_seek is not None and isinstance(body_pos, integer_types):
-        try:
-            body_seek(body_pos)
-        except (IOError, OSError):
-            raise UnrewindableBodyError(
-                "An error occurred when rewinding request body for redirect/retry."
-            )
-    elif body_pos is _FAILEDTELL:
-        raise UnrewindableBodyError(
-            "Unable to record file position for rewinding "
-            "request body during a redirect/retry."
-        )
-    else:
-        raise ValueError(
-            "body_pos must be of type integer, instead it was %s." % type(body_pos)
-        )
diff --git a/infrastructure/sandbox/Data/lambda/urllib3/util/response.py b/infrastructure/sandbox/Data/lambda/urllib3/util/response.py
deleted file mode 100644
index 5ea609cce..000000000
--- a/infrastructure/sandbox/Data/lambda/urllib3/util/response.py
+++ /dev/null
@@ -1,107 +0,0 @@
-from __future__ import absolute_import
-
-from email.errors import MultipartInvariantViolationDefect, StartBoundaryNotFoundDefect
-
-from ..exceptions import HeaderParsingError
-from ..packages.six.moves import http_client as httplib
-
-
-def is_fp_closed(obj):
-    """
-    Checks whether a given file-like object is closed.
-
-    :param obj:
-        The file-like object to check.
-    """
-
-    try:
-        # Check `isclosed()` first, in case Python3 doesn't set `closed`.
-        # GH Issue #928
-        return obj.isclosed()
-    except AttributeError:
-        pass
-
-    try:
-        # Check via the official file-like-object way.
-        return obj.closed
-    except AttributeError:
-        pass
-
-    try:
-        # Check if the object is a container for another file-like object that
-        # gets released on exhaustion (e.g. HTTPResponse).
-        return obj.fp is None
-    except AttributeError:
-        pass
-
-    raise ValueError("Unable to determine whether fp is closed.")
-
-
-def assert_header_parsing(headers):
-    """
-    Asserts whether all headers have been successfully parsed.
-    Extracts encountered errors from the result of parsing headers.
-
-    Only works on Python 3.
-
-    :param http.client.HTTPMessage headers: Headers to verify.
-
-    :raises urllib3.exceptions.HeaderParsingError:
-        If parsing errors are found.
-    """
-
-    # This will fail silently if we pass in the wrong kind of parameter.
-    # To make debugging easier add an explicit check.
-    if not isinstance(headers, httplib.HTTPMessage):
-        raise TypeError("expected httplib.Message, got {0}.".format(type(headers)))
-
-    defects = getattr(headers, "defects", None)
-    get_payload = getattr(headers, "get_payload", None)
-
-    unparsed_data = None
-    if get_payload:
-        # get_payload is actually email.message.Message.get_payload;
-        # we're only interested in the result if it's not a multipart message
-        if not headers.is_multipart():
-            payload = get_payload()
-
-            if isinstance(payload, (bytes, str)):
-                unparsed_data = payload
-    if defects:
-        # httplib is assuming a response body is available
-        # when parsing headers even when httplib only sends
-        # header data to parse_headers() This results in
-        # defects on multipart responses in particular.
-        # See: https://github.com/urllib3/urllib3/issues/800
-
-        # So we ignore the following defects:
-        # - StartBoundaryNotFoundDefect:
-        #     The claimed start boundary was never found.
-        # - MultipartInvariantViolationDefect:
-        #     A message claimed to be a multipart but no subparts were found.
-        defects = [
-            defect
-            for defect in defects
-            if not isinstance(
-                defect, (StartBoundaryNotFoundDefect, MultipartInvariantViolationDefect)
-            )
-        ]
-
-    if defects or unparsed_data:
-        raise HeaderParsingError(defects=defects, unparsed_data=unparsed_data)
-
-
-def is_response_to_head(response):
-    """
-    Checks whether the request of a response has been a HEAD-request.
-    Handles the quirks of AppEngine.
-
-    :param http.client.HTTPResponse response:
-        Response to check if the originating request
-        used 'HEAD' as a method.
-    """
-    # FIXME: Can we do this somehow without accessing private httplib _method?
-    method = response._method
-    if isinstance(method, int):  # Platform-specific: Appengine
-        return method == 3
-    return method.upper() == "HEAD"
diff --git a/infrastructure/sandbox/Data/lambda/urllib3/util/retry.py b/infrastructure/sandbox/Data/lambda/urllib3/util/retry.py
deleted file mode 100644
index 3398323fd..000000000
--- a/infrastructure/sandbox/Data/lambda/urllib3/util/retry.py
+++ /dev/null
@@ -1,620 +0,0 @@
-from __future__ import absolute_import
-
-import email
-import logging
-import re
-import time
-import warnings
-from collections import namedtuple
-from itertools import takewhile
-
-from ..exceptions import (
-    ConnectTimeoutError,
-    InvalidHeader,
-    MaxRetryError,
-    ProtocolError,
-    ProxyError,
-    ReadTimeoutError,
-    ResponseError,
-)
-from ..packages import six
-
-log = logging.getLogger(__name__)
-
-
-# Data structure for representing the metadata of requests that result in a retry.
-RequestHistory = namedtuple(
-    "RequestHistory", ["method", "url", "error", "status", "redirect_location"]
-)
-
-
-# TODO: In v2 we can remove this sentinel and metaclass with deprecated options.
-_Default = object()
-
-
-class _RetryMeta(type):
-    @property
-    def DEFAULT_METHOD_WHITELIST(cls):
-        warnings.warn(
-            "Using 'Retry.DEFAULT_METHOD_WHITELIST' is deprecated and "
-            "will be removed in v2.0. Use 'Retry.DEFAULT_ALLOWED_METHODS' instead",
-            DeprecationWarning,
-        )
-        return cls.DEFAULT_ALLOWED_METHODS
-
-    @DEFAULT_METHOD_WHITELIST.setter
-    def DEFAULT_METHOD_WHITELIST(cls, value):
-        warnings.warn(
-            "Using 'Retry.DEFAULT_METHOD_WHITELIST' is deprecated and "
-            "will be removed in v2.0. Use 'Retry.DEFAULT_ALLOWED_METHODS' instead",
-            DeprecationWarning,
-        )
-        cls.DEFAULT_ALLOWED_METHODS = value
-
-    @property
-    def DEFAULT_REDIRECT_HEADERS_BLACKLIST(cls):
-        warnings.warn(
-            "Using 'Retry.DEFAULT_REDIRECT_HEADERS_BLACKLIST' is deprecated and "
-            "will be removed in v2.0. Use 'Retry.DEFAULT_REMOVE_HEADERS_ON_REDIRECT' instead",
-            DeprecationWarning,
-        )
-        return cls.DEFAULT_REMOVE_HEADERS_ON_REDIRECT
-
-    @DEFAULT_REDIRECT_HEADERS_BLACKLIST.setter
-    def DEFAULT_REDIRECT_HEADERS_BLACKLIST(cls, value):
-        warnings.warn(
-            "Using 'Retry.DEFAULT_REDIRECT_HEADERS_BLACKLIST' is deprecated and "
-            "will be removed in v2.0. Use 'Retry.DEFAULT_REMOVE_HEADERS_ON_REDIRECT' instead",
-            DeprecationWarning,
-        )
-        cls.DEFAULT_REMOVE_HEADERS_ON_REDIRECT = value
-
-    @property
-    def BACKOFF_MAX(cls):
-        warnings.warn(
-            "Using 'Retry.BACKOFF_MAX' is deprecated and "
-            "will be removed in v2.0. Use 'Retry.DEFAULT_BACKOFF_MAX' instead",
-            DeprecationWarning,
-        )
-        return cls.DEFAULT_BACKOFF_MAX
-
-    @BACKOFF_MAX.setter
-    def BACKOFF_MAX(cls, value):
-        warnings.warn(
-            "Using 'Retry.BACKOFF_MAX' is deprecated and "
-            "will be removed in v2.0. Use 'Retry.DEFAULT_BACKOFF_MAX' instead",
-            DeprecationWarning,
-        )
-        cls.DEFAULT_BACKOFF_MAX = value
-
-
-@six.add_metaclass(_RetryMeta)
-class Retry(object):
-    """Retry configuration.
-
-    Each retry attempt will create a new Retry object with updated values, so
-    they can be safely reused.
-
-    Retries can be defined as a default for a pool::
-
-        retries = Retry(connect=5, read=2, redirect=5)
-        http = PoolManager(retries=retries)
-        response = http.request('GET', 'http://example.com/')
-
-    Or per-request (which overrides the default for the pool)::
-
-        response = http.request('GET', 'http://example.com/', retries=Retry(10))
-
-    Retries can be disabled by passing ``False``::
-
-        response = http.request('GET', 'http://example.com/', retries=False)
-
-    Errors will be wrapped in :class:`~urllib3.exceptions.MaxRetryError` unless
-    retries are disabled, in which case the causing exception will be raised.
-
-    :param int total:
-        Total number of retries to allow. Takes precedence over other counts.
-
-        Set to ``None`` to remove this constraint and fall back on other
-        counts.
-
-        Set to ``0`` to fail on the first retry.
-
-        Set to ``False`` to disable and imply ``raise_on_redirect=False``.
-
-    :param int connect:
-        How many connection-related errors to retry on.
-
-        These are errors raised before the request is sent to the remote server,
-        which we assume has not triggered the server to process the request.
-
-        Set to ``0`` to fail on the first retry of this type.
-
-    :param int read:
-        How many times to retry on read errors.
-
-        These errors are raised after the request was sent to the server, so the
-        request may have side-effects.
-
-        Set to ``0`` to fail on the first retry of this type.
-
-    :param int redirect:
-        How many redirects to perform. Limit this to avoid infinite redirect
-        loops.
-
-        A redirect is a HTTP response with a status code 301, 302, 303, 307 or
-        308.
-
-        Set to ``0`` to fail on the first retry of this type.
-
-        Set to ``False`` to disable and imply ``raise_on_redirect=False``.
-
-    :param int status:
-        How many times to retry on bad status codes.
-
-        These are retries made on responses, where status code matches
-        ``status_forcelist``.
-
-        Set to ``0`` to fail on the first retry of this type.
-
-    :param int other:
-        How many times to retry on other errors.
-
-        Other errors are errors that are not connect, read, redirect or status errors.
-        These errors might be raised after the request was sent to the server, so the
-        request might have side-effects.
-
-        Set to ``0`` to fail on the first retry of this type.
-
-        If ``total`` is not set, it's a good idea to set this to 0 to account
-        for unexpected edge cases and avoid infinite retry loops.
-
-    :param iterable allowed_methods:
-        Set of uppercased HTTP method verbs that we should retry on.
-
-        By default, we only retry on methods which are considered to be
-        idempotent (multiple requests with the same parameters end with the
-        same state). See :attr:`Retry.DEFAULT_ALLOWED_METHODS`.
-
-        Set to a ``False`` value to retry on any verb.
-
-        .. warning::
-
-            Previously this parameter was named ``method_whitelist``, that
-            usage is deprecated in v1.26.0 and will be removed in v2.0.
-
-    :param iterable status_forcelist:
-        A set of integer HTTP status codes that we should force a retry on.
-        A retry is initiated if the request method is in ``allowed_methods``
-        and the response status code is in ``status_forcelist``.
-
-        By default, this is disabled with ``None``.
-
-    :param float backoff_factor:
-        A backoff factor to apply between attempts after the second try
-        (most errors are resolved immediately by a second try without a
-        delay). urllib3 will sleep for::
-
-            {backoff factor} * (2 ** ({number of total retries} - 1))
-
-        seconds. If the backoff_factor is 0.1, then :func:`.sleep` will sleep
-        for [0.0s, 0.2s, 0.4s, ...] between retries. It will never be longer
-        than :attr:`Retry.DEFAULT_BACKOFF_MAX`.
-
-        By default, backoff is disabled (set to 0).
-
-    :param bool raise_on_redirect: Whether, if the number of redirects is
-        exhausted, to raise a MaxRetryError, or to return a response with a
-        response code in the 3xx range.
-
-    :param bool raise_on_status: Similar meaning to ``raise_on_redirect``:
-        whether we should raise an exception, or return a response,
-        if status falls in ``status_forcelist`` range and retries have
-        been exhausted.
-
-    :param tuple history: The history of the request encountered during
-        each call to :meth:`~Retry.increment`. The list is in the order
-        the requests occurred. Each list item is of class :class:`RequestHistory`.
-
-    :param bool respect_retry_after_header:
-        Whether to respect Retry-After header on status codes defined as
-        :attr:`Retry.RETRY_AFTER_STATUS_CODES` or not.
-
-    :param iterable remove_headers_on_redirect:
-        Sequence of headers to remove from the request when a response
-        indicating a redirect is returned before firing off the redirected
-        request.
-    """
-
-    #: Default methods to be used for ``allowed_methods``
-    DEFAULT_ALLOWED_METHODS = frozenset(
-        ["HEAD", "GET", "PUT", "DELETE", "OPTIONS", "TRACE"]
-    )
-
-    #: Default status codes to be used for ``status_forcelist``
-    RETRY_AFTER_STATUS_CODES = frozenset([413, 429, 503])
-
-    #: Default headers to be used for ``remove_headers_on_redirect``
-    DEFAULT_REMOVE_HEADERS_ON_REDIRECT = frozenset(["Authorization"])
-
-    #: Maximum backoff time.
-    DEFAULT_BACKOFF_MAX = 120
-
-    def __init__(
-        self,
-        total=10,
-        connect=None,
-        read=None,
-        redirect=None,
-        status=None,
-        other=None,
-        allowed_methods=_Default,
-        status_forcelist=None,
-        backoff_factor=0,
-        raise_on_redirect=True,
-        raise_on_status=True,
-        history=None,
-        respect_retry_after_header=True,
-        remove_headers_on_redirect=_Default,
-        # TODO: Deprecated, remove in v2.0
-        method_whitelist=_Default,
-    ):
-
-        if method_whitelist is not _Default:
-            if allowed_methods is not _Default:
-                raise ValueError(
-                    "Using both 'allowed_methods' and "
-                    "'method_whitelist' together is not allowed. "
-                    "Instead only use 'allowed_methods'"
-                )
-            warnings.warn(
-                "Using 'method_whitelist' with Retry is deprecated and "
-                "will be removed in v2.0.
Use 'allowed_methods' instead", - DeprecationWarning, - stacklevel=2, - ) - allowed_methods = method_whitelist - if allowed_methods is _Default: - allowed_methods = self.DEFAULT_ALLOWED_METHODS - if remove_headers_on_redirect is _Default: - remove_headers_on_redirect = self.DEFAULT_REMOVE_HEADERS_ON_REDIRECT - - self.total = total - self.connect = connect - self.read = read - self.status = status - self.other = other - - if redirect is False or total is False: - redirect = 0 - raise_on_redirect = False - - self.redirect = redirect - self.status_forcelist = status_forcelist or set() - self.allowed_methods = allowed_methods - self.backoff_factor = backoff_factor - self.raise_on_redirect = raise_on_redirect - self.raise_on_status = raise_on_status - self.history = history or tuple() - self.respect_retry_after_header = respect_retry_after_header - self.remove_headers_on_redirect = frozenset( - [h.lower() for h in remove_headers_on_redirect] - ) - - def new(self, **kw): - params = dict( - total=self.total, - connect=self.connect, - read=self.read, - redirect=self.redirect, - status=self.status, - other=self.other, - status_forcelist=self.status_forcelist, - backoff_factor=self.backoff_factor, - raise_on_redirect=self.raise_on_redirect, - raise_on_status=self.raise_on_status, - history=self.history, - remove_headers_on_redirect=self.remove_headers_on_redirect, - respect_retry_after_header=self.respect_retry_after_header, - ) - - # TODO: If already given in **kw we use what's given to us - # If not given we need to figure out what to pass. We decide - # based on whether our class has the 'method_whitelist' property - # and if so we pass the deprecated 'method_whitelist' otherwise - # we use 'allowed_methods'. Remove in v2.0 - if "method_whitelist" not in kw and "allowed_methods" not in kw: - if "method_whitelist" in self.__dict__: - warnings.warn( - "Using 'method_whitelist' with Retry is deprecated and " - "will be removed in v2.0. 
Use 'allowed_methods' instead", - DeprecationWarning, - ) - params["method_whitelist"] = self.allowed_methods - else: - params["allowed_methods"] = self.allowed_methods - - params.update(kw) - return type(self)(**params) - - @classmethod - def from_int(cls, retries, redirect=True, default=None): - """Backwards-compatibility for the old retries format.""" - if retries is None: - retries = default if default is not None else cls.DEFAULT - - if isinstance(retries, Retry): - return retries - - redirect = bool(redirect) and None - new_retries = cls(retries, redirect=redirect) - log.debug("Converted retries value: %r -> %r", retries, new_retries) - return new_retries - - def get_backoff_time(self): - """Formula for computing the current backoff - - :rtype: float - """ - # We want to consider only the last consecutive errors sequence (Ignore redirects). - consecutive_errors_len = len( - list( - takewhile(lambda x: x.redirect_location is None, reversed(self.history)) - ) - ) - if consecutive_errors_len <= 1: - return 0 - - backoff_value = self.backoff_factor * (2 ** (consecutive_errors_len - 1)) - return min(self.DEFAULT_BACKOFF_MAX, backoff_value) - - def parse_retry_after(self, retry_after): - # Whitespace: https://tools.ietf.org/html/rfc7230#section-3.2.4 - if re.match(r"^\s*[0-9]+\s*$", retry_after): - seconds = int(retry_after) - else: - retry_date_tuple = email.utils.parsedate_tz(retry_after) - if retry_date_tuple is None: - raise InvalidHeader("Invalid Retry-After header: %s" % retry_after) - if retry_date_tuple[9] is None: # Python 2 - # Assume UTC if no timezone was specified - # On Python2.7, parsedate_tz returns None for a timezone offset - # instead of 0 if no timezone is given, where mktime_tz treats - # a None timezone offset as local time. 
- retry_date_tuple = retry_date_tuple[:9] + (0,) + retry_date_tuple[10:] - - retry_date = email.utils.mktime_tz(retry_date_tuple) - seconds = retry_date - time.time() - - if seconds < 0: - seconds = 0 - - return seconds - - def get_retry_after(self, response): - """Get the value of Retry-After in seconds.""" - - retry_after = response.getheader("Retry-After") - - if retry_after is None: - return None - - return self.parse_retry_after(retry_after) - - def sleep_for_retry(self, response=None): - retry_after = self.get_retry_after(response) - if retry_after: - time.sleep(retry_after) - return True - - return False - - def _sleep_backoff(self): - backoff = self.get_backoff_time() - if backoff <= 0: - return - time.sleep(backoff) - - def sleep(self, response=None): - """Sleep between retry attempts. - - This method will respect a server's ``Retry-After`` response header - and sleep the duration of the time requested. If that is not present, it - will use an exponential backoff. By default, the backoff factor is 0 and - this method will return immediately. - """ - - if self.respect_retry_after_header and response: - slept = self.sleep_for_retry(response) - if slept: - return - - self._sleep_backoff() - - def _is_connection_error(self, err): - """Errors when we're fairly sure that the server did not receive the - request, so it should be safe to retry. - """ - if isinstance(err, ProxyError): - err = err.original_error - return isinstance(err, ConnectTimeoutError) - - def _is_read_error(self, err): - """Errors that occur after the request has been started, so we should - assume that the server began processing it. 
- """ - return isinstance(err, (ReadTimeoutError, ProtocolError)) - - def _is_method_retryable(self, method): - """Checks if a given HTTP method should be retried upon, depending if - it is included in the allowed_methods - """ - # TODO: For now favor if the Retry implementation sets its own method_whitelist - # property outside of our constructor to avoid breaking custom implementations. - if "method_whitelist" in self.__dict__: - warnings.warn( - "Using 'method_whitelist' with Retry is deprecated and " - "will be removed in v2.0. Use 'allowed_methods' instead", - DeprecationWarning, - ) - allowed_methods = self.method_whitelist - else: - allowed_methods = self.allowed_methods - - if allowed_methods and method.upper() not in allowed_methods: - return False - return True - - def is_retry(self, method, status_code, has_retry_after=False): - """Is this method/status code retryable? (Based on allowlists and control - variables such as the number of total retries to allow, whether to - respect the Retry-After header, whether this header is present, and - whether the returned status code is on the list of status codes to - be retried upon on the presence of the aforementioned header) - """ - if not self._is_method_retryable(method): - return False - - if self.status_forcelist and status_code in self.status_forcelist: - return True - - return ( - self.total - and self.respect_retry_after_header - and has_retry_after - and (status_code in self.RETRY_AFTER_STATUS_CODES) - ) - - def is_exhausted(self): - """Are we out of retries?""" - retry_counts = ( - self.total, - self.connect, - self.read, - self.redirect, - self.status, - self.other, - ) - retry_counts = list(filter(None, retry_counts)) - if not retry_counts: - return False - - return min(retry_counts) < 0 - - def increment( - self, - method=None, - url=None, - response=None, - error=None, - _pool=None, - _stacktrace=None, - ): - """Return a new Retry object with incremented retry counters. 
- - :param response: A response object, or None, if the server did not - return a response. - :type response: :class:`~urllib3.response.HTTPResponse` - :param Exception error: An error encountered during the request, or - None if the response was received successfully. - - :return: A new ``Retry`` object. - """ - if self.total is False and error: - # Disabled, indicate to re-raise the error. - raise six.reraise(type(error), error, _stacktrace) - - total = self.total - if total is not None: - total -= 1 - - connect = self.connect - read = self.read - redirect = self.redirect - status_count = self.status - other = self.other - cause = "unknown" - status = None - redirect_location = None - - if error and self._is_connection_error(error): - # Connect retry? - if connect is False: - raise six.reraise(type(error), error, _stacktrace) - elif connect is not None: - connect -= 1 - - elif error and self._is_read_error(error): - # Read retry? - if read is False or not self._is_method_retryable(method): - raise six.reraise(type(error), error, _stacktrace) - elif read is not None: - read -= 1 - - elif error: - # Other retry? - if other is not None: - other -= 1 - - elif response and response.get_redirect_location(): - # Redirect retry? 
- if redirect is not None: - redirect -= 1 - cause = "too many redirects" - redirect_location = response.get_redirect_location() - status = response.status - - else: - # Incrementing because of a server error like a 500 in - # status_forcelist and the given method is in the allowed_methods - cause = ResponseError.GENERIC_ERROR - if response and response.status: - if status_count is not None: - status_count -= 1 - cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) - status = response.status - - history = self.history + ( - RequestHistory(method, url, error, status, redirect_location), - ) - - new_retry = self.new( - total=total, - connect=connect, - read=read, - redirect=redirect, - status=status_count, - other=other, - history=history, - ) - - if new_retry.is_exhausted(): - raise MaxRetryError(_pool, url, error or ResponseError(cause)) - - log.debug("Incremented Retry for (url='%s'): %r", url, new_retry) - - return new_retry - - def __repr__(self): - return ( - "{cls.__name__}(total={self.total}, connect={self.connect}, " - "read={self.read}, redirect={self.redirect}, status={self.status})" - ).format(cls=type(self), self=self) - - def __getattr__(self, item): - if item == "method_whitelist": - # TODO: Remove this deprecated alias in v2.0 - warnings.warn( - "Using 'method_whitelist' with Retry is deprecated and " - "will be removed in v2.0. 
Use 'allowed_methods' instead", - DeprecationWarning, - ) - return self.allowed_methods - try: - return getattr(super(Retry, self), item) - except AttributeError: - return getattr(Retry, item) - - -# For backwards compatibility (equivalent to pre-v1.9): -Retry.DEFAULT = Retry(3) diff --git a/infrastructure/sandbox/Data/lambda/urllib3/util/ssl_.py b/infrastructure/sandbox/Data/lambda/urllib3/util/ssl_.py deleted file mode 100644 index 8f867812a..000000000 --- a/infrastructure/sandbox/Data/lambda/urllib3/util/ssl_.py +++ /dev/null @@ -1,495 +0,0 @@ -from __future__ import absolute_import - -import hmac -import os -import sys -import warnings -from binascii import hexlify, unhexlify -from hashlib import md5, sha1, sha256 - -from ..exceptions import ( - InsecurePlatformWarning, - ProxySchemeUnsupported, - SNIMissingWarning, - SSLError, -) -from ..packages import six -from .url import BRACELESS_IPV6_ADDRZ_RE, IPV4_RE - -SSLContext = None -SSLTransport = None -HAS_SNI = False -IS_PYOPENSSL = False -IS_SECURETRANSPORT = False -ALPN_PROTOCOLS = ["http/1.1"] - -# Maps the length of a digest to a possible hash function producing this digest -HASHFUNC_MAP = {32: md5, 40: sha1, 64: sha256} - - -def _const_compare_digest_backport(a, b): - """ - Compare two digests of equal length in constant time. - - The digests must be of type str/bytes. - Returns True if the digests match, and False otherwise. - """ - result = abs(len(a) - len(b)) - for left, right in zip(bytearray(a), bytearray(b)): - result |= left ^ right - return result == 0 - - -_const_compare_digest = getattr(hmac, "compare_digest", _const_compare_digest_backport) - -try: # Test for SSL features - import ssl - from ssl import CERT_REQUIRED, wrap_socket -except ImportError: - pass - -try: - from ssl import HAS_SNI # Has SNI? 
-except ImportError: - pass - -try: - from .ssltransport import SSLTransport -except ImportError: - pass - - -try: # Platform-specific: Python 3.6 - from ssl import PROTOCOL_TLS - - PROTOCOL_SSLv23 = PROTOCOL_TLS -except ImportError: - try: - from ssl import PROTOCOL_SSLv23 as PROTOCOL_TLS - - PROTOCOL_SSLv23 = PROTOCOL_TLS - except ImportError: - PROTOCOL_SSLv23 = PROTOCOL_TLS = 2 - -try: - from ssl import PROTOCOL_TLS_CLIENT -except ImportError: - PROTOCOL_TLS_CLIENT = PROTOCOL_TLS - - -try: - from ssl import OP_NO_COMPRESSION, OP_NO_SSLv2, OP_NO_SSLv3 -except ImportError: - OP_NO_SSLv2, OP_NO_SSLv3 = 0x1000000, 0x2000000 - OP_NO_COMPRESSION = 0x20000 - - -try: # OP_NO_TICKET was added in Python 3.6 - from ssl import OP_NO_TICKET -except ImportError: - OP_NO_TICKET = 0x4000 - - -# A secure default. -# Sources for more information on TLS ciphers: -# -# - https://wiki.mozilla.org/Security/Server_Side_TLS -# - https://www.ssllabs.com/projects/best-practices/index.html -# - https://hynek.me/articles/hardening-your-web-servers-ssl-ciphers/ -# -# The general intent is: -# - prefer cipher suites that offer perfect forward secrecy (DHE/ECDHE), -# - prefer ECDHE over DHE for better performance, -# - prefer any AES-GCM and ChaCha20 over any AES-CBC for better performance and -# security, -# - prefer AES-GCM over ChaCha20 because hardware-accelerated AES is common, -# - disable NULL authentication, MD5 MACs, DSS, and other -# insecure ciphers for security reasons. -# - NOTE: TLS 1.3 cipher suites are managed through a different interface -# not exposed by CPython (yet!) and are enabled by default if they're available. -DEFAULT_CIPHERS = ":".join( - [ - "ECDHE+AESGCM", - "ECDHE+CHACHA20", - "DHE+AESGCM", - "DHE+CHACHA20", - "ECDH+AESGCM", - "DH+AESGCM", - "ECDH+AES", - "DH+AES", - "RSA+AESGCM", - "RSA+AES", - "!aNULL", - "!eNULL", - "!MD5", - "!DSS", - ] -) - -try: - from ssl import SSLContext # Modern SSL? 
-except ImportError: - - class SSLContext(object): # Platform-specific: Python 2 - def __init__(self, protocol_version): - self.protocol = protocol_version - # Use default values from a real SSLContext - self.check_hostname = False - self.verify_mode = ssl.CERT_NONE - self.ca_certs = None - self.options = 0 - self.certfile = None - self.keyfile = None - self.ciphers = None - - def load_cert_chain(self, certfile, keyfile): - self.certfile = certfile - self.keyfile = keyfile - - def load_verify_locations(self, cafile=None, capath=None, cadata=None): - self.ca_certs = cafile - - if capath is not None: - raise SSLError("CA directories not supported in older Pythons") - - if cadata is not None: - raise SSLError("CA data not supported in older Pythons") - - def set_ciphers(self, cipher_suite): - self.ciphers = cipher_suite - - def wrap_socket(self, socket, server_hostname=None, server_side=False): - warnings.warn( - "A true SSLContext object is not available. This prevents " - "urllib3 from configuring SSL appropriately and may cause " - "certain SSL connections to fail. You can upgrade to a newer " - "version of Python to solve this. For more information, see " - "https://urllib3.readthedocs.io/en/1.26.x/advanced-usage.html" - "#ssl-warnings", - InsecurePlatformWarning, - ) - kwargs = { - "keyfile": self.keyfile, - "certfile": self.certfile, - "ca_certs": self.ca_certs, - "cert_reqs": self.verify_mode, - "ssl_version": self.protocol, - "server_side": server_side, - } - return wrap_socket(socket, ciphers=self.ciphers, **kwargs) - - -def assert_fingerprint(cert, fingerprint): - """ - Checks if given fingerprint matches the supplied certificate. - - :param cert: - Certificate as bytes object. - :param fingerprint: - Fingerprint as string of hexdigits, can be interspersed by colons. 
- """ - - fingerprint = fingerprint.replace(":", "").lower() - digest_length = len(fingerprint) - hashfunc = HASHFUNC_MAP.get(digest_length) - if not hashfunc: - raise SSLError("Fingerprint of invalid length: {0}".format(fingerprint)) - - # We need encode() here for py32; works on py2 and p33. - fingerprint_bytes = unhexlify(fingerprint.encode()) - - cert_digest = hashfunc(cert).digest() - - if not _const_compare_digest(cert_digest, fingerprint_bytes): - raise SSLError( - 'Fingerprints did not match. Expected "{0}", got "{1}".'.format( - fingerprint, hexlify(cert_digest) - ) - ) - - -def resolve_cert_reqs(candidate): - """ - Resolves the argument to a numeric constant, which can be passed to - the wrap_socket function/method from the ssl module. - Defaults to :data:`ssl.CERT_REQUIRED`. - If given a string it is assumed to be the name of the constant in the - :mod:`ssl` module or its abbreviation. - (So you can specify `REQUIRED` instead of `CERT_REQUIRED`. - If it's neither `None` nor a string we assume it is already the numeric - constant which can directly be passed to wrap_socket. - """ - if candidate is None: - return CERT_REQUIRED - - if isinstance(candidate, str): - res = getattr(ssl, candidate, None) - if res is None: - res = getattr(ssl, "CERT_" + candidate) - return res - - return candidate - - -def resolve_ssl_version(candidate): - """ - like resolve_cert_reqs - """ - if candidate is None: - return PROTOCOL_TLS - - if isinstance(candidate, str): - res = getattr(ssl, candidate, None) - if res is None: - res = getattr(ssl, "PROTOCOL_" + candidate) - return res - - return candidate - - -def create_urllib3_context( - ssl_version=None, cert_reqs=None, options=None, ciphers=None -): - """All arguments have the same meaning as ``ssl_wrap_socket``. - - By default, this function does a lot of the same work that - ``ssl.create_default_context`` does on Python 3.4+. 
It: - - - Disables SSLv2, SSLv3, and compression - - Sets a restricted set of server ciphers - - If you wish to enable SSLv3, you can do:: - - from urllib3.util import ssl_ - context = ssl_.create_urllib3_context() - context.options &= ~ssl_.OP_NO_SSLv3 - - You can do the same to enable compression (substituting ``COMPRESSION`` - for ``SSLv3`` in the last line above). - - :param ssl_version: - The desired protocol version to use. This will default to - PROTOCOL_SSLv23 which will negotiate the highest protocol that both - the server and your installation of OpenSSL support. - :param cert_reqs: - Whether to require the certificate verification. This defaults to - ``ssl.CERT_REQUIRED``. - :param options: - Specific OpenSSL options. These default to ``ssl.OP_NO_SSLv2``, - ``ssl.OP_NO_SSLv3``, ``ssl.OP_NO_COMPRESSION``, and ``ssl.OP_NO_TICKET``. - :param ciphers: - Which cipher suites to allow the server to select. - :returns: - Constructed SSLContext object with specified options - :rtype: SSLContext - """ - # PROTOCOL_TLS is deprecated in Python 3.10 - if not ssl_version or ssl_version == PROTOCOL_TLS: - ssl_version = PROTOCOL_TLS_CLIENT - - context = SSLContext(ssl_version) - - context.set_ciphers(ciphers or DEFAULT_CIPHERS) - - # Setting the default here, as we may have no ssl module on import - cert_reqs = ssl.CERT_REQUIRED if cert_reqs is None else cert_reqs - - if options is None: - options = 0 - # SSLv2 is easily broken and is considered harmful and dangerous - options |= OP_NO_SSLv2 - # SSLv3 has several problems and is now dangerous - options |= OP_NO_SSLv3 - # Disable compression to prevent CRIME attacks for OpenSSL 1.0+ - # (issue #309) - options |= OP_NO_COMPRESSION - # TLSv1.2 only. Unless set explicitly, do not request tickets. - # This may save some bandwidth on wire, and although the ticket is encrypted, - # there is a risk associated with it being on wire, - # if the server is not rotating its ticketing keys properly. 
- options |= OP_NO_TICKET - - context.options |= options - - # Enable post-handshake authentication for TLS 1.3, see GH #1634. PHA is - # necessary for conditional client cert authentication with TLS 1.3. - # The attribute is None for OpenSSL <= 1.1.0 or does not exist in older - # versions of Python. We only enable on Python 3.7.4+ or if certificate - # verification is enabled to work around Python issue #37428 - # See: https://bugs.python.org/issue37428 - if (cert_reqs == ssl.CERT_REQUIRED or sys.version_info >= (3, 7, 4)) and getattr( - context, "post_handshake_auth", None - ) is not None: - context.post_handshake_auth = True - - def disable_check_hostname(): - if ( - getattr(context, "check_hostname", None) is not None - ): # Platform-specific: Python 3.2 - # We do our own verification, including fingerprints and alternative - # hostnames. So disable it here - context.check_hostname = False - - # The order of the below lines setting verify_mode and check_hostname - # matter due to safe-guards SSLContext has to prevent an SSLContext with - # check_hostname=True, verify_mode=NONE/OPTIONAL. This is made even more - # complex because we don't know whether PROTOCOL_TLS_CLIENT will be used - # or not so we don't know the initial state of the freshly created SSLContext. - if cert_reqs == ssl.CERT_REQUIRED: - context.verify_mode = cert_reqs - disable_check_hostname() - else: - disable_check_hostname() - context.verify_mode = cert_reqs - - # Enable logging of TLS session keys via defacto standard environment variable - # 'SSLKEYLOGFILE', if the feature is available (Python 3.8+). Skip empty values. 
- if hasattr(context, "keylog_filename"): - sslkeylogfile = os.environ.get("SSLKEYLOGFILE") - if sslkeylogfile: - context.keylog_filename = sslkeylogfile - - return context - - -def ssl_wrap_socket( - sock, - keyfile=None, - certfile=None, - cert_reqs=None, - ca_certs=None, - server_hostname=None, - ssl_version=None, - ciphers=None, - ssl_context=None, - ca_cert_dir=None, - key_password=None, - ca_cert_data=None, - tls_in_tls=False, -): - """ - All arguments except for server_hostname, ssl_context, and ca_cert_dir have - the same meaning as they do when using :func:`ssl.wrap_socket`. - - :param server_hostname: - When SNI is supported, the expected hostname of the certificate - :param ssl_context: - A pre-made :class:`SSLContext` object. If none is provided, one will - be created using :func:`create_urllib3_context`. - :param ciphers: - A string of ciphers we wish the client to support. - :param ca_cert_dir: - A directory containing CA certificates in multiple separate files, as - supported by OpenSSL's -CApath flag or the capath argument to - SSLContext.load_verify_locations(). - :param key_password: - Optional password if the keyfile is encrypted. - :param ca_cert_data: - Optional string containing CA certificates in PEM format suitable for - passing as the cadata parameter to SSLContext.load_verify_locations() - :param tls_in_tls: - Use SSLTransport to wrap the existing socket. - """ - context = ssl_context - if context is None: - # Note: This branch of code and all the variables in it are no longer - # used by urllib3 itself. We should consider deprecating and removing - # this code. 
- context = create_urllib3_context(ssl_version, cert_reqs, ciphers=ciphers) - - if ca_certs or ca_cert_dir or ca_cert_data: - try: - context.load_verify_locations(ca_certs, ca_cert_dir, ca_cert_data) - except (IOError, OSError) as e: - raise SSLError(e) - - elif ssl_context is None and hasattr(context, "load_default_certs"): - # try to load OS default certs; works well on Windows (require Python3.4+) - context.load_default_certs() - - # Attempt to detect if we get the goofy behavior of the - # keyfile being encrypted and OpenSSL asking for the - # passphrase via the terminal and instead error out. - if keyfile and key_password is None and _is_key_file_encrypted(keyfile): - raise SSLError("Client private key is encrypted, password is required") - - if certfile: - if key_password is None: - context.load_cert_chain(certfile, keyfile) - else: - context.load_cert_chain(certfile, keyfile, key_password) - - try: - if hasattr(context, "set_alpn_protocols"): - context.set_alpn_protocols(ALPN_PROTOCOLS) - except NotImplementedError: # Defensive: in CI, we always have set_alpn_protocols - pass - - # If we detect server_hostname is an IP address then the SNI - # extension should not be used according to RFC3546 Section 3.1 - use_sni_hostname = server_hostname and not is_ipaddress(server_hostname) - # SecureTransport uses server_hostname in certificate verification. - send_sni = (use_sni_hostname and HAS_SNI) or ( - IS_SECURETRANSPORT and server_hostname - ) - # Do not warn the user if server_hostname is an invalid SNI hostname. - if not HAS_SNI and use_sni_hostname: - warnings.warn( - "An HTTPS request has been made, but the SNI (Server Name " - "Indication) extension to TLS is not available on this platform. " - "This may cause the server to present an incorrect TLS " - "certificate, which can cause validation failures. You can upgrade to " - "a newer version of Python to solve this. 
For more information, see " - "https://urllib3.readthedocs.io/en/1.26.x/advanced-usage.html" - "#ssl-warnings", - SNIMissingWarning, - ) - - if send_sni: - ssl_sock = _ssl_wrap_socket_impl( - sock, context, tls_in_tls, server_hostname=server_hostname - ) - else: - ssl_sock = _ssl_wrap_socket_impl(sock, context, tls_in_tls) - return ssl_sock - - -def is_ipaddress(hostname): - """Detects whether the hostname given is an IPv4 or IPv6 address. - Also detects IPv6 addresses with Zone IDs. - - :param str hostname: Hostname to examine. - :return: True if the hostname is an IP address, False otherwise. - """ - if not six.PY2 and isinstance(hostname, bytes): - # IDN A-label bytes are ASCII compatible. - hostname = hostname.decode("ascii") - return bool(IPV4_RE.match(hostname) or BRACELESS_IPV6_ADDRZ_RE.match(hostname)) - - -def _is_key_file_encrypted(key_file): - """Detects if a key file is encrypted or not.""" - with open(key_file, "r") as f: - for line in f: - # Look for Proc-Type: 4,ENCRYPTED - if "ENCRYPTED" in line: - return True - - return False - - -def _ssl_wrap_socket_impl(sock, ssl_context, tls_in_tls, server_hostname=None): - if tls_in_tls: - if not SSLTransport: - # Import error, ssl is not available. 
- raise ProxySchemeUnsupported( - "TLS in TLS requires support for the 'ssl' module" - ) - - SSLTransport._validate_ssl_context_for_tls_in_tls(ssl_context) - return SSLTransport(sock, ssl_context, server_hostname) - - if server_hostname: - return ssl_context.wrap_socket(sock, server_hostname=server_hostname) - else: - return ssl_context.wrap_socket(sock) diff --git a/infrastructure/sandbox/Data/lambda/urllib3/util/ssl_match_hostname.py b/infrastructure/sandbox/Data/lambda/urllib3/util/ssl_match_hostname.py deleted file mode 100644 index 1dd950c48..000000000 --- a/infrastructure/sandbox/Data/lambda/urllib3/util/ssl_match_hostname.py +++ /dev/null @@ -1,159 +0,0 @@ -"""The match_hostname() function from Python 3.3.3, essential when using SSL.""" - -# Note: This file is under the PSF license as the code comes from the python -# stdlib. http://docs.python.org/3/license.html - -import re -import sys - -# ipaddress has been backported to 2.6+ in pypi. If it is installed on the -# system, use it to handle IPAddress ServerAltnames (this was added in -# python-3.5) otherwise only do DNS matching. This allows -# util.ssl_match_hostname to continue to be used in Python 2.7. -try: - import ipaddress -except ImportError: - ipaddress = None - -__version__ = "3.5.0.1" - - -class CertificateError(ValueError): - pass - - -def _dnsname_match(dn, hostname, max_wildcards=1): - """Matching according to RFC 6125, section 6.4.3 - - http://tools.ietf.org/html/rfc6125#section-6.4.3 - """ - pats = [] - if not dn: - return False - - # Ported from python3-syntax: - # leftmost, *remainder = dn.split(r'.') - parts = dn.split(r".") - leftmost = parts[0] - remainder = parts[1:] - - wildcards = leftmost.count("*") - if wildcards > max_wildcards: - # Issue #17980: avoid denials of service by refusing more - # than one wildcard per fragment. A survey of established - # policy among SSL implementations showed it to be a - # reasonable choice. 
- raise CertificateError( - "too many wildcards in certificate DNS name: " + repr(dn) - ) - - # speed up common case w/o wildcards - if not wildcards: - return dn.lower() == hostname.lower() - - # RFC 6125, section 6.4.3, subitem 1. - # The client SHOULD NOT attempt to match a presented identifier in which - # the wildcard character comprises a label other than the left-most label. - if leftmost == "*": - # When '*' is a fragment by itself, it matches a non-empty dotless - # fragment. - pats.append("[^.]+") - elif leftmost.startswith("xn--") or hostname.startswith("xn--"): - # RFC 6125, section 6.4.3, subitem 3. - # The client SHOULD NOT attempt to match a presented identifier - # where the wildcard character is embedded within an A-label or - # U-label of an internationalized domain name. - pats.append(re.escape(leftmost)) - else: - # Otherwise, '*' matches any dotless string, e.g. www* - pats.append(re.escape(leftmost).replace(r"\*", "[^.]*")) - - # add the remaining fragments, ignore any wildcards - for frag in remainder: - pats.append(re.escape(frag)) - - pat = re.compile(r"\A" + r"\.".join(pats) + r"\Z", re.IGNORECASE) - return pat.match(hostname) - - -def _to_unicode(obj): - if isinstance(obj, str) and sys.version_info < (3,): - # ignored flake8 # F821 to support python 2.7 function - obj = unicode(obj, encoding="ascii", errors="strict") # noqa: F821 - return obj - - -def _ipaddress_match(ipname, host_ip): - """Exact matching of IP addresses. - - RFC 6125 explicitly doesn't define an algorithm for this - (section 1.7.2 - "Out of Scope"). - """ - # OpenSSL may add a trailing newline to a subjectAltName's IP address - # Divergence from upstream: ipaddress can't handle byte str - ip = ipaddress.ip_address(_to_unicode(ipname).rstrip()) - return ip == host_ip - - -def match_hostname(cert, hostname): - """Verify that *cert* (in decoded format as returned by - SSLSocket.getpeercert()) matches the *hostname*. 
RFC 2818 and RFC 6125 - rules are followed, but IP addresses are not accepted for *hostname*. - - CertificateError is raised on failure. On success, the function - returns nothing. - """ - if not cert: - raise ValueError( - "empty or no certificate, match_hostname needs a " - "SSL socket or SSL context with either " - "CERT_OPTIONAL or CERT_REQUIRED" - ) - try: - # Divergence from upstream: ipaddress can't handle byte str - host_ip = ipaddress.ip_address(_to_unicode(hostname)) - except (UnicodeError, ValueError): - # ValueError: Not an IP address (common case) - # UnicodeError: Divergence from upstream: Have to deal with ipaddress not taking - # byte strings. addresses should be all ascii, so we consider it not - # an ipaddress in this case - host_ip = None - except AttributeError: - # Divergence from upstream: Make ipaddress library optional - if ipaddress is None: - host_ip = None - else: # Defensive - raise - dnsnames = [] - san = cert.get("subjectAltName", ()) - for key, value in san: - if key == "DNS": - if host_ip is None and _dnsname_match(value, hostname): - return - dnsnames.append(value) - elif key == "IP Address": - if host_ip is not None and _ipaddress_match(value, host_ip): - return - dnsnames.append(value) - if not dnsnames: - # The subject is only checked when there is no dNSName entry - # in subjectAltName - for sub in cert.get("subject", ()): - for key, value in sub: - # XXX according to RFC 2818, the most specific Common Name - # must be used. 
- if key == "commonName": - if _dnsname_match(value, hostname): - return - dnsnames.append(value) - if len(dnsnames) > 1: - raise CertificateError( - "hostname %r " - "doesn't match either of %s" % (hostname, ", ".join(map(repr, dnsnames))) - ) - elif len(dnsnames) == 1: - raise CertificateError("hostname %r doesn't match %r" % (hostname, dnsnames[0])) - else: - raise CertificateError( - "no appropriate commonName or subjectAltName fields were found" - ) diff --git a/infrastructure/sandbox/Data/lambda/urllib3/util/ssltransport.py b/infrastructure/sandbox/Data/lambda/urllib3/util/ssltransport.py deleted file mode 100644 index 4a7105d17..000000000 --- a/infrastructure/sandbox/Data/lambda/urllib3/util/ssltransport.py +++ /dev/null @@ -1,221 +0,0 @@ -import io -import socket -import ssl - -from ..exceptions import ProxySchemeUnsupported -from ..packages import six - -SSL_BLOCKSIZE = 16384 - - -class SSLTransport: - """ - The SSLTransport wraps an existing socket and establishes an SSL connection. - - Contrary to Python's implementation of SSLSocket, it allows you to chain - multiple TLS connections together. It's particularly useful if you need to - implement TLS within TLS. - - The class supports most of the socket API operations. - """ - - @staticmethod - def _validate_ssl_context_for_tls_in_tls(ssl_context): - """ - Raises a ProxySchemeUnsupported if the provided ssl_context can't be used - for TLS in TLS. - - The only requirement is that the ssl_context provides the 'wrap_bio' - methods. 
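As the docstring above notes, the only requirement `SSLTransport` places on a context for TLS-in-TLS is a `wrap_bio` method. On Python 3 the native `ssl.SSLContext` always provides it; a quick capability check (a sketch of the guard's intent, not the vendored code) looks like:

```python
import ssl

# The TLS-in-TLS capability check reduces to wrap_bio availability:
# native SSLContext objects provide it, while some third-party context
# implementations may not.
ctx = ssl.create_default_context()
supports_tls_in_tls = hasattr(ctx, "wrap_bio")
print(supports_tls_in_tls)
```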
- """ - - if not hasattr(ssl_context, "wrap_bio"): - if six.PY2: - raise ProxySchemeUnsupported( - "TLS in TLS requires SSLContext.wrap_bio() which isn't " - "supported on Python 2" - ) - else: - raise ProxySchemeUnsupported( - "TLS in TLS requires SSLContext.wrap_bio() which isn't " - "available on non-native SSLContext" - ) - - def __init__( - self, socket, ssl_context, server_hostname=None, suppress_ragged_eofs=True - ): - """ - Create an SSLTransport around socket using the provided ssl_context. - """ - self.incoming = ssl.MemoryBIO() - self.outgoing = ssl.MemoryBIO() - - self.suppress_ragged_eofs = suppress_ragged_eofs - self.socket = socket - - self.sslobj = ssl_context.wrap_bio( - self.incoming, self.outgoing, server_hostname=server_hostname - ) - - # Perform initial handshake. - self._ssl_io_loop(self.sslobj.do_handshake) - - def __enter__(self): - return self - - def __exit__(self, *_): - self.close() - - def fileno(self): - return self.socket.fileno() - - def read(self, len=1024, buffer=None): - return self._wrap_ssl_read(len, buffer) - - def recv(self, len=1024, flags=0): - if flags != 0: - raise ValueError("non-zero flags not allowed in calls to recv") - return self._wrap_ssl_read(len) - - def recv_into(self, buffer, nbytes=None, flags=0): - if flags != 0: - raise ValueError("non-zero flags not allowed in calls to recv_into") - if buffer and (nbytes is None): - nbytes = len(buffer) - elif nbytes is None: - nbytes = 1024 - return self.read(nbytes, buffer) - - def sendall(self, data, flags=0): - if flags != 0: - raise ValueError("non-zero flags not allowed in calls to sendall") - count = 0 - with memoryview(data) as view, view.cast("B") as byte_view: - amount = len(byte_view) - while count < amount: - v = self.send(byte_view[count:]) - count += v - - def send(self, data, flags=0): - if flags != 0: - raise ValueError("non-zero flags not allowed in calls to send") - response = self._ssl_io_loop(self.sslobj.write, data) - return response - - def makefile( - 
self, mode="r", buffering=None, encoding=None, errors=None, newline=None - ): - """ - Python's httpclient uses makefile and buffered io when reading HTTP - messages and we need to support it. - - This is unfortunately a copy and paste of socket.py makefile with small - changes to point to the socket directly. - """ - if not set(mode) <= {"r", "w", "b"}: - raise ValueError("invalid mode %r (only r, w, b allowed)" % (mode,)) - - writing = "w" in mode - reading = "r" in mode or not writing - assert reading or writing - binary = "b" in mode - rawmode = "" - if reading: - rawmode += "r" - if writing: - rawmode += "w" - raw = socket.SocketIO(self, rawmode) - self.socket._io_refs += 1 - if buffering is None: - buffering = -1 - if buffering < 0: - buffering = io.DEFAULT_BUFFER_SIZE - if buffering == 0: - if not binary: - raise ValueError("unbuffered streams must be binary") - return raw - if reading and writing: - buffer = io.BufferedRWPair(raw, raw, buffering) - elif reading: - buffer = io.BufferedReader(raw, buffering) - else: - assert writing - buffer = io.BufferedWriter(raw, buffering) - if binary: - return buffer - text = io.TextIOWrapper(buffer, encoding, errors, newline) - text.mode = mode - return text - - def unwrap(self): - self._ssl_io_loop(self.sslobj.unwrap) - - def close(self): - self.socket.close() - - def getpeercert(self, binary_form=False): - return self.sslobj.getpeercert(binary_form) - - def version(self): - return self.sslobj.version() - - def cipher(self): - return self.sslobj.cipher() - - def selected_alpn_protocol(self): - return self.sslobj.selected_alpn_protocol() - - def selected_npn_protocol(self): - return self.sslobj.selected_npn_protocol() - - def shared_ciphers(self): - return self.sslobj.shared_ciphers() - - def compression(self): - return self.sslobj.compression() - - def settimeout(self, value): - self.socket.settimeout(value) - - def gettimeout(self): - return self.socket.gettimeout() - - def _decref_socketios(self): - 
self.socket._decref_socketios() - - def _wrap_ssl_read(self, len, buffer=None): - try: - return self._ssl_io_loop(self.sslobj.read, len, buffer) - except ssl.SSLError as e: - if e.errno == ssl.SSL_ERROR_EOF and self.suppress_ragged_eofs: - return 0 # eof, return 0. - else: - raise - - def _ssl_io_loop(self, func, *args): - """Performs an I/O loop between incoming/outgoing and the socket.""" - should_loop = True - ret = None - - while should_loop: - errno = None - try: - ret = func(*args) - except ssl.SSLError as e: - if e.errno not in (ssl.SSL_ERROR_WANT_READ, ssl.SSL_ERROR_WANT_WRITE): - # WANT_READ, and WANT_WRITE are expected, others are not. - raise e - errno = e.errno - - buf = self.outgoing.read() - self.socket.sendall(buf) - - if errno is None: - should_loop = False - elif errno == ssl.SSL_ERROR_WANT_READ: - buf = self.socket.recv(SSL_BLOCKSIZE) - if buf: - self.incoming.write(buf) - else: - self.incoming.write_eof() - return ret diff --git a/infrastructure/sandbox/Data/lambda/urllib3/util/timeout.py b/infrastructure/sandbox/Data/lambda/urllib3/util/timeout.py deleted file mode 100644 index ff69593b0..000000000 --- a/infrastructure/sandbox/Data/lambda/urllib3/util/timeout.py +++ /dev/null @@ -1,268 +0,0 @@ -from __future__ import absolute_import - -import time - -# The default socket timeout, used by httplib to indicate that no timeout was -# specified by the user -from socket import _GLOBAL_DEFAULT_TIMEOUT - -from ..exceptions import TimeoutStateError - -# A sentinel value to indicate that no timeout was specified by the user in -# urllib3 -_Default = object() - - -# Use time.monotonic if available. -current_time = getattr(time, "monotonic", time.time) - - -class Timeout(object): - """Timeout configuration. - - Timeouts can be defined as a default for a pool: - - .. 
code-block:: python - - timeout = Timeout(connect=2.0, read=7.0) - http = PoolManager(timeout=timeout) - response = http.request('GET', 'http://example.com/') - - Or per-request (which overrides the default for the pool): - - .. code-block:: python - - response = http.request('GET', 'http://example.com/', timeout=Timeout(10)) - - Timeouts can be disabled by setting all the parameters to ``None``: - - .. code-block:: python - - no_timeout = Timeout(connect=None, read=None) - response = http.request('GET', 'http://example.com/, timeout=no_timeout) - - - :param total: - This combines the connect and read timeouts into one; the read timeout - will be set to the time leftover from the connect attempt. In the - event that both a connect timeout and a total are specified, or a read - timeout and a total are specified, the shorter timeout will be applied. - - Defaults to None. - - :type total: int, float, or None - - :param connect: - The maximum amount of time (in seconds) to wait for a connection - attempt to a server to succeed. Omitting the parameter will default the - connect timeout to the system default, probably `the global default - timeout in socket.py - `_. - None will set an infinite timeout for connection attempts. - - :type connect: int, float, or None - - :param read: - The maximum amount of time (in seconds) to wait between consecutive - read operations for a response from the server. Omitting the parameter - will default the read timeout to the system default, probably `the - global default timeout in socket.py - `_. - None will set an infinite timeout. - - :type read: int, float, or None - - .. note:: - - Many factors can affect the total amount of time for urllib3 to return - an HTTP response. - - For example, Python's DNS resolver does not obey the timeout specified - on the socket. Other factors that can affect total request time include - high CPU load, high swap, the program running at a low priority level, - or other behaviors. 
- - In addition, the read and total timeouts only measure the time between - read operations on the socket connecting the client and the server, - not the total amount of time for the request to return a complete - response. For most requests, the timeout is raised because the server - has not sent the first byte in the specified time. This is not always - the case; if a server streams one byte every fifteen seconds, a timeout - of 20 seconds will not trigger, even though the request will take - several minutes to complete. - - If your goal is to cut off any request after a set amount of wall clock - time, consider having a second "watcher" thread to cut off a slow - request. - """ - - #: A sentinel object representing the default timeout value - DEFAULT_TIMEOUT = _GLOBAL_DEFAULT_TIMEOUT - - def __init__(self, total=None, connect=_Default, read=_Default): - self._connect = self._validate_timeout(connect, "connect") - self._read = self._validate_timeout(read, "read") - self.total = self._validate_timeout(total, "total") - self._start_connect = None - - def __repr__(self): - return "%s(connect=%r, read=%r, total=%r)" % ( - type(self).__name__, - self._connect, - self._read, - self.total, - ) - - # __str__ provided for backwards compatibility - __str__ = __repr__ - - @classmethod - def _validate_timeout(cls, value, name): - """Check that a timeout attribute is valid. - - :param value: The timeout value to validate - :param name: The name of the timeout attribute to validate. This is - used to specify in error messages. - :return: The validated and casted version of the given value. - :raises ValueError: If it is a numeric value less than or equal to - zero, or the type is not an integer, float, or None. - """ - if value is _Default: - return cls.DEFAULT_TIMEOUT - - if value is None or value is cls.DEFAULT_TIMEOUT: - return value - - if isinstance(value, bool): - raise ValueError( - "Timeout cannot be a boolean value. It must " - "be an int, float or None." 
- ) - try: - float(value) - except (TypeError, ValueError): - raise ValueError( - "Timeout value %s was %s, but it must be an " - "int, float or None." % (name, value) - ) - - try: - if value <= 0: - raise ValueError( - "Attempted to set %s timeout to %s, but the " - "timeout cannot be set to a value less " - "than or equal to 0." % (name, value) - ) - except TypeError: - # Python 3 - raise ValueError( - "Timeout value %s was %s, but it must be an " - "int, float or None." % (name, value) - ) - - return value - - @classmethod - def from_float(cls, timeout): - """Create a new Timeout from a legacy timeout value. - - The timeout value used by httplib.py sets the same timeout on the - connect(), and recv() socket requests. This creates a :class:`Timeout` - object that sets the individual timeouts to the ``timeout`` value - passed to this function. - - :param timeout: The legacy timeout value. - :type timeout: integer, float, sentinel default object, or None - :return: Timeout object - :rtype: :class:`Timeout` - """ - return Timeout(read=timeout, connect=timeout) - - def clone(self): - """Create a copy of the timeout object - - Timeout properties are stored per-pool but each request needs a fresh - Timeout object to ensure each one has its own start/stop configured. - - :return: a copy of the timeout object - :rtype: :class:`Timeout` - """ - # We can't use copy.deepcopy because that will also create a new object - # for _GLOBAL_DEFAULT_TIMEOUT, which socket.py uses as a sentinel to - # detect the user default. - return Timeout(connect=self._connect, read=self._read, total=self.total) - - def start_connect(self): - """Start the timeout clock, used during a connect() attempt - - :raises urllib3.exceptions.TimeoutStateError: if you attempt - to start a timer that has been started already. 
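`Timeout.start_connect` above is a one-shot monotonic timer: starting it twice raises `TimeoutStateError`, and `get_connect_duration` requires that it was started. A minimal standalone sketch of that contract (the class and exception names here are illustrative stand-ins, not urllib3's API):

```python
import time

class TimerStateError(Exception):
    # stand-in for urllib3.exceptions.TimeoutStateError
    pass

class ConnectTimer:
    """One-shot timer mirroring start_connect/get_connect_duration."""
    def __init__(self):
        self._start = None

    def start(self):
        if self._start is not None:
            raise TimerStateError("timer has already been started")
        self._start = time.monotonic()
        return self._start

    def elapsed(self):
        if self._start is None:
            raise TimerStateError("timer has not been started")
        return time.monotonic() - self._start
```

Using `time.monotonic()` (rather than `time.time()`) keeps durations correct across wall-clock adjustments, which is why the deleted module prefers it when available.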
- """ - if self._start_connect is not None: - raise TimeoutStateError("Timeout timer has already been started.") - self._start_connect = current_time() - return self._start_connect - - def get_connect_duration(self): - """Gets the time elapsed since the call to :meth:`start_connect`. - - :return: Elapsed time in seconds. - :rtype: float - :raises urllib3.exceptions.TimeoutStateError: if you attempt - to get duration for a timer that hasn't been started. - """ - if self._start_connect is None: - raise TimeoutStateError( - "Can't get connect duration for timer that has not started." - ) - return current_time() - self._start_connect - - @property - def connect_timeout(self): - """Get the value to use when setting a connection timeout. - - This will be a positive float or integer, the value None - (never timeout), or the default system timeout. - - :return: Connect timeout. - :rtype: int, float, :attr:`Timeout.DEFAULT_TIMEOUT` or None - """ - if self.total is None: - return self._connect - - if self._connect is None or self._connect is self.DEFAULT_TIMEOUT: - return self.total - - return min(self._connect, self.total) - - @property - def read_timeout(self): - """Get the value for the read timeout. - - This assumes some time has elapsed in the connection timeout and - computes the read timeout appropriately. - - If self.total is set, the read timeout is dependent on the amount of - time taken by the connect timeout. If the connection time has not been - established, a :exc:`~urllib3.exceptions.TimeoutStateError` will be - raised. - - :return: Value to use for the read timeout. - :rtype: int, float, :attr:`Timeout.DEFAULT_TIMEOUT` or None - :raises urllib3.exceptions.TimeoutStateError: If :meth:`start_connect` - has not yet been called on this object. - """ - if ( - self.total is not None - and self.total is not self.DEFAULT_TIMEOUT - and self._read is not None - and self._read is not self.DEFAULT_TIMEOUT - ): - # In case the connect timeout has not yet been established. 
- if self._start_connect is None: - return self._read - return max(0, min(self.total - self.get_connect_duration(), self._read)) - elif self.total is not None and self.total is not self.DEFAULT_TIMEOUT: - return max(0, self.total - self.get_connect_duration()) - else: - return self._read diff --git a/infrastructure/sandbox/Data/lambda/urllib3/util/url.py b/infrastructure/sandbox/Data/lambda/urllib3/util/url.py deleted file mode 100644 index b667c160a..000000000 --- a/infrastructure/sandbox/Data/lambda/urllib3/util/url.py +++ /dev/null @@ -1,435 +0,0 @@ -from __future__ import absolute_import - -import re -from collections import namedtuple - -from ..exceptions import LocationParseError -from ..packages import six - -url_attrs = ["scheme", "auth", "host", "port", "path", "query", "fragment"] - -# We only want to normalize urls with an HTTP(S) scheme. -# urllib3 infers URLs without a scheme (None) to be http. -NORMALIZABLE_SCHEMES = ("http", "https", None) - -# Almost all of these patterns were derived from the -# 'rfc3986' module: https://github.com/python-hyper/rfc3986 -PERCENT_RE = re.compile(r"%[a-fA-F0-9]{2}") -SCHEME_RE = re.compile(r"^(?:[a-zA-Z][a-zA-Z0-9+-]*:|/)") -URI_RE = re.compile( - r"^(?:([a-zA-Z][a-zA-Z0-9+.-]*):)?" - r"(?://([^\\/?#]*))?" - r"([^?#]*)" - r"(?:\?([^#]*))?" 
- r"(?:#(.*))?$", - re.UNICODE | re.DOTALL, -) - -IPV4_PAT = r"(?:[0-9]{1,3}\.){3}[0-9]{1,3}" -HEX_PAT = "[0-9A-Fa-f]{1,4}" -LS32_PAT = "(?:{hex}:{hex}|{ipv4})".format(hex=HEX_PAT, ipv4=IPV4_PAT) -_subs = {"hex": HEX_PAT, "ls32": LS32_PAT} -_variations = [ - # 6( h16 ":" ) ls32 - "(?:%(hex)s:){6}%(ls32)s", - # "::" 5( h16 ":" ) ls32 - "::(?:%(hex)s:){5}%(ls32)s", - # [ h16 ] "::" 4( h16 ":" ) ls32 - "(?:%(hex)s)?::(?:%(hex)s:){4}%(ls32)s", - # [ *1( h16 ":" ) h16 ] "::" 3( h16 ":" ) ls32 - "(?:(?:%(hex)s:)?%(hex)s)?::(?:%(hex)s:){3}%(ls32)s", - # [ *2( h16 ":" ) h16 ] "::" 2( h16 ":" ) ls32 - "(?:(?:%(hex)s:){0,2}%(hex)s)?::(?:%(hex)s:){2}%(ls32)s", - # [ *3( h16 ":" ) h16 ] "::" h16 ":" ls32 - "(?:(?:%(hex)s:){0,3}%(hex)s)?::%(hex)s:%(ls32)s", - # [ *4( h16 ":" ) h16 ] "::" ls32 - "(?:(?:%(hex)s:){0,4}%(hex)s)?::%(ls32)s", - # [ *5( h16 ":" ) h16 ] "::" h16 - "(?:(?:%(hex)s:){0,5}%(hex)s)?::%(hex)s", - # [ *6( h16 ":" ) h16 ] "::" - "(?:(?:%(hex)s:){0,6}%(hex)s)?::", -] - -UNRESERVED_PAT = r"ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789._!\-~" -IPV6_PAT = "(?:" + "|".join([x % _subs for x in _variations]) + ")" -ZONE_ID_PAT = "(?:%25|%)(?:[" + UNRESERVED_PAT + "]|%[a-fA-F0-9]{2})+" -IPV6_ADDRZ_PAT = r"\[" + IPV6_PAT + r"(?:" + ZONE_ID_PAT + r")?\]" -REG_NAME_PAT = r"(?:[^\[\]%:/?#]|%[a-fA-F0-9]{2})*" -TARGET_RE = re.compile(r"^(/[^?#]*)(?:\?([^#]*))?(?:#.*)?$") - -IPV4_RE = re.compile("^" + IPV4_PAT + "$") -IPV6_RE = re.compile("^" + IPV6_PAT + "$") -IPV6_ADDRZ_RE = re.compile("^" + IPV6_ADDRZ_PAT + "$") -BRACELESS_IPV6_ADDRZ_RE = re.compile("^" + IPV6_ADDRZ_PAT[2:-2] + "$") -ZONE_ID_RE = re.compile("(" + ZONE_ID_PAT + r")\]$") - -_HOST_PORT_PAT = ("^(%s|%s|%s)(?::([0-9]{0,5}))?$") % ( - REG_NAME_PAT, - IPV4_PAT, - IPV6_ADDRZ_PAT, -) -_HOST_PORT_RE = re.compile(_HOST_PORT_PAT, re.UNICODE | re.DOTALL) - -UNRESERVED_CHARS = set( - "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789._-~" -) -SUB_DELIM_CHARS = set("!$&'()*+,;=") 
-USERINFO_CHARS = UNRESERVED_CHARS | SUB_DELIM_CHARS | {":"} -PATH_CHARS = USERINFO_CHARS | {"@", "/"} -QUERY_CHARS = FRAGMENT_CHARS = PATH_CHARS | {"?"} - - -class Url(namedtuple("Url", url_attrs)): - """ - Data structure for representing an HTTP URL. Used as a return value for - :func:`parse_url`. Both the scheme and host are normalized as they are - both case-insensitive according to RFC 3986. - """ - - __slots__ = () - - def __new__( - cls, - scheme=None, - auth=None, - host=None, - port=None, - path=None, - query=None, - fragment=None, - ): - if path and not path.startswith("/"): - path = "/" + path - if scheme is not None: - scheme = scheme.lower() - return super(Url, cls).__new__( - cls, scheme, auth, host, port, path, query, fragment - ) - - @property - def hostname(self): - """For backwards-compatibility with urlparse. We're nice like that.""" - return self.host - - @property - def request_uri(self): - """Absolute path including the query string.""" - uri = self.path or "/" - - if self.query is not None: - uri += "?" + self.query - - return uri - - @property - def netloc(self): - """Network location including host and port""" - if self.port: - return "%s:%d" % (self.host, self.port) - return self.host - - @property - def url(self): - """ - Convert self into a url - - This function should more or less round-trip with :func:`.parse_url`. The - returned url may not be exactly the same as the url inputted to - :func:`.parse_url`, but it should be equivalent by the RFC (e.g., urls - with a blank port will have : removed). - - Example: :: - - >>> U = parse_url('http://google.com/mail/') - >>> U.url - 'http://google.com/mail/' - >>> Url('http', 'username:password', 'host.com', 80, - ... 
'/path', 'query', 'fragment').url - 'http://username:password@host.com:80/path?query#fragment' - """ - scheme, auth, host, port, path, query, fragment = self - url = u"" - - # We use "is not None" we want things to happen with empty strings (or 0 port) - if scheme is not None: - url += scheme + u"://" - if auth is not None: - url += auth + u"@" - if host is not None: - url += host - if port is not None: - url += u":" + str(port) - if path is not None: - url += path - if query is not None: - url += u"?" + query - if fragment is not None: - url += u"#" + fragment - - return url - - def __str__(self): - return self.url - - -def split_first(s, delims): - """ - .. deprecated:: 1.25 - - Given a string and an iterable of delimiters, split on the first found - delimiter. Return two split parts and the matched delimiter. - - If not found, then the first part is the full input string. - - Example:: - - >>> split_first('foo/bar?baz', '?/=') - ('foo', 'bar?baz', '/') - >>> split_first('foo/bar?baz', '123') - ('foo/bar?baz', '', None) - - Scales linearly with number of delims. Not ideal for large number of delims. - """ - min_idx = None - min_delim = None - for d in delims: - idx = s.find(d) - if idx < 0: - continue - - if min_idx is None or idx < min_idx: - min_idx = idx - min_delim = d - - if min_idx is None or min_idx < 0: - return s, "", None - - return s[:min_idx], s[min_idx + 1 :], min_delim - - -def _encode_invalid_chars(component, allowed_chars, encoding="utf-8"): - """Percent-encodes a URI component without reapplying - onto an already percent-encoded component. - """ - if component is None: - return component - - component = six.ensure_text(component) - - # Normalize existing percent-encoded bytes. - # Try to see if the component we're encoding is already percent-encoded - # so we can skip all '%' characters but still encode all others. 
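The deleted `_encode_invalid_chars` percent-encodes only bytes outside an allowed set, preserving `%XX` escapes that are already present. The standard library's `urllib.parse.quote` gives similar, though not identical, behavior (it does not detect pre-encoded input):

```python
from urllib.parse import quote

# quote() percent-encodes bytes outside the unreserved/safe sets and
# leaves listed "safe" characters intact -- similar in spirit to the
# deleted _encode_invalid_chars helper.
assert quote("a b/c", safe="/") == "a%20b/c"
# Non-ASCII input is UTF-8 encoded before percent-encoding:
assert quote("café", safe="") == "caf%C3%A9"
```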
- component, percent_encodings = PERCENT_RE.subn( - lambda match: match.group(0).upper(), component - ) - - uri_bytes = component.encode("utf-8", "surrogatepass") - is_percent_encoded = percent_encodings == uri_bytes.count(b"%") - encoded_component = bytearray() - - for i in range(0, len(uri_bytes)): - # Will return a single character bytestring on both Python 2 & 3 - byte = uri_bytes[i : i + 1] - byte_ord = ord(byte) - if (is_percent_encoded and byte == b"%") or ( - byte_ord < 128 and byte.decode() in allowed_chars - ): - encoded_component += byte - continue - encoded_component.extend(b"%" + (hex(byte_ord)[2:].encode().zfill(2).upper())) - - return encoded_component.decode(encoding) - - -def _remove_path_dot_segments(path): - # See http://tools.ietf.org/html/rfc3986#section-5.2.4 for pseudo-code - segments = path.split("/") # Turn the path into a list of segments - output = [] # Initialize the variable to use to store output - - for segment in segments: - # '.' is the current directory, so ignore it, it is superfluous - if segment == ".": - continue - # Anything other than '..', should be appended to the output - elif segment != "..": - output.append(segment) - # In this case segment == '..', if we can, we should pop the last - # element - elif output: - output.pop() - - # If the path starts with '/' and the output is empty or the first string - # is non-empty - if path.startswith("/") and (not output or output[0]): - output.insert(0, "") - - # If the path starts with '/.' or '/..' ensure we add one more empty - # string to add a trailing '/' - if path.endswith(("/.", "/..")): - output.append("") - - return "/".join(output) - - -def _normalize_host(host, scheme): - if host: - if isinstance(host, six.binary_type): - host = six.ensure_str(host) - - if scheme in NORMALIZABLE_SCHEMES: - is_ipv6 = IPV6_ADDRZ_RE.match(host) - if is_ipv6: - # IPv6 hosts of the form 'a::b%zone' are encoded in a URL as - # such per RFC 6874: 'a::b%25zone'. 
Unquote the ZoneID - # separator as necessary to return a valid RFC 4007 scoped IP. - match = ZONE_ID_RE.search(host) - if match: - start, end = match.span(1) - zone_id = host[start:end] - - if zone_id.startswith("%25") and zone_id != "%25": - zone_id = zone_id[3:] - else: - zone_id = zone_id[1:] - zone_id = "%" + _encode_invalid_chars(zone_id, UNRESERVED_CHARS) - return host[:start].lower() + zone_id + host[end:] - else: - return host.lower() - elif not IPV4_RE.match(host): - return six.ensure_str( - b".".join([_idna_encode(label) for label in host.split(".")]) - ) - return host - - -def _idna_encode(name): - if name and any([ord(x) > 128 for x in name]): - try: - import idna - except ImportError: - six.raise_from( - LocationParseError("Unable to parse URL without the 'idna' module"), - None, - ) - try: - return idna.encode(name.lower(), strict=True, std3_rules=True) - except idna.IDNAError: - six.raise_from( - LocationParseError(u"Name '%s' is not a valid IDNA label" % name), None - ) - return name.lower().encode("ascii") - - -def _encode_target(target): - """Percent-encodes a request target so that there are no invalid characters""" - path, query = TARGET_RE.match(target).groups() - target = _encode_invalid_chars(path, PATH_CHARS) - query = _encode_invalid_chars(query, QUERY_CHARS) - if query is not None: - target += "?" + query - return target - - -def parse_url(url): - """ - Given a url, return a parsed :class:`.Url` namedtuple. Best-effort is - performed to parse incomplete urls. Fields not provided will be None. - This parser is RFC 3986 and RFC 6874 compliant. - - The parser logic and helper functions are based heavily on - work done in the ``rfc3986`` module. - - :param str url: URL to parse into a :class:`.Url` namedtuple. - - Partly backwards-compatible with :mod:`urlparse`. - - Example:: - - >>> parse_url('http://google.com/mail/') - Url(scheme='http', host='google.com', port=None, path='/mail/', ...) 
- >>> parse_url('google.com:80') - Url(scheme=None, host='google.com', port=80, path=None, ...) - >>> parse_url('/foo?bar') - Url(scheme=None, host=None, port=None, path='/foo', query='bar', ...) - """ - if not url: - # Empty - return Url() - - source_url = url - if not SCHEME_RE.search(url): - url = "//" + url - - try: - scheme, authority, path, query, fragment = URI_RE.match(url).groups() - normalize_uri = scheme is None or scheme.lower() in NORMALIZABLE_SCHEMES - - if scheme: - scheme = scheme.lower() - - if authority: - auth, _, host_port = authority.rpartition("@") - auth = auth or None - host, port = _HOST_PORT_RE.match(host_port).groups() - if auth and normalize_uri: - auth = _encode_invalid_chars(auth, USERINFO_CHARS) - if port == "": - port = None - else: - auth, host, port = None, None, None - - if port is not None: - port = int(port) - if not (0 <= port <= 65535): - raise LocationParseError(url) - - host = _normalize_host(host, scheme) - - if normalize_uri and path: - path = _remove_path_dot_segments(path) - path = _encode_invalid_chars(path, PATH_CHARS) - if normalize_uri and query: - query = _encode_invalid_chars(query, QUERY_CHARS) - if normalize_uri and fragment: - fragment = _encode_invalid_chars(fragment, FRAGMENT_CHARS) - - except (ValueError, AttributeError): - return six.raise_from(LocationParseError(source_url), None) - - # For the sake of backwards compatibility we put empty - # string values for path if there are any defined values - # beyond the path in the URL. - # TODO: Remove this when we break backwards compatibility. - if not path: - if query is not None or fragment is not None: - path = "" - else: - path = None - - # Ensure that each part of the URL is a `str` for - # backwards compatibility. 
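The deleted `parse_url` splits a URL into its RFC 3986 components and rejects ports outside 0–65535. For the common cases, the standard library's `urllib.parse.urlsplit` behaves comparably, though it performs none of urllib3's host or percent-encoding normalization:

```python
from urllib.parse import urlsplit

parts = urlsplit("http://user:pw@host.com:8080/path?q=1#frag")
assert parts.scheme == "http"
assert parts.hostname == "host.com"
assert parts.port == 8080
assert parts.path == "/path" and parts.query == "q=1"

# Like the deleted parse_url, urlsplit rejects out-of-range ports
# (the check fires lazily, when .port is accessed):
try:
    urlsplit("http://host.com:99999/").port
    raise AssertionError("expected ValueError for out-of-range port")
except ValueError:
    pass
```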
- if isinstance(url, six.text_type): - ensure_func = six.ensure_text - else: - ensure_func = six.ensure_str - - def ensure_type(x): - return x if x is None else ensure_func(x) - - return Url( - scheme=ensure_type(scheme), - auth=ensure_type(auth), - host=ensure_type(host), - port=port, - path=ensure_type(path), - query=ensure_type(query), - fragment=ensure_type(fragment), - ) - - -def get_host(url): - """ - Deprecated. Use :func:`parse_url` instead. - """ - p = parse_url(url) - return p.scheme or "http", p.hostname, p.port diff --git a/infrastructure/sandbox/Data/lambda/urllib3/util/wait.py b/infrastructure/sandbox/Data/lambda/urllib3/util/wait.py deleted file mode 100644 index 21b4590b3..000000000 --- a/infrastructure/sandbox/Data/lambda/urllib3/util/wait.py +++ /dev/null @@ -1,152 +0,0 @@ -import errno -import select -import sys -from functools import partial - -try: - from time import monotonic -except ImportError: - from time import time as monotonic - -__all__ = ["NoWayToWaitForSocketError", "wait_for_read", "wait_for_write"] - - -class NoWayToWaitForSocketError(Exception): - pass - - -# How should we wait on sockets? -# -# There are two types of APIs you can use for waiting on sockets: the fancy -# modern stateful APIs like epoll/kqueue, and the older stateless APIs like -# select/poll. The stateful APIs are more efficient when you have a lots of -# sockets to keep track of, because you can set them up once and then use them -# lots of times. But we only ever want to wait on a single socket at a time -# and don't want to keep track of state, so the stateless APIs are actually -# more efficient. So we want to use select() or poll(). -# -# Now, how do we choose between select() and poll()? On traditional Unixes, -# select() has a strange calling convention that makes it slow, or fail -# altogether, for high-numbered file descriptors. The point of poll() is to fix -# that, so on Unixes, we prefer poll(). 
-# -# On Windows, there is no poll() (or at least Python doesn't provide a wrapper -# for it), but that's OK, because on Windows, select() doesn't have this -# strange calling convention; plain select() works fine. -# -# So: on Windows we use select(), and everywhere else we use poll(). We also -# fall back to select() in case poll() is somehow broken or missing. - -if sys.version_info >= (3, 5): - # Modern Python, that retries syscalls by default - def _retry_on_intr(fn, timeout): - return fn(timeout) - -else: - # Old and broken Pythons. - def _retry_on_intr(fn, timeout): - if timeout is None: - deadline = float("inf") - else: - deadline = monotonic() + timeout - - while True: - try: - return fn(timeout) - # OSError for 3 <= pyver < 3.5, select.error for pyver <= 2.7 - except (OSError, select.error) as e: - # 'e.args[0]' incantation works for both OSError and select.error - if e.args[0] != errno.EINTR: - raise - else: - timeout = deadline - monotonic() - if timeout < 0: - timeout = 0 - if timeout == float("inf"): - timeout = None - continue - - -def select_wait_for_socket(sock, read=False, write=False, timeout=None): - if not read and not write: - raise RuntimeError("must specify at least one of read=True, write=True") - rcheck = [] - wcheck = [] - if read: - rcheck.append(sock) - if write: - wcheck.append(sock) - # When doing a non-blocking connect, most systems signal success by - # marking the socket writable. Windows, though, signals success by marked - # it as "exceptional". We paper over the difference by checking the write - # sockets for both conditions. (The stdlib selectors module does the same - # thing.) 
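The comment block above explains why the deleted `wait.py` favors the stateless `select()`/`poll()` APIs for waiting on a single socket. The core pattern can be demonstrated with `select` over a local socket pair:

```python
import select
import socket

# Wait for readability on a single socket -- the same stateless pattern
# the deleted wait.py uses (select on Windows, poll elsewhere).
a, b = socket.socketpair()
try:
    b.sendall(b"x")
    rready, _, _ = select.select([a], [], [], 1.0)
    print(a in rready)  # data is pending, so the socket is readable
finally:
    a.close()
    b.close()
```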
- fn = partial(select.select, rcheck, wcheck, wcheck) - rready, wready, xready = _retry_on_intr(fn, timeout) - return bool(rready or wready or xready) - - -def poll_wait_for_socket(sock, read=False, write=False, timeout=None): - if not read and not write: - raise RuntimeError("must specify at least one of read=True, write=True") - mask = 0 - if read: - mask |= select.POLLIN - if write: - mask |= select.POLLOUT - poll_obj = select.poll() - poll_obj.register(sock, mask) - - # For some reason, poll() takes timeout in milliseconds - def do_poll(t): - if t is not None: - t *= 1000 - return poll_obj.poll(t) - - return bool(_retry_on_intr(do_poll, timeout)) - - -def null_wait_for_socket(*args, **kwargs): - raise NoWayToWaitForSocketError("no select-equivalent available") - - -def _have_working_poll(): - # Apparently some systems have a select.poll that fails as soon as you try - # to use it, either due to strange configuration or broken monkeypatching - # from libraries like eventlet/greenlet. - try: - poll_obj = select.poll() - _retry_on_intr(poll_obj.poll, 0) - except (AttributeError, OSError): - return False - else: - return True - - -def wait_for_socket(*args, **kwargs): - # We delay choosing which implementation to use until the first time we're - # called. We could do it at import time, but then we might make the wrong - # decision if someone goes wild with monkeypatching select.poll after - # we're imported. - global wait_for_socket - if _have_working_poll(): - wait_for_socket = poll_wait_for_socket - elif hasattr(select, "select"): - wait_for_socket = select_wait_for_socket - else: # Platform-specific: Appengine. - wait_for_socket = null_wait_for_socket - return wait_for_socket(*args, **kwargs) - - -def wait_for_read(sock, timeout=None): - """Waits for reading to be available on a given socket. - Returns True if the socket is readable, or False if the timeout expired. 
- """ - return wait_for_socket(sock, read=True, timeout=timeout) - - -def wait_for_write(sock, timeout=None): - """Waits for writing to be available on a given socket. - Returns True if the socket is readable, or False if the timeout expired. - """ - return wait_for_socket(sock, write=True, timeout=timeout) diff --git a/infrastructure/sandbox/Data/main.tf b/infrastructure/sandbox/Data/main.tf deleted file mode 100644 index 0341db0dc..000000000 --- a/infrastructure/sandbox/Data/main.tf +++ /dev/null @@ -1,201 +0,0 @@ -resource "aws_iam_role" "iam_for_lambda" { - name = "${var.prefix}-iam_for_lambda" - - assume_role_policy = < terraform.zip -RUN unzip terraform.zip -RUN rm terraform.zip -RUN chmod 644 $(find . -type f) -RUN chmod 755 $(find . -type d) -RUN chmod 655 lambda terraform - -#FROM scratch -#COPY --from=builder /build/lambda /build/terraform / -#COPY --from=builder /build/deploy_terraform /deploy_terraform -#COPY --from=builder /etc/ssl/certs/ca-certificates.crt /etc/ssl/certs/ -ENTRYPOINT ["/build/lambda"] diff --git a/infrastructure/sandbox/JITProvisioner/deprovisioner/backend-template.conf b/infrastructure/sandbox/JITProvisioner/deprovisioner/backend-template.conf deleted file mode 100644 index b60666d9a..000000000 --- a/infrastructure/sandbox/JITProvisioner/deprovisioner/backend-template.conf +++ /dev/null @@ -1,6 +0,0 @@ -bucket = "${remote_state.state_bucket.id}" -key = "terraform.tfstate" # This should be set to account_alias/unique_key/terraform.tfstate -region = "us-east-2" -encrypt = true -kms_key_id = "${remote_state.kms_key.id}" -dynamodb_table = "${remote_state.dynamodb_table.id}" diff --git a/infrastructure/sandbox/JITProvisioner/deprovisioner/deploy_terraform/.gitignore b/infrastructure/sandbox/JITProvisioner/deprovisioner/deploy_terraform/.gitignore deleted file mode 100644 index 19af85e5b..000000000 --- a/infrastructure/sandbox/JITProvisioner/deprovisioner/deploy_terraform/.gitignore +++ /dev/null @@ -1 +0,0 @@ -backend.conf diff --git 
a/infrastructure/sandbox/JITProvisioner/deprovisioner/deploy_terraform/main.tf b/infrastructure/sandbox/JITProvisioner/deprovisioner/deploy_terraform/main.tf deleted file mode 100644 index 177578d00..000000000 --- a/infrastructure/sandbox/JITProvisioner/deprovisioner/deploy_terraform/main.tf +++ /dev/null @@ -1,50 +0,0 @@ -terraform { - required_providers { - aws = { - source = "hashicorp/aws" - version = "~> 4.10.0" - } - random = { - source = "hashicorp/random" - version = "~> 3.1.2" - } - mysql = { - source = "petoju/mysql" - version = "3.0.12" - } - helm = { - source = "hashicorp/helm" - version = "2.5.1" - } - } - backend "s3" {} -} - -provider "helm" { - kubernetes { - host = data.aws_eks_cluster.cluster.endpoint - token = data.aws_eks_cluster_auth.cluster.token - cluster_ca_certificate = base64decode(data.aws_eks_cluster.cluster.certificate_authority.0.data) - } -} - -data "aws_eks_cluster" "cluster" { - name = var.eks_cluster -} - -data "aws_eks_cluster_auth" "cluster" { - name = var.eks_cluster -} - -provider "mysql" { - endpoint = jsondecode(data.aws_secretsmanager_secret_version.mysql.secret_string)["endpoint"] - username = jsondecode(data.aws_secretsmanager_secret_version.mysql.secret_string)["username"] - password = jsondecode(data.aws_secretsmanager_secret_version.mysql.secret_string)["password"] -} - -variable "mysql_secret" {} -variable "eks_cluster" {} - -data "aws_secretsmanager_secret_version" "mysql" { - secret_id = var.mysql_secret -} diff --git a/infrastructure/sandbox/JITProvisioner/deprovisioner/go.mod b/infrastructure/sandbox/JITProvisioner/deprovisioner/go.mod deleted file mode 100644 index 781fd34e9..000000000 --- a/infrastructure/sandbox/JITProvisioner/deprovisioner/go.mod +++ /dev/null @@ -1,10 +0,0 @@ -module github.com/fleetdm/fleet/infrastructure/demo/PreProvisioner/lambda - -go 1.21 - -require ( - github.com/aws/aws-lambda-go v1.29.0 - github.com/jessevdk/go-flags v1.5.0 -) - -require golang.org/x/sys v0.1.0 // indirect diff --git 
a/infrastructure/sandbox/JITProvisioner/deprovisioner/go.sum b/infrastructure/sandbox/JITProvisioner/deprovisioner/go.sum deleted file mode 100644 index 8bfcf2494..000000000 --- a/infrastructure/sandbox/JITProvisioner/deprovisioner/go.sum +++ /dev/null @@ -1,15 +0,0 @@ -github.com/aws/aws-lambda-go v1.29.0 h1:u+sfZkvNBUgt0ZkO8Q/jOMBV22DqMDMbZu04oomM2no= -github.com/aws/aws-lambda-go v1.29.0/go.mod h1:aakqVz9vDHhtbt0U2zegh/z9SI2+rJ+yRREZYNQLmWY= -github.com/davecgh/go-spew v1.1.1 h1:vj9j/u1bqnvCEfJOwUhtlOARqs3+rkHYY13jYWTU97c= -github.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38= -github.com/jessevdk/go-flags v1.5.0 h1:1jKYvbxEjfUl0fmqTCOfonvskHHXMjBySTLW4y9LFvc= -github.com/jessevdk/go-flags v1.5.0/go.mod h1:Fw0T6WPc1dYxT4mKEZRfG5kJhaTDP9pj1c2EWnYs/m4= -github.com/pmezard/go-difflib v1.0.0 h1:4DBwDE0NGyQoBHbLQYPwSUPoCMWR5BEzIk/f1lZbAQM= -github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4= -github.com/stretchr/testify v1.6.1 h1:hDPOHmpOpP40lSULcqw7IrRb/u7w6RpDC9399XyoNd0= -github.com/stretchr/testify v1.6.1/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/h/Wwjteg= -golang.org/x/sys v0.0.0-20210320140829-1e4c9ba3b0c4/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= -golang.org/x/sys v0.1.0 h1:kunALQeHf1/185U1i0GOB/fy1IPRDDpuoOOqRReG57U= -golang.org/x/sys v0.1.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= -gopkg.in/yaml.v3 v3.0.0-20200615113413-eeeca48fe776 h1:tQIYjPdBoyREyB9XMu+nnTclpTYkz2zFM+lzLJFO4gQ= -gopkg.in/yaml.v3 v3.0.0-20200615113413-eeeca48fe776/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM= diff --git a/infrastructure/sandbox/JITProvisioner/deprovisioner/main.go b/infrastructure/sandbox/JITProvisioner/deprovisioner/main.go deleted file mode 100644 index 53a1e3c7f..000000000 --- a/infrastructure/sandbox/JITProvisioner/deprovisioner/main.go +++ /dev/null @@ -1,108 +0,0 @@ -package main - -import ( - "context" - "github.com/aws/aws-lambda-go/lambda" - flags 
"github.com/jessevdk/go-flags" - "log" - "os" - "os/exec" -) - -type OptionsStruct struct { - LambdaExecutionEnv string `long:"lambda-execution-environment" env:"AWS_EXECUTION_ENV"` - InstanceID string `long:"instance-id" env:"INSTANCE_ID" required:"true"` -} - -var options = OptionsStruct{} - -type LifecycleRecord struct { - ID string - State string -} - -func runCmd(args []string) error { - cmd := exec.Cmd{ - Path: "/build/terraform", - Dir: "/build/deploy_terraform", - Stdout: os.Stdout, - Stderr: os.Stderr, - Args: append([]string{"/build/terraform"}, args...), - } - log.Printf("%+v\n", cmd) - return cmd.Run() -} - -func initTerraform() error { - err := runCmd([]string{ - "init", - "-backend-config=backend.conf", - }) - return err -} - -func runTerraform(workspace string) error { - err := runCmd([]string{ - "workspace", - "select", - workspace, - }) - if err != nil { - return err - } - err = runCmd([]string{ - "destroy", - "-auto-approve", - "-no-color", - }) - if err != nil { - return err - } - err = runCmd([]string{ - "workspace", - "select", - "default", - }) - if err != nil { - return err - } - err = runCmd([]string{ - "workspace", - "delete", - workspace, - }) - return err -} - -func handler(ctx context.Context, name NullEvent) error { - if err := initTerraform(); err != nil { - return err - } - if err := runTerraform(options.InstanceID); err != nil { - return err - } - return nil -} - -type NullEvent struct{} - -func main() { - var err error - log.SetFlags(log.LstdFlags | log.Lshortfile) - // Get config from environment - parser := flags.NewParser(&options, flags.Default) - if _, err = parser.Parse(); err != nil { - if flagsErr, ok := err.(*flags.Error); ok && flagsErr.Type == flags.ErrHelp { - return - } else { - log.Fatal(err) - } - } - if options.LambdaExecutionEnv == "AWS_Lambda_go1.x" { - lambda.Start(handler) - } else { - if err = handler(context.Background(), NullEvent{}); err != nil { - log.Fatal(err) - } - } -} diff --git 
a/infrastructure/sandbox/JITProvisioner/ingress_destroyer/Dockerfile b/infrastructure/sandbox/JITProvisioner/ingress_destroyer/Dockerfile deleted file mode 100644 index f282e28b4..000000000 --- a/infrastructure/sandbox/JITProvisioner/ingress_destroyer/Dockerfile +++ /dev/null @@ -1,22 +0,0 @@ -FROM golang:1.21.6-bullseye@sha256:fa52abd182d334cfcdffdcc934e21fcfbc71c3cde568e606193ae7db045b1b8d as BUILDER -WORKDIR /src - -RUN apt update && apt upgrade -y - -COPY go.mod . -COPY go.sum . - -RUN go mod download - -COPY main.go . - -RUN CGO_ENABLED=0 GOOS=linux GOARCH=amd64 go build -ldflags "-extldflags '-static'" - - -FROM public.ecr.aws/aws-cli/aws-cli:latest - -COPY --from=BUILDER /src/ingress_destroyer /usr/local/bin/ingress_destroyer - -RUN chmod +x /usr/local/bin/ingress_destroyer - -ENTRYPOINT ["ingress_destroyer"] diff --git a/infrastructure/sandbox/JITProvisioner/ingress_destroyer/go.mod b/infrastructure/sandbox/JITProvisioner/ingress_destroyer/go.mod deleted file mode 100644 index 5d473ad00..000000000 --- a/infrastructure/sandbox/JITProvisioner/ingress_destroyer/go.mod +++ /dev/null @@ -1,50 +0,0 @@ -module github.com/fleetfm/fleet/infrastructure/sandbox/JITProvisioner/ingress_destroyer - -go 1.21 - -require ( - github.com/aws/aws-sdk-go v1.44.235 - k8s.io/apimachinery v0.26.2 - k8s.io/client-go v0.26.2 -) - -require ( - github.com/davecgh/go-spew v1.1.1 // indirect - github.com/emicklei/go-restful/v3 v3.9.0 // indirect - github.com/go-logr/logr v1.2.3 // indirect - github.com/go-openapi/jsonpointer v0.19.5 // indirect - github.com/go-openapi/jsonreference v0.20.0 // indirect - github.com/go-openapi/swag v0.19.14 // indirect - github.com/gogo/protobuf v1.3.2 // indirect - github.com/golang/protobuf v1.5.2 // indirect - github.com/google/gnostic v0.5.7-v3refs // indirect - github.com/google/go-cmp v0.5.9 // indirect - github.com/google/gofuzz v1.1.0 // indirect - github.com/imdario/mergo v0.3.6 // indirect - github.com/jmespath/go-jmespath v0.4.0 // indirect - 
github.com/josharian/intern v1.0.0 // indirect - github.com/json-iterator/go v1.1.12 // indirect - github.com/mailru/easyjson v0.7.6 // indirect - github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd // indirect - github.com/modern-go/reflect2 v1.0.2 // indirect - github.com/munnerz/goautoneg v0.0.0-20191010083416-a7dc8b61c822 // indirect - github.com/spf13/pflag v1.0.5 // indirect - golang.org/x/net v0.17.0 // indirect - golang.org/x/oauth2 v0.0.0-20220223155221-ee480838109b // indirect - golang.org/x/sys v0.13.0 // indirect - golang.org/x/term v0.13.0 // indirect - golang.org/x/text v0.13.0 // indirect - golang.org/x/time v0.0.0-20220210224613-90d013bbcef8 // indirect - google.golang.org/appengine v1.6.7 // indirect - google.golang.org/protobuf v1.28.1 // indirect - gopkg.in/inf.v0 v0.9.1 // indirect - gopkg.in/yaml.v2 v2.4.0 // indirect - gopkg.in/yaml.v3 v3.0.1 // indirect - k8s.io/api v0.26.2 // indirect - k8s.io/klog/v2 v2.80.1 // indirect - k8s.io/kube-openapi v0.0.0-20221012153701-172d655c2280 // indirect - k8s.io/utils v0.0.0-20221107191617-1a15be271d1d // indirect - sigs.k8s.io/json v0.0.0-20220713155537-f223a00ba0e2 // indirect - sigs.k8s.io/structured-merge-diff/v4 v4.2.3 // indirect - sigs.k8s.io/yaml v1.3.0 // indirect -) diff --git a/infrastructure/sandbox/JITProvisioner/ingress_destroyer/go.sum b/infrastructure/sandbox/JITProvisioner/ingress_destroyer/go.sum deleted file mode 100644 index aa84d88df..000000000 --- a/infrastructure/sandbox/JITProvisioner/ingress_destroyer/go.sum +++ /dev/null @@ -1,502 +0,0 @@ -cloud.google.com/go v0.26.0/go.mod h1:aQUYkXzVsufM+DwF1aE+0xfcU+56JwCaLick0ClmMTw= -cloud.google.com/go v0.34.0/go.mod h1:aQUYkXzVsufM+DwF1aE+0xfcU+56JwCaLick0ClmMTw= -cloud.google.com/go v0.38.0/go.mod h1:990N+gfupTy94rShfmMCWGDn0LpTmnzTp2qbd1dvSRU= -cloud.google.com/go v0.44.1/go.mod h1:iSa0KzasP4Uvy3f1mN/7PiObzGgflwredwwASm/v6AU= -cloud.google.com/go v0.44.2/go.mod h1:60680Gw3Yr4ikxnPRS/oxxkBccT6SA1yMk63TGekxKY= 
-cloud.google.com/go v0.45.1/go.mod h1:RpBamKRgapWJb87xiFSdk4g1CME7QZg3uwTez+TSTjc= -cloud.google.com/go v0.46.3/go.mod h1:a6bKKbmY7er1mI7TEI4lsAkts/mkhTSZK8w33B4RAg0= -cloud.google.com/go v0.50.0/go.mod h1:r9sluTvynVuxRIOHXQEHMFffphuXHOMZMycpNR5e6To= -cloud.google.com/go v0.52.0/go.mod h1:pXajvRH/6o3+F9jDHZWQ5PbGhn+o8w9qiu/CffaVdO4= -cloud.google.com/go v0.53.0/go.mod h1:fp/UouUEsRkN6ryDKNW/Upv/JBKnv6WDthjR6+vze6M= -cloud.google.com/go v0.54.0/go.mod h1:1rq2OEkV3YMf6n/9ZvGWI3GWw0VoqH/1x2nd8Is/bPc= -cloud.google.com/go v0.56.0/go.mod h1:jr7tqZxxKOVYizybht9+26Z/gUq7tiRzu+ACVAMbKVk= -cloud.google.com/go v0.57.0/go.mod h1:oXiQ6Rzq3RAkkY7N6t3TcE6jE+CIBBbA36lwQ1JyzZs= -cloud.google.com/go v0.62.0/go.mod h1:jmCYTdRCQuc1PHIIJ/maLInMho30T/Y0M4hTdTShOYc= -cloud.google.com/go v0.65.0/go.mod h1:O5N8zS7uWy9vkA9vayVHs65eM1ubvY4h553ofrNHObY= -cloud.google.com/go/bigquery v1.0.1/go.mod h1:i/xbL2UlR5RvWAURpBYZTtm/cXjCha9lbfbpx4poX+o= -cloud.google.com/go/bigquery v1.3.0/go.mod h1:PjpwJnslEMmckchkHFfq+HTD2DmtT67aNFKH1/VBDHE= -cloud.google.com/go/bigquery v1.4.0/go.mod h1:S8dzgnTigyfTmLBfrtrhyYhwRxG72rYxvftPBK2Dvzc= -cloud.google.com/go/bigquery v1.5.0/go.mod h1:snEHRnqQbz117VIFhE8bmtwIDY80NLUZUMb4Nv6dBIg= -cloud.google.com/go/bigquery v1.7.0/go.mod h1://okPTzCYNXSlb24MZs83e2Do+h+VXtc4gLoIoXIAPc= -cloud.google.com/go/bigquery v1.8.0/go.mod h1:J5hqkt3O0uAFnINi6JXValWIb1v0goeZM77hZzJN/fQ= -cloud.google.com/go/datastore v1.0.0/go.mod h1:LXYbyblFSglQ5pkeyhO+Qmw7ukd3C+pD7TKLgZqpHYE= -cloud.google.com/go/datastore v1.1.0/go.mod h1:umbIZjpQpHh4hmRpGhH4tLFup+FVzqBi1b3c64qFpCk= -cloud.google.com/go/pubsub v1.0.1/go.mod h1:R0Gpsv3s54REJCy4fxDixWD93lHJMoZTyQ2kNxGRt3I= -cloud.google.com/go/pubsub v1.1.0/go.mod h1:EwwdRX2sKPjnvnqCa270oGRyludottCI76h+R3AArQw= -cloud.google.com/go/pubsub v1.2.0/go.mod h1:jhfEVHT8odbXTkndysNHCcx0awwzvfOlguIAii9o8iA= -cloud.google.com/go/pubsub v1.3.1/go.mod h1:i+ucay31+CNRpDW4Lu78I4xXG+O1r/MAHgjpRVR+TSU= -cloud.google.com/go/storage v1.0.0/go.mod 
h1:IhtSnM/ZTZV8YYJWCY8RULGVqBDmpoyjwiyrjsg+URw= -cloud.google.com/go/storage v1.5.0/go.mod h1:tpKbwo567HUNpVclU5sGELwQWBDZ8gh0ZeosJ0Rtdos= -cloud.google.com/go/storage v1.6.0/go.mod h1:N7U0C8pVQ/+NIKOBQyamJIeKQKkZ+mxpohlUTyfDhBk= -cloud.google.com/go/storage v1.8.0/go.mod h1:Wv1Oy7z6Yz3DshWRJFhqM/UCfaWIRTdp0RXyy7KQOVs= -cloud.google.com/go/storage v1.10.0/go.mod h1:FLPqc6j+Ki4BU591ie1oL6qBQGu2Bl/tZ9ullr3+Kg0= -dmitri.shuralyov.com/gpu/mtl v0.0.0-20190408044501-666a987793e9/go.mod h1:H6x//7gZCb22OMCxBHrMx7a5I7Hp++hsVxbQ4BYO7hU= -github.com/BurntSushi/toml v0.3.1/go.mod h1:xHWCNGjB5oqiDr8zfno3MHue2Ht5sIBksp03qcyfWMU= -github.com/BurntSushi/xgb v0.0.0-20160522181843-27f122750802/go.mod h1:IVnqGOEym/WlBOVXweHU+Q+/VP0lqqI8lqeDx9IjBqo= -github.com/aws/aws-sdk-go v1.44.235 h1:5MS1ZW1Pr27mmHFqqjuXYwGMlNTW/g6DqU5ekamPMeU= -github.com/aws/aws-sdk-go v1.44.235/go.mod h1:aVsgQcEevwlmQ7qHE9I3h+dtQgpqhFB+i8Phjh7fkwI= -github.com/census-instrumentation/opencensus-proto v0.2.1/go.mod h1:f6KPmirojxKA12rnyqOA5BBL4O983OfeGPqjHWSTneU= -github.com/chzyer/logex v1.1.10/go.mod h1:+Ywpsq7O8HXn0nuIou7OrIPyXbp3wmkHB+jjWRnGsAI= -github.com/chzyer/readline v0.0.0-20180603132655-2972be24d48e/go.mod h1:nSuG5e5PlCu98SY8svDHJxuZscDgtXS6KTTbou5AhLI= -github.com/chzyer/test v0.0.0-20180213035817-a1ea475d72b1/go.mod h1:Q3SI9o4m/ZMnBNeIyt5eFwwo7qiLfzFZmjNmxjkiQlU= -github.com/client9/misspell v0.3.4/go.mod h1:qj6jICC3Q7zFZvVWo7KLAzC3yx5G7kyvSDkc90ppPyw= -github.com/cncf/udpa/go v0.0.0-20191209042840-269d4d468f6f/go.mod h1:M8M6+tZqaGXZJjfX53e64911xZQV5JYwmTeXPW+k8Sc= -github.com/creack/pty v1.1.9/go.mod h1:oKZEueFk5CKHvIhNR5MUki03XCEU+Q6VDXinZuGJ33E= -github.com/davecgh/go-spew v1.1.0/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38= -github.com/davecgh/go-spew v1.1.1 h1:vj9j/u1bqnvCEfJOwUhtlOARqs3+rkHYY13jYWTU97c= -github.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38= -github.com/docopt/docopt-go v0.0.0-20180111231733-ee0de3bc6815/go.mod 
h1:WwZ+bS3ebgob9U8Nd0kOddGdZWjyMGR8Wziv+TBNwSE= -github.com/emicklei/go-restful/v3 v3.9.0 h1:XwGDlfxEnQZzuopoqxwSEllNcCOM9DhhFyhFIIGKwxE= -github.com/emicklei/go-restful/v3 v3.9.0/go.mod h1:6n3XBCmQQb25CM2LCACGz8ukIrRry+4bhvbpWn3mrbc= -github.com/envoyproxy/go-control-plane v0.9.0/go.mod h1:YTl/9mNaCwkRvm6d1a2C3ymFceY/DCBVvsKhRF0iEA4= -github.com/envoyproxy/go-control-plane v0.9.1-0.20191026205805-5f8ba28d4473/go.mod h1:YTl/9mNaCwkRvm6d1a2C3ymFceY/DCBVvsKhRF0iEA4= -github.com/envoyproxy/go-control-plane v0.9.4/go.mod h1:6rpuAdCZL397s3pYoYcLgu1mIlRU8Am5FuJP05cCM98= -github.com/envoyproxy/protoc-gen-validate v0.1.0/go.mod h1:iSmxcyjqTsJpI2R4NaDN7+kN2VEUnK/pcBlmesArF7c= -github.com/go-gl/glfw v0.0.0-20190409004039-e6da0acd62b1/go.mod h1:vR7hzQXu2zJy9AVAgeJqvqgH9Q5CA+iKCZ2gyEVpxRU= -github.com/go-gl/glfw/v3.3/glfw v0.0.0-20191125211704-12ad95a8df72/go.mod h1:tQ2UAYgL5IevRw8kRxooKSPJfGvJ9fJQFa0TUsXzTg8= -github.com/go-gl/glfw/v3.3/glfw v0.0.0-20200222043503-6f7a984d4dc4/go.mod h1:tQ2UAYgL5IevRw8kRxooKSPJfGvJ9fJQFa0TUsXzTg8= -github.com/go-logr/logr v1.2.0/go.mod h1:jdQByPbusPIv2/zmleS9BjJVeZ6kBagPoEUsqbVz/1A= -github.com/go-logr/logr v1.2.3 h1:2DntVwHkVopvECVRSlL5PSo9eG+cAkDCuckLubN+rq0= -github.com/go-logr/logr v1.2.3/go.mod h1:jdQByPbusPIv2/zmleS9BjJVeZ6kBagPoEUsqbVz/1A= -github.com/go-openapi/jsonpointer v0.19.3/go.mod h1:Pl9vOtqEWErmShwVjC8pYs9cog34VGT37dQOVbmoatg= -github.com/go-openapi/jsonpointer v0.19.5 h1:gZr+CIYByUqjcgeLXnQu2gHYQC9o73G2XUeOFYEICuY= -github.com/go-openapi/jsonpointer v0.19.5/go.mod h1:Pl9vOtqEWErmShwVjC8pYs9cog34VGT37dQOVbmoatg= -github.com/go-openapi/jsonreference v0.20.0 h1:MYlu0sBgChmCfJxxUKZ8g1cPWFOB37YSZqewK7OKeyA= -github.com/go-openapi/jsonreference v0.20.0/go.mod h1:Ag74Ico3lPc+zR+qjn4XBUmXymS4zJbYVCZmcgkasdo= -github.com/go-openapi/swag v0.19.5/go.mod h1:POnQmlKehdgb5mhVOsnJFsivZCEZ/vjK9gh66Z9tfKk= -github.com/go-openapi/swag v0.19.14 h1:gm3vOOXfiuw5i9p5N9xJvfjvuofpyvLA9Wr6QfK5Fng= -github.com/go-openapi/swag v0.19.14/go.mod 
h1:QYRuS/SOXUCsnplDa677K7+DxSOj6IPNl/eQntq43wQ= -github.com/gogo/protobuf v1.3.2 h1:Ov1cvc58UF3b5XjBnZv7+opcTcQFZebYjWzi34vdm4Q= -github.com/gogo/protobuf v1.3.2/go.mod h1:P1XiOD3dCwIKUDQYPy72D8LYyHL2YPYrpS2s69NZV8Q= -github.com/golang/glog v0.0.0-20160126235308-23def4e6c14b/go.mod h1:SBH7ygxi8pfUlaOkMMuAQtPIUF8ecWP5IEl/CR7VP2Q= -github.com/golang/groupcache v0.0.0-20190702054246-869f871628b6/go.mod h1:cIg4eruTrX1D+g88fzRXU5OdNfaM+9IcxsU14FzY7Hc= -github.com/golang/groupcache v0.0.0-20191227052852-215e87163ea7/go.mod h1:cIg4eruTrX1D+g88fzRXU5OdNfaM+9IcxsU14FzY7Hc= -github.com/golang/groupcache v0.0.0-20200121045136-8c9f03a8e57e/go.mod h1:cIg4eruTrX1D+g88fzRXU5OdNfaM+9IcxsU14FzY7Hc= -github.com/golang/mock v1.1.1/go.mod h1:oTYuIxOrZwtPieC+H1uAHpcLFnEyAGVDL/k47Jfbm0A= -github.com/golang/mock v1.2.0/go.mod h1:oTYuIxOrZwtPieC+H1uAHpcLFnEyAGVDL/k47Jfbm0A= -github.com/golang/mock v1.3.1/go.mod h1:sBzyDLLjw3U8JLTeZvSv8jJB+tU5PVekmnlKIyFUx0Y= -github.com/golang/mock v1.4.0/go.mod h1:UOMv5ysSaYNkG+OFQykRIcU/QvvxJf3p21QfJ2Bt3cw= -github.com/golang/mock v1.4.1/go.mod h1:UOMv5ysSaYNkG+OFQykRIcU/QvvxJf3p21QfJ2Bt3cw= -github.com/golang/mock v1.4.3/go.mod h1:UOMv5ysSaYNkG+OFQykRIcU/QvvxJf3p21QfJ2Bt3cw= -github.com/golang/mock v1.4.4/go.mod h1:l3mdAwkq5BuhzHwde/uurv3sEJeZMXNpwsxVWU71h+4= -github.com/golang/protobuf v1.2.0/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U= -github.com/golang/protobuf v1.3.1/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U= -github.com/golang/protobuf v1.3.2/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U= -github.com/golang/protobuf v1.3.3/go.mod h1:vzj43D7+SQXF/4pzW/hwtAqwc6iTitCiVSaWz5lYuqw= -github.com/golang/protobuf v1.3.4/go.mod h1:vzj43D7+SQXF/4pzW/hwtAqwc6iTitCiVSaWz5lYuqw= -github.com/golang/protobuf v1.3.5/go.mod h1:6O5/vntMXwX2lRkT1hjjk0nAC1IDOTvTlVgjlRvqsdk= -github.com/golang/protobuf v1.4.0-rc.1/go.mod h1:ceaxUfeHdC40wWswd/P6IGgMaK3YpKi5j83Wpe3EHw8= -github.com/golang/protobuf 
v1.4.0-rc.1.0.20200221234624-67d41d38c208/go.mod h1:xKAWHe0F5eneWXFV3EuXVDTCmh+JuBKY0li0aMyXATA= -github.com/golang/protobuf v1.4.0-rc.2/go.mod h1:LlEzMj4AhA7rCAGe4KMBDvJI+AwstrUpVNzEA03Pprs= -github.com/golang/protobuf v1.4.0-rc.4.0.20200313231945-b860323f09d0/go.mod h1:WU3c8KckQ9AFe+yFwt9sWVRKCVIyN9cPHBJSNnbL67w= -github.com/golang/protobuf v1.4.0/go.mod h1:jodUvKwWbYaEsadDk5Fwe5c77LiNKVO9IDvqG2KuDX0= -github.com/golang/protobuf v1.4.1/go.mod h1:U8fpvMrcmy5pZrNK1lt4xCsGvpyWQ/VVv6QDs8UjoX8= -github.com/golang/protobuf v1.4.2/go.mod h1:oDoupMAO8OvCJWAcko0GGGIgR6R6ocIYbsSw735rRwI= -github.com/golang/protobuf v1.5.0/go.mod h1:FsONVRAS9T7sI+LIUmWTfcYkHO4aIWwzhcaSAoJOfIk= -github.com/golang/protobuf v1.5.2 h1:ROPKBNFfQgOUMifHyP+KYbvpjbdoFNs+aK7DXlji0Tw= -github.com/golang/protobuf v1.5.2/go.mod h1:XVQd3VNwM+JqD3oG2Ue2ip4fOMUkwXdXDdiuN0vRsmY= -github.com/google/btree v0.0.0-20180813153112-4030bb1f1f0c/go.mod h1:lNA+9X1NB3Zf8V7Ke586lFgjr2dZNuvo3lPJSGZ5JPQ= -github.com/google/btree v1.0.0/go.mod h1:lNA+9X1NB3Zf8V7Ke586lFgjr2dZNuvo3lPJSGZ5JPQ= -github.com/google/gnostic v0.5.7-v3refs h1:FhTMOKj2VhjpouxvWJAV1TL304uMlb9zcDqkl6cEI54= -github.com/google/gnostic v0.5.7-v3refs/go.mod h1:73MKFl6jIHelAJNaBGFzt3SPtZULs9dYrGFt8OiIsHQ= -github.com/google/go-cmp v0.2.0/go.mod h1:oXzfMopK8JAjlY9xF4vHSVASa0yLyX7SntLO5aqRK0M= -github.com/google/go-cmp v0.3.0/go.mod h1:8QqcDgzrUqlUb/G2PQTWiueGozuR1884gddMywk6iLU= -github.com/google/go-cmp v0.3.1/go.mod h1:8QqcDgzrUqlUb/G2PQTWiueGozuR1884gddMywk6iLU= -github.com/google/go-cmp v0.4.0/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE= -github.com/google/go-cmp v0.4.1/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE= -github.com/google/go-cmp v0.5.0/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE= -github.com/google/go-cmp v0.5.1/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE= -github.com/google/go-cmp v0.5.5/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE= -github.com/google/go-cmp v0.5.9 
h1:O2Tfq5qg4qc4AmwVlvv0oLiVAGB7enBSJ2x2DqQFi38= -github.com/google/go-cmp v0.5.9/go.mod h1:17dUlkBOakJ0+DkrSSNjCkIjxS6bF9zb3elmeNGIjoY= -github.com/google/gofuzz v1.0.0/go.mod h1:dBl0BpW6vV/+mYPU4Po3pmUjxk6FQPldtuIdl/M65Eg= -github.com/google/gofuzz v1.1.0 h1:Hsa8mG0dQ46ij8Sl2AYJDUv1oA9/d6Vk+3LG99Oe02g= -github.com/google/gofuzz v1.1.0/go.mod h1:dBl0BpW6vV/+mYPU4Po3pmUjxk6FQPldtuIdl/M65Eg= -github.com/google/martian v2.1.0+incompatible/go.mod h1:9I4somxYTbIHy5NJKHRl3wXiIaQGbYVAs8BPL6v8lEs= -github.com/google/martian/v3 v3.0.0/go.mod h1:y5Zk1BBys9G+gd6Jrk0W3cC1+ELVxBWuIGO+w/tUAp0= -github.com/google/pprof v0.0.0-20181206194817-3ea8567a2e57/go.mod h1:zfwlbNMJ+OItoe0UupaVj+oy1omPYYDuagoSzA8v9mc= -github.com/google/pprof v0.0.0-20190515194954-54271f7e092f/go.mod h1:zfwlbNMJ+OItoe0UupaVj+oy1omPYYDuagoSzA8v9mc= -github.com/google/pprof v0.0.0-20191218002539-d4f498aebedc/go.mod h1:ZgVRPoUq/hfqzAqh7sHMqb3I9Rq5C59dIz2SbBwJ4eM= -github.com/google/pprof v0.0.0-20200212024743-f11f1df84d12/go.mod h1:ZgVRPoUq/hfqzAqh7sHMqb3I9Rq5C59dIz2SbBwJ4eM= -github.com/google/pprof v0.0.0-20200229191704-1ebb73c60ed3/go.mod h1:ZgVRPoUq/hfqzAqh7sHMqb3I9Rq5C59dIz2SbBwJ4eM= -github.com/google/pprof v0.0.0-20200430221834-fc25d7d30c6d/go.mod h1:ZgVRPoUq/hfqzAqh7sHMqb3I9Rq5C59dIz2SbBwJ4eM= -github.com/google/pprof v0.0.0-20200708004538-1a94d8640e99/go.mod h1:ZgVRPoUq/hfqzAqh7sHMqb3I9Rq5C59dIz2SbBwJ4eM= -github.com/google/renameio v0.1.0/go.mod h1:KWCgfxg9yswjAJkECMjeO8J8rahYeXnNhOm40UhjYkI= -github.com/googleapis/gax-go/v2 v2.0.4/go.mod h1:0Wqv26UfaUD9n4G6kQubkQ+KchISgw+vpHVxEJEs9eg= -github.com/googleapis/gax-go/v2 v2.0.5/go.mod h1:DWXyrwAJ9X0FpwwEdw+IPEYBICEFu5mhpdKc/us6bOk= -github.com/hashicorp/golang-lru v0.5.0/go.mod h1:/m3WP610KZHVQ1SGc6re/UDhFvYD7pJ4Ao+sR/qLZy8= -github.com/hashicorp/golang-lru v0.5.1/go.mod h1:/m3WP610KZHVQ1SGc6re/UDhFvYD7pJ4Ao+sR/qLZy8= -github.com/ianlancetaylor/demangle v0.0.0-20181102032728-5e5cf60278f6/go.mod h1:aSSvb/t6k1mPoxDqO4vJh6VOCGPwU4O0C2/Eqndh1Sc= 
-github.com/imdario/mergo v0.3.6 h1:xTNEAn+kxVO7dTZGu0CegyqKZmoWFI0rF8UxjlB2d28= -github.com/imdario/mergo v0.3.6/go.mod h1:2EnlNZ0deacrJVfApfmtdGgDfMuh/nq6Ok1EcJh5FfA= -github.com/jmespath/go-jmespath v0.4.0 h1:BEgLn5cpjn8UN1mAw4NjwDrS35OdebyEtFe+9YPoQUg= -github.com/jmespath/go-jmespath v0.4.0/go.mod h1:T8mJZnbsbmF+m6zOOFylbeCJqk5+pHWvzYPziyZiYoo= -github.com/jmespath/go-jmespath/internal/testify v1.5.1 h1:shLQSRRSCCPj3f2gpwzGwWFoC7ycTf1rcQZHOlsJ6N8= -github.com/jmespath/go-jmespath/internal/testify v1.5.1/go.mod h1:L3OGu8Wl2/fWfCI6z80xFu9LTZmf1ZRjMHUOPmWr69U= -github.com/josharian/intern v1.0.0 h1:vlS4z54oSdjm0bgjRigI+G1HpF+tI+9rE5LLzOg8HmY= -github.com/josharian/intern v1.0.0/go.mod h1:5DoeVV0s6jJacbCEi61lwdGj/aVlrQvzHFFd8Hwg//Y= -github.com/json-iterator/go v1.1.12 h1:PV8peI4a0ysnczrg+LtxykD8LfKY9ML6u2jnxaEnrnM= -github.com/json-iterator/go v1.1.12/go.mod h1:e30LSqwooZae/UwlEbR2852Gd8hjQvJoHmT4TnhNGBo= -github.com/jstemmer/go-junit-report v0.0.0-20190106144839-af01ea7f8024/go.mod h1:6v2b51hI/fHJwM22ozAgKL4VKDeJcHhJFhtBdhmNjmU= -github.com/jstemmer/go-junit-report v0.9.1/go.mod h1:Brl9GWCQeLvo8nXZwPNNblvFj/XSXhF0NWZEnDohbsk= -github.com/kisielk/errcheck v1.5.0/go.mod h1:pFxgyoBC7bSaBwPgfKdkLd5X25qrDl4LWUI2bnpBCr8= -github.com/kisielk/gotool v1.0.0/go.mod h1:XhKaO+MFFWcvkIS/tQcRk01m1F5IRFswLeQ+oQHNcck= -github.com/kr/pretty v0.1.0/go.mod h1:dAy3ld7l9f0ibDNOQOHHMYYIIbhfbHSm3C4ZsoJORNo= -github.com/kr/pretty v0.2.0/go.mod h1:ipq/a2n7PKx3OHsz4KJII5eveXtPO4qwEXGdVfWzfnI= -github.com/kr/pty v1.1.1/go.mod h1:pFQYn66WHrOpPYNljwOMqo10TkYh1fy3cYio2l3bCsQ= -github.com/kr/text v0.1.0/go.mod h1:4Jbv+DJW3UT/LiOwJeYQe1efqtUx/iVham/4vfdArNI= -github.com/kr/text v0.2.0 h1:5Nx0Ya0ZqY2ygV366QzturHI13Jq95ApcVaJBhpS+AY= -github.com/kr/text v0.2.0/go.mod h1:eLer722TekiGuMkidMxC/pM04lWEeraHUUmBw8l2grE= -github.com/mailru/easyjson v0.0.0-20190614124828-94de47d64c63/go.mod h1:C1wdFJiN94OJF2b5HbByQZoLdCWB1Yqtg26g4irojpc= -github.com/mailru/easyjson 
v0.0.0-20190626092158-b2ccc519800e/go.mod h1:C1wdFJiN94OJF2b5HbByQZoLdCWB1Yqtg26g4irojpc= -github.com/mailru/easyjson v0.7.6 h1:8yTIVnZgCoiM1TgqoeTl+LfU5Jg6/xL3QhGQnimLYnA= -github.com/mailru/easyjson v0.7.6/go.mod h1:xzfreul335JAWq5oZzymOObrkdz5UnU4kGfJJLY9Nlc= -github.com/modern-go/concurrent v0.0.0-20180228061459-e0a39a4cb421/go.mod h1:6dJC0mAP4ikYIbvyc7fijjWJddQyLn8Ig3JB5CqoB9Q= -github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd h1:TRLaZ9cD/w8PVh93nsPXa1VrQ6jlwL5oN8l14QlcNfg= -github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd/go.mod h1:6dJC0mAP4ikYIbvyc7fijjWJddQyLn8Ig3JB5CqoB9Q= -github.com/modern-go/reflect2 v1.0.2 h1:xBagoLtFs94CBntxluKeaWgTMpvLxC4ur3nMaC9Gz0M= -github.com/modern-go/reflect2 v1.0.2/go.mod h1:yWuevngMOJpCy52FWWMvUC8ws7m/LJsjYzDa0/r8luk= -github.com/munnerz/goautoneg v0.0.0-20191010083416-a7dc8b61c822 h1:C3w9PqII01/Oq1c1nUAm88MOHcQC9l5mIlSMApZMrHA= -github.com/munnerz/goautoneg v0.0.0-20191010083416-a7dc8b61c822/go.mod h1:+n7T8mK8HuQTcFwEeznm/DIxMOiR9yIdICNftLE1DvQ= -github.com/niemeyer/pretty v0.0.0-20200227124842-a10e7caefd8e h1:fD57ERR4JtEqsWbfPhv4DMiApHyliiK5xCTNVSPiaAs= -github.com/niemeyer/pretty v0.0.0-20200227124842-a10e7caefd8e/go.mod h1:zD1mROLANZcx1PVRCS0qkT7pwLkGfwJo4zjcN/Tysno= -github.com/onsi/ginkgo/v2 v2.4.0 h1:+Ig9nvqgS5OBSACXNk15PLdp0U9XPYROt9CFzVdFGIs= -github.com/onsi/ginkgo/v2 v2.4.0/go.mod h1:iHkDK1fKGcBoEHT5W7YBq4RFWaQulw+caOMkAt4OrFo= -github.com/onsi/gomega v1.23.0 h1:/oxKu9c2HVap+F3PfKort2Hw5DEU+HGlW8n+tguWsys= -github.com/onsi/gomega v1.23.0/go.mod h1:Z/NWtiqwBrwUt4/2loMmHL63EDLnYHmVbuBpDr2vQAg= -github.com/pkg/errors v0.9.1/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0= -github.com/pmezard/go-difflib v1.0.0 h1:4DBwDE0NGyQoBHbLQYPwSUPoCMWR5BEzIk/f1lZbAQM= -github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4= -github.com/prometheus/client_model v0.0.0-20190812154241-14fe0d1b01d4/go.mod h1:xMI15A0UPsDsEKsMN9yxemIoYk6Tm2C1GtYGdfGttqA= 
-github.com/rogpeppe/go-internal v1.3.0/go.mod h1:M8bDsm7K2OlrFYOpmOWEs/qY81heoFRclV5y23lUDJ4= -github.com/spf13/pflag v1.0.5 h1:iy+VFUOCP1a+8yFto/drg2CJ5u0yRoB7fZw3DKv/JXA= -github.com/spf13/pflag v1.0.5/go.mod h1:McXfInJRrz4CZXVZOBLb0bTZqETkiAhM9Iw0y3An2Bg= -github.com/stoewer/go-strcase v1.2.0/go.mod h1:IBiWB2sKIp3wVVQ3Y035++gc+knqhUQag1KpM8ahLw8= -github.com/stretchr/objx v0.1.0/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME= -github.com/stretchr/testify v1.3.0/go.mod h1:M5WIy9Dh21IEIfnGCwXGc5bZfKNJtfHm1UVUgZn+9EI= -github.com/stretchr/testify v1.4.0/go.mod h1:j7eGeouHqKxXV5pUuKE4zz7dFj8WfuZ+81PSLYec5m4= -github.com/stretchr/testify v1.5.1/go.mod h1:5W2xD1RspED5o8YsWQXVCued0rvSQ+mT+I5cxcmMvtA= -github.com/stretchr/testify v1.6.1/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/h/Wwjteg= -github.com/stretchr/testify v1.8.0 h1:pSgiaMZlXftHpm5L7V1+rVB+AZJydKsMxsQBIJw4PKk= -github.com/stretchr/testify v1.8.0/go.mod h1:yNjHg4UonilssWZ8iaSj1OCr/vHnekPRkoO+kdMU+MU= -github.com/yuin/goldmark v1.1.25/go.mod h1:3hX8gzYuyVAZsxl0MRgGTJEmQBFcNTphYh9decYSb74= -github.com/yuin/goldmark v1.1.27/go.mod h1:3hX8gzYuyVAZsxl0MRgGTJEmQBFcNTphYh9decYSb74= -github.com/yuin/goldmark v1.1.32/go.mod h1:3hX8gzYuyVAZsxl0MRgGTJEmQBFcNTphYh9decYSb74= -github.com/yuin/goldmark v1.2.1/go.mod h1:3hX8gzYuyVAZsxl0MRgGTJEmQBFcNTphYh9decYSb74= -github.com/yuin/goldmark v1.4.13/go.mod h1:6yULJ656Px+3vBD8DxQVa3kxgyrAnzto9xy5taEt/CY= -go.opencensus.io v0.21.0/go.mod h1:mSImk1erAIZhrmZN+AvHh14ztQfjbGwt4TtuofqLduU= -go.opencensus.io v0.22.0/go.mod h1:+kGneAE2xo2IficOXnaByMWTGM9T73dGwxeWcUqIpI8= -go.opencensus.io v0.22.2/go.mod h1:yxeiOL68Rb0Xd1ddK5vPZ/oVn4vY4Ynel7k9FzqtOIw= -go.opencensus.io v0.22.3/go.mod h1:yxeiOL68Rb0Xd1ddK5vPZ/oVn4vY4Ynel7k9FzqtOIw= -go.opencensus.io v0.22.4/go.mod h1:yxeiOL68Rb0Xd1ddK5vPZ/oVn4vY4Ynel7k9FzqtOIw= -golang.org/x/crypto v0.0.0-20190308221718-c2843e01d9a2/go.mod h1:djNgcEr1/C05ACkg1iLfiJU5Ep61QUkGW8qpdssI0+w= -golang.org/x/crypto 
v0.0.0-20190510104115-cbcb75029529/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI= -golang.org/x/crypto v0.0.0-20190605123033-f99c8df09eb5/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI= -golang.org/x/crypto v0.0.0-20191011191535-87dc89f01550/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI= -golang.org/x/crypto v0.0.0-20200622213623-75b288015ac9/go.mod h1:LzIPMQfyMNhhGPhUkYOs5KpL4U8rLKemX1yGLhDgUto= -golang.org/x/crypto v0.0.0-20210921155107-089bfa567519/go.mod h1:GvvjBRRGRdwPK5ydBHafDWAxML/pGHZbMvKqRZ5+Abc= -golang.org/x/exp v0.0.0-20190121172915-509febef88a4/go.mod h1:CJ0aWSM057203Lf6IL+f9T1iT9GByDxfZKAQTCR3kQA= -golang.org/x/exp v0.0.0-20190306152737-a1d7652674e8/go.mod h1:CJ0aWSM057203Lf6IL+f9T1iT9GByDxfZKAQTCR3kQA= -golang.org/x/exp v0.0.0-20190510132918-efd6b22b2522/go.mod h1:ZjyILWgesfNpC6sMxTJOJm9Kp84zZh5NQWvqDGG3Qr8= -golang.org/x/exp v0.0.0-20190829153037-c13cbed26979/go.mod h1:86+5VVa7VpoJ4kLfm080zCjGlMRFzhUhsZKEZO7MGek= -golang.org/x/exp v0.0.0-20191030013958-a1ab85dbe136/go.mod h1:JXzH8nQsPlswgeRAPE3MuO9GYsAcnJvJ4vnMwN/5qkY= -golang.org/x/exp v0.0.0-20191129062945-2f5052295587/go.mod h1:2RIsYlXP63K8oxa1u096TMicItID8zy7Y6sNkU49FU4= -golang.org/x/exp v0.0.0-20191227195350-da58074b4299/go.mod h1:2RIsYlXP63K8oxa1u096TMicItID8zy7Y6sNkU49FU4= -golang.org/x/exp v0.0.0-20200119233911-0405dc783f0a/go.mod h1:2RIsYlXP63K8oxa1u096TMicItID8zy7Y6sNkU49FU4= -golang.org/x/exp v0.0.0-20200207192155-f17229e696bd/go.mod h1:J/WKrq2StrnmMY6+EHIKF9dgMWnmCNThgcyBT1FY9mM= -golang.org/x/exp v0.0.0-20200224162631-6cc2880d07d6/go.mod h1:3jZMyOhIsHpP37uCMkUooju7aAi5cS1Q23tOzKc+0MU= -golang.org/x/image v0.0.0-20190227222117-0694c2d4d067/go.mod h1:kZ7UVZpmo3dzQBMxlp+ypCbDeSB+sBbTgSJuh5dn5js= -golang.org/x/image v0.0.0-20190802002840-cff245a6509b/go.mod h1:FeLwcggjj3mMvU+oOTbSwawSJRM1uh48EjtB4UJZlP0= -golang.org/x/lint v0.0.0-20181026193005-c67002cb31c3/go.mod h1:UVdnD1Gm6xHRNCYTkRU2/jEulfH38KcIWyp/GAMgvoE= -golang.org/x/lint 
v0.0.0-20190227174305-5b3e6a55c961/go.mod h1:wehouNa3lNwaWXcvxsM5YxQ5yQlVC4a0KAMCusXpPoU= -golang.org/x/lint v0.0.0-20190301231843-5614ed5bae6f/go.mod h1:UVdnD1Gm6xHRNCYTkRU2/jEulfH38KcIWyp/GAMgvoE= -golang.org/x/lint v0.0.0-20190313153728-d0100b6bd8b3/go.mod h1:6SW0HCj/g11FgYtHlgUYUwCkIfeOF89ocIRzGO/8vkc= -golang.org/x/lint v0.0.0-20190409202823-959b441ac422/go.mod h1:6SW0HCj/g11FgYtHlgUYUwCkIfeOF89ocIRzGO/8vkc= -golang.org/x/lint v0.0.0-20190909230951-414d861bb4ac/go.mod h1:6SW0HCj/g11FgYtHlgUYUwCkIfeOF89ocIRzGO/8vkc= -golang.org/x/lint v0.0.0-20190930215403-16217165b5de/go.mod h1:6SW0HCj/g11FgYtHlgUYUwCkIfeOF89ocIRzGO/8vkc= -golang.org/x/lint v0.0.0-20191125180803-fdd1cda4f05f/go.mod h1:5qLYkcX4OjUUV8bRuDixDT3tpyyb+LUpUlRWLxfhWrs= -golang.org/x/lint v0.0.0-20200130185559-910be7a94367/go.mod h1:3xt1FjdF8hUf6vQPIChWIBhFzV8gjjsPE/fR3IyQdNY= -golang.org/x/lint v0.0.0-20200302205851-738671d3881b/go.mod h1:3xt1FjdF8hUf6vQPIChWIBhFzV8gjjsPE/fR3IyQdNY= -golang.org/x/mobile v0.0.0-20190312151609-d3739f865fa6/go.mod h1:z+o9i4GpDbdi3rU15maQ/Ox0txvL9dWGYEHz965HBQE= -golang.org/x/mobile v0.0.0-20190719004257-d2bd2a29d028/go.mod h1:E/iHnbuqvinMTCcRqshq8CkpyQDoeVncDDYHnLhea+o= -golang.org/x/mod v0.0.0-20190513183733-4bf6d317e70e/go.mod h1:mXi4GBBbnImb6dmsKGUJ2LatrhH/nqhxcFungHvyanc= -golang.org/x/mod v0.1.0/go.mod h1:0QHyrYULN0/3qlju5TqG8bIK38QM8yzMo5ekMj3DlcY= -golang.org/x/mod v0.1.1-0.20191105210325-c90efee705ee/go.mod h1:QqPTAvyqsEbceGzBzNggFXnrqF1CaUcvgkdR5Ot7KZg= -golang.org/x/mod v0.1.1-0.20191107180719-034126e5016b/go.mod h1:QqPTAvyqsEbceGzBzNggFXnrqF1CaUcvgkdR5Ot7KZg= -golang.org/x/mod v0.2.0/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA= -golang.org/x/mod v0.3.0/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA= -golang.org/x/mod v0.6.0-dev.0.20220419223038-86c51ed26bb4/go.mod h1:jJ57K6gSWd91VN4djpZkiMVwK6gcyfeH4XE8wZrZaV4= -golang.org/x/net v0.0.0-20180724234803-3673e40ba225/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4= -golang.org/x/net 
v0.0.0-20180826012351-8a410e7b638d/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4= -golang.org/x/net v0.0.0-20190108225652-1e06a53dbb7e/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4= -golang.org/x/net v0.0.0-20190213061140-3a22650c66bd/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4= -golang.org/x/net v0.0.0-20190311183353-d8887717615a/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg= -golang.org/x/net v0.0.0-20190404232315-eb5bcb51f2a3/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg= -golang.org/x/net v0.0.0-20190501004415-9ce7a6920f09/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg= -golang.org/x/net v0.0.0-20190503192946-f4e77d36d62c/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg= -golang.org/x/net v0.0.0-20190603091049-60506f45cf65/go.mod h1:HSz+uSET+XFnRR8LxR5pz3Of3rY3CfYBVs4xY44aLks= -golang.org/x/net v0.0.0-20190620200207-3b0461eec859/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s= -golang.org/x/net v0.0.0-20190628185345-da137c7871d7/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s= -golang.org/x/net v0.0.0-20190724013045-ca1201d0de80/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s= -golang.org/x/net v0.0.0-20191209160850-c0dbc17a3553/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s= -golang.org/x/net v0.0.0-20200114155413-6afb5195e5aa/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s= -golang.org/x/net v0.0.0-20200202094626-16171245cfb2/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s= -golang.org/x/net v0.0.0-20200222125558-5a598a2470a0/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s= -golang.org/x/net v0.0.0-20200226121028-0de0cce0169b/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s= -golang.org/x/net v0.0.0-20200301022130-244492dfa37a/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s= -golang.org/x/net v0.0.0-20200324143707-d3edc9973b7e/go.mod h1:qpuaurCH72eLCgpAm/N6yyVIVM9cpaDIP3A8BGJEC5A= -golang.org/x/net v0.0.0-20200501053045-e0ff5e5a1de5/go.mod 
h1:qpuaurCH72eLCgpAm/N6yyVIVM9cpaDIP3A8BGJEC5A= -golang.org/x/net v0.0.0-20200506145744-7e3656a0809f/go.mod h1:qpuaurCH72eLCgpAm/N6yyVIVM9cpaDIP3A8BGJEC5A= -golang.org/x/net v0.0.0-20200513185701-a91f0712d120/go.mod h1:qpuaurCH72eLCgpAm/N6yyVIVM9cpaDIP3A8BGJEC5A= -golang.org/x/net v0.0.0-20200520182314-0ba52f642ac2/go.mod h1:qpuaurCH72eLCgpAm/N6yyVIVM9cpaDIP3A8BGJEC5A= -golang.org/x/net v0.0.0-20200625001655-4c5254603344/go.mod h1:/O7V0waA8r7cgGh81Ro3o1hOxt32SMVPicZroKQ2sZA= -golang.org/x/net v0.0.0-20200707034311-ab3426394381/go.mod h1:/O7V0waA8r7cgGh81Ro3o1hOxt32SMVPicZroKQ2sZA= -golang.org/x/net v0.0.0-20200822124328-c89045814202/go.mod h1:/O7V0waA8r7cgGh81Ro3o1hOxt32SMVPicZroKQ2sZA= -golang.org/x/net v0.0.0-20201021035429-f5854403a974/go.mod h1:sp8m0HH+o8qH0wwXwYZr8TS3Oi6o0r6Gce1SSxlDquU= -golang.org/x/net v0.0.0-20210226172049-e18ecbb05110/go.mod h1:m0MpNAwzfU5UDzcl9v0D8zg8gWTRqZa9RBIspLL5mdg= -golang.org/x/net v0.0.0-20220127200216-cd36cc0744dd/go.mod h1:CfG3xpIq0wQ8r1q4Su4UZFWDARRcnwPjda9FqA0JpMk= -golang.org/x/net v0.0.0-20220722155237-a158d28d115b/go.mod h1:XRhObCWvk6IyKnWLug+ECip1KBveYUHfp+8e9klMJ9c= -golang.org/x/net v0.1.0/go.mod h1:Cx3nUiGt4eDBEyega/BKRp+/AlGL8hYe7U9odMt2Cco= -golang.org/x/net v0.17.0 h1:pVaXccu2ozPjCXewfr1S7xza/zcXTity9cCdXQYSjIM= -golang.org/x/net v0.17.0/go.mod h1:NxSsAGuq816PNPmqtQdLE42eU2Fs7NoRIZrHJAlaCOE= -golang.org/x/oauth2 v0.0.0-20180821212333-d2e6202438be/go.mod h1:N/0e6XlmueqKjAGxoOufVs8QHGRruUQn6yWY3a++T0U= -golang.org/x/oauth2 v0.0.0-20190226205417-e64efc72b421/go.mod h1:gOpvHmFTYa4IltrdGE7lF6nIHvwfUNPOp7c8zoXwtLw= -golang.org/x/oauth2 v0.0.0-20190604053449-0f29369cfe45/go.mod h1:gOpvHmFTYa4IltrdGE7lF6nIHvwfUNPOp7c8zoXwtLw= -golang.org/x/oauth2 v0.0.0-20191202225959-858c2ad4c8b6/go.mod h1:gOpvHmFTYa4IltrdGE7lF6nIHvwfUNPOp7c8zoXwtLw= -golang.org/x/oauth2 v0.0.0-20200107190931-bf48bf16ab8d/go.mod h1:gOpvHmFTYa4IltrdGE7lF6nIHvwfUNPOp7c8zoXwtLw= -golang.org/x/oauth2 v0.0.0-20220223155221-ee480838109b 
h1:clP8eMhB30EHdc0bd2Twtq6kgU7yl5ub2cQLSdrv1Dg= -golang.org/x/oauth2 v0.0.0-20220223155221-ee480838109b/go.mod h1:DAh4E804XQdzx2j+YRIaUnCqCV2RuMz24cGBJ5QYIrc= -golang.org/x/sync v0.0.0-20180314180146-1d60e4601c6f/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM= -golang.org/x/sync v0.0.0-20181108010431-42b317875d0f/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM= -golang.org/x/sync v0.0.0-20181221193216-37e7f081c4d4/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM= -golang.org/x/sync v0.0.0-20190227155943-e225da77a7e6/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM= -golang.org/x/sync v0.0.0-20190423024810-112230192c58/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM= -golang.org/x/sync v0.0.0-20190911185100-cd5d95a43a6e/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM= -golang.org/x/sync v0.0.0-20200317015054-43a5402ce75a/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM= -golang.org/x/sync v0.0.0-20200625203802-6e8e738ad208/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM= -golang.org/x/sync v0.0.0-20201020160332-67f06af15bc9/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM= -golang.org/x/sync v0.0.0-20220722155255-886fb9371eb4/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM= -golang.org/x/sys v0.0.0-20180830151530-49385e6e1522/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY= -golang.org/x/sys v0.0.0-20190215142949-d0b11bdaac8a/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY= -golang.org/x/sys v0.0.0-20190312061237-fead79001313/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= -golang.org/x/sys v0.0.0-20190412213103-97732733099d/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= -golang.org/x/sys v0.0.0-20190502145724-3ef323f4f1fd/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= -golang.org/x/sys v0.0.0-20190507160741-ecd444e8653b/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= -golang.org/x/sys v0.0.0-20190606165138-5da285871e9c/go.mod 
h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= -golang.org/x/sys v0.0.0-20190624142023-c5567b49c5d0/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= -golang.org/x/sys v0.0.0-20190726091711-fc99dfbffb4e/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= -golang.org/x/sys v0.0.0-20191001151750-bb3f8db39f24/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= -golang.org/x/sys v0.0.0-20191204072324-ce4227a45e2e/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= -golang.org/x/sys v0.0.0-20191228213918-04cbcbbfeed8/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= -golang.org/x/sys v0.0.0-20200113162924-86b910548bc1/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= -golang.org/x/sys v0.0.0-20200122134326-e047566fdf82/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= -golang.org/x/sys v0.0.0-20200202164722-d101bd2416d5/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= -golang.org/x/sys v0.0.0-20200212091648-12a6c2dcc1e4/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= -golang.org/x/sys v0.0.0-20200223170610-d5e6a3e2c0ae/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= -golang.org/x/sys v0.0.0-20200302150141-5c8b2ff67527/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= -golang.org/x/sys v0.0.0-20200323222414-85ca7c5b95cd/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= -golang.org/x/sys v0.0.0-20200331124033-c3d80250170d/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= -golang.org/x/sys v0.0.0-20200501052902-10377860bb8e/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= -golang.org/x/sys v0.0.0-20200511232937-7e40ca221e25/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= -golang.org/x/sys v0.0.0-20200515095857-1151b9dac4a9/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= -golang.org/x/sys v0.0.0-20200523222454-059865788121/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= -golang.org/x/sys v0.0.0-20200803210538-64077c9b5642/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= 
-golang.org/x/sys v0.0.0-20200930185726-fdedc70b468f/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= -golang.org/x/sys v0.0.0-20201119102817-f84b799fce68/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= -golang.org/x/sys v0.0.0-20210615035016-665e8c7367d1/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= -golang.org/x/sys v0.0.0-20211216021012-1d35b9e2eb4e/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= -golang.org/x/sys v0.0.0-20220520151302-bc2c85ada10a/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= -golang.org/x/sys v0.0.0-20220722155257-8c9f86f7a55f/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= -golang.org/x/sys v0.1.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= -golang.org/x/sys v0.13.0 h1:Af8nKPmuFypiUBjVoU9V20FiaFXOcuZI21p0ycVYYGE= -golang.org/x/sys v0.13.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= -golang.org/x/term v0.0.0-20201126162022-7de9c90e9dd1/go.mod h1:bj7SfCRtBDWHUb9snDiAeCFNEtKQo2Wmx5Cou7ajbmo= -golang.org/x/term v0.0.0-20210927222741-03fcf44c2211/go.mod h1:jbD1KX2456YbFQfuXm/mYQcufACuNUgVhRMnK/tPxf8= -golang.org/x/term v0.1.0/go.mod h1:jbD1KX2456YbFQfuXm/mYQcufACuNUgVhRMnK/tPxf8= -golang.org/x/term v0.13.0 h1:bb+I9cTfFazGW51MZqBVmZy7+JEJMouUHTUSKVQLBek= -golang.org/x/term v0.13.0/go.mod h1:LTmsnFJwVN6bCy1rVCoS+qHT1HhALEFxKncY3WNNh4U= -golang.org/x/text v0.0.0-20170915032832-14c0d48ead0c/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ= -golang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ= -golang.org/x/text v0.3.1-0.20180807135948-17ff2d5776d2/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ= -golang.org/x/text v0.3.2/go.mod h1:bEr9sfX3Q8Zfm5fL9x+3itogRgK3+ptLWKqgva+5dAk= -golang.org/x/text v0.3.3/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ= -golang.org/x/text v0.3.7/go.mod h1:u+2+/6zg+i71rQMx5EYifcz6MCKuco9NR6JIITiCfzQ= -golang.org/x/text v0.4.0/go.mod h1:mrYo+phRRbMaCq/xk9113O4dZlRixOauAjOtrjsXDZ8= -golang.org/x/text v0.13.0 
h1:ablQoSUd0tRdKxZewP80B+BaqeKJuVhuRxj/dkrun3k= -golang.org/x/text v0.13.0/go.mod h1:TvPlkZtksWOMsz7fbANvkp4WM8x/WCo/om8BMLbz+aE= -golang.org/x/time v0.0.0-20181108054448-85acf8d2951c/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ= -golang.org/x/time v0.0.0-20190308202827-9d24e82272b4/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ= -golang.org/x/time v0.0.0-20191024005414-555d28b269f0/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ= -golang.org/x/time v0.0.0-20220210224613-90d013bbcef8 h1:vVKdlvoWBphwdxWKrFZEuM0kGgGLxUOYcY4U/2Vjg44= -golang.org/x/time v0.0.0-20220210224613-90d013bbcef8/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ= -golang.org/x/tools v0.0.0-20180917221912-90fa682c2a6e/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ= -golang.org/x/tools v0.0.0-20190114222345-bf090417da8b/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ= -golang.org/x/tools v0.0.0-20190226205152-f727befe758c/go.mod h1:9Yl7xja0Znq3iFh3HoIrodX9oNMXvdceNzlUR8zjMvY= -golang.org/x/tools v0.0.0-20190311212946-11955173bddd/go.mod h1:LCzVGOaR6xXOjkQ3onu1FJEFr0SW1gC7cKk1uF8kGRs= -golang.org/x/tools v0.0.0-20190312151545-0bb0c0a6e846/go.mod h1:LCzVGOaR6xXOjkQ3onu1FJEFr0SW1gC7cKk1uF8kGRs= -golang.org/x/tools v0.0.0-20190312170243-e65039ee4138/go.mod h1:LCzVGOaR6xXOjkQ3onu1FJEFr0SW1gC7cKk1uF8kGRs= -golang.org/x/tools v0.0.0-20190425150028-36563e24a262/go.mod h1:RgjU9mgBXZiqYHBnxXauZ1Gv1EHHAz9KjViQ78xBX0Q= -golang.org/x/tools v0.0.0-20190506145303-2d16b83fe98c/go.mod h1:RgjU9mgBXZiqYHBnxXauZ1Gv1EHHAz9KjViQ78xBX0Q= -golang.org/x/tools v0.0.0-20190524140312-2c0ae7006135/go.mod h1:RgjU9mgBXZiqYHBnxXauZ1Gv1EHHAz9KjViQ78xBX0Q= -golang.org/x/tools v0.0.0-20190606124116-d0a3d012864b/go.mod h1:/rFqwRUd4F7ZHNgwSSTFct+R/Kf4OFW1sUzUTQQTgfc= -golang.org/x/tools v0.0.0-20190621195816-6e04913cbbac/go.mod h1:/rFqwRUd4F7ZHNgwSSTFct+R/Kf4OFW1sUzUTQQTgfc= -golang.org/x/tools v0.0.0-20190628153133-6cdbf07be9d0/go.mod h1:/rFqwRUd4F7ZHNgwSSTFct+R/Kf4OFW1sUzUTQQTgfc= 
-golang.org/x/tools v0.0.0-20190816200558-6889da9d5479/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo= -golang.org/x/tools v0.0.0-20190911174233-4f2ddba30aff/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo= -golang.org/x/tools v0.0.0-20191012152004-8de300cfc20a/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo= -golang.org/x/tools v0.0.0-20191113191852-77e3bb0ad9e7/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo= -golang.org/x/tools v0.0.0-20191115202509-3a792d9c32b2/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo= -golang.org/x/tools v0.0.0-20191119224855-298f0cb1881e/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo= -golang.org/x/tools v0.0.0-20191125144606-a911d9008d1f/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo= -golang.org/x/tools v0.0.0-20191130070609-6e064ea0cf2d/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo= -golang.org/x/tools v0.0.0-20191216173652-a0e659d51361/go.mod h1:TB2adYChydJhpapKDTa4BR/hXlZSLoq2Wpct/0txZ28= -golang.org/x/tools v0.0.0-20191227053925-7b8e75db28f4/go.mod h1:TB2adYChydJhpapKDTa4BR/hXlZSLoq2Wpct/0txZ28= -golang.org/x/tools v0.0.0-20200117161641-43d50277825c/go.mod h1:TB2adYChydJhpapKDTa4BR/hXlZSLoq2Wpct/0txZ28= -golang.org/x/tools v0.0.0-20200122220014-bf1340f18c4a/go.mod h1:TB2adYChydJhpapKDTa4BR/hXlZSLoq2Wpct/0txZ28= -golang.org/x/tools v0.0.0-20200130002326-2f3ba24bd6e7/go.mod h1:TB2adYChydJhpapKDTa4BR/hXlZSLoq2Wpct/0txZ28= -golang.org/x/tools v0.0.0-20200204074204-1cc6d1ef6c74/go.mod h1:TB2adYChydJhpapKDTa4BR/hXlZSLoq2Wpct/0txZ28= -golang.org/x/tools v0.0.0-20200207183749-b753a1ba74fa/go.mod h1:TB2adYChydJhpapKDTa4BR/hXlZSLoq2Wpct/0txZ28= -golang.org/x/tools v0.0.0-20200212150539-ea181f53ac56/go.mod h1:TB2adYChydJhpapKDTa4BR/hXlZSLoq2Wpct/0txZ28= -golang.org/x/tools v0.0.0-20200224181240-023911ca70b2/go.mod h1:TB2adYChydJhpapKDTa4BR/hXlZSLoq2Wpct/0txZ28= -golang.org/x/tools v0.0.0-20200227222343-706bc42d1f0d/go.mod h1:TB2adYChydJhpapKDTa4BR/hXlZSLoq2Wpct/0txZ28= -golang.org/x/tools 
v0.0.0-20200304193943-95d2e580d8eb/go.mod h1:o4KQGtdN14AW+yjsvvwRTJJuXz8XRtIHtEnmAXLyFUw= -golang.org/x/tools v0.0.0-20200312045724-11d5b4c81c7d/go.mod h1:o4KQGtdN14AW+yjsvvwRTJJuXz8XRtIHtEnmAXLyFUw= -golang.org/x/tools v0.0.0-20200331025713-a30bf2db82d4/go.mod h1:Sl4aGygMT6LrqrWclx+PTx3U+LnKx/seiNR+3G19Ar8= -golang.org/x/tools v0.0.0-20200501065659-ab2804fb9c9d/go.mod h1:EkVYQZoAsY45+roYkvgYkIh4xh/qjgUK9TdY2XT94GE= -golang.org/x/tools v0.0.0-20200512131952-2bc93b1c0c88/go.mod h1:EkVYQZoAsY45+roYkvgYkIh4xh/qjgUK9TdY2XT94GE= -golang.org/x/tools v0.0.0-20200515010526-7d3b6ebf133d/go.mod h1:EkVYQZoAsY45+roYkvgYkIh4xh/qjgUK9TdY2XT94GE= -golang.org/x/tools v0.0.0-20200618134242-20370b0cb4b2/go.mod h1:EkVYQZoAsY45+roYkvgYkIh4xh/qjgUK9TdY2XT94GE= -golang.org/x/tools v0.0.0-20200619180055-7c47624df98f/go.mod h1:EkVYQZoAsY45+roYkvgYkIh4xh/qjgUK9TdY2XT94GE= -golang.org/x/tools v0.0.0-20200729194436-6467de6f59a7/go.mod h1:njjCfa9FT2d7l9Bc6FUM5FLjQPp3cFF28FI3qnDFljA= -golang.org/x/tools v0.0.0-20200804011535-6c149bb5ef0d/go.mod h1:njjCfa9FT2d7l9Bc6FUM5FLjQPp3cFF28FI3qnDFljA= -golang.org/x/tools v0.0.0-20200825202427-b303f430e36d/go.mod h1:njjCfa9FT2d7l9Bc6FUM5FLjQPp3cFF28FI3qnDFljA= -golang.org/x/tools v0.0.0-20210106214847-113979e3529a/go.mod h1:emZCQorbCU4vsT4fOWvOPXz4eW1wZW4PmDk9uLelYpA= -golang.org/x/tools v0.1.12/go.mod h1:hNGJHUnrk76NpqgfD5Aqm5Crs+Hm0VOH/i9J2+nxYbc= -golang.org/x/xerrors v0.0.0-20190717185122-a985d3407aa7/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0= -golang.org/x/xerrors v0.0.0-20191011141410-1b5146add898/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0= -golang.org/x/xerrors v0.0.0-20191204190536-9bdfabe68543/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0= -golang.org/x/xerrors v0.0.0-20200804184101-5ec99f83aff1/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0= -google.golang.org/api v0.4.0/go.mod h1:8k5glujaEP+g9n7WNsDg8QP6cUVNI86fCNMcbazEtwE= -google.golang.org/api v0.7.0/go.mod 
h1:WtwebWUNSVBH/HAw79HIFXZNqEvBhG+Ra+ax0hx3E3M= -google.golang.org/api v0.8.0/go.mod h1:o4eAsZoiT+ibD93RtjEohWalFOjRDx6CVaqeizhEnKg= -google.golang.org/api v0.9.0/go.mod h1:o4eAsZoiT+ibD93RtjEohWalFOjRDx6CVaqeizhEnKg= -google.golang.org/api v0.13.0/go.mod h1:iLdEw5Ide6rF15KTC1Kkl0iskquN2gFfn9o9XIsbkAI= -google.golang.org/api v0.14.0/go.mod h1:iLdEw5Ide6rF15KTC1Kkl0iskquN2gFfn9o9XIsbkAI= -google.golang.org/api v0.15.0/go.mod h1:iLdEw5Ide6rF15KTC1Kkl0iskquN2gFfn9o9XIsbkAI= -google.golang.org/api v0.17.0/go.mod h1:BwFmGc8tA3vsd7r/7kR8DY7iEEGSU04BFxCo5jP/sfE= -google.golang.org/api v0.18.0/go.mod h1:BwFmGc8tA3vsd7r/7kR8DY7iEEGSU04BFxCo5jP/sfE= -google.golang.org/api v0.19.0/go.mod h1:BwFmGc8tA3vsd7r/7kR8DY7iEEGSU04BFxCo5jP/sfE= -google.golang.org/api v0.20.0/go.mod h1:BwFmGc8tA3vsd7r/7kR8DY7iEEGSU04BFxCo5jP/sfE= -google.golang.org/api v0.22.0/go.mod h1:BwFmGc8tA3vsd7r/7kR8DY7iEEGSU04BFxCo5jP/sfE= -google.golang.org/api v0.24.0/go.mod h1:lIXQywCXRcnZPGlsd8NbLnOjtAoL6em04bJ9+z0MncE= -google.golang.org/api v0.28.0/go.mod h1:lIXQywCXRcnZPGlsd8NbLnOjtAoL6em04bJ9+z0MncE= -google.golang.org/api v0.29.0/go.mod h1:Lcubydp8VUV7KeIHD9z2Bys/sm/vGKnG1UHuDBSrHWM= -google.golang.org/api v0.30.0/go.mod h1:QGmEvQ87FHZNiUVJkT14jQNYJ4ZJjdRF23ZXz5138Fc= -google.golang.org/appengine v1.1.0/go.mod h1:EbEs0AVv82hx2wNQdGPgUI5lhzA/G0D9YwlJXL52JkM= -google.golang.org/appengine v1.4.0/go.mod h1:xpcJRLb0r/rnEns0DIKYYv+WjYCduHsrkT7/EB5XEv4= -google.golang.org/appengine v1.5.0/go.mod h1:xpcJRLb0r/rnEns0DIKYYv+WjYCduHsrkT7/EB5XEv4= -google.golang.org/appengine v1.6.1/go.mod h1:i06prIuMbXzDqacNJfV5OdTW448YApPu5ww/cMBSeb0= -google.golang.org/appengine v1.6.5/go.mod h1:8WjMMxjGQR8xUklV/ARdw2HLXBOI7O7uCIDZVag1xfc= -google.golang.org/appengine v1.6.6/go.mod h1:8WjMMxjGQR8xUklV/ARdw2HLXBOI7O7uCIDZVag1xfc= -google.golang.org/appengine v1.6.7 h1:FZR1q0exgwxzPzp/aF+VccGrSfxfPpkBqjIIEq3ru6c= -google.golang.org/appengine v1.6.7/go.mod h1:8WjMMxjGQR8xUklV/ARdw2HLXBOI7O7uCIDZVag1xfc= -google.golang.org/genproto 
v0.0.0-20180817151627-c66870c02cf8/go.mod h1:JiN7NxoALGmiZfu7CAH4rXhgtRTLTxftemlI0sWmxmc= -google.golang.org/genproto v0.0.0-20190307195333-5fe7a883aa19/go.mod h1:VzzqZJRnGkLBvHegQrXjBqPurQTc5/KpmUdxsrq26oE= -google.golang.org/genproto v0.0.0-20190418145605-e7d98fc518a7/go.mod h1:VzzqZJRnGkLBvHegQrXjBqPurQTc5/KpmUdxsrq26oE= -google.golang.org/genproto v0.0.0-20190425155659-357c62f0e4bb/go.mod h1:VzzqZJRnGkLBvHegQrXjBqPurQTc5/KpmUdxsrq26oE= -google.golang.org/genproto v0.0.0-20190502173448-54afdca5d873/go.mod h1:VzzqZJRnGkLBvHegQrXjBqPurQTc5/KpmUdxsrq26oE= -google.golang.org/genproto v0.0.0-20190801165951-fa694d86fc64/go.mod h1:DMBHOl98Agz4BDEuKkezgsaosCRResVns1a3J2ZsMNc= -google.golang.org/genproto v0.0.0-20190819201941-24fa4b261c55/go.mod h1:DMBHOl98Agz4BDEuKkezgsaosCRResVns1a3J2ZsMNc= -google.golang.org/genproto v0.0.0-20190911173649-1774047e7e51/go.mod h1:IbNlFCBrqXvoKpeg0TB2l7cyZUmoaFKYIwrEpbDKLA8= -google.golang.org/genproto v0.0.0-20191108220845-16a3f7862a1a/go.mod h1:n3cpQtvxv34hfy77yVDNjmbRyujviMdxYliBSkLhpCc= -google.golang.org/genproto v0.0.0-20191115194625-c23dd37a84c9/go.mod h1:n3cpQtvxv34hfy77yVDNjmbRyujviMdxYliBSkLhpCc= -google.golang.org/genproto v0.0.0-20191216164720-4f79533eabd1/go.mod h1:n3cpQtvxv34hfy77yVDNjmbRyujviMdxYliBSkLhpCc= -google.golang.org/genproto v0.0.0-20191230161307-f3c370f40bfb/go.mod h1:n3cpQtvxv34hfy77yVDNjmbRyujviMdxYliBSkLhpCc= -google.golang.org/genproto v0.0.0-20200115191322-ca5a22157cba/go.mod h1:n3cpQtvxv34hfy77yVDNjmbRyujviMdxYliBSkLhpCc= -google.golang.org/genproto v0.0.0-20200122232147-0452cf42e150/go.mod h1:n3cpQtvxv34hfy77yVDNjmbRyujviMdxYliBSkLhpCc= -google.golang.org/genproto v0.0.0-20200204135345-fa8e72b47b90/go.mod h1:GmwEX6Z4W5gMy59cAlVYjN9JhxgbQH6Gn+gFDQe2lzA= -google.golang.org/genproto v0.0.0-20200212174721-66ed5ce911ce/go.mod h1:55QSHmfGQM9UVYDPBsyGGes0y52j32PQ3BqQfXhyH3c= -google.golang.org/genproto v0.0.0-20200224152610-e50cd9704f63/go.mod h1:55QSHmfGQM9UVYDPBsyGGes0y52j32PQ3BqQfXhyH3c= 
-google.golang.org/genproto v0.0.0-20200228133532-8c2c7df3a383/go.mod h1:55QSHmfGQM9UVYDPBsyGGes0y52j32PQ3BqQfXhyH3c= -google.golang.org/genproto v0.0.0-20200305110556-506484158171/go.mod h1:55QSHmfGQM9UVYDPBsyGGes0y52j32PQ3BqQfXhyH3c= -google.golang.org/genproto v0.0.0-20200312145019-da6875a35672/go.mod h1:55QSHmfGQM9UVYDPBsyGGes0y52j32PQ3BqQfXhyH3c= -google.golang.org/genproto v0.0.0-20200331122359-1ee6d9798940/go.mod h1:55QSHmfGQM9UVYDPBsyGGes0y52j32PQ3BqQfXhyH3c= -google.golang.org/genproto v0.0.0-20200430143042-b979b6f78d84/go.mod h1:55QSHmfGQM9UVYDPBsyGGes0y52j32PQ3BqQfXhyH3c= -google.golang.org/genproto v0.0.0-20200511104702-f5ebc3bea380/go.mod h1:55QSHmfGQM9UVYDPBsyGGes0y52j32PQ3BqQfXhyH3c= -google.golang.org/genproto v0.0.0-20200515170657-fc4c6c6a6587/go.mod h1:YsZOwe1myG/8QRHRsmBRE1LrgQY60beZKjly0O1fX9U= -google.golang.org/genproto v0.0.0-20200526211855-cb27e3aa2013/go.mod h1:NbSheEEYHJ7i3ixzK3sjbqSGDJWnxyFXZblF3eUsNvo= -google.golang.org/genproto v0.0.0-20200618031413-b414f8b61790/go.mod h1:jDfRM7FcilCzHH/e9qn6dsT145K34l5v+OpcnNgKAAA= -google.golang.org/genproto v0.0.0-20200729003335-053ba62fc06f/go.mod h1:FWY/as6DDZQgahTzZj3fqbO1CbirC29ZNUFHwi0/+no= -google.golang.org/genproto v0.0.0-20200804131852-c06518451d9c/go.mod h1:FWY/as6DDZQgahTzZj3fqbO1CbirC29ZNUFHwi0/+no= -google.golang.org/genproto v0.0.0-20200825200019-8632dd797987/go.mod h1:FWY/as6DDZQgahTzZj3fqbO1CbirC29ZNUFHwi0/+no= -google.golang.org/genproto v0.0.0-20201019141844-1ed22bb0c154/go.mod h1:FWY/as6DDZQgahTzZj3fqbO1CbirC29ZNUFHwi0/+no= -google.golang.org/grpc v1.19.0/go.mod h1:mqu4LbDTu4XGKhr4mRzUsmM4RtVoemTSY81AxZiDr8c= -google.golang.org/grpc v1.20.1/go.mod h1:10oTOabMzJvdu6/UiuZezV6QK5dSlG84ov/aaiqXj38= -google.golang.org/grpc v1.21.1/go.mod h1:oYelfM1adQP15Ek0mdvEgi9Df8B9CZIaU1084ijfRaM= -google.golang.org/grpc v1.23.0/go.mod h1:Y5yQAOtifL1yxbo5wqy6BxZv8vAUGQwXBOALyacEbxg= -google.golang.org/grpc v1.25.1/go.mod h1:c3i+UQWmh7LiEpx4sFZnkU36qjEYZ0imhYfXVyQciAY= -google.golang.org/grpc 
v1.26.0/go.mod h1:qbnxyOmOxrQa7FizSgH+ReBfzJrCY1pSN7KXBS8abTk= -google.golang.org/grpc v1.27.0/go.mod h1:qbnxyOmOxrQa7FizSgH+ReBfzJrCY1pSN7KXBS8abTk= -google.golang.org/grpc v1.27.1/go.mod h1:qbnxyOmOxrQa7FizSgH+ReBfzJrCY1pSN7KXBS8abTk= -google.golang.org/grpc v1.28.0/go.mod h1:rpkK4SK4GF4Ach/+MFLZUBavHOvF2JJB5uozKKal+60= -google.golang.org/grpc v1.29.1/go.mod h1:itym6AZVZYACWQqET3MqgPpjcuV5QH3BxFS3IjizoKk= -google.golang.org/grpc v1.30.0/go.mod h1:N36X2cJ7JwdamYAgDz+s+rVMFjt3numwzf/HckM8pak= -google.golang.org/grpc v1.31.0/go.mod h1:N36X2cJ7JwdamYAgDz+s+rVMFjt3numwzf/HckM8pak= -google.golang.org/protobuf v0.0.0-20200109180630-ec00e32a8dfd/go.mod h1:DFci5gLYBciE7Vtevhsrf46CRTquxDuWsQurQQe4oz8= -google.golang.org/protobuf v0.0.0-20200221191635-4d8936d0db64/go.mod h1:kwYJMbMJ01Woi6D6+Kah6886xMZcty6N08ah7+eCXa0= -google.golang.org/protobuf v0.0.0-20200228230310-ab0ca4ff8a60/go.mod h1:cfTl7dwQJ+fmap5saPgwCLgHXTUD7jkjRqWcaiX5VyM= -google.golang.org/protobuf v1.20.1-0.20200309200217-e05f789c0967/go.mod h1:A+miEFZTKqfCUM6K7xSMQL9OKL/b6hQv+e19PK+JZNE= -google.golang.org/protobuf v1.21.0/go.mod h1:47Nbq4nVaFHyn7ilMalzfO3qCViNmqZ2kzikPIcrTAo= -google.golang.org/protobuf v1.22.0/go.mod h1:EGpADcykh3NcUnDUJcl1+ZksZNG86OlYog2l/sGQquU= -google.golang.org/protobuf v1.23.0/go.mod h1:EGpADcykh3NcUnDUJcl1+ZksZNG86OlYog2l/sGQquU= -google.golang.org/protobuf v1.23.1-0.20200526195155-81db48ad09cc/go.mod h1:EGpADcykh3NcUnDUJcl1+ZksZNG86OlYog2l/sGQquU= -google.golang.org/protobuf v1.24.0/go.mod h1:r/3tXBNzIEhYS9I1OUVjXDlt8tc493IdKGjtUeSXeh4= -google.golang.org/protobuf v1.25.0/go.mod h1:9JNX74DMeImyA3h4bdi1ymwjUzf21/xIlbajtzgsN7c= -google.golang.org/protobuf v1.26.0-rc.1/go.mod h1:jlhhOSvTdKEhbULTjvd4ARK9grFBp09yW+WbY/TyQbw= -google.golang.org/protobuf v1.26.0/go.mod h1:9q0QmTI4eRPtz6boOQmLYwt+qCgq0jsYwAQnmE0givc= -google.golang.org/protobuf v1.28.1 h1:d0NfwRgPtno5B1Wa6L2DAG+KivqkdutMf1UhdNx175w= -google.golang.org/protobuf v1.28.1/go.mod h1:HV8QOd/L58Z+nl8r43ehVNZIU/HEI6OcFqwMG9pJV4I= 
-gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0= -gopkg.in/check.v1 v1.0.0-20180628173108-788fd7840127/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0= -gopkg.in/check.v1 v1.0.0-20190902080502-41f04d3bba15/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0= -gopkg.in/check.v1 v1.0.0-20200227125254-8fa46927fb4f h1:BLraFXnmrev5lT+xlilqcH8XK9/i0At2xKjWk4p6zsU= -gopkg.in/check.v1 v1.0.0-20200227125254-8fa46927fb4f/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0= -gopkg.in/errgo.v2 v2.1.0/go.mod h1:hNsd1EY+bozCKY1Ytp96fpM3vjJbqLJn88ws8XvfDNI= -gopkg.in/inf.v0 v0.9.1 h1:73M5CoZyi3ZLMOyDlQh031Cx6N9NDJ2Vvfl76EDAgDc= -gopkg.in/inf.v0 v0.9.1/go.mod h1:cWUDdTG/fYaXco+Dcufb5Vnc6Gp2YChqWtbxRZE0mXw= -gopkg.in/yaml.v2 v2.2.2/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI= -gopkg.in/yaml.v2 v2.2.8/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI= -gopkg.in/yaml.v2 v2.4.0 h1:D8xgwECY7CYvx+Y2n4sBz93Jn9JRvxdiyyo8CTfuKaY= -gopkg.in/yaml.v2 v2.4.0/go.mod h1:RDklbk79AGWmwhnvt/jBztapEOGDOx6ZbXqjP6csGnQ= -gopkg.in/yaml.v3 v3.0.0-20200313102051-9f266ea9e77c/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM= -gopkg.in/yaml.v3 v3.0.0-20200615113413-eeeca48fe776/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM= -gopkg.in/yaml.v3 v3.0.1 h1:fxVm/GzAzEWqLHuvctI91KS9hhNmmWOoWu0XTYJS7CA= -gopkg.in/yaml.v3 v3.0.1/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM= -honnef.co/go/tools v0.0.0-20190102054323-c2f93a96b099/go.mod h1:rf3lG4BRIbNafJWhAfAdb/ePZxsR/4RtNHQocxwk9r4= -honnef.co/go/tools v0.0.0-20190106161140-3f1c8253044a/go.mod h1:rf3lG4BRIbNafJWhAfAdb/ePZxsR/4RtNHQocxwk9r4= -honnef.co/go/tools v0.0.0-20190418001031-e561f6794a2a/go.mod h1:rf3lG4BRIbNafJWhAfAdb/ePZxsR/4RtNHQocxwk9r4= -honnef.co/go/tools v0.0.0-20190523083050-ea95bdfd59fc/go.mod h1:rf3lG4BRIbNafJWhAfAdb/ePZxsR/4RtNHQocxwk9r4= -honnef.co/go/tools v0.0.1-2019.2.3/go.mod h1:a3bituU0lyd329TUQxRnasdCoJDkEUEAqEt0JzvZhAg= 
-honnef.co/go/tools v0.0.1-2020.1.3/go.mod h1:X/FiERA/W4tHapMX5mGpAtMSVEeEUOyHaw9vFzvIQ3k= -honnef.co/go/tools v0.0.1-2020.1.4/go.mod h1:X/FiERA/W4tHapMX5mGpAtMSVEeEUOyHaw9vFzvIQ3k= -k8s.io/api v0.26.2 h1:dM3cinp3PGB6asOySalOZxEG4CZ0IAdJsrYZXE/ovGQ= -k8s.io/api v0.26.2/go.mod h1:1kjMQsFE+QHPfskEcVNgL3+Hp88B80uj0QtSOlj8itU= -k8s.io/apimachinery v0.26.2 h1:da1u3D5wfR5u2RpLhE/ZtZS2P7QvDgLZTi9wrNZl/tQ= -k8s.io/apimachinery v0.26.2/go.mod h1:ats7nN1LExKHvJ9TmwootT00Yz05MuYqPXEXaVeOy5I= -k8s.io/client-go v0.26.2 h1:s1WkVujHX3kTp4Zn4yGNFK+dlDXy1bAAkIl+cFAiuYI= -k8s.io/client-go v0.26.2/go.mod h1:u5EjOuSyBa09yqqyY7m3abZeovO/7D/WehVVlZ2qcqU= -k8s.io/klog/v2 v2.80.1 h1:atnLQ121W371wYYFawwYx1aEY2eUfs4l3J72wtgAwV4= -k8s.io/klog/v2 v2.80.1/go.mod h1:y1WjHnz7Dj687irZUWR/WLkLc5N1YHtjLdmgWjndZn0= -k8s.io/kube-openapi v0.0.0-20221012153701-172d655c2280 h1:+70TFaan3hfJzs+7VK2o+OGxg8HsuBr/5f6tVAjDu6E= -k8s.io/kube-openapi v0.0.0-20221012153701-172d655c2280/go.mod h1:+Axhij7bCpeqhklhUTe3xmOn6bWxolyZEeyaFpjGtl4= -k8s.io/utils v0.0.0-20221107191617-1a15be271d1d h1:0Smp/HP1OH4Rvhe+4B8nWGERtlqAGSftbSbbmm45oFs= -k8s.io/utils v0.0.0-20221107191617-1a15be271d1d/go.mod h1:OLgZIPagt7ERELqWJFomSt595RzquPNLL48iOWgYOg0= -rsc.io/binaryregexp v0.2.0/go.mod h1:qTv7/COck+e2FymRvadv62gMdZztPaShugOCi3I+8D8= -rsc.io/quote/v3 v3.1.0/go.mod h1:yEA65RcK8LyAZtP9Kv3t0HmxON59tX3rD+tICJqUlj0= -rsc.io/sampler v1.3.0/go.mod h1:T1hPZKmBbMNahiBKFy5HrXp6adAjACjK9JXDnKaTXpA= -sigs.k8s.io/json v0.0.0-20220713155537-f223a00ba0e2 h1:iXTIw73aPyC+oRdyqqvVJuloN1p0AC/kzH07hu3NE+k= -sigs.k8s.io/json v0.0.0-20220713155537-f223a00ba0e2/go.mod h1:B8JuhiUyNFVKdsE8h686QcCxMaH6HrOAZj4vswFpcB0= -sigs.k8s.io/structured-merge-diff/v4 v4.2.3 h1:PRbqxJClWWYMNV1dhaG4NsibJbArud9kFxnAMREiWFE= -sigs.k8s.io/structured-merge-diff/v4 v4.2.3/go.mod h1:qjx8mGObPmV2aSZepjQjbmb2ihdVs8cGKBraizNC69E= -sigs.k8s.io/yaml v1.3.0 h1:a2VclLzOGrwOHDiV8EfBGhvjHvP46CtW5j6POvhYGGo= -sigs.k8s.io/yaml v1.3.0/go.mod 
h1:GeOyir5tyXNByN85N/dRIT9es5UQNerPYEKK56eTBm8=
diff --git a/infrastructure/sandbox/JITProvisioner/ingress_destroyer/main.go b/infrastructure/sandbox/JITProvisioner/ingress_destroyer/main.go
deleted file mode 100644
index 328a68ae9..000000000
--- a/infrastructure/sandbox/JITProvisioner/ingress_destroyer/main.go
+++ /dev/null
@@ -1,118 +0,0 @@
-package main
-
-import (
-	"context"
-	"fmt"
-	"log"
-	"os"
-	"os/exec"
-	"time"
-
-	"github.com/aws/aws-sdk-go/aws"
-	"github.com/aws/aws-sdk-go/aws/session"
-	"github.com/aws/aws-sdk-go/service/dynamodb"
-	v1 "k8s.io/apimachinery/pkg/apis/meta/v1"
-	"k8s.io/client-go/kubernetes"
-	"k8s.io/client-go/tools/clientcmd"
-)
-
-func main() {
-	log.SetFlags(log.LstdFlags | log.Lshortfile)
-	instanceID := getOrPanic("INSTANCE_ID")
-	ddbTable := getOrPanic("DYNAMODB_LIFECYCLE_TABLE")
-	clusterName := getOrPanic("CLUSTER_NAME")
-
-	deleteIngress(instanceID, clusterName, ddbTable)
-}
-
-func getOrPanic(env string) string {
-	s, ok := os.LookupEnv(env)
-	if !ok {
-		panic(fmt.Sprintf("%s not found", env))
-	}
-	return s
-}
-
-func deleteIngress(id, name, ddbTable string) {
-
-	sess, err := session.NewSession()
-	if err != nil {
-		panic(err)
-	}
-	// AWS_PROFILE=Sandbox aws eks --region us-east-2 update-kubeconfig --name sandbox-prod
-	conf := os.TempDir() + "/kube-config"
-	cmd := exec.Command("aws", "eks", "update-kubeconfig", "--name", name, "--kubeconfig", conf)
-	cmd.Env = os.Environ()
-	buf, err := cmd.CombinedOutput()
-	if err != nil {
-		log.Println(cmd.String())
-		log.Println(string(buf))
-		log.Fatal(err)
-	}
-
-	config, err := clientcmd.BuildConfigFromFlags("", conf)
-	if err != nil {
-		log.Fatal(err)
-	}
-
-	clientset, err := kubernetes.NewForConfig(config)
-	if err != nil {
-		log.Fatal(err)
-	}
-
-	// Delete the ingress using the Kubernetes clientset
-	err = clientset.NetworkingV1().Ingresses("default").Delete(context.Background(), id, v1.DeleteOptions{})
-	if err != nil {
-		log.Fatal(err)
-	}
-
-	// Delete the cronjob so we don't spam the database for stuff that's not running
-	err = clientset.BatchV1().CronJobs("default").Delete(context.Background(), id, v1.DeleteOptions{})
-	if err != nil {
-		log.Fatal(err)
-	}
-
-	// Scale it down to save money
-	time.Sleep(60)
-	s, err := clientset.AppsV1().Deployments("default").GetScale(context.Background(), id, v1.GetOptions{})
-	if err != nil {
-		log.Fatal(err)
-	}
-
-	sc := *s
-	sc.Spec.Replicas = 0
-	_, err = clientset.AppsV1().Deployments("default").UpdateScale(context.Background(), id, &sc, v1.UpdateOptions{})
-	if err != nil {
-		log.Fatal(err)
-	}
-
-	svc := dynamodb.New(sess)
-	err = updateFleetInstanceState(id, ddbTable, svc)
-	if err != nil {
-		log.Fatal(err)
-	}
-
-	log.Printf("Ingress %s deleted\n", id)
-}
-
-func updateFleetInstanceState(id, table string, svc *dynamodb.DynamoDB) (err error) {
-	log.Printf("updating instance: %+v", id)
-	// Perform a conditional update to claim the item
-	input := &dynamodb.UpdateItemInput{
-		TableName: aws.String(table),
-		Key: map[string]*dynamodb.AttributeValue{
-			"ID": {
-				S: aws.String(id),
-			},
-		},
-		UpdateExpression:          aws.String("set #fleet_state = :v2"),
-		ExpressionAttributeNames:  map[string]*string{"#fleet_state": aws.String("State")},
-		ExpressionAttributeValues: map[string]*dynamodb.AttributeValue{
-			":v2": {
-				S: aws.String("ingress_destroyed"),
-			},
-		},
-	}
-	_, err = svc.UpdateItem(input)
-	return
-}
diff --git a/infrastructure/sandbox/JITProvisioner/jitprovisioner.tf b/infrastructure/sandbox/JITProvisioner/jitprovisioner.tf
deleted file mode 100644
index f7a1daea0..000000000
--- a/infrastructure/sandbox/JITProvisioner/jitprovisioner.tf
+++ /dev/null
@@ -1,260 +0,0 @@
-resource "aws_lb_listener_rule" "jitprovisioner" {
-  listener_arn = var.alb_listener.arn
-  priority     = 100
-
-  action {
-    type             = "forward"
-    target_group_arn = aws_lb_target_group.jitprovisioner.arn
-  }
-
-  condition {
-    host_header {
-      values = [var.base_domain]
-    }
-  }
-}
-
-resource "aws_lb_target_group_attachment" "jitprovisioner" {
-  target_group_arn = aws_lb_target_group.jitprovisioner.arn
-  target_id        = aws_lambda_function.jitprovisioner.arn
-  depends_on       = [aws_lambda_permission.jitprovisioner]
-}
-
-resource "aws_lambda_permission" "jitprovisioner" {
-  action        = "lambda:InvokeFunction"
-  function_name = aws_lambda_function.jitprovisioner.arn
-  principal     = "elasticloadbalancing.amazonaws.com"
-  source_arn    = aws_lb_target_group.jitprovisioner.arn
-}
-
-resource "aws_lb_target_group" "jitprovisioner" {
-  name                               = "${local.full_name}-lambda"
-  target_type                        = "lambda"
-  lambda_multi_value_headers_enabled = true
-}
-
-data "aws_iam_policy_document" "lambda_assume_role" {
-  statement {
-    actions = ["sts:AssumeRole"]
-    principals {
-      type        = "Service"
-      identifiers = ["lambda.amazonaws.com"]
-    }
-  }
-}
-
-resource "aws_iam_role" "jitprovisioner" {
-  name               = "${var.prefix}-lambda"
-  assume_role_policy = data.aws_iam_policy_document.lambda_assume_role.json
-}
-
-resource "aws_iam_role_policy_attachment" "jitprovisioner-ecr" {
-  role       = aws_iam_role.jitprovisioner.name
-  policy_arn = "arn:aws:iam::aws:policy/AmazonEKSFargatePodExecutionRolePolicy"
-}
-
-resource "aws_iam_role_policy_attachment" "jitprovisioner-vpc" {
-  role       = aws_iam_role.jitprovisioner.name
-  policy_arn = "arn:aws:iam::aws:policy/service-role/AWSLambdaVPCAccessExecutionRole"
-}
-
-resource "aws_iam_policy" "jitprovisioner" {
-  name   = "${var.prefix}-jitprovisioner"
-  policy = data.aws_iam_policy_document.jitprovisioner.json
-}
-
-data "aws_iam_policy_document" "jitprovisioner" {
-  statement {
-    actions = [
-      "dynamodb:BatchGetItem",
-      "dynamodb:BatchWriteItem",
-      "dynamodb:ConditionCheckItem",
-      "dynamodb:PutItem",
-      "dynamodb:DescribeTable",
-      "dynamodb:DeleteItem",
-      "dynamodb:GetItem",
-      "dynamodb:Scan",
-      "dynamodb:Query",
-      "dynamodb:UpdateItem",
-    ]
-    resources = [var.dynamodb_table.arn, "${var.dynamodb_table.arn}/*"]
-  }
-
-  statement {
-    actions = [ #tfsec:ignore:aws-iam-no-policy-wildcards
-      "kms:Encrypt*",
-      "kms:Decrypt*",
-      "kms:ReEncrypt*",
-      "kms:GenerateDataKey*",
-      "kms:Describe*"
-    ]
-    resources = [var.kms_key.arn, var.mysql_secret_kms.arn]
-  }
-
-  statement {
-    actions   = ["states:StartExecution"]
-    resources = [aws_sfn_state_machine.main.arn]
-  }
-
-  statement {
-    actions   = ["states:DescribeExecution"]
-    resources = ["*"]
-  }
-
-  statement {
-    actions = [
-      "secretsmanager:GetResourcePolicy",
-      "secretsmanager:GetSecretValue",
-      "secretsmanager:DescribeSecret",
-      "secretsmanager:ListSecretVersionIds"
-    ]
-    resources = [var.mysql_secret.arn]
-  }
-
-  statement {
-    actions   = ["secretsmanager:ListSecrets"]
-    resources = ["*"]
-  }
-}
-
-resource "aws_iam_role_policy_attachment" "jitprovisioner" {
-  role       = aws_iam_role.jitprovisioner.name
-  policy_arn = aws_iam_policy.jitprovisioner.arn
-}
-
-resource "aws_lambda_function" "jitprovisioner" {
-  # If the file is not in the current working directory you will need to include a
-  # path.module in the filename.
-  image_uri                      = docker_registry_image.jitprovisioner.name
-  package_type                   = "Image"
-  function_name                  = "${var.prefix}-lambda"
-  role                           = aws_iam_role.jitprovisioner.arn
-  reserved_concurrent_executions = -1
-  kms_key_arn                    = var.kms_key.arn
-  timeout                        = 10
-  memory_size                    = 512
-  vpc_config {
-    security_group_ids = [aws_security_group.jitprovisioner.id]
-    subnet_ids         = var.vpc.private_subnets
-  }
-  tracing_config {
-    mode = "Active"
-  }
-  environment {
-    variables = {
-      DYNAMODB_LIFECYCLE_TABLE = var.dynamodb_table.id
-      LIFECYCLE_SFN            = aws_sfn_state_machine.main.arn
-      FLEET_BASE_URL           = "${var.base_domain}"
-      AUTHORIZATION_PSK        = random_password.authorization.result
-      MYSQL_SECRET             = var.mysql_secret.arn
-    }
-  }
-}
-
-module "jitprovisioner-lambda-warmer" {
-  source        = "Nuagic/lambda-warmer/aws"
-  version       = "3.0.1"
-  function_name = aws_lambda_function.jitprovisioner.function_name
-  function_arn  = aws_lambda_function.jitprovisioner.arn
-  # This just needs to have a request to parse.
- input = < github.com/fleetdm/nanodep v0.1.1-0.20221221202251-71b67ab1da24 - -replace github.com/micromdm/scep/v2 => github.com/fleetdm/scep/v2 v2.1.1-0.20220729212655-4f19f0a10a03 diff --git a/infrastructure/sandbox/JITProvisioner/lambda/go.sum b/infrastructure/sandbox/JITProvisioner/lambda/go.sum deleted file mode 100644 index ec0791382..000000000 --- a/infrastructure/sandbox/JITProvisioner/lambda/go.sum +++ /dev/null @@ -1,1219 +0,0 @@ -cloud.google.com/go v0.26.0/go.mod h1:aQUYkXzVsufM+DwF1aE+0xfcU+56JwCaLick0ClmMTw= -cloud.google.com/go v0.34.0/go.mod h1:aQUYkXzVsufM+DwF1aE+0xfcU+56JwCaLick0ClmMTw= -cloud.google.com/go v0.38.0/go.mod h1:990N+gfupTy94rShfmMCWGDn0LpTmnzTp2qbd1dvSRU= -cloud.google.com/go v0.44.1/go.mod h1:iSa0KzasP4Uvy3f1mN/7PiObzGgflwredwwASm/v6AU= -cloud.google.com/go v0.44.2/go.mod h1:60680Gw3Yr4ikxnPRS/oxxkBccT6SA1yMk63TGekxKY= -cloud.google.com/go v0.44.3/go.mod h1:60680Gw3Yr4ikxnPRS/oxxkBccT6SA1yMk63TGekxKY= -cloud.google.com/go v0.45.1/go.mod h1:RpBamKRgapWJb87xiFSdk4g1CME7QZg3uwTez+TSTjc= -cloud.google.com/go v0.46.3/go.mod h1:a6bKKbmY7er1mI7TEI4lsAkts/mkhTSZK8w33B4RAg0= -cloud.google.com/go v0.50.0/go.mod h1:r9sluTvynVuxRIOHXQEHMFffphuXHOMZMycpNR5e6To= -cloud.google.com/go v0.52.0/go.mod h1:pXajvRH/6o3+F9jDHZWQ5PbGhn+o8w9qiu/CffaVdO4= -cloud.google.com/go v0.53.0/go.mod h1:fp/UouUEsRkN6ryDKNW/Upv/JBKnv6WDthjR6+vze6M= -cloud.google.com/go v0.54.0/go.mod h1:1rq2OEkV3YMf6n/9ZvGWI3GWw0VoqH/1x2nd8Is/bPc= -cloud.google.com/go v0.56.0/go.mod h1:jr7tqZxxKOVYizybht9+26Z/gUq7tiRzu+ACVAMbKVk= -cloud.google.com/go v0.57.0/go.mod h1:oXiQ6Rzq3RAkkY7N6t3TcE6jE+CIBBbA36lwQ1JyzZs= -cloud.google.com/go v0.62.0/go.mod h1:jmCYTdRCQuc1PHIIJ/maLInMho30T/Y0M4hTdTShOYc= -cloud.google.com/go v0.65.0/go.mod h1:O5N8zS7uWy9vkA9vayVHs65eM1ubvY4h553ofrNHObY= -cloud.google.com/go v0.72.0/go.mod h1:M+5Vjvlc2wnp6tjzE102Dw08nGShTscUx2nZMufOKPI= -cloud.google.com/go v0.74.0/go.mod h1:VV1xSbzvo+9QJOxLDaJfTjx5e+MePCpCWwvftOeQmWk= -cloud.google.com/go v0.75.0/go.mod 
h1:VGuuCn7PG0dwsd5XPVm2Mm3wlh3EL55/79EKB6hlPTY= -cloud.google.com/go v0.110.1 h1:oDJ19Fu9TX9Xs06iyCw4yifSqZ7JQ8BeuVHcTmWQlOA= -cloud.google.com/go v0.110.1/go.mod h1:uc+V/WjzxQ7vpkxfJhgW4Q4axWXyfAerpQOuSNDZyFw= -cloud.google.com/go/bigquery v1.0.1/go.mod h1:i/xbL2UlR5RvWAURpBYZTtm/cXjCha9lbfbpx4poX+o= -cloud.google.com/go/bigquery v1.3.0/go.mod h1:PjpwJnslEMmckchkHFfq+HTD2DmtT67aNFKH1/VBDHE= -cloud.google.com/go/bigquery v1.4.0/go.mod h1:S8dzgnTigyfTmLBfrtrhyYhwRxG72rYxvftPBK2Dvzc= -cloud.google.com/go/bigquery v1.5.0/go.mod h1:snEHRnqQbz117VIFhE8bmtwIDY80NLUZUMb4Nv6dBIg= -cloud.google.com/go/bigquery v1.7.0/go.mod h1://okPTzCYNXSlb24MZs83e2Do+h+VXtc4gLoIoXIAPc= -cloud.google.com/go/bigquery v1.8.0/go.mod h1:J5hqkt3O0uAFnINi6JXValWIb1v0goeZM77hZzJN/fQ= -cloud.google.com/go/compute v1.19.2 h1:GbJtPo8OKVHbVep8jvM57KidbYHxeE68LOVqouNLrDY= -cloud.google.com/go/compute v1.19.2/go.mod h1:5f5a+iC1IriXYauaQ0EyQmEAEq9CGRnV5xJSQSlTV08= -cloud.google.com/go/compute/metadata v0.2.3 h1:mg4jlk7mCAj6xXp9UJ4fjI9VUI5rubuGBW5aJ7UnBMY= -cloud.google.com/go/compute/metadata v0.2.3/go.mod h1:VAV5nSsACxMJvgaAuX6Pk2AawlZn8kiOGuCv6gTkwuA= -cloud.google.com/go/datastore v1.0.0/go.mod h1:LXYbyblFSglQ5pkeyhO+Qmw7ukd3C+pD7TKLgZqpHYE= -cloud.google.com/go/datastore v1.1.0/go.mod h1:umbIZjpQpHh4hmRpGhH4tLFup+FVzqBi1b3c64qFpCk= -cloud.google.com/go/iam v1.0.1 h1:lyeCAU6jpnVNrE9zGQkTl3WgNgK/X+uWwaw0kynZJMU= -cloud.google.com/go/iam v1.0.1/go.mod h1:yR3tmSL8BcZB4bxByRv2jkSIahVmCtfKZwLYGBalRE8= -cloud.google.com/go/kms v1.10.2 h1:8UePKEypK3SQ6g+4mn/s/VgE5L7XOh+FwGGRUqvY3Hw= -cloud.google.com/go/kms v1.10.2/go.mod h1:9mX3Q6pdroWzL20pbK6RaOdBbXBEhMNgK4Pfz2bweb4= -cloud.google.com/go/longrunning v0.4.1 h1:v+yFJOfKC3yZdY6ZUI933pIYdhyhV8S3NpWrXWmg7jM= -cloud.google.com/go/longrunning v0.4.1/go.mod h1:4iWDqhBZ70CvZ6BfETbvam3T8FMvLK+eFj0E6AaRQTo= -cloud.google.com/go/pubsub v1.0.1/go.mod h1:R0Gpsv3s54REJCy4fxDixWD93lHJMoZTyQ2kNxGRt3I= -cloud.google.com/go/pubsub v1.1.0/go.mod 
h1:EwwdRX2sKPjnvnqCa270oGRyludottCI76h+R3AArQw= -cloud.google.com/go/pubsub v1.2.0/go.mod h1:jhfEVHT8odbXTkndysNHCcx0awwzvfOlguIAii9o8iA= -cloud.google.com/go/pubsub v1.3.1/go.mod h1:i+ucay31+CNRpDW4Lu78I4xXG+O1r/MAHgjpRVR+TSU= -cloud.google.com/go/pubsub v1.30.0 h1:vCge8m7aUKBJYOgrZp7EsNDf6QMd2CAlXZqWTn3yq6s= -cloud.google.com/go/pubsub v1.30.0/go.mod h1:qWi1OPS0B+b5L+Sg6Gmc9zD1Y+HaM0MdUr7LsupY1P4= -cloud.google.com/go/storage v1.0.0/go.mod h1:IhtSnM/ZTZV8YYJWCY8RULGVqBDmpoyjwiyrjsg+URw= -cloud.google.com/go/storage v1.5.0/go.mod h1:tpKbwo567HUNpVclU5sGELwQWBDZ8gh0ZeosJ0Rtdos= -cloud.google.com/go/storage v1.6.0/go.mod h1:N7U0C8pVQ/+NIKOBQyamJIeKQKkZ+mxpohlUTyfDhBk= -cloud.google.com/go/storage v1.8.0/go.mod h1:Wv1Oy7z6Yz3DshWRJFhqM/UCfaWIRTdp0RXyy7KQOVs= -cloud.google.com/go/storage v1.10.0/go.mod h1:FLPqc6j+Ki4BU591ie1oL6qBQGu2Bl/tZ9ullr3+Kg0= -cloud.google.com/go/storage v1.14.0/go.mod h1:GrKmX003DSIwi9o29oFT7YDnHYwZoctc3fOKtUw0Xmo= -dmitri.shuralyov.com/gpu/mtl v0.0.0-20190408044501-666a987793e9/go.mod h1:H6x//7gZCb22OMCxBHrMx7a5I7Hp++hsVxbQ4BYO7hU= -github.com/BurntSushi/toml v0.3.1/go.mod h1:xHWCNGjB5oqiDr8zfno3MHue2Ht5sIBksp03qcyfWMU= -github.com/BurntSushi/toml v1.2.1 h1:9F2/+DoOYIOksmaJFPw1tGFy1eDnIJXg+UHjuD8lTak= -github.com/BurntSushi/toml v1.2.1/go.mod h1:CxXYINrC8qIiEnFrOxCa7Jy5BFHlXnUU2pbicEuybxQ= -github.com/BurntSushi/xgb v0.0.0-20160522181843-27f122750802/go.mod h1:IVnqGOEym/WlBOVXweHU+Q+/VP0lqqI8lqeDx9IjBqo= -github.com/DATA-DOG/go-sqlmock v1.5.0 h1:Shsta01QNfFxHCfpW6YH2STWB0MudeXXEWMr20OEh60= -github.com/DATA-DOG/go-sqlmock v1.5.0/go.mod h1:f/Ixk793poVmq4qj/V1dPUg2JEAKC73Q5eFN3EC/SaM= -github.com/OneOfOne/xxhash v1.2.2/go.mod h1:HSdplMjZKSmBqAxg5vPj2TmRDmfkzw+cTzAElWljhcU= -github.com/OneOfOne/xxhash v1.2.8 h1:31czK/TI9sNkxIKfaUfGlU47BAxQ0ztGgd9vPyqimf8= -github.com/OneOfOne/xxhash v1.2.8/go.mod h1:eZbhyaAYD41SGSSsnmcpxVoRiQ/MPUTjUdIIOT9Um7Q= -github.com/Pallinder/go-randomdata v1.2.0 h1:DZ41wBchNRb/0GfsePLiSwb0PHZmT67XY00lCDlaYPg= 
-github.com/Pallinder/go-randomdata v1.2.0/go.mod h1:yHmJgulpD2Nfrm0cR9tI/+oAgRqCQQixsA8HyRZfV9Y= -github.com/RobotsAndPencils/buford v0.14.0/go.mod h1:F5FvdB/nkMby8Pge6HFpPHgLOeUZne/iE5wKzvx64Y0= -github.com/VividCortex/gohistogram v1.0.0 h1:6+hBz+qvs0JOrrNhhmR7lFxo5sINxBCGXrdtl/UvroE= -github.com/VividCortex/gohistogram v1.0.0/go.mod h1:Pf5mBqqDxYaXu3hDrrU+w6nw50o/4+TcAqDqk/vUH7g= -github.com/VividCortex/mysqlerr v0.0.0-20170204212430-6c6b55f8796f h1:HR5nRmUQgXrwqZOwZ2DAc/aCi3Bu3xENpspW935vxu0= -github.com/VividCortex/mysqlerr v0.0.0-20170204212430-6c6b55f8796f/go.mod h1:f3HiCrHjHBdcm6E83vGaXh1KomZMA2P6aeo3hKx/wg0= -github.com/WatchBeam/clock v0.0.0-20170901150240-b08e6b4da7ea h1:C9Xwp9fZf9BFJMsTqs8P+4PETXwJPUOuJZwBfVci+4A= -github.com/WatchBeam/clock v0.0.0-20170901150240-b08e6b4da7ea/go.mod h1:N5eJIl14rhNCrE5I3O10HIyhZ1HpjaRHT9WDg1eXxtI= -github.com/XSAM/otelsql v0.10.0 h1:y8o7q4NaZEV0dBiUC7TuNTHNKyDaX3Z4anntNu7dfYw= -github.com/XSAM/otelsql v0.10.0/go.mod h1:7n9dZASOnVJncMmBPQjL5OdjQosb5gryCgsgNISnJVo= -github.com/a8m/expect v1.0.0/go.mod h1:4IwSCMumY49ScypDnjNbYEjgVeqy1/U2cEs3Lat96eA= -github.com/aai/gocrypto v0.0.0-20160205191751-93df0c47f8b8/go.mod h1:nE/FnVUmtbP0EbgMVCUtDrm1+86H47QfJIdcmZb+J1s= -github.com/agnivade/levenshtein v1.1.1 h1:QY8M92nrzkmr798gCo3kmMyqXFzdQVpxLlGPRBij0P8= -github.com/agnivade/levenshtein v1.1.1/go.mod h1:veldBMzWxcCG2ZvUTKD2kJNRdCk5hVbJomOvKkmgYbo= -github.com/akrylysov/algnhsa v0.12.1 h1:A9Ojt4hZrL77mhBc3qGO3Sn9reyf+tvM3DmR0SfXguc= -github.com/akrylysov/algnhsa v0.12.1/go.mod h1:xAcJ/X8DV+81e+dUjIoB/r5CbISrSXV9//leoMDHcdk= -github.com/alecthomas/template v0.0.0-20160405071501-a0175ee3bccc/go.mod h1:LOuyumcjzFXgccqObfd/Ljyb9UuFJ6TxHnclSeseNhc= -github.com/alecthomas/units v0.0.0-20151022065526-2efee857e7cf/go.mod h1:ybxpYRFXyAe+OPACYpWeL0wqObRcbAqCMya13uyzqw0= -github.com/andygrunwald/go-jira v1.16.0 h1:PU7C7Fkk5L96JvPc6vDVIrd99vdPnYudHu4ju2c2ikQ= -github.com/andygrunwald/go-jira v1.16.0/go.mod 
h1:UQH4IBVxIYWbgagc0LF/k9FRs9xjIiQ8hIcC6HfLwFU= -github.com/antihax/optional v1.0.0/go.mod h1:uupD/76wgC+ih3iEmQUL+0Ugr19nfwCT1kdvxnR2qWY= -github.com/arbovm/levenshtein v0.0.0-20160628152529-48b4e1c0c4d0 h1:jfIu9sQUG6Ig+0+Ap1h4unLjW6YQJpKZVmUzxsD4E/Q= -github.com/arbovm/levenshtein v0.0.0-20160628152529-48b4e1c0c4d0/go.mod h1:t2tdKJDJF9BV14lnkjHmOQgcvEKgtqs5a1N3LNdJhGE= -github.com/armon/consul-api v0.0.0-20180202201655-eb2c6b5be1b6/go.mod h1:grANhF5doyWs3UAsr3K4I6qtAmlQcZDesFNEHPZAzj8= -github.com/armon/go-radix v1.0.0 h1:F4z6KzEeeQIMeLFa97iZU6vupzoecKdU5TX24SNppXI= -github.com/armon/go-radix v1.0.0/go.mod h1:ufUuZ+zHj4x4TnLV4JWEpy2hxWSpsRywHrMgIH9cCH8= -github.com/aws/aws-lambda-go v1.9.0/go.mod h1:zUsUQhAUjYzR8AuduJPCfhBuKWUaDbQiPOG+ouzmE1A= -github.com/aws/aws-lambda-go v1.31.1 h1:ECZ4ECLm+watHJ+mjNK8D4gU66UVuR8MfqDKTr/Ffkc= -github.com/aws/aws-lambda-go v1.31.1/go.mod h1:IF5Q7wj4VyZyUFnZ54IQqeWtctHQ9tz+KhcbDenr220= -github.com/aws/aws-sdk-go v1.44.259 h1:7yDn1dcv4DZFMKpu+2exIH5O6ipNj9qXrKfdMUaIJwY= -github.com/aws/aws-sdk-go v1.44.259/go.mod h1:aVsgQcEevwlmQ7qHE9I3h+dtQgpqhFB+i8Phjh7fkwI= -github.com/beevik/etree v1.1.0 h1:T0xke/WvNtMoCqgzPhkX2r4rjY3GDZFi+FjpRZY2Jbs= -github.com/beevik/etree v1.1.0/go.mod h1:r8Aw8JqVegEf0w2fDnATrX9VpkMcyFeM0FhwO62wh+A= -github.com/beorn7/perks v0.0.0-20180321164747-3a771d992973/go.mod h1:Dwedo/Wpr24TaqPxmxbtue+5NUziq4I4S80YR8gNf3Q= -github.com/beorn7/perks v1.0.0/go.mod h1:KWe93zE9D1o94FZ5RNwFwVgaQK1VOXiVxmqh+CedLV8= -github.com/beorn7/perks v1.0.1 h1:VlbKKnNfV8bJzeqoa4cOKqO6bYr3WgKZxO8Z16+hsOM= -github.com/beorn7/perks v1.0.1/go.mod h1:G2ZrVWU2WbWT9wwq4/hrbKbnv/1ERSJQ0ibhJ6rlkpw= -github.com/boltdb/bolt v1.3.1 h1:JQmyP4ZBrce+ZQu0dY660FMfatumYDLun9hBCUVIkF4= -github.com/boltdb/bolt v1.3.1/go.mod h1:clJnj/oiGkjum5o1McbSZDSLxVThjynRyGBgiAx27Ps= -github.com/bytecodealliance/wasmtime-go/v3 v3.0.2 h1:3uZCA/BLTIu+DqCfguByNMJa2HVHpXvjfy0Dy7g6fuA= -github.com/bytecodealliance/wasmtime-go/v3 v3.0.2/go.mod 
h1:RnUjnIXxEJcL6BgCvNyzCCRzZcxCgsZCi+RNlvYor5Q= -github.com/bytedance/sonic v1.5.0/go.mod h1:ED5hyg4y6t3/9Ku1R6dU/4KyJ48DZ4jPhfY1O2AihPM= -github.com/bytedance/sonic v1.9.1 h1:6iJ6NqdoxCDr6mbY8h18oSO+cShGSMRGCEo7F2h0x8s= -github.com/bytedance/sonic v1.9.1/go.mod h1:i736AoUSYt75HyZLoJW9ERYxcy6eaN6h4BZXU064P/U= -github.com/cenkalti/backoff/v4 v4.2.1 h1:y4OZtCnogmCPw98Zjyt5a6+QwPLGkiQsYW5oUqylYbM= -github.com/cenkalti/backoff/v4 v4.2.1/go.mod h1:Y3VNntkOUPxTVeUxJ/G5vcM//AlwfmyYozVcomhLiZE= -github.com/census-instrumentation/opencensus-proto v0.2.1/go.mod h1:f6KPmirojxKA12rnyqOA5BBL4O983OfeGPqjHWSTneU= -github.com/cespare/xxhash v1.1.0 h1:a6HrQnmkObjyL+Gs60czilIUGqrzKutQD6XZog3p+ko= -github.com/cespare/xxhash v1.1.0/go.mod h1:XrSqR1VqqWfGrhpAt58auRo0WTKS1nRRg3ghfAqPWnc= -github.com/cespare/xxhash/v2 v2.1.1/go.mod h1:VGX0DQ3Q6kWi7AoAeZDth3/j3BFtOZR5XLFGgcrjCOs= -github.com/cespare/xxhash/v2 v2.2.0 h1:DC2CZ1Ep5Y4k3ZQ899DldepgrayRUGE6BBZ/cd9Cj44= -github.com/cespare/xxhash/v2 v2.2.0/go.mod h1:VGX0DQ3Q6kWi7AoAeZDth3/j3BFtOZR5XLFGgcrjCOs= -github.com/chenzhuoyu/base64x v0.0.0-20211019084208-fb5309c8db06/go.mod h1:DH46F32mSOjUmXrMHnKwZdA8wcEefY7UVqBKYGjpdQY= -github.com/chenzhuoyu/base64x v0.0.0-20221115062448-fe3a3abad311 h1:qSGYFH7+jGhDF8vLC+iwCD4WpbV1EBDSzWkJODFLams= -github.com/chenzhuoyu/base64x v0.0.0-20221115062448-fe3a3abad311/go.mod h1:b583jCggY9gE99b6G5LEC39OIiVsWj+R97kbl5odCEk= -github.com/chzyer/logex v1.1.10/go.mod h1:+Ywpsq7O8HXn0nuIou7OrIPyXbp3wmkHB+jjWRnGsAI= -github.com/chzyer/readline v0.0.0-20180603132655-2972be24d48e/go.mod h1:nSuG5e5PlCu98SY8svDHJxuZscDgtXS6KTTbou5AhLI= -github.com/chzyer/test v0.0.0-20180213035817-a1ea475d72b1/go.mod h1:Q3SI9o4m/ZMnBNeIyt5eFwwo7qiLfzFZmjNmxjkiQlU= -github.com/client9/misspell v0.3.4/go.mod h1:qj6jICC3Q7zFZvVWo7KLAzC3yx5G7kyvSDkc90ppPyw= -github.com/cncf/udpa/go v0.0.0-20191209042840-269d4d468f6f/go.mod h1:M8M6+tZqaGXZJjfX53e64911xZQV5JYwmTeXPW+k8Sc= -github.com/cncf/udpa/go v0.0.0-20200629203442-efcf912fb354/go.mod 
h1:WmhPx2Nbnhtbo57+VJT5O0JRkEi1Wbu0z5j0R8u5Hbk= -github.com/cncf/udpa/go v0.0.0-20201120205902-5459f2c99403/go.mod h1:WmhPx2Nbnhtbo57+VJT5O0JRkEi1Wbu0z5j0R8u5Hbk= -github.com/cncf/udpa/go v0.0.0-20210930031921-04548b0d99d4/go.mod h1:6pvJx4me5XPnfI9Z40ddWsdw2W/uZgQLFXToKeRcDiI= -github.com/cncf/xds/go v0.0.0-20210805033703-aa0b78936158/go.mod h1:eXthEFrGJvWHgFFCl3hGmgk+/aYT6PnTQLykKQRLhEs= -github.com/cncf/xds/go v0.0.0-20210922020428-25de7278fc84/go.mod h1:eXthEFrGJvWHgFFCl3hGmgk+/aYT6PnTQLykKQRLhEs= -github.com/cncf/xds/go v0.0.0-20211011173535-cb28da3451f1/go.mod h1:eXthEFrGJvWHgFFCl3hGmgk+/aYT6PnTQLykKQRLhEs= -github.com/coreos/bbolt v1.3.2/go.mod h1:iRUV2dpdMOn7Bo10OQBFzIJO9kkE559Wcmn+qkEiiKk= -github.com/coreos/etcd v3.3.10+incompatible/go.mod h1:uF7uidLiAD3TWHmW31ZFd/JWoc32PjwdhPthX9715RE= -github.com/coreos/go-semver v0.2.0/go.mod h1:nnelYz7RCh+5ahJtPPxZlU+153eP4D4r3EedlOD2RNk= -github.com/coreos/go-systemd v0.0.0-20190321100706-95778dfbb74e/go.mod h1:F5haX7vjVVG0kc13fIWeqUViNPyEJxv/OmvnBo0Yme4= -github.com/coreos/pkg v0.0.0-20180928190104-399ea9e2e55f/go.mod h1:E3G3o1h8I7cfcXa63jLwjI0eiQQMgzzUDFVpN/nH/eA= -github.com/cpuguy83/go-md2man/v2 v2.0.0/go.mod h1:maD7wRr/U5Z6m/iR4s+kqSMx2CaBsrgA7czyZG/E6dU= -github.com/cpuguy83/go-md2man/v2 v2.0.2/go.mod h1:tgQtvFlXSQOSOSIRvRPT7W67SCa46tRHOmNcaadrF8o= -github.com/creack/pty v1.1.9/go.mod h1:oKZEueFk5CKHvIhNR5MUki03XCEU+Q6VDXinZuGJ33E= -github.com/davecgh/go-spew v1.1.0/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38= -github.com/davecgh/go-spew v1.1.1 h1:vj9j/u1bqnvCEfJOwUhtlOARqs3+rkHYY13jYWTU97c= -github.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38= -github.com/denisenkom/go-mssqldb v0.10.0/go.mod h1:xbL0rPBG9cCiLr28tMa8zpbdarY27NDyej4t/EjAShU= -github.com/dgraph-io/badger/v3 v3.2103.5 h1:ylPa6qzbjYRQMU6jokoj4wzcaweHylt//CH0AKt0akg= -github.com/dgraph-io/badger/v3 v3.2103.5/go.mod h1:4MPiseMeDQ3FNCYwRbbcBOGJLf5jsE0PPFzRiKjtcdw= -github.com/dgraph-io/ristretto v0.1.1 
h1:6CWw5tJNgpegArSHpNHJKldNeq03FQCwYvfMVWajOK8= -github.com/dgraph-io/ristretto v0.1.1/go.mod h1:S1GPSBCYCIhmVNfcth17y2zZtQT6wzkzgwUve0VDWWA= -github.com/dgrijalva/jwt-go v3.2.0+incompatible/go.mod h1:E3ru+11k8xSBh+hMPgOLZmtrrCbhqsmaPHjLKYnJCaQ= -github.com/dgryski/go-sip13 v0.0.0-20181026042036-e10d5fee7954/go.mod h1:vAd38F8PWV+bWy6jNmig1y/TA+kYO4g3RSRF0IAv0no= -github.com/dgryski/trifles v0.0.0-20200323201526-dd97f9abfb48 h1:fRzb/w+pyskVMQ+UbP35JkH8yB7MYb4q/qhBarqZE6g= -github.com/dgryski/trifles v0.0.0-20200323201526-dd97f9abfb48/go.mod h1:if7Fbed8SFyPtHLHbg49SI7NAdJiC5WIA09pe59rfAA= -github.com/digitalocean/go-smbios v0.0.0-20180907143718-390a4f403a8e h1:vUmf0yezR0y7jJ5pceLHthLaYf4bA5T14B6q39S4q2Q= -github.com/digitalocean/go-smbios v0.0.0-20180907143718-390a4f403a8e/go.mod h1:YTIHhz/QFSYnu/EhlF2SpU2Uk+32abacUYA5ZPljz1A= -github.com/docker/go-units v0.5.0 h1:69rxXcBk27SvSaaxTtLh/8llcHD8vYHT7WSdRZ/jvr4= -github.com/docker/go-units v0.5.0/go.mod h1:fgPhTUdO+D/Jk86RDLlptpiXQzgHJF7gydDDbaIK4Dk= -github.com/doug-martin/goqu/v9 v9.18.0 h1:/6bcuEtAe6nsSMVK/M+fOiXUNfyFF3yYtE07DBPFMYY= -github.com/doug-martin/goqu/v9 v9.18.0/go.mod h1:nf0Wc2/hV3gYK9LiyqIrzBEVGlI8qW3GuDCEobC4wBQ= -github.com/dustin/go-humanize v1.0.1 h1:GzkhY7T5VNhEkwH0PVJgjz+fX1rhBrR7pRT3mDkpeCY= -github.com/dustin/go-humanize v1.0.1/go.mod h1:Mu1zIs6XwVuF/gI1OepvI0qD18qycQx+mFykh5fBlto= -github.com/elastic/go-licenser v0.4.0 h1:jLq6A5SilDS/Iz1ABRkO6BHy91B9jBora8FwGRsDqUI= -github.com/elastic/go-licenser v0.4.0/go.mod h1:V56wHMpmdURfibNBggaSBfqgPxyT1Tldns1i87iTEvU= -github.com/elastic/go-sysinfo v1.7.1 h1:Wx4DSARcKLllpKT2TnFVdSUJOsybqMYCNQZq1/wO+s0= -github.com/elastic/go-sysinfo v1.7.1/go.mod h1:i1ZYdU10oLNfRzq4vq62BEwD2fH8KaWh6eh0ikPT9F0= -github.com/elastic/go-windows v1.0.0/go.mod h1:TsU0Nrp7/y3+VwE82FoZF8gC/XFg/Elz6CcloAxnPgU= -github.com/elastic/go-windows v1.0.1 h1:AlYZOldA+UJ0/2nBuqWdo90GFCgG9xuyw9SYzGUtJm0= -github.com/elastic/go-windows v1.0.1/go.mod 
h1:FoVvqWSun28vaDQPbj2Elfc0JahhPB7WQEGa3c814Ss= -github.com/elazarl/go-bindata-assetfs v1.0.1 h1:m0kkaHRKEu7tUIUFVwhGGGYClXvyl4RE03qmvRTNfbw= -github.com/elazarl/go-bindata-assetfs v1.0.1/go.mod h1:v+YaWX3bdea5J/mo8dSETolEo7R71Vk1u8bnjau5yw4= -github.com/envoyproxy/go-control-plane v0.9.0/go.mod h1:YTl/9mNaCwkRvm6d1a2C3ymFceY/DCBVvsKhRF0iEA4= -github.com/envoyproxy/go-control-plane v0.9.1-0.20191026205805-5f8ba28d4473/go.mod h1:YTl/9mNaCwkRvm6d1a2C3ymFceY/DCBVvsKhRF0iEA4= -github.com/envoyproxy/go-control-plane v0.9.4/go.mod h1:6rpuAdCZL397s3pYoYcLgu1mIlRU8Am5FuJP05cCM98= -github.com/envoyproxy/go-control-plane v0.9.7/go.mod h1:cwu0lG7PUMfa9snN8LXBig5ynNVH9qI8YYLbd1fK2po= -github.com/envoyproxy/go-control-plane v0.9.9-0.20201210154907-fd9021fe5dad/go.mod h1:cXg6YxExXjJnVBQHBLXeUAgxn2UodCpnH306RInaBQk= -github.com/envoyproxy/go-control-plane v0.9.10-0.20210907150352-cf90f659a021/go.mod h1:AFq3mo9L8Lqqiid3OhADV3RfLJnjiw63cSpi+fDTRC0= -github.com/envoyproxy/protoc-gen-validate v0.1.0/go.mod h1:iSmxcyjqTsJpI2R4NaDN7+kN2VEUnK/pcBlmesArF7c= -github.com/fatih/color v1.9.0/go.mod h1:eQcE1qtQxscV5RaZvpXrrb8Drkc3/DdQ+uUYCNjL+zU= -github.com/fatih/color v1.15.0 h1:kOqh6YHBtK8aywxGerMG2Eq3H6Qgoqeo13Bk2Mv/nBs= -github.com/fatih/color v1.15.0/go.mod h1:0h5ZqXfHYED7Bhv2ZJamyIOUej9KtShiJESRwBDUSsw= -github.com/fatih/structs v1.1.0 h1:Q7juDM0QtcnhCpeyLGQKyg4TOIghuNXrkL32pHAUMxo= -github.com/fatih/structs v1.1.0/go.mod h1:9NiDSp5zOcgEDl+j00MP/WkGVPOlPRLejGD8Ga6PJ7M= -github.com/felixge/httpsnoop v1.0.3 h1:s/nj+GCswXYzN5v2DpNMuMQYe+0DDwt5WVCU6CWBdXk= -github.com/felixge/httpsnoop v1.0.3/go.mod h1:m8KPJKqk1gH5J9DgRY2ASl2lWCfGKXixSwevea8zH2U= -github.com/fleetdm/fleet/v4 v4.28.1-0.20230412210146-4e9e8d82e349 h1:+SThTToDqlLF2dSpSExKnvywo2X4Bhut8lft6C2O91g= -github.com/fleetdm/fleet/v4 v4.28.1-0.20230412210146-4e9e8d82e349/go.mod h1:P/MT3a24GacjGf/JJRk7O+xE/w2gH6hcSnsdO0Zn3Oo= -github.com/fleetdm/goose v0.0.0-20221011182040-1d76b1817fd7 h1:AO4VyGsaVCPDt/Tc6uajsQvLDTWJx9XicERbUdCSPfQ= 
-github.com/fleetdm/goose v0.0.0-20221011182040-1d76b1817fd7/go.mod h1:d7Q+0eCENnKQUhkfAUVLfGnD4QcgJMF/uB9WRTN9TDI= -github.com/fleetdm/nanodep v0.1.1-0.20221221202251-71b67ab1da24 h1:XhczaxKV3J4NjztroidSnYKyq5xtxF+amBYdBWeik58= -github.com/fleetdm/nanodep v0.1.1-0.20221221202251-71b67ab1da24/go.mod h1:QzQrCUTmSr9HotzKZAcfmy+czbEGK8Mq26hA+0DN4ag= -github.com/fleetdm/scep/v2 v2.1.1-0.20220729212655-4f19f0a10a03 h1:oW24iL1GfGWW1VhyWrVdw7VdnyvFePyzR88zvfnTqCo= -github.com/fleetdm/scep/v2 v2.1.1-0.20220729212655-4f19f0a10a03/go.mod h1:PajjVSF3LaELUh847MlOtanfqrF8R2DOO4oS3NSPemI= -github.com/fortytw2/leaktest v1.3.0 h1:u8491cBMTQ8ft8aeV+adlcytMZylmA5nnwwkRZjI8vw= -github.com/fortytw2/leaktest v1.3.0/go.mod h1:jDsjWgpAGjm2CA7WthBh/CdZYEPF31XHquHwclZch5g= -github.com/foxcpp/go-mockdns v1.0.0 h1:7jBqxd3WDWwi/6WhDvacvH1XsN3rOLXyHM1uhvIx6FI= -github.com/foxcpp/go-mockdns v1.0.0/go.mod h1:lgRN6+KxQBawyIghpnl5CezHFGS9VLzvtVlwxvzXTQ4= -github.com/frankban/quicktest v1.14.3 h1:FJKSZTDHjyhriyC81FLQ0LY93eSai0ZyR/ZIkd3ZUKE= -github.com/frankban/quicktest v1.14.3/go.mod h1:mgiwOwqx65TmIk1wJ6Q7wvnVMocbUorkibMOrVTHZps= -github.com/fsnotify/fsnotify v1.4.7/go.mod h1:jwhsz4b93w/PPRr/qN1Yymfu8t87LnFCMoQvtojpjFo= -github.com/fsnotify/fsnotify v1.6.0 h1:n+5WquG0fcWoWp6xPWfHdbskMCQaFnG6PfBrh1Ky4HY= -github.com/fsnotify/fsnotify v1.6.0/go.mod h1:sl3t1tCWJFWoRz9R8WJCbQihKKwmorjAbSClcnxKAGw= -github.com/gabriel-vasile/mimetype v1.4.2 h1:w5qFW6JKBz9Y393Y4q372O9A7cUSequkh1Q7OhCmWKU= -github.com/gabriel-vasile/mimetype v1.4.2/go.mod h1:zApsH/mKG4w07erKIaJPFiX0Tsq9BFQgN3qGY5GnNgA= -github.com/getsentry/sentry-go v0.18.0 h1:MtBW5H9QgdcJabtZcuJG80BMOwaBpkRDZkxRkNC1sN0= -github.com/getsentry/sentry-go v0.18.0/go.mod h1:Kgon4Mby+FJ7ZWHFUAZgVaIa8sxHtnRJRLTXZr51aKQ= -github.com/ghodss/yaml v1.0.0 h1:wQHKEahhL6wmXdzwWG11gIVCkOv05bNOh+Rxn0yngAk= -github.com/ghodss/yaml v1.0.0/go.mod h1:4dBDuWmgqj2HViK6kFavaiC9ZROes6MMH2rRYeMEF04= -github.com/gin-contrib/cors v1.3.0 
h1:PolezCc89peu+NgkIWt9OB01Kbzt6IP0J/JvkG6xxlg= -github.com/gin-contrib/cors v1.3.0/go.mod h1:artPvLlhkF7oG06nK8v3U8TNz6IeX+w1uzCSEId5/Vc= -github.com/gin-contrib/sse v0.0.0-20190125020943-a7658810eb74/go.mod h1:VJ0WA2NBN22VlZ2dKZQPAPnyWw5XTlK1KymzLKsr59s= -github.com/gin-contrib/sse v0.0.0-20190301062529-5545eab6dad3/go.mod h1:VJ0WA2NBN22VlZ2dKZQPAPnyWw5XTlK1KymzLKsr59s= -github.com/gin-contrib/sse v0.1.0 h1:Y/yl/+YNO8GZSjAhjMsSuLt29uWRFHdHYUb5lYOV9qE= -github.com/gin-contrib/sse v0.1.0/go.mod h1:RHrZQHXnP2xjPF+u1gW/2HnVO7nvIa9PG3Gm+fLHvGI= -github.com/gin-gonic/gin v1.3.0/go.mod h1:7cKuhb5qV2ggCFctp2fJQ+ErvciLZrIeoOSOm6mUr7Y= -github.com/gin-gonic/gin v1.4.0/go.mod h1:OW2EZn3DO8Ln9oIKOvM++LBO+5UPHJJDH72/q/3rZdM= -github.com/gin-gonic/gin v1.6.3/go.mod h1:75u5sXoLsGZoRN5Sgbi1eraJ4GU3++wFwWzhwvtwp4M= -github.com/gin-gonic/gin v1.7.2/go.mod h1:jD2toBW3GZUr5UMcdrwQA10I7RuaFOl/SGeDjXkfUtY= -github.com/gin-gonic/gin v1.7.7/go.mod h1:axIBovoeJpVj8S3BwE0uPMTeReE4+AfFtqpqaZ1qq1U= -github.com/gin-gonic/gin v1.9.1 h1:4idEAncQnU5cB7BeOkPtxjfCSye0AAm1R0RVIqJ+Jmg= -github.com/gin-gonic/gin v1.9.1/go.mod h1:hPrL7YrpYKXt5YId3A/Tnip5kqbEAP+KLuI3SUcPTeU= -github.com/go-errors/errors v1.4.2 h1:J6MZopCL4uSllY1OfXM374weqZFFItUbrImctkmUxIA= -github.com/go-errors/errors v1.4.2/go.mod h1:sIVyrIiJhuEF+Pj9Ebtd6P/rEYROXFi3BopGUQ5a5Og= -github.com/go-gl/glfw v0.0.0-20190409004039-e6da0acd62b1/go.mod h1:vR7hzQXu2zJy9AVAgeJqvqgH9Q5CA+iKCZ2gyEVpxRU= -github.com/go-gl/glfw/v3.3/glfw v0.0.0-20191125211704-12ad95a8df72/go.mod h1:tQ2UAYgL5IevRw8kRxooKSPJfGvJ9fJQFa0TUsXzTg8= -github.com/go-gl/glfw/v3.3/glfw v0.0.0-20200222043503-6f7a984d4dc4/go.mod h1:tQ2UAYgL5IevRw8kRxooKSPJfGvJ9fJQFa0TUsXzTg8= -github.com/go-gorp/gorp v2.2.0+incompatible/go.mod h1:7IfkAQnO7jfT/9IQ3R9wL1dFhukN6aQxzKTHnkxzA/E= -github.com/go-kit/kit v0.4.0/go.mod h1:xBxKIO96dXMWWy0MnWVtmwkA9/13aqxPnvrjFYMA2as= -github.com/go-kit/kit v0.7.0/go.mod h1:xBxKIO96dXMWWy0MnWVtmwkA9/13aqxPnvrjFYMA2as= -github.com/go-kit/kit v0.8.0/go.mod 
h1:xBxKIO96dXMWWy0MnWVtmwkA9/13aqxPnvrjFYMA2as= -github.com/go-kit/kit v0.12.0 h1:e4o3o3IsBfAKQh5Qbbiqyfu97Ku7jrO/JbohvztANh4= -github.com/go-kit/kit v0.12.0/go.mod h1:lHd+EkCZPIwYItmGDDRdhinkzX2A1sj+M9biaEaizzs= -github.com/go-kit/log v0.2.1 h1:MRVx0/zhvdseW+Gza6N9rVzU/IVzaeE1SFI4raAhmBU= -github.com/go-kit/log v0.2.1/go.mod h1:NwTd00d/i8cPZ3xOwwiv2PO5MOcx78fFErGNcVmBjv0= -github.com/go-logfmt/logfmt v0.3.0/go.mod h1:Qt1PoO58o5twSAckw1HlFXLmHsOX5/0LbT9GBnD5lWE= -github.com/go-logfmt/logfmt v0.4.0/go.mod h1:3RMwSq7FuexP4Kalkev3ejPJsZTpXXBr9+V4qmtdjCk= -github.com/go-logfmt/logfmt v0.5.1 h1:otpy5pqBCBZ1ng9RQ0dPu4PN7ba75Y/aA+UpowDyNVA= -github.com/go-logfmt/logfmt v0.5.1/go.mod h1:WYhtIu8zTZfxdn5+rREduYbwxfcBr/Vr6KEVveWlfTs= -github.com/go-logr/logr v1.2.0/go.mod h1:jdQByPbusPIv2/zmleS9BjJVeZ6kBagPoEUsqbVz/1A= -github.com/go-logr/logr v1.2.1/go.mod h1:jdQByPbusPIv2/zmleS9BjJVeZ6kBagPoEUsqbVz/1A= -github.com/go-logr/logr v1.2.2/go.mod h1:jdQByPbusPIv2/zmleS9BjJVeZ6kBagPoEUsqbVz/1A= -github.com/go-logr/logr v1.2.4 h1:g01GSCwiDw2xSZfjJ2/T9M+S6pFdcNtFYsp+Y43HYDQ= -github.com/go-logr/logr v1.2.4/go.mod h1:jdQByPbusPIv2/zmleS9BjJVeZ6kBagPoEUsqbVz/1A= -github.com/go-logr/stdr v1.2.0/go.mod h1:YkVgnZu1ZjjL7xTxrfm/LLZBfkhTqSR1ydtm6jTKKwI= -github.com/go-logr/stdr v1.2.2 h1:hSWxHoqTgW2S2qGc0LTAI563KZ5YKYRhT3MFKZMbjag= -github.com/go-logr/stdr v1.2.2/go.mod h1:mMo/vtBO5dYbehREoey6XUKy/eSumjCCveDpRre4VKE= -github.com/go-ole/go-ole v1.2.6 h1:/Fpf6oFPoeFik9ty7siob0G6Ke8QvQEuVcuChpwXzpY= -github.com/go-ole/go-ole v1.2.6/go.mod h1:pprOEPIfldk/42T2oK7lQ4v4JSDwmV0As9GaiUsvbm0= -github.com/go-playground/assert/v2 v2.0.1/go.mod h1:VDjEfimB/XKnb+ZQfWdccd7VUvScMdVu0Titje2rxJ4= -github.com/go-playground/assert/v2 v2.2.0 h1:JvknZsQTYeFEAhQwI4qEt9cyV5ONwRHC+lYKSsYSR8s= -github.com/go-playground/assert/v2 v2.2.0/go.mod h1:VDjEfimB/XKnb+ZQfWdccd7VUvScMdVu0Titje2rxJ4= -github.com/go-playground/locales v0.12.1/go.mod h1:IUMDtCfWo/w/mtMfIE/IG2K+Ey3ygWanZIBtBW0W2TM= 
-github.com/go-playground/locales v0.13.0/go.mod h1:taPMhCMXrRLJO55olJkUXHZBHCxTMfnGwq/HNwmWNS8= -github.com/go-playground/locales v0.14.0/go.mod h1:sawfccIbzZTqEDETgFXqTho0QybSa7l++s0DH+LDiLs= -github.com/go-playground/locales v0.14.1 h1:EWaQ/wswjilfKLTECiXz7Rh+3BjFhfDFKv/oXslEjJA= -github.com/go-playground/locales v0.14.1/go.mod h1:hxrqLVvrK65+Rwrd5Fc6F2O76J/NuW9t0sjnWqG1slY= -github.com/go-playground/universal-translator v0.16.0/go.mod h1:1AnU7NaIRDWWzGEKwgtJRd2xk99HeFyHw3yid4rvQIY= -github.com/go-playground/universal-translator v0.17.0/go.mod h1:UkSxE5sNxxRwHyU+Scu5vgOQjsIJAF8j9muTVoKLVtA= -github.com/go-playground/universal-translator v0.18.0/go.mod h1:UvRDBj+xPUEGrFYl+lu/H90nyDXpg0fqeB/AQUGNTVA= -github.com/go-playground/universal-translator v0.18.1 h1:Bcnm0ZwsGyWbCzImXv+pAJnYK9S473LQFuzCbDbfSFY= -github.com/go-playground/universal-translator v0.18.1/go.mod h1:xekY+UJKNuX9WP91TpwSH2VMlDf28Uj24BCp08ZFTUY= -github.com/go-playground/validator/v10 v10.2.0/go.mod h1:uOYAAleCW8F/7oMFd6aG0GOhaH6EGOAJShg8Id5JGkI= -github.com/go-playground/validator/v10 v10.4.1/go.mod h1:nlOn6nFhuKACm19sB/8EGNn9GlaMV7XkbRSipzJ0Ii4= -github.com/go-playground/validator/v10 v10.9.0/go.mod h1:74x4gJWsvQexRdW8Pn3dXSGrTK4nAUsbPlLADvpJkos= -github.com/go-playground/validator/v10 v10.14.0 h1:vgvQWe3XCz3gIeFDm/HnTIbj6UGmg/+t63MyGU2n5js= -github.com/go-playground/validator/v10 v10.14.0/go.mod h1:9iXMNT7sEkjXb0I+enO7QXmzG6QCsPWY4zveKFVRSyU= -github.com/go-redis/redis v6.15.8+incompatible/go.mod h1:NAIEuMOZ/fxfXJIrKDQDz8wamY7mA7PouImQ2Jvg6kA= -github.com/go-sql-driver/mysql v1.4.0/go.mod h1:zAC/RDZ24gD3HViQzih4MyKcchzm+sOG5ZlKdlhCg5w= -github.com/go-sql-driver/mysql v1.4.1/go.mod h1:zAC/RDZ24gD3HViQzih4MyKcchzm+sOG5ZlKdlhCg5w= -github.com/go-sql-driver/mysql v1.5.0/go.mod h1:DCzpHaOWr8IXmIStZouvnhqoel9Qv2LBy8hT2VhHyBg= -github.com/go-sql-driver/mysql v1.6.0/go.mod h1:DCzpHaOWr8IXmIStZouvnhqoel9Qv2LBy8hT2VhHyBg= -github.com/go-sql-driver/mysql v1.7.0 h1:ueSltNNllEqE3qcWBTD0iQd3IpL/6U+mJxLkazJ7YPc= 
-github.com/go-sql-driver/mysql v1.7.0/go.mod h1:OXbVy3sEdcQ2Doequ6Z5BW6fXNQTmx+9S1MCJN5yJMI=
-github.com/go-stack/stack v1.6.0/go.mod h1:v0f6uXyyMGvRgIKkXu+yp6POWl0qKG85gN/melR3HDY=
-github.com/go-stack/stack v1.7.0/go.mod h1:v0f6uXyyMGvRgIKkXu+yp6POWl0qKG85gN/melR3HDY=
-github.com/go-stack/stack v1.8.0/go.mod h1:v0f6uXyyMGvRgIKkXu+yp6POWl0qKG85gN/melR3HDY=
-github.com/gobwas/glob v0.2.3 h1:A4xDbljILXROh+kObIiy5kIaPYD8e96x1tgBhUI5J+Y=
-github.com/gobwas/glob v0.2.3/go.mod h1:d3Ez4x06l9bZtSvzIay5+Yzi0fmZzPgnTbPcKjJAkT8=
-github.com/gocarina/gocsv v0.0.0-20220310154401-d4df709ca055 h1:UfcDMw41lSx3XM7UvD1i7Fsu3rMgD55OU5LYwLoR/Yk=
-github.com/gocarina/gocsv v0.0.0-20220310154401-d4df709ca055/go.mod h1:5YoVOkjYAQumqlV356Hj3xeYh4BdZuLE0/nRkf2NKkI=
-github.com/goccy/go-json v0.10.2 h1:CrxCmQqYDkv1z7lO7Wbh2HN93uovUHgrECaO5ZrCXAU=
-github.com/goccy/go-json v0.10.2/go.mod h1:6MelG93GURQebXPDq3khkgXZkazVtN9CRI+MGFi0w8I=
-github.com/gofrs/uuid v3.2.0+incompatible/go.mod h1:b2aQJv3Z4Fp6yNu3cdSllBxTCLRxnplIgP/c0N/04lM=
-github.com/gofrs/uuid v4.3.1+incompatible h1:0/KbAdpx3UXAx1kEOWHJeOkpbgRFGHVgv+CFIY7dBJI=
-github.com/gofrs/uuid v4.3.1+incompatible/go.mod h1:b2aQJv3Z4Fp6yNu3cdSllBxTCLRxnplIgP/c0N/04lM=
-github.com/gogo/protobuf v1.1.1/go.mod h1:r8qH/GZQm5c6nD/R0oafs1akxWv10x8SbQlK7atdtwQ=
-github.com/gogo/protobuf v1.2.1/go.mod h1:hp+jE20tsWTFYpLwKvXlhS1hjn+gTNwPg2I6zVXpSg4=
-github.com/gogo/protobuf v1.3.2 h1:Ov1cvc58UF3b5XjBnZv7+opcTcQFZebYjWzi34vdm4Q=
-github.com/gogo/protobuf v1.3.2/go.mod h1:P1XiOD3dCwIKUDQYPy72D8LYyHL2YPYrpS2s69NZV8Q=
-github.com/golang-jwt/jwt/v4 v4.4.2/go.mod h1:m21LjoU+eqJr34lmDMbreY2eSTRJ1cv77w39/MY0Ch0=
-github.com/golang-jwt/jwt/v4 v4.5.0 h1:7cYmW1XlMY7h7ii7UhUyChSgS5wUJEnm9uZVTGqOWzg=
-github.com/golang-jwt/jwt/v4 v4.5.0/go.mod h1:m21LjoU+eqJr34lmDMbreY2eSTRJ1cv77w39/MY0Ch0=
-github.com/golang-sql/civil v0.0.0-20190719163853-cb61b32ac6fe/go.mod h1:8vg3r2VgvsThLBIFL93Qb5yWzgyZWhEmBwUJWevAkK0=
-github.com/golang/glog v0.0.0-20160126235308-23def4e6c14b/go.mod h1:SBH7ygxi8pfUlaOkMMuAQtPIUF8ecWP5IEl/CR7VP2Q=
-github.com/golang/glog v1.1.0 h1:/d3pCKDPWNnvIWe0vVUpNP32qc8U3PDVxySP/y360qE=
-github.com/golang/glog v1.1.0/go.mod h1:pfYeQZ3JWZoXTV5sFc986z3HTpwQs9At6P4ImfuP3NQ=
-github.com/golang/groupcache v0.0.0-20190129154638-5b532d6fd5ef/go.mod h1:cIg4eruTrX1D+g88fzRXU5OdNfaM+9IcxsU14FzY7Hc=
-github.com/golang/groupcache v0.0.0-20190702054246-869f871628b6/go.mod h1:cIg4eruTrX1D+g88fzRXU5OdNfaM+9IcxsU14FzY7Hc=
-github.com/golang/groupcache v0.0.0-20191227052852-215e87163ea7/go.mod h1:cIg4eruTrX1D+g88fzRXU5OdNfaM+9IcxsU14FzY7Hc=
-github.com/golang/groupcache v0.0.0-20200121045136-8c9f03a8e57e/go.mod h1:cIg4eruTrX1D+g88fzRXU5OdNfaM+9IcxsU14FzY7Hc=
-github.com/golang/groupcache v0.0.0-20210331224755-41bb18bfe9da h1:oI5xCqsCo564l8iNU+DwB5epxmsaqB+rhGL0m5jtYqE=
-github.com/golang/groupcache v0.0.0-20210331224755-41bb18bfe9da/go.mod h1:cIg4eruTrX1D+g88fzRXU5OdNfaM+9IcxsU14FzY7Hc=
-github.com/golang/mock v1.1.1/go.mod h1:oTYuIxOrZwtPieC+H1uAHpcLFnEyAGVDL/k47Jfbm0A=
-github.com/golang/mock v1.2.0/go.mod h1:oTYuIxOrZwtPieC+H1uAHpcLFnEyAGVDL/k47Jfbm0A=
-github.com/golang/mock v1.3.1/go.mod h1:sBzyDLLjw3U8JLTeZvSv8jJB+tU5PVekmnlKIyFUx0Y=
-github.com/golang/mock v1.4.0/go.mod h1:UOMv5ysSaYNkG+OFQykRIcU/QvvxJf3p21QfJ2Bt3cw=
-github.com/golang/mock v1.4.1/go.mod h1:UOMv5ysSaYNkG+OFQykRIcU/QvvxJf3p21QfJ2Bt3cw=
-github.com/golang/mock v1.4.3/go.mod h1:UOMv5ysSaYNkG+OFQykRIcU/QvvxJf3p21QfJ2Bt3cw=
-github.com/golang/mock v1.4.4/go.mod h1:l3mdAwkq5BuhzHwde/uurv3sEJeZMXNpwsxVWU71h+4=
-github.com/golang/mock v1.6.0/go.mod h1:p6yTPP+5HYm5mzsMV8JkE6ZKdX+/wYM6Hr+LicevLPs=
-github.com/golang/protobuf v1.2.0/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U=
-github.com/golang/protobuf v1.3.1/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U=
-github.com/golang/protobuf v1.3.2/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U=
-github.com/golang/protobuf v1.3.3/go.mod h1:vzj43D7+SQXF/4pzW/hwtAqwc6iTitCiVSaWz5lYuqw=
-github.com/golang/protobuf v1.3.4/go.mod h1:vzj43D7+SQXF/4pzW/hwtAqwc6iTitCiVSaWz5lYuqw=
-github.com/golang/protobuf v1.3.5/go.mod h1:6O5/vntMXwX2lRkT1hjjk0nAC1IDOTvTlVgjlRvqsdk=
-github.com/golang/protobuf v1.4.0-rc.1/go.mod h1:ceaxUfeHdC40wWswd/P6IGgMaK3YpKi5j83Wpe3EHw8=
-github.com/golang/protobuf v1.4.0-rc.1.0.20200221234624-67d41d38c208/go.mod h1:xKAWHe0F5eneWXFV3EuXVDTCmh+JuBKY0li0aMyXATA=
-github.com/golang/protobuf v1.4.0-rc.2/go.mod h1:LlEzMj4AhA7rCAGe4KMBDvJI+AwstrUpVNzEA03Pprs=
-github.com/golang/protobuf v1.4.0-rc.4.0.20200313231945-b860323f09d0/go.mod h1:WU3c8KckQ9AFe+yFwt9sWVRKCVIyN9cPHBJSNnbL67w=
-github.com/golang/protobuf v1.4.0/go.mod h1:jodUvKwWbYaEsadDk5Fwe5c77LiNKVO9IDvqG2KuDX0=
-github.com/golang/protobuf v1.4.1/go.mod h1:U8fpvMrcmy5pZrNK1lt4xCsGvpyWQ/VVv6QDs8UjoX8=
-github.com/golang/protobuf v1.4.2/go.mod h1:oDoupMAO8OvCJWAcko0GGGIgR6R6ocIYbsSw735rRwI=
-github.com/golang/protobuf v1.4.3/go.mod h1:oDoupMAO8OvCJWAcko0GGGIgR6R6ocIYbsSw735rRwI=
-github.com/golang/protobuf v1.5.0/go.mod h1:FsONVRAS9T7sI+LIUmWTfcYkHO4aIWwzhcaSAoJOfIk=
-github.com/golang/protobuf v1.5.2/go.mod h1:XVQd3VNwM+JqD3oG2Ue2ip4fOMUkwXdXDdiuN0vRsmY=
-github.com/golang/protobuf v1.5.3 h1:KhyjKVUg7Usr/dYsdSqoFveMYd5ko72D+zANwlG1mmg=
-github.com/golang/protobuf v1.5.3/go.mod h1:XVQd3VNwM+JqD3oG2Ue2ip4fOMUkwXdXDdiuN0vRsmY=
-github.com/golang/snappy v0.0.4 h1:yAGX7huGHXlcLOEtBnF4w7FQwA26wojNCwOYAEhLjQM=
-github.com/golang/snappy v0.0.4/go.mod h1:/XxbfmMg8lxefKM7IXC3fBNl/7bRcc72aCRzEWrmP2Q=
-github.com/gomodule/oauth1 v0.2.0 h1:/nNHAD99yipOEspQFbAnNmwGTZ1UNXiD/+JLxwx79fo=
-github.com/gomodule/oauth1 v0.2.0/go.mod h1:4r/a8/3RkhMBxJQWL5qzbOEcaQmNPIkNoI7P8sXeI08=
-github.com/gomodule/redigo v1.8.4/go.mod h1:P9dn9mFrCBvWhGE1wpxx6fgq7BAeLBk+UUUzlpkBYO0=
-github.com/gomodule/redigo v1.8.5/go.mod h1:P9dn9mFrCBvWhGE1wpxx6fgq7BAeLBk+UUUzlpkBYO0=
-github.com/gomodule/redigo v1.8.9 h1:Sl3u+2BI/kk+VEatbj0scLdrFhjPmbxOc1myhDP41ws=
-github.com/gomodule/redigo v1.8.9/go.mod h1:7ArFNvsTjH8GMMzB4uy1snslv2BwmginuMs06a1uzZE=
-github.com/google/btree v0.0.0-20180813153112-4030bb1f1f0c/go.mod h1:lNA+9X1NB3Zf8V7Ke586lFgjr2dZNuvo3lPJSGZ5JPQ=
-github.com/google/btree v1.0.0/go.mod h1:lNA+9X1NB3Zf8V7Ke586lFgjr2dZNuvo3lPJSGZ5JPQ=
-github.com/google/flatbuffers v1.12.1 h1:MVlul7pQNoDzWRLTw5imwYsl+usrS1TXG2H4jg6ImGw=
-github.com/google/flatbuffers v1.12.1/go.mod h1:1AeVuKshWv4vARoZatz6mlQ0JxURH0Kv5+zNeJKJCa8=
-github.com/google/go-cmp v0.2.0/go.mod h1:oXzfMopK8JAjlY9xF4vHSVASa0yLyX7SntLO5aqRK0M=
-github.com/google/go-cmp v0.3.0/go.mod h1:8QqcDgzrUqlUb/G2PQTWiueGozuR1884gddMywk6iLU=
-github.com/google/go-cmp v0.3.1/go.mod h1:8QqcDgzrUqlUb/G2PQTWiueGozuR1884gddMywk6iLU=
-github.com/google/go-cmp v0.4.0/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
-github.com/google/go-cmp v0.4.1/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
-github.com/google/go-cmp v0.5.0/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
-github.com/google/go-cmp v0.5.1/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
-github.com/google/go-cmp v0.5.2/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
-github.com/google/go-cmp v0.5.3/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
-github.com/google/go-cmp v0.5.4/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
-github.com/google/go-cmp v0.5.5/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
-github.com/google/go-cmp v0.5.6/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
-github.com/google/go-cmp v0.5.8/go.mod h1:17dUlkBOakJ0+DkrSSNjCkIjxS6bF9zb3elmeNGIjoY=
-github.com/google/go-cmp v0.5.9 h1:O2Tfq5qg4qc4AmwVlvv0oLiVAGB7enBSJ2x2DqQFi38=
-github.com/google/go-cmp v0.5.9/go.mod h1:17dUlkBOakJ0+DkrSSNjCkIjxS6bF9zb3elmeNGIjoY=
-github.com/google/go-github/v37 v37.0.0 h1:rCspN8/6kB1BAJWZfuafvHhyfIo5fkAulaP/3bOQ/tM=
-github.com/google/go-github/v37 v37.0.0/go.mod h1:LM7in3NmXDrX58GbEHy7FtNLbI2JijX93RnMKvWG3m4=
-github.com/google/go-querystring v1.1.0 h1:AnCroh3fv4ZBgVIf1Iwtovgjaw/GiKJo8M8yD/fhyJ8=
-github.com/google/go-querystring v1.1.0/go.mod h1:Kcdr2DB4koayq7X8pmAG4sNG59So17icRSOU623lUBU=
-github.com/google/gofuzz v1.0.0/go.mod h1:dBl0BpW6vV/+mYPU4Po3pmUjxk6FQPldtuIdl/M65Eg=
-github.com/google/martian v2.1.0+incompatible/go.mod h1:9I4somxYTbIHy5NJKHRl3wXiIaQGbYVAs8BPL6v8lEs=
-github.com/google/martian/v3 v3.0.0/go.mod h1:y5Zk1BBys9G+gd6Jrk0W3cC1+ELVxBWuIGO+w/tUAp0=
-github.com/google/martian/v3 v3.1.0/go.mod h1:y5Zk1BBys9G+gd6Jrk0W3cC1+ELVxBWuIGO+w/tUAp0=
-github.com/google/pprof v0.0.0-20181206194817-3ea8567a2e57/go.mod h1:zfwlbNMJ+OItoe0UupaVj+oy1omPYYDuagoSzA8v9mc=
-github.com/google/pprof v0.0.0-20190515194954-54271f7e092f/go.mod h1:zfwlbNMJ+OItoe0UupaVj+oy1omPYYDuagoSzA8v9mc=
-github.com/google/pprof v0.0.0-20191218002539-d4f498aebedc/go.mod h1:ZgVRPoUq/hfqzAqh7sHMqb3I9Rq5C59dIz2SbBwJ4eM=
-github.com/google/pprof v0.0.0-20200212024743-f11f1df84d12/go.mod h1:ZgVRPoUq/hfqzAqh7sHMqb3I9Rq5C59dIz2SbBwJ4eM=
-github.com/google/pprof v0.0.0-20200229191704-1ebb73c60ed3/go.mod h1:ZgVRPoUq/hfqzAqh7sHMqb3I9Rq5C59dIz2SbBwJ4eM=
-github.com/google/pprof v0.0.0-20200430221834-fc25d7d30c6d/go.mod h1:ZgVRPoUq/hfqzAqh7sHMqb3I9Rq5C59dIz2SbBwJ4eM=
-github.com/google/pprof v0.0.0-20200708004538-1a94d8640e99/go.mod h1:ZgVRPoUq/hfqzAqh7sHMqb3I9Rq5C59dIz2SbBwJ4eM=
-github.com/google/pprof v0.0.0-20201023163331-3e6fc7fc9c4c/go.mod h1:kpwsk12EmLew5upagYY7GY0pfYCcupk39gWOCRROcvE=
-github.com/google/pprof v0.0.0-20201203190320-1bf35d6f28c2/go.mod h1:kpwsk12EmLew5upagYY7GY0pfYCcupk39gWOCRROcvE=
-github.com/google/pprof v0.0.0-20201218002935-b9804c9f04c2/go.mod h1:kpwsk12EmLew5upagYY7GY0pfYCcupk39gWOCRROcvE=
-github.com/google/renameio v0.1.0/go.mod h1:KWCgfxg9yswjAJkECMjeO8J8rahYeXnNhOm40UhjYkI=
-github.com/google/s2a-go v0.1.3 h1:FAgZmpLl/SXurPEZyCMPBIiiYeTbqfjlbdnCNTAkbGE=
-github.com/google/s2a-go v0.1.3/go.mod h1:Ej+mSEMGRnqRzjc7VtF+jdBwYG5fuJfiZ8ELkjEwM0A=
-github.com/google/uuid v0.0.0-20161128191214-064e2069ce9c/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo=
-github.com/google/uuid v1.1.0/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo=
-github.com/google/uuid v1.1.1/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo=
-github.com/google/uuid v1.1.2/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo=
-github.com/google/uuid v1.3.0 h1:t6JiXgmwXMjEs8VusXIJk2BXHsn+wx8BZdTaoZ5fu7I=
-github.com/google/uuid v1.3.0/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo=
-github.com/googleapis/enterprise-certificate-proxy v0.2.3 h1:yk9/cqRKtT9wXZSsRH9aurXEpJX+U6FLtpYTdC3R06k=
-github.com/googleapis/enterprise-certificate-proxy v0.2.3/go.mod h1:AwSRAtLfXpU5Nm3pW+v7rGDHp09LsPtGY9MduiEsR9k=
-github.com/googleapis/gax-go/v2 v2.0.4/go.mod h1:0Wqv26UfaUD9n4G6kQubkQ+KchISgw+vpHVxEJEs9eg=
-github.com/googleapis/gax-go/v2 v2.0.5/go.mod h1:DWXyrwAJ9X0FpwwEdw+IPEYBICEFu5mhpdKc/us6bOk=
-github.com/googleapis/gax-go/v2 v2.8.0 h1:UBtEZqx1bjXtOQ5BVTkuYghXrr3N4V123VKJK67vJZc=
-github.com/googleapis/gax-go/v2 v2.8.0/go.mod h1:4orTrqY6hXxxaUL4LHIPl6lGo8vAE38/qKbhSAKP6QI=
-github.com/googleapis/google-cloud-go-testing v0.0.0-20200911160855-bcd43fbb19e8/go.mod h1:dvDLG8qkwmyD9a/MJJN3XJcT3xFxOKAvTZGvuZmac9g=
-github.com/gorilla/context v0.0.0-20160226214623-1ea25387ff6f/go.mod h1:kBGZzfjB9CEq2AlWe17Uuf7NDRt0dE0s8S51q0aT7Yg=
-github.com/gorilla/mux v1.4.0/go.mod h1:1lud6UwP+6orDFRuTfBEV8e9/aOM/c4fVVCaMa2zaAs=
-github.com/gorilla/mux v1.7.3/go.mod h1:1lud6UwP+6orDFRuTfBEV8e9/aOM/c4fVVCaMa2zaAs=
-github.com/gorilla/mux v1.8.0 h1:i40aqfkR1h2SlN9hojwV5ZA91wcXFOvkdNIeFDP5koI=
-github.com/gorilla/mux v1.8.0/go.mod h1:DVbg23sWSpFRCP0SfiEN6jmj59UnW/n46BH5rLB71So=
-github.com/gorilla/websocket v1.4.0/go.mod h1:E7qHFY5m1UJ88s3WnNqhKjPHQ0heANvMoAMk2YaljkQ=
-github.com/gorilla/websocket v1.4.2/go.mod h1:YR8l580nyteQvAITg2hZ9XVh4b55+EU/adAjf1fMHhE=
-github.com/gorilla/websocket v1.5.0 h1:PPwGk2jz7EePpoHN/+ClbZu8SPxiqlu12wZP/3sWmnc=
-github.com/gorilla/websocket v1.5.0/go.mod h1:YR8l580nyteQvAITg2hZ9XVh4b55+EU/adAjf1fMHhE=
-github.com/groob/finalizer v0.0.0-20170707115354-4c2ed49aabda h1:5ikpG9mYCMFiZX0nkxoV6aU2IpCHPdws3gCNgdZeEV0=
-github.com/groob/finalizer v0.0.0-20170707115354-4c2ed49aabda/go.mod h1:MyndkAZd5rUMdNogn35MWXBX1UiBigrU8eTj8DoAC2c=
-github.com/groob/plist v0.0.0-20220217120414-63fa881b19a5 h1:saaSiB25B1wgaxrshQhurfPKUGJ4It3OxNJUy0rdOjU=
-github.com/groob/plist v0.0.0-20220217120414-63fa881b19a5/go.mod h1:itkABA+w2cw7x5nYUS/pLRef6ludkZKOigbROmCTaFw=
-github.com/grpc-ecosystem/go-grpc-middleware v1.0.0/go.mod h1:FiyG127CGDf3tlThmgyCl78X/SZQqEOJBCDaAfeWzPs=
-github.com/grpc-ecosystem/go-grpc-prometheus v1.2.0/go.mod h1:8NvIoxWQoOIhqOTXgfV/d3M/q6VIi02HzZEHgUlZvzk=
-github.com/grpc-ecosystem/grpc-gateway v1.9.0/go.mod h1:vNeuVxBJEsws4ogUvrchl83t/GYV9WGTSLVdBhOQFDY=
-github.com/grpc-ecosystem/grpc-gateway v1.16.0/go.mod h1:BDjrQk3hbvj6Nolgz8mAMFbcEtjT1g+wF4CSlocrBnw=
-github.com/hashicorp/errwrap v1.0.0/go.mod h1:YH+1FKiLXxHSkmPseP+kNlulaMuP3n2brvKWEqk/Jc4=
-github.com/hashicorp/errwrap v1.1.0 h1:OxrOeh75EUXMY8TBjag2fzXGZ40LB6IKw45YeGUDY2I=
-github.com/hashicorp/errwrap v1.1.0/go.mod h1:YH+1FKiLXxHSkmPseP+kNlulaMuP3n2brvKWEqk/Jc4=
-github.com/hashicorp/go-multierror v1.1.1 h1:H5DkEtf6CXdFp0N0Em5UCwQpXMWke8IA0+lD48awMYo=
-github.com/hashicorp/go-multierror v1.1.1/go.mod h1:iw975J/qwKPdAO1clOe2L8331t/9/fmwbPZ6JB6eMoM=
-github.com/hashicorp/golang-lru v0.5.0/go.mod h1:/m3WP610KZHVQ1SGc6re/UDhFvYD7pJ4Ao+sR/qLZy8=
-github.com/hashicorp/golang-lru v0.5.1/go.mod h1:/m3WP610KZHVQ1SGc6re/UDhFvYD7pJ4Ao+sR/qLZy8=
-github.com/hashicorp/golang-lru v0.5.4/go.mod h1:iADmTwqILo4mZ8BN3D2Q6+9jd8WM5uGBxy+E8yxSoD4=
-github.com/hashicorp/golang-lru v0.6.0 h1:uL2shRDx7RTrOrTCUZEGP/wJUFiUI8QT6E7z5o8jga4=
-github.com/hashicorp/golang-lru v0.6.0/go.mod h1:iADmTwqILo4mZ8BN3D2Q6+9jd8WM5uGBxy+E8yxSoD4=
-github.com/hashicorp/hcl v1.0.0 h1:0Anlzjpi4vEasTeNFn2mLJgTSwt0+6sfsiTG8qcWGx4=
-github.com/hashicorp/hcl v1.0.0/go.mod h1:E5yfLk+7swimpb2L/Alb/PJmXilQ/rhwaUYs4T20WEQ=
-github.com/hectane/go-acl v0.0.0-20190604041725-da78bae5fc95 h1:S4qyfL2sEm5Budr4KVMyEniCy+PbS55651I/a+Kn/NQ=
-github.com/hectane/go-acl v0.0.0-20190604041725-da78bae5fc95/go.mod h1:QiyDdbZLaJ/mZP4Zwc9g2QsfaEA4o7XvvgZegSci5/E=
-github.com/hpcloud/tail v1.0.0/go.mod h1:ab1qPbhIpdTxEkNHXyeSf5vhxWSCs/tWer42PpOxQnU=
-github.com/ianlancetaylor/demangle v0.0.0-20181102032728-5e5cf60278f6/go.mod h1:aSSvb/t6k1mPoxDqO4vJh6VOCGPwU4O0C2/Eqndh1Sc=
-github.com/ianlancetaylor/demangle v0.0.0-20200824232613-28f6c0f3b639/go.mod h1:aSSvb/t6k1mPoxDqO4vJh6VOCGPwU4O0C2/Eqndh1Sc=
-github.com/igm/sockjs-go/v3 v3.0.0 h1:4wLoB9WCnQ8RI87cmqUH778ACDFVmRpkKRCWBeuc+Ww=
-github.com/igm/sockjs-go/v3 v3.0.0/go.mod h1:UqchsOjeagIBFHvd+RZpLaVRbCwGilEC08EDHsD1jYE=
-github.com/inconshreveable/mousetrap v1.0.0/go.mod h1:PxqpIevigyE2G7u3NXJIT2ANytuPF1OarO4DADm73n8=
-github.com/inconshreveable/mousetrap v1.1.0 h1:wN+x4NVGpMsO7ErUn/mUI3vEoE6Jt13X2s0bqwp9tc8=
-github.com/inconshreveable/mousetrap v1.1.0/go.mod h1:vpF70FUmC8bwa3OWnCshd2FqLfsEA9PFc4w1p2J65bw=
-github.com/jcchavezs/porto v0.1.0 h1:Xmxxn25zQMmgE7/yHYmh19KcItG81hIwfbEEFnd6w/Q=
-github.com/jcchavezs/porto v0.1.0/go.mod h1:fESH0gzDHiutHRdX2hv27ojnOVFco37hg1W6E9EZF4A=
-github.com/jessevdk/go-flags v1.4.0/go.mod h1:4FA24M0QyGHXBuZZK/XkWh8h0e1EYbRYJSGM75WSRxI=
-github.com/jessevdk/go-flags v1.5.0 h1:1jKYvbxEjfUl0fmqTCOfonvskHHXMjBySTLW4y9LFvc=
-github.com/jessevdk/go-flags v1.5.0/go.mod h1:Fw0T6WPc1dYxT4mKEZRfG5kJhaTDP9pj1c2EWnYs/m4=
-github.com/jmespath/go-jmespath v0.4.0 h1:BEgLn5cpjn8UN1mAw4NjwDrS35OdebyEtFe+9YPoQUg=
-github.com/jmespath/go-jmespath v0.4.0/go.mod h1:T8mJZnbsbmF+m6zOOFylbeCJqk5+pHWvzYPziyZiYoo=
-github.com/jmespath/go-jmespath/internal/testify v1.5.1 h1:shLQSRRSCCPj3f2gpwzGwWFoC7ycTf1rcQZHOlsJ6N8=
-github.com/jmespath/go-jmespath/internal/testify v1.5.1/go.mod h1:L3OGu8Wl2/fWfCI6z80xFu9LTZmf1ZRjMHUOPmWr69U=
-github.com/jmoiron/sqlx v0.0.0-20180406164412-2aeb6a910c2b/go.mod h1:IiEW3SEiiErVyFdH8NTuWjSifiEQKUoyK3LNqr2kCHU=
-github.com/jmoiron/sqlx v1.2.1-0.20190826204134-d7d95172beb5 h1:lrdPtrORjGv1HbbEvKWDUAy97mPpFm4B8hp77tcCUJY=
-github.com/jmoiron/sqlx v1.2.1-0.20190826204134-d7d95172beb5/go.mod h1:1FEQNm3xlJgrMD+FBdI9+xvCksHtbpVBBw5dYhBSsks=
-github.com/joeshaw/multierror v0.0.0-20140124173710-69b34d4ec901 h1:rp+c0RAYOWj8l6qbCUTSiRLG/iKnW3K3/QfPPuSsBt4=
-github.com/joeshaw/multierror v0.0.0-20140124173710-69b34d4ec901/go.mod h1:Z86h9688Y0wesXCyonoVr47MasHilkuLMqGhRZ4Hpak=
-github.com/jonboulle/clockwork v0.1.0/go.mod h1:Ii8DK3G1RaLaWxj9trq07+26W01tbo22gdxWY5EU2bo=
-github.com/jonboulle/clockwork v0.2.2 h1:UOGuzwb1PwsrDAObMuhUnj0p5ULPj8V/xJ7Kx9qUBdQ=
-github.com/jonboulle/clockwork v0.2.2/go.mod h1:Pkfl5aHPm1nk2H9h0bjmnJD/BcgbGXUBGnn1kMkgxc8=
-github.com/json-iterator/go v1.1.6/go.mod h1:+SdeFBvtyEkXs7REEP0seUULqWtbJapLOCVDaaPEHmU=
-github.com/json-iterator/go v1.1.9/go.mod h1:KdQUCv79m/52Kvf8AW2vK1V8akMuk1QjK/uOdHXbAo4=
-github.com/json-iterator/go v1.1.12 h1:PV8peI4a0ysnczrg+LtxykD8LfKY9ML6u2jnxaEnrnM=
-github.com/json-iterator/go v1.1.12/go.mod h1:e30LSqwooZae/UwlEbR2852Gd8hjQvJoHmT4TnhNGBo=
-github.com/jstemmer/go-junit-report v0.0.0-20190106144839-af01ea7f8024/go.mod h1:6v2b51hI/fHJwM22ozAgKL4VKDeJcHhJFhtBdhmNjmU=
-github.com/jstemmer/go-junit-report v0.9.1/go.mod h1:Brl9GWCQeLvo8nXZwPNNblvFj/XSXhF0NWZEnDohbsk=
-github.com/juju/ansiterm v0.0.0-20160907234532-b99631de12cf/go.mod h1:UJSiEoRfvx3hP73CvoARgeLjaIOjybY9vj8PUPPFGeU=
-github.com/juju/clock v0.0.0-20190205081909-9c5c9712527c/go.mod h1:nD0vlnrUjcjJhqN5WuCWZyzfd5AHZAC9/ajvbSx69xA=
-github.com/juju/cmd v0.0.0-20171107070456-e74f39857ca0/go.mod h1:yWJQHl73rdSX4DHVKGqkAip+huBslxRwS8m9CrOLq18=
-github.com/juju/collections v0.0.0-20200605021417-0d0ec82b7271/go.mod h1:5XgO71dV1JClcOJE+4dzdn4HrI5LiyKd7PlVG6eZYhY=
-github.com/juju/errors v0.0.0-20150916125642-1b5e39b83d18/go.mod h1:W54LbzXuIE0boCoNJfwqpmkKJ1O4TCTZMetAt6jGk7Q=
-github.com/juju/errors v0.0.0-20190930114154-d42613fe1ab9/go.mod h1:W54LbzXuIE0boCoNJfwqpmkKJ1O4TCTZMetAt6jGk7Q=
-github.com/juju/errors v0.0.0-20200330140219-3fe23663418f/go.mod h1:W54LbzXuIE0boCoNJfwqpmkKJ1O4TCTZMetAt6jGk7Q=
-github.com/juju/gnuflag v0.0.0-20171113085948-2ce1bb71843d/go.mod h1:2PavIy+JPciBPrBUjwbNvtwB6RQlve+hkpll6QSNmOE=
-github.com/juju/httpprof v0.0.0-20141217160036-14bf14c30767/go.mod h1:+MaLYz4PumRkkyHYeXJ2G5g5cIW0sli2bOfpmbaMV/g=
-github.com/juju/loggo v0.0.0-20170605014607-8232ab8918d9/go.mod h1:vgyd7OREkbtVEN/8IXZe5Ooef3LQePvuBm9UWj6ZL8U=
-github.com/juju/loggo v0.0.0-20190526231331-6e530bcce5d8/go.mod h1:vgyd7OREkbtVEN/8IXZe5Ooef3LQePvuBm9UWj6ZL8U=
-github.com/juju/loggo v0.0.0-20200526014432-9ce3a2e09b5e/go.mod h1:vgyd7OREkbtVEN/8IXZe5Ooef3LQePvuBm9UWj6ZL8U=
-github.com/juju/mgo/v2 v2.0.0-20210302023703-70d5d206e208/go.mod h1:0OChplkvPTZ174D2FYZXg4IB9hbEwyHkD+zT+/eK+Fg=
-github.com/juju/mutex v0.0.0-20171110020013-1fe2a4bf0a3a/go.mod h1:Y3oOzHH8CQ0Ppt0oCKJ2JFO81/EsWenH5AEqigLH+yY=
-github.com/juju/retry v0.0.0-20151029024821-62c620325291/go.mod h1:OohPQGsr4pnxwD5YljhQ+TZnuVRYpa5irjugL1Yuif4=
-github.com/juju/retry v0.0.0-20180821225755-9058e192b216/go.mod h1:OohPQGsr4pnxwD5YljhQ+TZnuVRYpa5irjugL1Yuif4=
-github.com/juju/testing v0.0.0-20180402130637-44801989f0f7/go.mod h1:63prj8cnj0tU0S9OHjGJn+b1h0ZghCndfnbQolrYTwA=
-github.com/juju/testing v0.0.0-20190723135506-ce30eb24acd2/go.mod h1:63prj8cnj0tU0S9OHjGJn+b1h0ZghCndfnbQolrYTwA=
-github.com/juju/testing v0.0.0-20210302031854-2c7ee8570c07/go.mod h1:7lxZW0B50+xdGFkvhAb8bwAGt6IU87JB1H9w4t8MNVM=
-github.com/juju/utils v0.0.0-20180424094159-2000ea4ff043/go.mod h1:6/KLg8Wz/y2KVGWEpkK9vMNGkOnu4k/cqs8Z1fKjTOk=
-github.com/juju/utils v0.0.0-20200116185830-d40c2fe10647/go.mod h1:6/KLg8Wz/y2KVGWEpkK9vMNGkOnu4k/cqs8Z1fKjTOk=
-github.com/juju/utils/v2 v2.0.0-20200923005554-4646bfea2ef1/go.mod h1:fdlDtQlzundleLLz/ggoYinEt/LmnrpNKcNTABQATNI=
-github.com/juju/version v0.0.0-20161031051906-1f41e27e54f2/go.mod h1:kE8gK5X0CImdr7qpSKl3xB2PmpySSmfj7zVbkZFs81U=
-github.com/juju/version v0.0.0-20180108022336-b64dbd566305/go.mod h1:kE8gK5X0CImdr7qpSKl3xB2PmpySSmfj7zVbkZFs81U=
-github.com/juju/version v0.0.0-20191219164919-81c1be00b9a6/go.mod h1:kE8gK5X0CImdr7qpSKl3xB2PmpySSmfj7zVbkZFs81U=
-github.com/julienschmidt/httprouter v1.1.1-0.20151013225520-77a895ad01eb/go.mod h1:SYymIcj16QtmaHHD7aYtjjsJG7VTCxuUUipMqKk8s4w=
-github.com/julienschmidt/httprouter v1.2.0/go.mod h1:SYymIcj16QtmaHHD7aYtjjsJG7VTCxuUUipMqKk8s4w=
-github.com/kisielk/errcheck v1.1.0/go.mod h1:EZBBE59ingxPouuu3KfxchcWSUPOHkagtvWXihfKN4Q=
-github.com/kisielk/gotool v1.0.0/go.mod h1:XhKaO+MFFWcvkIS/tQcRk01m1F5IRFswLeQ+oQHNcck=
-github.com/klauspost/compress v1.16.5 h1:IFV2oUNUzZaz+XyusxpLzpzS8Pt5rh0Z16For/djlyI=
-github.com/klauspost/compress v1.16.5/go.mod h1:ntbaceVETuRiXiv4DpjP66DpAtAGkEQskQzEyD//IeE=
-github.com/klauspost/cpuid/v2 v2.0.9/go.mod h1:FInQzS24/EEf25PyTYn52gqo7WaD8xa0213Md/qVLRg=
-github.com/klauspost/cpuid/v2 v2.2.4 h1:acbojRNwl3o09bUq+yDCtZFc1aiwaAAxtcn8YkZXnvk=
-github.com/klauspost/cpuid/v2 v2.2.4/go.mod h1:RVVoqg1df56z8g3pUjL/3lE5UfnlrJX8tyFgg4nqhuY=
-github.com/kolide/kit v0.0.0-20191023141830-6312ecc11c23 h1:7rykD5+Wf11u+03TOsunGbg7f4gZEBgS0gwIRR+Han4=
-github.com/kolide/kit v0.0.0-20191023141830-6312ecc11c23/go.mod h1:OYYulo9tUqRadRLwB0+LE914sa1ui2yL7OrcU3Q/1XY=
-github.com/konsorten/go-windows-terminal-sequences v1.0.1/go.mod h1:T0+1ngSBFLxvqU3pZ+m/2kptfBszLMUkC4ZK/EgS/cQ=
-github.com/kr/fs v0.1.0/go.mod h1:FFnZGqtBN9Gxj7eW1uZ42v5BccTP0vu6NEaFoC2HwRg=
-github.com/kr/logfmt v0.0.0-20140226030751-b84e30acd515/go.mod h1:+0opPa2QZZtGFBFZlji/RkVcI2GknAs/DXo4wKdlNEc=
-github.com/kr/pretty v0.1.0/go.mod h1:dAy3ld7l9f0ibDNOQOHHMYYIIbhfbHSm3C4ZsoJORNo=
-github.com/kr/pretty v0.2.1/go.mod h1:ipq/a2n7PKx3OHsz4KJII5eveXtPO4qwEXGdVfWzfnI=
-github.com/kr/pretty v0.3.0/go.mod h1:640gp4NfQd8pI5XOwp5fnNeVWj67G7CFk/SaSQn7NBk=
-github.com/kr/pretty v0.3.1 h1:flRD4NNwYAUpkphVc1HcthR4KEIFJ65n8Mw5qdRn3LE=
-github.com/kr/pretty v0.3.1/go.mod h1:hoEshYVHaxMs3cyo3Yncou5ZscifuDolrwPKZanG3xk=
-github.com/kr/pty v1.1.1/go.mod h1:pFQYn66WHrOpPYNljwOMqo10TkYh1fy3cYio2l3bCsQ=
-github.com/kr/text v0.1.0/go.mod h1:4Jbv+DJW3UT/LiOwJeYQe1efqtUx/iVham/4vfdArNI=
-github.com/kr/text v0.2.0 h1:5Nx0Ya0ZqY2ygV366QzturHI13Jq95ApcVaJBhpS+AY=
-github.com/kr/text v0.2.0/go.mod h1:eLer722TekiGuMkidMxC/pM04lWEeraHUUmBw8l2grE=
-github.com/leodido/go-urn v1.1.0/go.mod h1:+cyI34gQWZcE1eQU7NVgKkkzdXDQHr1dBMtdAPozLkw=
-github.com/leodido/go-urn v1.2.0/go.mod h1:+8+nEpDfqqsY+g338gtMEUOtuK+4dEMhiQEgxpxOKII=
-github.com/leodido/go-urn v1.2.1/go.mod h1:zt4jvISO2HfUBqxjfIshjdMTYS56ZS/qv49ictyFfxY=
-github.com/leodido/go-urn v1.2.4 h1:XlAE/cm/ms7TE/VMVoduSpNBoyc2dOxHs5MZSwAN63Q=
-github.com/leodido/go-urn v1.2.4/go.mod h1:7ZrI8mTSeBSHl/UaRyKQW1qZeMgak41ANeCNaVckg+4=
-github.com/lib/pq v1.0.0/go.mod h1:5WUZQaWbwv1U+lTReE5YruASi9Al49XbQIvNi/34Woo=
-github.com/lib/pq v1.9.0/go.mod h1:AlVN5x4E4T544tWzH6hKfbfQvm3HdbOxrmggDNAPY9o=
-github.com/lib/pq v1.10.1/go.mod h1:AlVN5x4E4T544tWzH6hKfbfQvm3HdbOxrmggDNAPY9o=
-github.com/lib/pq v1.10.7 h1:p7ZhMD+KsSRozJr34udlUrhboJwWAgCg34+/ZZNvZZw=
-github.com/lib/pq v1.10.7/go.mod h1:AlVN5x4E4T544tWzH6hKfbfQvm3HdbOxrmggDNAPY9o=
-github.com/loopfz/gadgeto v0.9.0/go.mod h1:S3tK5SXmKY3l39rUpPZw1B/iiy1CftV13QABFhj32Ss=
-github.com/loopfz/gadgeto v0.11.2 h1:kc7GoNcNgjQOZmA6nwS1jOC6yek9GoyDkxQ2vCwf63g=
-github.com/loopfz/gadgeto v0.11.2/go.mod h1:FFfb8QmTdEo+z9pvzTeTousO2ZoP81WInQg1aG42UOo=
-github.com/lufia/plan9stats v0.0.0-20211012122336-39d0f177ccd0 h1:6E+4a0GO5zZEnZ81pIr0yLvtUWk2if982qA3F3QD6H4=
-github.com/lufia/plan9stats v0.0.0-20211012122336-39d0f177ccd0/go.mod h1:zJYVVT2jmtg6P3p1VtQj7WsuWi/y4VnjVBn7F8KPB3I=
-github.com/lunixbochs/vtclean v0.0.0-20160125035106-4fbf7632a2c6/go.mod h1:pHhQNgMf3btfWnGBVipUOjRYhoOsdGqdm/+2c2E2WMI=
-github.com/magiconair/properties v1.8.0/go.mod h1:PppfXfuXeibc/6YijjN8zIbojt8czPbwD3XqdrwzmxQ=
-github.com/magiconair/properties v1.8.7 h1:IeQXZAiQcpL9mgcAe1Nu6cX9LLw6ExEHKjN0VQdvPDY=
-github.com/magiconair/properties v1.8.7/go.mod h1:Dhd985XPs7jluiymwWYZ0G4Z61jb3vdS329zhj2hYo0=
-github.com/masterzen/azure-sdk-for-go v3.2.0-beta.0.20161014135628-ee4f0065d00c+incompatible/go.mod h1:mf8fjOu33zCqxUjuiU3I8S1lJMyEAlH+0F2+M5xl3hE=
-github.com/masterzen/simplexml v0.0.0-20160608183007-4572e39b1ab9/go.mod h1:kCEbxUJlNDEBNbdQMkPSp6yaKcRXVI6f4ddk8Riv4bc=
-github.com/masterzen/winrm v0.0.0-20161014151040-7a535cd943fc/go.mod h1:CfZSN7zwz5gJiFhZJz49Uzk7mEBHIceWmbFmYx7Hf7E=
-github.com/masterzen/xmlpath v0.0.0-20140218185901-13f4951698ad/go.mod h1:A0zPC53iKKKcXYxr4ROjpQRQ5FgJXtelNdSmHHuq/tY=
-github.com/mattermost/xml-roundtrip-validator v0.0.0-20201213122252-bcd7e1b9601e h1:qqXczln0qwkVGcpQ+sQuPOVntt2FytYarXXxYSNJkgw=
-github.com/mattermost/xml-roundtrip-validator v0.0.0-20201213122252-bcd7e1b9601e/go.mod h1:qccnGMcpgwcNaBnxqpJpWWUiPNr5H3O8eDgGV9gT5To=
-github.com/mattn/go-colorable v0.0.6/go.mod h1:9vuHe8Xs5qXnSaW/c/ABM9alt+Vo+STaOChaDxuIBZU=
-github.com/mattn/go-colorable v0.1.4/go.mod h1:U0ppj6V5qS13XJ6of8GYAs25YV2eR4EVcfRqFIhoBtE=
-github.com/mattn/go-colorable v0.1.13 h1:fFA4WZxdEF4tXPZVKMLwD8oUnCTTo08duU7wxecdEvA=
-github.com/mattn/go-colorable v0.1.13/go.mod h1:7S9/ev0klgBDR4GtXTXX8a3vIGJpMovkB8vQcUbaXHg=
-github.com/mattn/go-isatty v0.0.0-20160806122752-66b8e73f3f5c/go.mod h1:M+lRXTBqGeGNdLjl/ufCoiOlB5xdOkqRJdNxMWT7Zi4=
-github.com/mattn/go-isatty v0.0.4/go.mod h1:M+lRXTBqGeGNdLjl/ufCoiOlB5xdOkqRJdNxMWT7Zi4=
-github.com/mattn/go-isatty v0.0.7/go.mod h1:Iq45c/XA43vh69/j3iqttzPXn0bhXyGjM0Hdxcsrc5s=
-github.com/mattn/go-isatty v0.0.8/go.mod h1:Iq45c/XA43vh69/j3iqttzPXn0bhXyGjM0Hdxcsrc5s=
-github.com/mattn/go-isatty v0.0.11/go.mod h1:PhnuNfih5lzO57/f3n+odYbM4JtupLOxQOAqxQCu2WE=
-github.com/mattn/go-isatty v0.0.12/go.mod h1:cbi8OIDigv2wuxKPP5vlRcQ1OAZbq2CE4Kysco4FUpU=
-github.com/mattn/go-isatty v0.0.14/go.mod h1:7GGIvUiUoEMVVmxf/4nioHXj79iQHKdU27kJ6hsGG94=
-github.com/mattn/go-isatty v0.0.16/go.mod h1:kYGgaQfpe5nmfYZH+SKPsOc2e4SrIfOl2e/yFXSvRLM=
-github.com/mattn/go-isatty v0.0.19 h1:JITubQf0MOLdlGRuRq+jtsDlekdYPia9ZFsB8h/APPA=
-github.com/mattn/go-isatty v0.0.19/go.mod h1:W+V8PltTTMOvKvAeJH7IuucS94S2C6jfK/D7dTCTo3Y=
-github.com/mattn/go-sqlite3 v1.9.0/go.mod h1:FPy6KqzDD04eiIsT53CuJW3U88zkxoIYsOqkbpncsNc=
-github.com/mattn/go-sqlite3 v1.10.0/go.mod h1:FPy6KqzDD04eiIsT53CuJW3U88zkxoIYsOqkbpncsNc=
-github.com/mattn/go-sqlite3 v1.14.6/go.mod h1:NyWgC/yNuGj7Q9rpYnZvas74GogHl5/Z4A/KQRfk6bU=
-github.com/mattn/go-sqlite3 v1.14.7/go.mod h1:NyWgC/yNuGj7Q9rpYnZvas74GogHl5/Z4A/KQRfk6bU=
-github.com/mattn/go-sqlite3 v2.0.3+incompatible h1:gXHsfypPkaMZrKbD5209QV9jbUTJKjyR5WD3HYQSd+U=
-github.com/mattn/go-sqlite3 v2.0.3+incompatible/go.mod h1:FPy6KqzDD04eiIsT53CuJW3U88zkxoIYsOqkbpncsNc=
-github.com/matttproud/golang_protobuf_extensions v1.0.1/go.mod h1:D8He9yQNgCq6Z5Ld7szi9bcBfOoFv/3dc6xSMkL2PC0=
-github.com/matttproud/golang_protobuf_extensions v1.0.4 h1:mmDVorXM7PCGKw94cs5zkfA9PSy5pEvNWRP0ET0TIVo=
-github.com/matttproud/golang_protobuf_extensions v1.0.4/go.mod h1:BSXmuO+STAnVfrANrmjBb36TMTDstsz7MSK+HVaYKv4=
-github.com/micromdm/micromdm v1.9.0 h1:FAsIKOpnGcq21UQCrHCUxZwSW4NwBLGOoUtzbURxds8=
-github.com/micromdm/micromdm v1.9.0/go.mod h1:YsAtsEvfEIwpjYTUPpWkJXSfH0hhp9mMHW1BgIZgRt8=
-github.com/micromdm/nanomdm v0.3.0 h1:njAC9+sQy9SpgyZhyVAJYzhRD7dt4pv7m9Z8wlUIY2o=
-github.com/micromdm/nanomdm v0.3.0/go.mod h1:03+qFjfaTE6Ye9QvrHfhCKgqjSVSeWzdfNHXCIFRrLg=
-github.com/miekg/dns v1.1.50 h1:DQUfb9uc6smULcREF09Uc+/Gd46YWqJd5DbpPE9xkcA=
-github.com/miekg/dns v1.1.50/go.mod h1:e3IlAVfNqAllflbibAZEWOXOQ+Ynzk/dDozDxY7XnME=
-github.com/mitchellh/go-homedir v1.1.0/go.mod h1:SfyaCUpYCn1Vlf4IUYiD9fPX4A5wJrkLzIz1N1q0pr0=
-github.com/mitchellh/go-ps v1.0.0 h1:i6ampVEEF4wQFF+bkYfwYgY+F/uYJDktmvLPf7qIgjc=
-github.com/mitchellh/go-ps v1.0.0/go.mod h1:J4lOc8z8yJs6vUwklHw2XEIiT4z4C40KtWVN3nvg8Pg=
-github.com/mitchellh/mapstructure v1.1.2/go.mod h1:FVVH3fgwuzCH5S8UJGiWEs2h04kUh9fWfEaFds41c1Y=
-github.com/mitchellh/mapstructure v1.5.0 h1:jeMsZIYE/09sWLaz43PL7Gy6RuMjD2eJVyuac5Z2hdY=
-github.com/mitchellh/mapstructure v1.5.0/go.mod h1:bFUtVrKA4DC2yAKiSyO/QUcy7e+RRV2QTWOzhPopBRo=
-github.com/mna/redisc v1.3.2 h1:sc9C+nj6qmrTFnsXb70xkjAHpXKtjjBuE6v2UcQV0ZE=
-github.com/mna/redisc v1.3.2/go.mod h1:CplIoaSTDi5h9icnj4FLbRgHoNKCHDNJDVRztWDGeSQ=
-github.com/modern-go/concurrent v0.0.0-20180228061459-e0a39a4cb421/go.mod h1:6dJC0mAP4ikYIbvyc7fijjWJddQyLn8Ig3JB5CqoB9Q=
-github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd h1:TRLaZ9cD/w8PVh93nsPXa1VrQ6jlwL5oN8l14QlcNfg=
-github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd/go.mod h1:6dJC0mAP4ikYIbvyc7fijjWJddQyLn8Ig3JB5CqoB9Q=
-github.com/modern-go/reflect2 v0.0.0-20180701023420-4b7aa43c6742/go.mod h1:bx2lNnkwVCuqBIxFjflWJWanXIb3RllmbCylyMrvgv0=
-github.com/modern-go/reflect2 v1.0.1/go.mod h1:bx2lNnkwVCuqBIxFjflWJWanXIb3RllmbCylyMrvgv0=
-github.com/modern-go/reflect2 v1.0.2 h1:xBagoLtFs94CBntxluKeaWgTMpvLxC4ur3nMaC9Gz0M=
-github.com/modern-go/reflect2 v1.0.2/go.mod h1:yWuevngMOJpCy52FWWMvUC8ws7m/LJsjYzDa0/r8luk=
-github.com/mwitkow/go-conntrack v0.0.0-20161129095857-cc309e4a2223/go.mod h1:qRWi+5nqEBWmkhHvq77mSJWrCKwh8bxhgT7d/eI7P4U=
-github.com/nelsam/hel/v2 v2.3.2/go.mod h1:1ZTGfU2PFTOd5mx22i5O0Lc2GY933lQ2wb/ggy+rL3w=
-github.com/ngrok/sqlmw v0.0.0-20211220175533-9d16fdc47b31 h1:FFHgfAIoAXCCL4xBoAugZVpekfGmZ/fBBueneUKBv7I=
-github.com/ngrok/sqlmw v0.0.0-20211220175533-9d16fdc47b31/go.mod h1:E26fwEtRNigBfFfHDWsklmo0T7Ixbg0XXgck+Hq4O9k=
-github.com/nu7hatch/gouuid v0.0.0-20131221200532-179d4d0c4d8d/go.mod h1:YUTz3bUH2ZwIWBy3CJBeOBEugqcmXREj14T+iG/4k4U=
-github.com/nukosuke/go-zendesk v0.13.1 h1:EdYpn+FxROLguADEJK5reOHcpysM8wyWPOWO96SIc0A=
-github.com/nukosuke/go-zendesk v0.13.1/go.mod h1:86Cg7RhSvPfOqZOtQXteJEV9yIQVQsy2HVDk++Yf3jA=
-github.com/oklog/ulid v0.3.0/go.mod h1:CirwcVhetQ6Lv90oh/F+FBtV6XMibvdAFo93nm5qn4U=
-github.com/oklog/ulid v1.3.1/go.mod h1:CirwcVhetQ6Lv90oh/F+FBtV6XMibvdAFo93nm5qn4U=
-github.com/onsi/ginkgo v1.6.0/go.mod h1:lLunBs/Ym6LB5Z9jYTR76FiuTmxDTDusOGeTQH+WWjE=
-github.com/onsi/ginkgo v1.10.1/go.mod h1:lLunBs/Ym6LB5Z9jYTR76FiuTmxDTDusOGeTQH+WWjE=
-github.com/onsi/gomega v1.7.0/go.mod h1:ex+gbHU/CVuBBDIJjb2X0qEXbFg53c61hWP/1CpauHY=
-github.com/open-policy-agent/opa v0.51.0 h1:2hS5xhos8HtkN+mgpqMhNJSFtn/1n/h3wh+AeTPJg6Q=
-github.com/open-policy-agent/opa v0.51.0/go.mod h1:OjmwLfXdeR7skSxrt8Yd3ScXTqPxyJn7GeTRJrcEerU=
-github.com/opencensus-integrations/ocsql v0.1.1/go.mod h1:ozPYpNVBHZsX33jfoQPO5TlI5lqh0/3R36kirEqJKAM=
-github.com/oschwald/geoip2-golang v1.8.0 h1:KfjYB8ojCEn/QLqsDU0AzrJ3R5Qa9vFlx3z6SLNcKTs=
-github.com/oschwald/geoip2-golang v1.8.0/go.mod h1:R7bRvYjOeaoenAp9sKRS8GX5bJWcZ0laWO5+DauEktw=
-github.com/oschwald/maxminddb-golang v1.10.0 h1:Xp1u0ZhqkSuopaKmk1WwHtjF0H9Hd9181uj2MQ5Vndg=
-github.com/oschwald/maxminddb-golang v1.10.0/go.mod h1:Y2ELenReaLAZ0b400URyGwvYxHV1dLIxBuyOsyYjHK0=
-github.com/pelletier/go-toml v1.2.0/go.mod h1:5z9KED0ma1S8pY6P1sdut58dfprrGBbd/94hg7ilaic=
-github.com/pelletier/go-toml/v2 v2.0.8 h1:0ctb6s9mE31h0/lhu+J6OPmVeDxJn+kYnJc2jZR9tGQ=
-github.com/pelletier/go-toml/v2 v2.0.8/go.mod h1:vuYfssBdrU2XDZ9bYydBu6t+6a6PYNcZljzZR9VXg+4=
-github.com/pingcap/errors v0.11.4 h1:lFuQV/oaUMGcD2tqt+01ROSmJs75VG1ToEOkZIZ4nE4=
-github.com/pingcap/errors v0.11.4/go.mod h1:Oi8TUi2kEtXXLMJk9l1cGmz20kV3TaQ0usTwv5KuLY8=
-github.com/pires/go-proxyproto v0.6.0/go.mod h1:Odh9VFOZJCf9G8cLW5o435Xf1J95Jw9Gw5rnCjcwzAY=
-github.com/pkg/diff v0.0.0-20210226163009-20ebb0f2a09e/go.mod h1:pJLUxLENpZxwdsKMEsNbx1VGcRFpLqf3715MtcvvzbA=
-github.com/pkg/errors v0.8.0/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0=
-github.com/pkg/errors v0.8.1/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0=
-github.com/pkg/errors v0.9.1 h1:FEBLx1zS214owpjy7qsBeixbURkuhQAwrK5UwLGTwt4=
-github.com/pkg/errors v0.9.1/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0=
-github.com/pkg/sftp v1.13.1/go.mod h1:3HaPG6Dq1ILlpPZRO0HVMrsydcdLt6HRDccSgb87qRg=
-github.com/pmezard/go-difflib v1.0.0 h1:4DBwDE0NGyQoBHbLQYPwSUPoCMWR5BEzIk/f1lZbAQM=
-github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=
-github.com/power-devops/perfstat v0.0.0-20210106213030-5aafc221ea8c h1:ncq/mPwQF4JjgDlrVEn3C11VoGHZN7m8qihwgMEtzYw=
-github.com/power-devops/perfstat v0.0.0-20210106213030-5aafc221ea8c/go.mod h1:OmDBASR4679mdNQnz2pUhc2G8CO2JrUAVFDRBDP/hJE=
-github.com/poy/onpar v0.0.0-20200406201722-06f95a1c68e8/go.mod h1:nSbFQvMj97ZyhFRSJYtut+msi4sOY6zJDGCdSc+/rZU=
-github.com/poy/onpar v1.1.2/go.mod h1:6X8FLNoxyr9kkmnlqpK6LSoiOtrO6MICtWwEuWkLjzg=
-github.com/prometheus/client_golang v0.9.1/go.mod h1:7SWBe2y4D6OKWSNQJUaRYU/AaXPKyh/dDVn+NZz0KFw=
-github.com/prometheus/client_golang v0.9.3/go.mod h1:/TN21ttK/J9q6uSwhBd54HahCDft0ttaMvbicHlPoso=
-github.com/prometheus/client_golang v1.15.0 h1:5fCgGYogn0hFdhyhLbw7hEsWxufKtY9klyvdNfFlFhM=
-github.com/prometheus/client_golang v1.15.0/go.mod h1:e9yaBhRPU2pPNsZwE+JdQl0KEt1N9XgF6zxWmaC0xOk=
-github.com/prometheus/client_model v0.0.0-20180712105110-5c3871d89910/go.mod h1:MbSGuTsp3dbXC40dX6PRTWyKYBIrTGTE9sqQNg2J8bo=
-github.com/prometheus/client_model v0.0.0-20190129233127-fd36f4220a90/go.mod h1:xMI15A0UPsDsEKsMN9yxemIoYk6Tm2C1GtYGdfGttqA=
-github.com/prometheus/client_model v0.0.0-20190812154241-14fe0d1b01d4/go.mod h1:xMI15A0UPsDsEKsMN9yxemIoYk6Tm2C1GtYGdfGttqA=
-github.com/prometheus/client_model v0.3.0 h1:UBgGFHqYdG/TPFD1B1ogZywDqEkwp3fBMvqdiQ7Xew4=
-github.com/prometheus/client_model v0.3.0/go.mod h1:LDGWKZIo7rky3hgvBe+caln+Dr3dPggB5dvjtD7w9+w=
-github.com/prometheus/common v0.0.0-20181113130724-41aa239b4cce/go.mod h1:daVV7qP5qjZbuso7PdcryaAu0sAZbrN9i7WWcTMWvro=
-github.com/prometheus/common v0.4.0/go.mod h1:TNfzLD0ON7rHzMJeJkieUDPYmFC7Snx/y86RQel1bk4=
-github.com/prometheus/common v0.42.0 h1:EKsfXEYo4JpWMHH5cg+KOUWeuJSov1Id8zGR8eeI1YM=
-github.com/prometheus/common v0.42.0/go.mod h1:xBwqVerjNdUDjgODMpudtOMwlOwf2SaTr1yjz4b7Zbc=
-github.com/prometheus/procfs v0.0.0-20181005140218-185b4288413d/go.mod h1:c3At6R/oaqEKCNdg8wHV1ftS6bRYblBhIjjI8uT2IGk=
-github.com/prometheus/procfs v0.0.0-20190425082905-87a4384529e0/go.mod h1:TjEm7ze935MbeOT/UhFTIMYKhuLP4wbCsTZCD3I8kEA=
-github.com/prometheus/procfs v0.0.0-20190507164030-5867b95ac084/go.mod h1:TjEm7ze935MbeOT/UhFTIMYKhuLP4wbCsTZCD3I8kEA=
-github.com/prometheus/procfs v0.7.3/go.mod h1:cz+aTbrPOrUb4q7XlbU9ygM+/jj0fzG6c1xBZuNvfVA=
-github.com/prometheus/procfs v0.9.0 h1:wzCHvIvM5SxWqYvwgVL7yJY8Lz3PKn49KQtpgMYJfhI=
-github.com/prometheus/procfs v0.9.0/go.mod h1:+pB4zwohETzFnmlpe6yd2lSc+0/46IYZRB/chUwxUZY=
-github.com/prometheus/tsdb v0.7.1/go.mod h1:qhTCs0VvXwvX/y3TZrWD7rabWM+ijKTux40TwIPHuXU=
-github.com/rcrowley/go-metrics v0.0.0-20201227073835-cf1acfcdf475 h1:N/ElC8H3+5XpJzTSTfLsJV/mx9Q9g7kxmchpfZyxgzM=
-github.com/rcrowley/go-metrics v0.0.0-20201227073835-cf1acfcdf475/go.mod h1:bCqnVzQkZxMG4s8nGwiZ5l3QUCyqpo9Y+/ZMZ9VjZe4=
-github.com/rogpeppe/fastuuid v0.0.0-20150106093220-6724a57986af/go.mod h1:XWv6SoW27p1b0cqNHllgS5HIMJraePCO15w5zCzIWYg=
-github.com/rogpeppe/fastuuid v1.2.0/go.mod h1:jVj6XXZzXRy/MSR5jhDC/2q6DgLz+nrA6LYCDYWNEvQ=
-github.com/rogpeppe/go-internal v1.3.0/go.mod h1:M8bDsm7K2OlrFYOpmOWEs/qY81heoFRclV5y23lUDJ4=
-github.com/rogpeppe/go-internal v1.6.1/go.mod h1:xXDCJY+GAPziupqXw64V24skbSoqbTEfhy4qGm1nDQc=
-github.com/rogpeppe/go-internal v1.8.0/go.mod h1:WmiCO8CzOY8rg0OYDC4/i/2WRWAB6poM+XZ2dLUbcbE=
-github.com/rogpeppe/go-internal v1.9.0 h1:73kH8U+JUqXU8lRuOHeVHaa/SZPifC7BkcraZVejAe8=
-github.com/rogpeppe/go-internal v1.9.0/go.mod h1:WtVeX8xhTBvf0smdhujwtBcq4Qrzq/fJaraNFVN+nFs=
-github.com/rs/xid v1.2.1/go.mod h1:+uKXf+4Djp6Md1KODXJxgGQPKngRmWyn10oCKFzNHOQ=
-github.com/rs/zerolog v1.20.0 h1:38k9hgtUBdxFwE34yS8rTHmHBa4eN16E4DJlv177LNs=
-github.com/rs/zerolog v1.20.0/go.mod h1:IzD0RJ65iWH0w97OQQebJEvTZYvsCUm9WVLWBQrJRjo=
-github.com/russellhaering/goxmldsig v1.2.0 h1:Y6GTTc9Un5hCxSzVz4UIWQ/zuVwDvzJk80guqzwx6Vg=
-github.com/russellhaering/goxmldsig v1.2.0/go.mod h1:gM4MDENBQf7M+V824SGfyIUVFWydB7n0KkEubVJl+Tw=
-github.com/russross/blackfriday/v2 v2.0.1/go.mod h1:+Rmxgy9KzJVeS9/2gXHxylqXiyQDYRxCVz55jmeOWTM=
-github.com/russross/blackfriday/v2 v2.1.0/go.mod h1:+Rmxgy9KzJVeS9/2gXHxylqXiyQDYRxCVz55jmeOWTM=
-github.com/santhosh-tekuri/jsonschema v1.2.4 h1:hNhW8e7t+H1vgY+1QeEQpveR6D4+OwKPXCfD2aieJis=
-github.com/santhosh-tekuri/jsonschema v1.2.4/go.mod h1:TEAUOeZSmIxTTuHatJzrvARHiuO9LYd+cIxzgEHCQI4=
-github.com/shirou/gopsutil/v3 v3.22.8 h1:a4s3hXogo5mE2PfdfJIonDbstO/P+9JszdfhAHSzD9Y=
-github.com/shirou/gopsutil/v3 v3.22.8/go.mod h1:s648gW4IywYzUfE/KjXxUsqrqx/T2xO5VqOXxONeRfI=
-github.com/shurcooL/sanitized_anchor_name v1.0.0/go.mod h1:1NzhyTcUVG4SuEtjjoZeVRXNmyL/1OwPU0+IJeTBvfc=
-github.com/sirupsen/logrus v1.2.0/go.mod h1:LxeOpSwHxABJmUn/MG1IvRgCAasNZTLOkJPxbbu5VWo=
-github.com/sirupsen/logrus v1.9.0 h1:trlNQbNUG3OdDrDil03MCb1H2o9nJ1x4/5LYw7byDE0=
-github.com/sirupsen/logrus v1.9.0/go.mod h1:naHLuLoDiP4jHNo9R0sCBMtWGeIprob74mVsIT4qYEQ=
-github.com/soheilhy/cmux v0.1.4/go.mod h1:IM3LyeVVIOuxMH7sFAkER9+bJ4dT7Ms6E4xg4kGIyLM=
-github.com/spaolacci/murmur3 v0.0.0-20180118202830-f09979ecbc72/go.mod h1:JwIasOWyU6f++ZhiEuf87xNszmSA2myDM2Kzu9HwQUA=
-github.com/spf13/afero v1.1.2/go.mod h1:j4pytiNVoe2o6bmDsKpLACNPDBIoEAkihy7loJ1B0CQ=
-github.com/spf13/afero v1.9.5 h1:stMpOSZFs//0Lv29HduCmli3GUfpFoF3Y1Q/aXj/wVM=
-github.com/spf13/afero v1.9.5/go.mod h1:UBogFpq8E9Hx+xc5CNTTEpTnuHVmXDwZcZcE1eb/UhQ=
-github.com/spf13/cast v1.3.0/go.mod h1:Qx5cxh0v+4UWYiBimWS+eyWzqEqokIECu5etghLkUJE=
-github.com/spf13/cast v1.5.0 h1:rj3WzYc11XZaIZMPKmwP96zkFEnnAmV8s6XbB2aY32w=
-github.com/spf13/cast v1.5.0/go.mod h1:SpXXQ5YoyJw6s3/6cMTQuxvgRl3PCJiyaX9p6b155UU=
-github.com/spf13/cobra v0.0.6/go.mod h1:/6GTrnGXV9HjY+aR4k0oJ5tcvakLuG6EuKReYlHNrgE=
-github.com/spf13/cobra v1.7.0 h1:hyqWnYt1ZQShIddO5kBpj3vu05/++x6tJ6dg8EC572I=
-github.com/spf13/cobra v1.7.0/go.mod h1:uLxZILRyS/50WlhOIKD7W6V5bgeIt+4sICxh6uRMrb0=
-github.com/spf13/jwalterweatherman v1.0.0/go.mod h1:cQK4TGJAtQXfYWX+Ddv3mKDzgVb68N+wFjFa4jdeBTo=
-github.com/spf13/jwalterweatherman v1.1.0 h1:ue6voC5bR5F8YxI5S67j9i582FU4Qvo2bmqnqMYADFk=
-github.com/spf13/jwalterweatherman v1.1.0/go.mod h1:aNWZUN0dPAAO/Ljvb5BEdw96iTZ0EXowPYD95IqWIGo=
-github.com/spf13/pflag v1.0.3/go.mod h1:DYY7MBk1bdzusC3SYhjObp+wFpr4gzcvqqNjLnInEg4=
-github.com/spf13/pflag v1.0.5 h1:iy+VFUOCP1a+8yFto/drg2CJ5u0yRoB7fZw3DKv/JXA=
-github.com/spf13/pflag v1.0.5/go.mod h1:McXfInJRrz4CZXVZOBLb0bTZqETkiAhM9Iw0y3An2Bg=
-github.com/spf13/viper v1.4.0/go.mod h1:PTJ7Z/lr49W6bUbkmS1V3by4uWynFiR9p7+dSq/yZzE=
-github.com/spf13/viper v1.15.0 h1:js3yy885G8xwJa6iOISGFwd+qlUo5AvyXb7CiihdtiU=
-github.com/spf13/viper v1.15.0/go.mod h1:fFcTBJxvhhzSJiZy8n+PeW6t8l+KeT/uTARa0jHOQLA=
-github.com/stretchr/objx v0.1.0/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME=
-github.com/stretchr/objx v0.1.1/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME=
-github.com/stretchr/objx v0.4.0/go.mod h1:YvHI0jy2hoMjB+UWwv71VJQ9isScKT/TqJzVSSt89Yw=
-github.com/stretchr/objx v0.5.0 h1:1zr/of2m5FGMsad5YfcqgdqdWrIhu+EBEJRhR1U7z/c=
-github.com/stretchr/objx v0.5.0/go.mod h1:Yh+to48EsGEfYuaHDzXPcE3xhTkx73EhmCGUpEOglKo=
-github.com/stretchr/testify v1.2.1/go.mod h1:a8OnRcib4nhh0OaRAV+Yts87kKdq0PP7pXfy6kDkUVs=
-github.com/stretchr/testify v1.2.2/go.mod h1:a8OnRcib4nhh0OaRAV+Yts87kKdq0PP7pXfy6kDkUVs=
-github.com/stretchr/testify v1.3.0/go.mod h1:M5WIy9Dh21IEIfnGCwXGc5bZfKNJtfHm1UVUgZn+9EI=
-github.com/stretchr/testify v1.4.0/go.mod h1:j7eGeouHqKxXV5pUuKE4zz7dFj8WfuZ+81PSLYec5m4=
-github.com/stretchr/testify v1.5.1/go.mod h1:5W2xD1RspED5o8YsWQXVCued0rvSQ+mT+I5cxcmMvtA=
-github.com/stretchr/testify v1.6.1/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/h/Wwjteg=
-github.com/stretchr/testify v1.7.0/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/h/Wwjteg=
-github.com/stretchr/testify v1.7.1/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/h/Wwjteg=
-github.com/stretchr/testify v1.8.0/go.mod h1:yNjHg4UonilssWZ8iaSj1OCr/vHnekPRkoO+kdMU+MU= -github.com/stretchr/testify v1.8.1/go.mod h1:w2LPCIKwWwSfY2zedu0+kehJoqGctiVI29o6fzry7u4= -github.com/stretchr/testify v1.8.2/go.mod h1:w2LPCIKwWwSfY2zedu0+kehJoqGctiVI29o6fzry7u4= -github.com/stretchr/testify v1.8.3/go.mod h1:sz/lmYIOXD/1dqDmKjjqLyZ2RngseejIcXlSw2iwfAo= -github.com/stretchr/testify v1.8.4 h1:CcVxjf3Q8PM0mHUKJCdn+eZZtm5yQwehR5yeSVQQcUk= -github.com/stretchr/testify v1.8.4/go.mod h1:sz/lmYIOXD/1dqDmKjjqLyZ2RngseejIcXlSw2iwfAo= -github.com/subosito/gotenv v1.4.2 h1:X1TuBLAMDFbaTAChgCBLu3DU3UPyELpnF2jjJ2cz/S8= -github.com/subosito/gotenv v1.4.2/go.mod h1:ayKnFf/c6rvx/2iiLrJUk1e6plDbT3edrFNGqEflhK0= -github.com/tchap/go-patricia/v2 v2.3.1 h1:6rQp39lgIYZ+MHmdEq4xzuk1t7OdC35z/xm0BGhTkes= -github.com/tchap/go-patricia/v2 v2.3.1/go.mod h1:VZRHKAb53DLaG+nA9EaYYiaEx6YztwDlLElMsnSHD4k= -github.com/throttled/throttled/v2 v2.8.0 h1:B5VfdM8BE+ClI2Ji238SbNOTWfYcocvuAhgT27lvwrE= -github.com/throttled/throttled/v2 v2.8.0/go.mod h1:q1QyZVQXxb2NUfJ+Hjucmlrsrz9s/jt2ilMwSMo7a2I= -github.com/tklauser/go-sysconf v0.3.10 h1:IJ1AZGZRWbY8T5Vfk04D9WOA5WSejdflXxP03OUqALw= -github.com/tklauser/go-sysconf v0.3.10/go.mod h1:C8XykCvCb+Gn0oNCWPIlcb0RuglQTYaQ2hGm7jmxEFk= -github.com/tklauser/numcpus v0.4.0 h1:E53Dm1HjH1/R2/aoCtXtPgzmElmn51aOkhCFSuZq//o= -github.com/tklauser/numcpus v0.4.0/go.mod h1:1+UI3pD8NW14VMwdgJNJ1ESk2UnwhAnz5hMwiKKqXCQ= -github.com/tmc/grpc-websocket-proxy v0.0.0-20190109142713-0ad062ec5ee5/go.mod h1:ncp9v5uamzpCO7NfCPTXjqaC+bZgJeR0sMTm6dMHP7U= -github.com/trivago/tgo v1.0.7 h1:uaWH/XIy9aWYWpjm2CU3RpcqZXmX2ysQ9/Go+d9gyrM= -github.com/trivago/tgo v1.0.7/go.mod h1:w4dpD+3tzNIIiIfkWWa85w5/B77tlvdZckQ+6PkFnhc= -github.com/twitchyliquid64/golang-asm v0.15.1 h1:SU5vSMR7hnwNxj24w34ZyCi/FmDZTkS4MhqMhdFk5YI= -github.com/twitchyliquid64/golang-asm v0.15.1/go.mod h1:a1lVb/DtPvCB8fslRZhAngC2+aY1QWCk3Cedj/Gdt08= -github.com/ugorji/go v1.1.2/go.mod 
h1:hnLbHMwcvSihnDhEfx2/BzKp2xb0Y+ErdfYcrs9tkJQ= -github.com/ugorji/go v1.1.4/go.mod h1:uQMGLiO92mf5W77hV/PUCpI3pbzQx3CRekS0kk+RGrc= -github.com/ugorji/go v1.1.7/go.mod h1:kZn38zHttfInRq0xu/PH0az30d+z6vm202qpg1oXVMw= -github.com/ugorji/go v1.2.6/go.mod h1:anCg0y61KIhDlPZmnH+so+RQbysYVyDko0IMgJv0Nn0= -github.com/ugorji/go/codec v0.0.0-20190128213124-ee1426cffec0/go.mod h1:iT03XoTwV7xq/+UGwKO3UbC1nNNlopQiY61beSdrtOA= -github.com/ugorji/go/codec v1.1.7/go.mod h1:Ax+UKWsSmolVDwsd+7N3ZtXu+yMGCf907BLYF3GoBXY= -github.com/ugorji/go/codec v1.2.6/go.mod h1:V6TCNZ4PHqoHGFZuSG1W8nrCzzdgA2DozYxWFFpvxTw= -github.com/ugorji/go/codec v1.2.11 h1:BMaWp1Bb6fHwEtbplGBGJ498wD+LKlNSl25MjdZY4dU= -github.com/ugorji/go/codec v1.2.11/go.mod h1:UNopzCgEMSXjBc6AOMqYvWC1ktqTAfzJZUZgYf6w6lg= -github.com/ulikunitz/xz v0.5.11 h1:kpFauv27b6ynzBNT/Xy+1k+fK4WswhN/6PN5WhFAGw8= -github.com/ulikunitz/xz v0.5.11/go.mod h1:nbz6k7qbPmH4IRqmfOplQw/tblSgqTqBwxkY0oWt/14= -github.com/wI2L/fizz v0.20.0 h1:zNYpU+JErl5PJanPEYUvn5YUn8Pv3K+kMtb86z4BPiU= -github.com/wI2L/fizz v0.20.0/go.mod h1:CMxMR1amz8id9wr2YUpONf+F/F9hW1cqRXxVNNuWVxE= -github.com/xeipuuv/gojsonpointer v0.0.0-20190905194746-02993c407bfb h1:zGWFAtiMcyryUHoUjUJX0/lt1H2+i2Ka2n+D3DImSNo= -github.com/xeipuuv/gojsonpointer v0.0.0-20190905194746-02993c407bfb/go.mod h1:N2zxlSyiKSe5eX1tZViRH5QA0qijqEDrYZiPEAiq3wU= -github.com/xeipuuv/gojsonreference v0.0.0-20180127040603-bd5ef7bd5415 h1:EzJWgHovont7NscjpAxXsDA8S8BMYve8Y5+7cuRE7R0= -github.com/xeipuuv/gojsonreference v0.0.0-20180127040603-bd5ef7bd5415/go.mod h1:GwrjFmJcFw6At/Gs6z4yjiIwzuJ1/+UwLxMQDVQXShQ= -github.com/xiang90/probing v0.0.0-20190116061207-43a291ad63a2/go.mod h1:UETIi67q53MR2AWcXfiuqkDkRtnGDLqkBTpCHuJHxtU= -github.com/xordataexchange/crypt v0.0.3-0.20170626215501-b2862e3d0a77/go.mod h1:aYKd//L2LvnjZzWKhF00oedf4jCCReLcmhLdhm1A27Q= -github.com/yashtewari/glob-intersection v0.1.0 h1:6gJvMYQlTDOL3dMsPF6J0+26vwX9MB8/1q3uAdhmTrg= -github.com/yashtewari/glob-intersection v0.1.0/go.mod 
h1:LK7pIC3piUjovexikBbJ26Yml7g8xa5bsjfx2v1fwok= -github.com/yuin/goldmark v1.1.25/go.mod h1:3hX8gzYuyVAZsxl0MRgGTJEmQBFcNTphYh9decYSb74= -github.com/yuin/goldmark v1.1.27/go.mod h1:3hX8gzYuyVAZsxl0MRgGTJEmQBFcNTphYh9decYSb74= -github.com/yuin/goldmark v1.1.32/go.mod h1:3hX8gzYuyVAZsxl0MRgGTJEmQBFcNTphYh9decYSb74= -github.com/yuin/goldmark v1.2.1/go.mod h1:3hX8gzYuyVAZsxl0MRgGTJEmQBFcNTphYh9decYSb74= -github.com/yuin/goldmark v1.3.5/go.mod h1:mwnBkeHKe2W/ZEtQ+71ViKU8L12m81fl3OWwC1Zlc8k= -github.com/yuin/goldmark v1.4.0/go.mod h1:mwnBkeHKe2W/ZEtQ+71ViKU8L12m81fl3OWwC1Zlc8k= -github.com/yuin/goldmark v1.4.1/go.mod h1:mwnBkeHKe2W/ZEtQ+71ViKU8L12m81fl3OWwC1Zlc8k= -github.com/yuin/goldmark v1.4.13/go.mod h1:6yULJ656Px+3vBD8DxQVa3kxgyrAnzto9xy5taEt/CY= -github.com/yusufpapurcu/wmi v1.2.2 h1:KBNDSne4vP5mbSWnJbO+51IMOXJB67QiYCSBrubbPRg= -github.com/yusufpapurcu/wmi v1.2.2/go.mod h1:SBZ9tNy3G9/m5Oi98Zks0QjeHVDvuK0qfxQmPyzfmi0= -github.com/ziutek/mymysql v1.5.4/go.mod h1:LMSpPZ6DbqWFxNCHW77HeMg9I646SAhApZ/wKdgO/C0= -go.elastic.co/apm/module/apmgin/v2 v2.1.0 h1:z7yaVOI9AB4UlrQ+Ml3k+wpUN5drarqojXopVK2c+FM= -go.elastic.co/apm/module/apmgin/v2 v2.1.0/go.mod h1:VO8iLbtSFCTEq3nsB29DHQ2Ks4dGdfdfSi1W1Y/hBIM= -go.elastic.co/apm/module/apmhttp/v2 v2.1.0 h1:3knDFopO6LmgrqY5z9HlmCaIG+PtM9HwZGhByFCCjh4= -go.elastic.co/apm/module/apmhttp/v2 v2.1.0/go.mod h1:cKGRK1snYy5Sl/zs0GD+msE9b/amcM0CWbZn8XXBa9s= -go.elastic.co/apm/v2 v2.1.0 h1:rkJSHE4ggekHhUR5v0KKkoMbrRSJN8YoBiEgQnkV1OY= -go.elastic.co/apm/v2 v2.1.0/go.mod h1:KGQn56LtRmkQjt2qw4+c1Jz8gv9rCBUU/m21uxrqcps= -go.elastic.co/fastjson v1.1.0 h1:3MrGBWWVIxe/xvsbpghtkFoPciPhOCmjsR/HfwEeQR4= -go.elastic.co/fastjson v1.1.0/go.mod h1:boNGISWMjQsUPy/t6yqt2/1Wx4YNPSe+mZjlyw9vKKI= -go.etcd.io/bbolt v1.3.2/go.mod h1:IbVyRI1SCnLcuJnV2u8VeU0CEYM7e686BmAb1XKL+uU= -go.mozilla.org/pkcs7 v0.0.0-20210826202110-33d05740a352 h1:CCriYyAfq1Br1aIYettdHZTy8mBTIPo7We18TuO/bak= -go.mozilla.org/pkcs7 v0.0.0-20210826202110-33d05740a352/go.mod 
h1:SNgMg+EgDFwmvSmLRTNKC5fegJjB7v23qTQ0XLGUNHk= -go.opencensus.io v0.21.0/go.mod h1:mSImk1erAIZhrmZN+AvHh14ztQfjbGwt4TtuofqLduU= -go.opencensus.io v0.22.0/go.mod h1:+kGneAE2xo2IficOXnaByMWTGM9T73dGwxeWcUqIpI8= -go.opencensus.io v0.22.1/go.mod h1:Ap50jQcDJrx6rB6VgeeFPtuPIf3wMRvRfrfYDO6+BmA= -go.opencensus.io v0.22.2/go.mod h1:yxeiOL68Rb0Xd1ddK5vPZ/oVn4vY4Ynel7k9FzqtOIw= -go.opencensus.io v0.22.3/go.mod h1:yxeiOL68Rb0Xd1ddK5vPZ/oVn4vY4Ynel7k9FzqtOIw= -go.opencensus.io v0.22.4/go.mod h1:yxeiOL68Rb0Xd1ddK5vPZ/oVn4vY4Ynel7k9FzqtOIw= -go.opencensus.io v0.22.5/go.mod h1:5pWMHQbX5EPX2/62yrJeAkowc+lfs/XD7Uxpq3pI6kk= -go.opencensus.io v0.24.0 h1:y73uSU6J157QMP2kn2r30vwW1A2W2WFwSCGnAVxeaD0= -go.opencensus.io v0.24.0/go.mod h1:vNK8G9p7aAivkbmorf4v+7Hgx+Zs0yY+0fOtgBfjQKo= -go.opentelemetry.io/contrib/instrumentation/github.com/gorilla/mux/otelmux v0.44.0 h1:QaNUlLvmettd1vnmFHrgBYQHearxWP3uO4h4F3pVtkM= -go.opentelemetry.io/contrib/instrumentation/github.com/gorilla/mux/otelmux v0.44.0/go.mod h1:cJu+5jZwoZfkBOECSFtBZK/O7h/pY5djn0fwnIGnQ4A= -go.opentelemetry.io/otel v1.3.0/go.mod h1:PWIKzi6JCp7sM0k9yZ43VX+T345uNbAkDKwHVjb2PTs= -go.opentelemetry.io/otel v1.18.0 h1:TgVozPGZ01nHyDZxK5WGPFB9QexeTMXEH7+tIClWfzs= -go.opentelemetry.io/otel v1.18.0/go.mod h1:9lWqYO0Db579XzVuCKFNPDl4s73Voa+zEck3wHaAYQI= -go.opentelemetry.io/otel/metric v1.18.0 h1:JwVzw94UYmbx3ej++CwLUQZxEODDj/pOuTCvzhtRrSQ= -go.opentelemetry.io/otel/metric v1.18.0/go.mod h1:nNSpsVDjWGfb7chbRLUNW+PBNdcSTHD4Uu5pfFMOI0k= -go.opentelemetry.io/otel/sdk v1.3.0/go.mod h1:rIo4suHNhQwBIPg9axF8V9CA72Wz2mKF1teNrup8yzs= -go.opentelemetry.io/otel/sdk v1.15.0 h1:jZTCkRRd08nxD6w7rIaZeDNGZGGQstH3SfLQ3ZsKICk= -go.opentelemetry.io/otel/sdk v1.15.0/go.mod h1:XDEMrYWzJ4YlC17i6Luih2lwDw2j6G0PkUfr1ZqE+rQ= -go.opentelemetry.io/otel/trace v1.3.0/go.mod h1:c/VDhno8888bvQYmbYLqe41/Ldmr/KKunbvWM4/fEjk= -go.opentelemetry.io/otel/trace v1.18.0 h1:NY+czwbHbmndxojTEKiSMHkG2ClNH2PwmcHrdo0JY10= -go.opentelemetry.io/otel/trace v1.18.0/go.mod 
h1:T2+SGJGuYZY3bjj5rgh/hN7KIrlpWC5nS8Mjvzckz+0= -go.opentelemetry.io/proto/otlp v0.7.0/go.mod h1:PqfVotwruBrMGOCsRd/89rSnXhoiJIqeYNgFYFoEGnI= -go.uber.org/atomic v1.4.0/go.mod h1:gD2HeocX3+yG+ygLZcrzQJaqmWj9AIm7n08wl/qW/PE= -go.uber.org/multierr v1.1.0/go.mod h1:wR5kodmAFQ0UK8QlbwjlSNy0Z68gJhDJUG5sjR94q/0= -go.uber.org/zap v1.10.0/go.mod h1:vwi/ZaCAaUcBkycHslxD9B2zi4UTXhF60s6SWpuDF0Q= -golang.org/x/arch v0.0.0-20210923205945-b76863e36670/go.mod h1:5om86z9Hs0C8fWVUuoMHwpExlXzs5Tkyp9hOrfG7pp8= -golang.org/x/arch v0.3.0 h1:02VY4/ZcO/gBOH6PUaoiptASxtXU10jazRCP865E97k= -golang.org/x/arch v0.3.0/go.mod h1:5om86z9Hs0C8fWVUuoMHwpExlXzs5Tkyp9hOrfG7pp8= -golang.org/x/crypto v0.0.0-20180214000028-650f4a345ab4/go.mod h1:6SG95UA2DQfeDnfUPMdvaQW0Q7yPrPDi9nlGo2tz2b4= -golang.org/x/crypto v0.0.0-20180904163835-0709b304e793/go.mod h1:6SG95UA2DQfeDnfUPMdvaQW0Q7yPrPDi9nlGo2tz2b4= -golang.org/x/crypto v0.0.0-20190308221718-c2843e01d9a2/go.mod h1:djNgcEr1/C05ACkg1iLfiJU5Ep61QUkGW8qpdssI0+w= -golang.org/x/crypto v0.0.0-20190325154230-a5d413f7728c/go.mod h1:djNgcEr1/C05ACkg1iLfiJU5Ep61QUkGW8qpdssI0+w= -golang.org/x/crypto v0.0.0-20190510104115-cbcb75029529/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI= -golang.org/x/crypto v0.0.0-20190605123033-f99c8df09eb5/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI= -golang.org/x/crypto v0.0.0-20191011191535-87dc89f01550/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI= -golang.org/x/crypto v0.0.0-20200622213623-75b288015ac9/go.mod h1:LzIPMQfyMNhhGPhUkYOs5KpL4U8rLKemX1yGLhDgUto= -golang.org/x/crypto v0.0.0-20200820211705-5c72a883971a/go.mod h1:LzIPMQfyMNhhGPhUkYOs5KpL4U8rLKemX1yGLhDgUto= -golang.org/x/crypto v0.0.0-20210421170649-83a5a9bb288b/go.mod h1:T9bdIzuCu7OtxOm1hfPfRQxPLYneinmdGuTeoZ9dtd4= -golang.org/x/crypto v0.0.0-20210711020723-a769d52b0f97/go.mod h1:GvvjBRRGRdwPK5ydBHafDWAxML/pGHZbMvKqRZ5+Abc= -golang.org/x/crypto v0.0.0-20210921155107-089bfa567519/go.mod h1:GvvjBRRGRdwPK5ydBHafDWAxML/pGHZbMvKqRZ5+Abc= 
-golang.org/x/crypto v0.0.0-20211215153901-e495a2d5b3d3/go.mod h1:IxCIyHEi3zRg3s0A5j5BB6A9Jmi73HwBIUl50j+osU4= -golang.org/x/crypto v0.0.0-20220314234659-1baeb1ce4c0b/go.mod h1:IxCIyHEi3zRg3s0A5j5BB6A9Jmi73HwBIUl50j+osU4= -golang.org/x/crypto v0.0.0-20220722155217-630584e8d5aa/go.mod h1:IxCIyHEi3zRg3s0A5j5BB6A9Jmi73HwBIUl50j+osU4= -golang.org/x/crypto v0.17.0 h1:r8bRNjWL3GshPW3gkd+RpvzWrZAwPS49OmTGZ/uhM4k= -golang.org/x/crypto v0.17.0/go.mod h1:gCAAfMLgwOJRpTjQ2zCCt2OcSfYMTeZVSRtQlPC7Nq4= -golang.org/x/exp v0.0.0-20190121172915-509febef88a4/go.mod h1:CJ0aWSM057203Lf6IL+f9T1iT9GByDxfZKAQTCR3kQA= -golang.org/x/exp v0.0.0-20190306152737-a1d7652674e8/go.mod h1:CJ0aWSM057203Lf6IL+f9T1iT9GByDxfZKAQTCR3kQA= -golang.org/x/exp v0.0.0-20190510132918-efd6b22b2522/go.mod h1:ZjyILWgesfNpC6sMxTJOJm9Kp84zZh5NQWvqDGG3Qr8= -golang.org/x/exp v0.0.0-20190829153037-c13cbed26979/go.mod h1:86+5VVa7VpoJ4kLfm080zCjGlMRFzhUhsZKEZO7MGek= -golang.org/x/exp v0.0.0-20191030013958-a1ab85dbe136/go.mod h1:JXzH8nQsPlswgeRAPE3MuO9GYsAcnJvJ4vnMwN/5qkY= -golang.org/x/exp v0.0.0-20191129062945-2f5052295587/go.mod h1:2RIsYlXP63K8oxa1u096TMicItID8zy7Y6sNkU49FU4= -golang.org/x/exp v0.0.0-20191227195350-da58074b4299/go.mod h1:2RIsYlXP63K8oxa1u096TMicItID8zy7Y6sNkU49FU4= -golang.org/x/exp v0.0.0-20200119233911-0405dc783f0a/go.mod h1:2RIsYlXP63K8oxa1u096TMicItID8zy7Y6sNkU49FU4= -golang.org/x/exp v0.0.0-20200207192155-f17229e696bd/go.mod h1:J/WKrq2StrnmMY6+EHIKF9dgMWnmCNThgcyBT1FY9mM= -golang.org/x/exp v0.0.0-20200224162631-6cc2880d07d6/go.mod h1:3jZMyOhIsHpP37uCMkUooju7aAi5cS1Q23tOzKc+0MU= -golang.org/x/exp v0.0.0-20230425010034-47ecfdc1ba53 h1:5llv2sWeaMSnA3w2kS57ouQQ4pudlXrR0dCgw51QK9o= -golang.org/x/exp v0.0.0-20230425010034-47ecfdc1ba53/go.mod h1:V1LtkGg67GoY2N1AnLN78QLrzxkLyJw7RJb1gzOOz9w= -golang.org/x/image v0.0.0-20190227222117-0694c2d4d067/go.mod h1:kZ7UVZpmo3dzQBMxlp+ypCbDeSB+sBbTgSJuh5dn5js= -golang.org/x/image v0.0.0-20190802002840-cff245a6509b/go.mod 
h1:FeLwcggjj3mMvU+oOTbSwawSJRM1uh48EjtB4UJZlP0= -golang.org/x/lint v0.0.0-20181026193005-c67002cb31c3/go.mod h1:UVdnD1Gm6xHRNCYTkRU2/jEulfH38KcIWyp/GAMgvoE= -golang.org/x/lint v0.0.0-20190227174305-5b3e6a55c961/go.mod h1:wehouNa3lNwaWXcvxsM5YxQ5yQlVC4a0KAMCusXpPoU= -golang.org/x/lint v0.0.0-20190301231843-5614ed5bae6f/go.mod h1:UVdnD1Gm6xHRNCYTkRU2/jEulfH38KcIWyp/GAMgvoE= -golang.org/x/lint v0.0.0-20190313153728-d0100b6bd8b3/go.mod h1:6SW0HCj/g11FgYtHlgUYUwCkIfeOF89ocIRzGO/8vkc= -golang.org/x/lint v0.0.0-20190409202823-959b441ac422/go.mod h1:6SW0HCj/g11FgYtHlgUYUwCkIfeOF89ocIRzGO/8vkc= -golang.org/x/lint v0.0.0-20190909230951-414d861bb4ac/go.mod h1:6SW0HCj/g11FgYtHlgUYUwCkIfeOF89ocIRzGO/8vkc= -golang.org/x/lint v0.0.0-20190930215403-16217165b5de/go.mod h1:6SW0HCj/g11FgYtHlgUYUwCkIfeOF89ocIRzGO/8vkc= -golang.org/x/lint v0.0.0-20191125180803-fdd1cda4f05f/go.mod h1:5qLYkcX4OjUUV8bRuDixDT3tpyyb+LUpUlRWLxfhWrs= -golang.org/x/lint v0.0.0-20200130185559-910be7a94367/go.mod h1:3xt1FjdF8hUf6vQPIChWIBhFzV8gjjsPE/fR3IyQdNY= -golang.org/x/lint v0.0.0-20200302205851-738671d3881b/go.mod h1:3xt1FjdF8hUf6vQPIChWIBhFzV8gjjsPE/fR3IyQdNY= -golang.org/x/lint v0.0.0-20201208152925-83fdc39ff7b5/go.mod h1:3xt1FjdF8hUf6vQPIChWIBhFzV8gjjsPE/fR3IyQdNY= -golang.org/x/lint v0.0.0-20210508222113-6edffad5e616/go.mod h1:3xt1FjdF8hUf6vQPIChWIBhFzV8gjjsPE/fR3IyQdNY= -golang.org/x/mobile v0.0.0-20190312151609-d3739f865fa6/go.mod h1:z+o9i4GpDbdi3rU15maQ/Ox0txvL9dWGYEHz965HBQE= -golang.org/x/mobile v0.0.0-20190719004257-d2bd2a29d028/go.mod h1:E/iHnbuqvinMTCcRqshq8CkpyQDoeVncDDYHnLhea+o= -golang.org/x/mod v0.0.0-20190513183733-4bf6d317e70e/go.mod h1:mXi4GBBbnImb6dmsKGUJ2LatrhH/nqhxcFungHvyanc= -golang.org/x/mod v0.1.0/go.mod h1:0QHyrYULN0/3qlju5TqG8bIK38QM8yzMo5ekMj3DlcY= -golang.org/x/mod v0.1.1-0.20191105210325-c90efee705ee/go.mod h1:QqPTAvyqsEbceGzBzNggFXnrqF1CaUcvgkdR5Ot7KZg= -golang.org/x/mod v0.1.1-0.20191107180719-034126e5016b/go.mod h1:QqPTAvyqsEbceGzBzNggFXnrqF1CaUcvgkdR5Ot7KZg= 
-golang.org/x/mod v0.2.0/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA= -golang.org/x/mod v0.3.0/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA= -golang.org/x/mod v0.4.0/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA= -golang.org/x/mod v0.4.1/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA= -golang.org/x/mod v0.4.2/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA= -golang.org/x/mod v0.5.1/go.mod h1:5OXOZSfqPIIbmVBIIKWRFfZjPR0E5r58TLhUjH0a2Ro= -golang.org/x/mod v0.6.0-dev.0.20220419223038-86c51ed26bb4/go.mod h1:jJ57K6gSWd91VN4djpZkiMVwK6gcyfeH4XE8wZrZaV4= -golang.org/x/mod v0.10.0 h1:lFO9qtOdlre5W1jxS3r/4szv2/6iXxScdzjoBMXNhYk= -golang.org/x/mod v0.10.0/go.mod h1:iBbtSCu2XBx23ZKBPSOrRkjjQPZFPuis4dIYUhu/chs= -golang.org/x/net v0.0.0-20170726083632-f5079bd7f6f7/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4= -golang.org/x/net v0.0.0-20180406214816-61147c48b25b/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4= -golang.org/x/net v0.0.0-20180724234803-3673e40ba225/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4= -golang.org/x/net v0.0.0-20180826012351-8a410e7b638d/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4= -golang.org/x/net v0.0.0-20180906233101-161cd47e91fd/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4= -golang.org/x/net v0.0.0-20181114220301-adae6a3d119a/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4= -golang.org/x/net v0.0.0-20181220203305-927f97764cc3/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4= -golang.org/x/net v0.0.0-20190108225652-1e06a53dbb7e/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4= -golang.org/x/net v0.0.0-20190213061140-3a22650c66bd/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4= -golang.org/x/net v0.0.0-20190311183353-d8887717615a/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg= -golang.org/x/net v0.0.0-20190404232315-eb5bcb51f2a3/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg= -golang.org/x/net v0.0.0-20190501004415-9ce7a6920f09/go.mod 
h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg= -golang.org/x/net v0.0.0-20190503192946-f4e77d36d62c/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg= -golang.org/x/net v0.0.0-20190522155817-f3200d17e092/go.mod h1:HSz+uSET+XFnRR8LxR5pz3Of3rY3CfYBVs4xY44aLks= -golang.org/x/net v0.0.0-20190603091049-60506f45cf65/go.mod h1:HSz+uSET+XFnRR8LxR5pz3Of3rY3CfYBVs4xY44aLks= -golang.org/x/net v0.0.0-20190620200207-3b0461eec859/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s= -golang.org/x/net v0.0.0-20190628185345-da137c7871d7/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s= -golang.org/x/net v0.0.0-20190724013045-ca1201d0de80/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s= -golang.org/x/net v0.0.0-20190827160401-ba9fcec4b297/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s= -golang.org/x/net v0.0.0-20190923162816-aa69164e4478/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s= -golang.org/x/net v0.0.0-20191009170851-d66e71096ffb/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s= -golang.org/x/net v0.0.0-20191209160850-c0dbc17a3553/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s= -golang.org/x/net v0.0.0-20200114155413-6afb5195e5aa/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s= -golang.org/x/net v0.0.0-20200202094626-16171245cfb2/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s= -golang.org/x/net v0.0.0-20200222125558-5a598a2470a0/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s= -golang.org/x/net v0.0.0-20200226121028-0de0cce0169b/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s= -golang.org/x/net v0.0.0-20200301022130-244492dfa37a/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s= -golang.org/x/net v0.0.0-20200324143707-d3edc9973b7e/go.mod h1:qpuaurCH72eLCgpAm/N6yyVIVM9cpaDIP3A8BGJEC5A= -golang.org/x/net v0.0.0-20200501053045-e0ff5e5a1de5/go.mod h1:qpuaurCH72eLCgpAm/N6yyVIVM9cpaDIP3A8BGJEC5A= -golang.org/x/net v0.0.0-20200506145744-7e3656a0809f/go.mod h1:qpuaurCH72eLCgpAm/N6yyVIVM9cpaDIP3A8BGJEC5A= 
-golang.org/x/net v0.0.0-20200513185701-a91f0712d120/go.mod h1:qpuaurCH72eLCgpAm/N6yyVIVM9cpaDIP3A8BGJEC5A= -golang.org/x/net v0.0.0-20200520182314-0ba52f642ac2/go.mod h1:qpuaurCH72eLCgpAm/N6yyVIVM9cpaDIP3A8BGJEC5A= -golang.org/x/net v0.0.0-20200625001655-4c5254603344/go.mod h1:/O7V0waA8r7cgGh81Ro3o1hOxt32SMVPicZroKQ2sZA= -golang.org/x/net v0.0.0-20200707034311-ab3426394381/go.mod h1:/O7V0waA8r7cgGh81Ro3o1hOxt32SMVPicZroKQ2sZA= -golang.org/x/net v0.0.0-20200822124328-c89045814202/go.mod h1:/O7V0waA8r7cgGh81Ro3o1hOxt32SMVPicZroKQ2sZA= -golang.org/x/net v0.0.0-20200904194848-62affa334b73/go.mod h1:/O7V0waA8r7cgGh81Ro3o1hOxt32SMVPicZroKQ2sZA= -golang.org/x/net v0.0.0-20201021035429-f5854403a974/go.mod h1:sp8m0HH+o8qH0wwXwYZr8TS3Oi6o0r6Gce1SSxlDquU= -golang.org/x/net v0.0.0-20201031054903-ff519b6c9102/go.mod h1:sp8m0HH+o8qH0wwXwYZr8TS3Oi6o0r6Gce1SSxlDquU= -golang.org/x/net v0.0.0-20201110031124-69a78807bb2b/go.mod h1:sp8m0HH+o8qH0wwXwYZr8TS3Oi6o0r6Gce1SSxlDquU= -golang.org/x/net v0.0.0-20201209123823-ac852fbbde11/go.mod h1:m0MpNAwzfU5UDzcl9v0D8zg8gWTRqZa9RBIspLL5mdg= -golang.org/x/net v0.0.0-20201224014010-6772e930b67b/go.mod h1:m0MpNAwzfU5UDzcl9v0D8zg8gWTRqZa9RBIspLL5mdg= -golang.org/x/net v0.0.0-20210226172049-e18ecbb05110/go.mod h1:m0MpNAwzfU5UDzcl9v0D8zg8gWTRqZa9RBIspLL5mdg= -golang.org/x/net v0.0.0-20210405180319-a5a99cb37ef4/go.mod h1:p54w0d4576C0XHj96bSt6lcn1PtDYWL6XObtHCRCNQM= -golang.org/x/net v0.0.0-20210726213435-c6fcb2dbf985/go.mod h1:9nx3DQGgdP8bBQD5qxJ1jj9UTztislL4KSBs9R2vV5Y= -golang.org/x/net v0.0.0-20210805182204-aaa1db679c0d/go.mod h1:9nx3DQGgdP8bBQD5qxJ1jj9UTztislL4KSBs9R2vV5Y= -golang.org/x/net v0.0.0-20211015210444-4f30a5c0130f/go.mod h1:9nx3DQGgdP8bBQD5qxJ1jj9UTztislL4KSBs9R2vV5Y= -golang.org/x/net v0.0.0-20211112202133-69e39bad7dc2/go.mod h1:9nx3DQGgdP8bBQD5qxJ1jj9UTztislL4KSBs9R2vV5Y= -golang.org/x/net v0.0.0-20220722155237-a158d28d115b/go.mod h1:XRhObCWvk6IyKnWLug+ECip1KBveYUHfp+8e9klMJ9c= -golang.org/x/net v0.1.0/go.mod 
h1:Cx3nUiGt4eDBEyega/BKRp+/AlGL8hYe7U9odMt2Cco= -golang.org/x/net v0.17.0 h1:pVaXccu2ozPjCXewfr1S7xza/zcXTity9cCdXQYSjIM= -golang.org/x/net v0.17.0/go.mod h1:NxSsAGuq816PNPmqtQdLE42eU2Fs7NoRIZrHJAlaCOE= -golang.org/x/oauth2 v0.0.0-20180821212333-d2e6202438be/go.mod h1:N/0e6XlmueqKjAGxoOufVs8QHGRruUQn6yWY3a++T0U= -golang.org/x/oauth2 v0.0.0-20190226205417-e64efc72b421/go.mod h1:gOpvHmFTYa4IltrdGE7lF6nIHvwfUNPOp7c8zoXwtLw= -golang.org/x/oauth2 v0.0.0-20190604053449-0f29369cfe45/go.mod h1:gOpvHmFTYa4IltrdGE7lF6nIHvwfUNPOp7c8zoXwtLw= -golang.org/x/oauth2 v0.0.0-20191202225959-858c2ad4c8b6/go.mod h1:gOpvHmFTYa4IltrdGE7lF6nIHvwfUNPOp7c8zoXwtLw= -golang.org/x/oauth2 v0.0.0-20200107190931-bf48bf16ab8d/go.mod h1:gOpvHmFTYa4IltrdGE7lF6nIHvwfUNPOp7c8zoXwtLw= -golang.org/x/oauth2 v0.0.0-20200902213428-5d25da1a8d43/go.mod h1:KelEdhl1UZF7XfJ4dDtk6s++YSgaE7mD/BuKKDLBl4A= -golang.org/x/oauth2 v0.0.0-20201109201403-9fd604954f58/go.mod h1:KelEdhl1UZF7XfJ4dDtk6s++YSgaE7mD/BuKKDLBl4A= -golang.org/x/oauth2 v0.0.0-20201208152858-08078c50e5b5/go.mod h1:KelEdhl1UZF7XfJ4dDtk6s++YSgaE7mD/BuKKDLBl4A= -golang.org/x/oauth2 v0.0.0-20210218202405-ba52d332ba99/go.mod h1:KelEdhl1UZF7XfJ4dDtk6s++YSgaE7mD/BuKKDLBl4A= -golang.org/x/oauth2 v0.8.0 h1:6dkIjl3j3LtZ/O3sTgZTMsLKSftL/B8Zgq4huOIIUu8= -golang.org/x/oauth2 v0.8.0/go.mod h1:yr7u4HXZRm1R1kBWqr/xKNqewf0plRYoB7sla+BCIXE= -golang.org/x/sync v0.0.0-20180314180146-1d60e4601c6f/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM= -golang.org/x/sync v0.0.0-20181108010431-42b317875d0f/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM= -golang.org/x/sync v0.0.0-20181221193216-37e7f081c4d4/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM= -golang.org/x/sync v0.0.0-20190227155943-e225da77a7e6/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM= -golang.org/x/sync v0.0.0-20190423024810-112230192c58/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM= -golang.org/x/sync v0.0.0-20190911185100-cd5d95a43a6e/go.mod 
h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM= -golang.org/x/sync v0.0.0-20200317015054-43a5402ce75a/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM= -golang.org/x/sync v0.0.0-20200625203802-6e8e738ad208/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM= -golang.org/x/sync v0.0.0-20201020160332-67f06af15bc9/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM= -golang.org/x/sync v0.0.0-20201207232520-09787c993a3a/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM= -golang.org/x/sync v0.0.0-20210220032951-036812b2e83c/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM= -golang.org/x/sync v0.0.0-20220722155255-886fb9371eb4/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM= -golang.org/x/sync v0.2.0 h1:PUR+T4wwASmuSTYdKjYHI5TD22Wy5ogLU5qZCOLxBrI= -golang.org/x/sync v0.2.0/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM= -golang.org/x/sys v0.0.0-20170728174421-0f826bdd13b5/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY= -golang.org/x/sys v0.0.0-20180830151530-49385e6e1522/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY= -golang.org/x/sys v0.0.0-20180905080454-ebe1bf3edb33/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY= -golang.org/x/sys v0.0.0-20180909124046-d0be0721c37e/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY= -golang.org/x/sys v0.0.0-20181107165924-66b7b1311ac8/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY= -golang.org/x/sys v0.0.0-20181116152217-5ac8a444bdc5/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY= -golang.org/x/sys v0.0.0-20190215142949-d0b11bdaac8a/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY= -golang.org/x/sys v0.0.0-20190222072716-a9d3bda3a223/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY= -golang.org/x/sys v0.0.0-20190312061237-fead79001313/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= -golang.org/x/sys v0.0.0-20190412213103-97732733099d/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= -golang.org/x/sys v0.0.0-20190502145724-3ef323f4f1fd/go.mod 
h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= -golang.org/x/sys v0.0.0-20190507160741-ecd444e8653b/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= -golang.org/x/sys v0.0.0-20190529164535-6a60838ec259/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= -golang.org/x/sys v0.0.0-20190606165138-5da285871e9c/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= -golang.org/x/sys v0.0.0-20190624142023-c5567b49c5d0/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= -golang.org/x/sys v0.0.0-20190726091711-fc99dfbffb4e/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= -golang.org/x/sys v0.0.0-20190813064441-fde4db37ae7a/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= -golang.org/x/sys v0.0.0-20190904154756-749cb33beabd/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= -golang.org/x/sys v0.0.0-20190916202348-b4ddaad3f8a3/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= -golang.org/x/sys v0.0.0-20191001151750-bb3f8db39f24/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= -golang.org/x/sys v0.0.0-20191010194322-b09406accb47/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= -golang.org/x/sys v0.0.0-20191025021431-6c3a3bfe00ae/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= -golang.org/x/sys v0.0.0-20191026070338-33540a1f6037/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= -golang.org/x/sys v0.0.0-20191204072324-ce4227a45e2e/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= -golang.org/x/sys v0.0.0-20191228213918-04cbcbbfeed8/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= -golang.org/x/sys v0.0.0-20200113162924-86b910548bc1/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= -golang.org/x/sys v0.0.0-20200116001909-b77594299b42/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= -golang.org/x/sys v0.0.0-20200122134326-e047566fdf82/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= -golang.org/x/sys v0.0.0-20200202164722-d101bd2416d5/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= 
-golang.org/x/sys v0.0.0-20200212091648-12a6c2dcc1e4/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= -golang.org/x/sys v0.0.0-20200223170610-d5e6a3e2c0ae/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= -golang.org/x/sys v0.0.0-20200302150141-5c8b2ff67527/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= -golang.org/x/sys v0.0.0-20200323222414-85ca7c5b95cd/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= -golang.org/x/sys v0.0.0-20200331124033-c3d80250170d/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= -golang.org/x/sys v0.0.0-20200501052902-10377860bb8e/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= -golang.org/x/sys v0.0.0-20200511232937-7e40ca221e25/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= -golang.org/x/sys v0.0.0-20200515095857-1151b9dac4a9/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= -golang.org/x/sys v0.0.0-20200523222454-059865788121/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= -golang.org/x/sys v0.0.0-20200803210538-64077c9b5642/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= -golang.org/x/sys v0.0.0-20200905004654-be1d3432aa8f/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= -golang.org/x/sys v0.0.0-20200930185726-fdedc70b468f/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= -golang.org/x/sys v0.0.0-20201119102817-f84b799fce68/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= -golang.org/x/sys v0.0.0-20201201145000-ef89a241ccb3/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= -golang.org/x/sys v0.0.0-20201204225414-ed752295db88/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= -golang.org/x/sys v0.0.0-20210104204734-6f8348627aad/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= -golang.org/x/sys v0.0.0-20210119212857-b64e53b001e4/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= -golang.org/x/sys v0.0.0-20210124154548-22da62e12c0c/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= -golang.org/x/sys 
v0.0.0-20210225134936-a50acf3fe073/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= -golang.org/x/sys v0.0.0-20210320140829-1e4c9ba3b0c4/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= -golang.org/x/sys v0.0.0-20210330210617-4fbd30eecc44/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= -golang.org/x/sys v0.0.0-20210423082822-04245dca01da/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= -golang.org/x/sys v0.0.0-20210423185535-09eb48e85fd7/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= -golang.org/x/sys v0.0.0-20210510120138-977fb7262007/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= -golang.org/x/sys v0.0.0-20210615035016-665e8c7367d1/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= -golang.org/x/sys v0.0.0-20210630005230-0f9fa26af87c/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= -golang.org/x/sys v0.0.0-20210806184541-e5e7981a1069/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= -golang.org/x/sys v0.0.0-20210809222454-d867a43fc93e/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= -golang.org/x/sys v0.0.0-20211019181941-9d821ace8654/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= -golang.org/x/sys v0.0.0-20211102192858-4dd72447c267/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= -golang.org/x/sys v0.0.0-20211216021012-1d35b9e2eb4e/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= -golang.org/x/sys v0.0.0-20220128215802-99c3d69c2c27/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= -golang.org/x/sys v0.0.0-20220209214540-3681064d5158/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= -golang.org/x/sys v0.0.0-20220330033206-e17cdc41300f/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= -golang.org/x/sys v0.0.0-20220520151302-bc2c85ada10a/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= -golang.org/x/sys v0.0.0-20220704084225-05e143d24a9e/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= -golang.org/x/sys v0.0.0-20220722155257-8c9f86f7a55f/go.mod 
h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= -golang.org/x/sys v0.0.0-20220811171246-fbc7d0a398ab/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= -golang.org/x/sys v0.0.0-20220908164124-27713097b956/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= -golang.org/x/sys v0.1.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= -golang.org/x/sys v0.6.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= -golang.org/x/sys v0.15.0 h1:h48lPFYpsTvQJZF4EKyI4aLHaev3CxivZmv7yZig9pc= -golang.org/x/sys v0.15.0/go.mod h1:/VUhepiaJMQUp4+oa/7Zr1D23ma6VTLIYjOOTFZPUcA= -golang.org/x/term v0.0.0-20201126162022-7de9c90e9dd1/go.mod h1:bj7SfCRtBDWHUb9snDiAeCFNEtKQo2Wmx5Cou7ajbmo= -golang.org/x/term v0.0.0-20210220032956-6a3ed077a48d/go.mod h1:bj7SfCRtBDWHUb9snDiAeCFNEtKQo2Wmx5Cou7ajbmo= -golang.org/x/term v0.0.0-20210927222741-03fcf44c2211/go.mod h1:jbD1KX2456YbFQfuXm/mYQcufACuNUgVhRMnK/tPxf8= -golang.org/x/term v0.1.0/go.mod h1:jbD1KX2456YbFQfuXm/mYQcufACuNUgVhRMnK/tPxf8= -golang.org/x/text v0.0.0-20170915032832-14c0d48ead0c/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ= -golang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ= -golang.org/x/text v0.3.1-0.20180807135948-17ff2d5776d2/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ= -golang.org/x/text v0.3.2/go.mod h1:bEr9sfX3Q8Zfm5fL9x+3itogRgK3+ptLWKqgva+5dAk= -golang.org/x/text v0.3.3/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ= -golang.org/x/text v0.3.4/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ= -golang.org/x/text v0.3.6/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ= -golang.org/x/text v0.3.7/go.mod h1:u+2+/6zg+i71rQMx5EYifcz6MCKuco9NR6JIITiCfzQ= -golang.org/x/text v0.3.8/go.mod h1:E6s5w1FMmriuDzIBO73fBruAKo1PCIq6d2Q6DHfQ8WQ= -golang.org/x/text v0.4.0/go.mod h1:mrYo+phRRbMaCq/xk9113O4dZlRixOauAjOtrjsXDZ8= -golang.org/x/text v0.14.0 h1:ScX5w1eTa3QqT8oi6+ziP7dTV1S2+ALU0bI+0zXKWiQ= -golang.org/x/text v0.14.0/go.mod 
h1:18ZOQIKpY8NJVqYksKHtTdi31H5itFRjB5/qKTNYzSU= -golang.org/x/time v0.0.0-20181108054448-85acf8d2951c/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ= -golang.org/x/time v0.0.0-20190308202827-9d24e82272b4/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ= -golang.org/x/time v0.0.0-20191024005414-555d28b269f0/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ= -golang.org/x/tools v0.0.0-20180221164845-07fd8470d635/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ= -golang.org/x/tools v0.0.0-20180917221912-90fa682c2a6e/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ= -golang.org/x/tools v0.0.0-20190114222345-bf090417da8b/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ= -golang.org/x/tools v0.0.0-20190226205152-f727befe758c/go.mod h1:9Yl7xja0Znq3iFh3HoIrodX9oNMXvdceNzlUR8zjMvY= -golang.org/x/tools v0.0.0-20190311212946-11955173bddd/go.mod h1:LCzVGOaR6xXOjkQ3onu1FJEFr0SW1gC7cKk1uF8kGRs= -golang.org/x/tools v0.0.0-20190312151545-0bb0c0a6e846/go.mod h1:LCzVGOaR6xXOjkQ3onu1FJEFr0SW1gC7cKk1uF8kGRs= -golang.org/x/tools v0.0.0-20190312170243-e65039ee4138/go.mod h1:LCzVGOaR6xXOjkQ3onu1FJEFr0SW1gC7cKk1uF8kGRs= -golang.org/x/tools v0.0.0-20190425150028-36563e24a262/go.mod h1:RgjU9mgBXZiqYHBnxXauZ1Gv1EHHAz9KjViQ78xBX0Q= -golang.org/x/tools v0.0.0-20190506145303-2d16b83fe98c/go.mod h1:RgjU9mgBXZiqYHBnxXauZ1Gv1EHHAz9KjViQ78xBX0Q= -golang.org/x/tools v0.0.0-20190524140312-2c0ae7006135/go.mod h1:RgjU9mgBXZiqYHBnxXauZ1Gv1EHHAz9KjViQ78xBX0Q= -golang.org/x/tools v0.0.0-20190606124116-d0a3d012864b/go.mod h1:/rFqwRUd4F7ZHNgwSSTFct+R/Kf4OFW1sUzUTQQTgfc= -golang.org/x/tools v0.0.0-20190621195816-6e04913cbbac/go.mod h1:/rFqwRUd4F7ZHNgwSSTFct+R/Kf4OFW1sUzUTQQTgfc= -golang.org/x/tools v0.0.0-20190628153133-6cdbf07be9d0/go.mod h1:/rFqwRUd4F7ZHNgwSSTFct+R/Kf4OFW1sUzUTQQTgfc= -golang.org/x/tools v0.0.0-20190816200558-6889da9d5479/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo= -golang.org/x/tools v0.0.0-20190828213141-aed303cbaa74/go.mod 
h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo= -golang.org/x/tools v0.0.0-20190911174233-4f2ddba30aff/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo= -golang.org/x/tools v0.0.0-20191012152004-8de300cfc20a/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo= -golang.org/x/tools v0.0.0-20191113191852-77e3bb0ad9e7/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo= -golang.org/x/tools v0.0.0-20191115202509-3a792d9c32b2/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo= -golang.org/x/tools v0.0.0-20191119224855-298f0cb1881e/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo= -golang.org/x/tools v0.0.0-20191125144606-a911d9008d1f/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo= -golang.org/x/tools v0.0.0-20191130070609-6e064ea0cf2d/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo= -golang.org/x/tools v0.0.0-20191216173652-a0e659d51361/go.mod h1:TB2adYChydJhpapKDTa4BR/hXlZSLoq2Wpct/0txZ28= -golang.org/x/tools v0.0.0-20191227053925-7b8e75db28f4/go.mod h1:TB2adYChydJhpapKDTa4BR/hXlZSLoq2Wpct/0txZ28= -golang.org/x/tools v0.0.0-20200117161641-43d50277825c/go.mod h1:TB2adYChydJhpapKDTa4BR/hXlZSLoq2Wpct/0txZ28= -golang.org/x/tools v0.0.0-20200122220014-bf1340f18c4a/go.mod h1:TB2adYChydJhpapKDTa4BR/hXlZSLoq2Wpct/0txZ28= -golang.org/x/tools v0.0.0-20200130002326-2f3ba24bd6e7/go.mod h1:TB2adYChydJhpapKDTa4BR/hXlZSLoq2Wpct/0txZ28= -golang.org/x/tools v0.0.0-20200204074204-1cc6d1ef6c74/go.mod h1:TB2adYChydJhpapKDTa4BR/hXlZSLoq2Wpct/0txZ28= -golang.org/x/tools v0.0.0-20200207183749-b753a1ba74fa/go.mod h1:TB2adYChydJhpapKDTa4BR/hXlZSLoq2Wpct/0txZ28= -golang.org/x/tools v0.0.0-20200212150539-ea181f53ac56/go.mod h1:TB2adYChydJhpapKDTa4BR/hXlZSLoq2Wpct/0txZ28= -golang.org/x/tools v0.0.0-20200224181240-023911ca70b2/go.mod h1:TB2adYChydJhpapKDTa4BR/hXlZSLoq2Wpct/0txZ28= -golang.org/x/tools v0.0.0-20200227222343-706bc42d1f0d/go.mod h1:TB2adYChydJhpapKDTa4BR/hXlZSLoq2Wpct/0txZ28= -golang.org/x/tools v0.0.0-20200304193943-95d2e580d8eb/go.mod 
h1:o4KQGtdN14AW+yjsvvwRTJJuXz8XRtIHtEnmAXLyFUw= -golang.org/x/tools v0.0.0-20200312045724-11d5b4c81c7d/go.mod h1:o4KQGtdN14AW+yjsvvwRTJJuXz8XRtIHtEnmAXLyFUw= -golang.org/x/tools v0.0.0-20200313205530-4303120df7d8/go.mod h1:Sl4aGygMT6LrqrWclx+PTx3U+LnKx/seiNR+3G19Ar8= -golang.org/x/tools v0.0.0-20200331025713-a30bf2db82d4/go.mod h1:Sl4aGygMT6LrqrWclx+PTx3U+LnKx/seiNR+3G19Ar8= -golang.org/x/tools v0.0.0-20200501065659-ab2804fb9c9d/go.mod h1:EkVYQZoAsY45+roYkvgYkIh4xh/qjgUK9TdY2XT94GE= -golang.org/x/tools v0.0.0-20200509030707-2212a7e161a5/go.mod h1:EkVYQZoAsY45+roYkvgYkIh4xh/qjgUK9TdY2XT94GE= -golang.org/x/tools v0.0.0-20200512131952-2bc93b1c0c88/go.mod h1:EkVYQZoAsY45+roYkvgYkIh4xh/qjgUK9TdY2XT94GE= -golang.org/x/tools v0.0.0-20200515010526-7d3b6ebf133d/go.mod h1:EkVYQZoAsY45+roYkvgYkIh4xh/qjgUK9TdY2XT94GE= -golang.org/x/tools v0.0.0-20200618134242-20370b0cb4b2/go.mod h1:EkVYQZoAsY45+roYkvgYkIh4xh/qjgUK9TdY2XT94GE= -golang.org/x/tools v0.0.0-20200729194436-6467de6f59a7/go.mod h1:njjCfa9FT2d7l9Bc6FUM5FLjQPp3cFF28FI3qnDFljA= -golang.org/x/tools v0.0.0-20200804011535-6c149bb5ef0d/go.mod h1:njjCfa9FT2d7l9Bc6FUM5FLjQPp3cFF28FI3qnDFljA= -golang.org/x/tools v0.0.0-20200825202427-b303f430e36d/go.mod h1:njjCfa9FT2d7l9Bc6FUM5FLjQPp3cFF28FI3qnDFljA= -golang.org/x/tools v0.0.0-20200904185747-39188db58858/go.mod h1:Cj7w3i3Rnn0Xh82ur9kSqwfTHTeVxaDqrfMjpcNT6bE= -golang.org/x/tools v0.0.0-20201110124207-079ba7bd75cd/go.mod h1:emZCQorbCU4vsT4fOWvOPXz4eW1wZW4PmDk9uLelYpA= -golang.org/x/tools v0.0.0-20201201161351-ac6f37ff4c2a/go.mod h1:emZCQorbCU4vsT4fOWvOPXz4eW1wZW4PmDk9uLelYpA= -golang.org/x/tools v0.0.0-20201208233053-a543418bbed2/go.mod h1:emZCQorbCU4vsT4fOWvOPXz4eW1wZW4PmDk9uLelYpA= -golang.org/x/tools v0.0.0-20210105154028-b0ab187a4818/go.mod h1:emZCQorbCU4vsT4fOWvOPXz4eW1wZW4PmDk9uLelYpA= -golang.org/x/tools v0.0.0-20210108195828-e2f9c7f1fc8e/go.mod h1:emZCQorbCU4vsT4fOWvOPXz4eW1wZW4PmDk9uLelYpA= -golang.org/x/tools v0.1.0/go.mod h1:xkSsbof2nBLbhDlRMhhhyNLN/zl3eTqcnHD5viDpcZ0= 
-golang.org/x/tools v0.1.1/go.mod h1:o0xws9oXOQQZyjljx8fwUC0k7L1pTE6eaCbjGeHmOkk= -golang.org/x/tools v0.1.6-0.20210726203631-07bc1bf47fb2/go.mod h1:o0xws9oXOQQZyjljx8fwUC0k7L1pTE6eaCbjGeHmOkk= -golang.org/x/tools v0.1.7/go.mod h1:LGqMHiF4EqQNHR1JncWGqT5BVaXmza+X+BDGol+dOxo= -golang.org/x/tools v0.1.9/go.mod h1:nABZi5QlRsZVlzPpHl034qft6wpY4eDcsTt5AaioBiU= -golang.org/x/tools v0.1.12/go.mod h1:hNGJHUnrk76NpqgfD5Aqm5Crs+Hm0VOH/i9J2+nxYbc= -golang.org/x/tools v0.9.0 h1:CtBMYmb33qYal6XpayZzNXlyK/3FpZV8bDq4CZo57b8= -golang.org/x/tools v0.9.0/go.mod h1:owI94Op576fPu3cIGQeHs3joujW/2Oc6MtlxbF5dfNc= -golang.org/x/xerrors v0.0.0-20190717185122-a985d3407aa7/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0= -golang.org/x/xerrors v0.0.0-20191011141410-1b5146add898/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0= -golang.org/x/xerrors v0.0.0-20191204190536-9bdfabe68543/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0= -golang.org/x/xerrors v0.0.0-20200804184101-5ec99f83aff1/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0= -google.golang.org/api v0.4.0/go.mod h1:8k5glujaEP+g9n7WNsDg8QP6cUVNI86fCNMcbazEtwE= -google.golang.org/api v0.7.0/go.mod h1:WtwebWUNSVBH/HAw79HIFXZNqEvBhG+Ra+ax0hx3E3M= -google.golang.org/api v0.8.0/go.mod h1:o4eAsZoiT+ibD93RtjEohWalFOjRDx6CVaqeizhEnKg= -google.golang.org/api v0.9.0/go.mod h1:o4eAsZoiT+ibD93RtjEohWalFOjRDx6CVaqeizhEnKg= -google.golang.org/api v0.13.0/go.mod h1:iLdEw5Ide6rF15KTC1Kkl0iskquN2gFfn9o9XIsbkAI= -google.golang.org/api v0.14.0/go.mod h1:iLdEw5Ide6rF15KTC1Kkl0iskquN2gFfn9o9XIsbkAI= -google.golang.org/api v0.15.0/go.mod h1:iLdEw5Ide6rF15KTC1Kkl0iskquN2gFfn9o9XIsbkAI= -google.golang.org/api v0.17.0/go.mod h1:BwFmGc8tA3vsd7r/7kR8DY7iEEGSU04BFxCo5jP/sfE= -google.golang.org/api v0.18.0/go.mod h1:BwFmGc8tA3vsd7r/7kR8DY7iEEGSU04BFxCo5jP/sfE= -google.golang.org/api v0.19.0/go.mod h1:BwFmGc8tA3vsd7r/7kR8DY7iEEGSU04BFxCo5jP/sfE= -google.golang.org/api v0.20.0/go.mod h1:BwFmGc8tA3vsd7r/7kR8DY7iEEGSU04BFxCo5jP/sfE= 
-google.golang.org/api v0.22.0/go.mod h1:BwFmGc8tA3vsd7r/7kR8DY7iEEGSU04BFxCo5jP/sfE= -google.golang.org/api v0.24.0/go.mod h1:lIXQywCXRcnZPGlsd8NbLnOjtAoL6em04bJ9+z0MncE= -google.golang.org/api v0.28.0/go.mod h1:lIXQywCXRcnZPGlsd8NbLnOjtAoL6em04bJ9+z0MncE= -google.golang.org/api v0.29.0/go.mod h1:Lcubydp8VUV7KeIHD9z2Bys/sm/vGKnG1UHuDBSrHWM= -google.golang.org/api v0.30.0/go.mod h1:QGmEvQ87FHZNiUVJkT14jQNYJ4ZJjdRF23ZXz5138Fc= -google.golang.org/api v0.35.0/go.mod h1:/XrVsuzM0rZmrsbjJutiuftIzeuTQcEeaYcSk/mQ1dg= -google.golang.org/api v0.36.0/go.mod h1:+z5ficQTmoYpPn8LCUNVpK5I7hwkpjbcgqA7I34qYtE= -google.golang.org/api v0.40.0/go.mod h1:fYKFpnQN0DsDSKRVRcQSDQNtqWPfM9i+zNPxepjRCQ8= -google.golang.org/api v0.121.0 h1:8Oopoo8Vavxx6gt+sgs8s8/X60WBAtKQq6JqnkF+xow= -google.golang.org/api v0.121.0/go.mod h1:gcitW0lvnyWjSp9nKxAbdHKIZ6vF4aajGueeslZOyms= -google.golang.org/appengine v1.1.0/go.mod h1:EbEs0AVv82hx2wNQdGPgUI5lhzA/G0D9YwlJXL52JkM= -google.golang.org/appengine v1.4.0/go.mod h1:xpcJRLb0r/rnEns0DIKYYv+WjYCduHsrkT7/EB5XEv4= -google.golang.org/appengine v1.5.0/go.mod h1:xpcJRLb0r/rnEns0DIKYYv+WjYCduHsrkT7/EB5XEv4= -google.golang.org/appengine v1.6.1/go.mod h1:i06prIuMbXzDqacNJfV5OdTW448YApPu5ww/cMBSeb0= -google.golang.org/appengine v1.6.2/go.mod h1:i06prIuMbXzDqacNJfV5OdTW448YApPu5ww/cMBSeb0= -google.golang.org/appengine v1.6.5/go.mod h1:8WjMMxjGQR8xUklV/ARdw2HLXBOI7O7uCIDZVag1xfc= -google.golang.org/appengine v1.6.6/go.mod h1:8WjMMxjGQR8xUklV/ARdw2HLXBOI7O7uCIDZVag1xfc= -google.golang.org/appengine v1.6.7 h1:FZR1q0exgwxzPzp/aF+VccGrSfxfPpkBqjIIEq3ru6c= -google.golang.org/appengine v1.6.7/go.mod h1:8WjMMxjGQR8xUklV/ARdw2HLXBOI7O7uCIDZVag1xfc= -google.golang.org/genproto v0.0.0-20180817151627-c66870c02cf8/go.mod h1:JiN7NxoALGmiZfu7CAH4rXhgtRTLTxftemlI0sWmxmc= -google.golang.org/genproto v0.0.0-20190307195333-5fe7a883aa19/go.mod h1:VzzqZJRnGkLBvHegQrXjBqPurQTc5/KpmUdxsrq26oE= -google.golang.org/genproto v0.0.0-20190418145605-e7d98fc518a7/go.mod 
h1:VzzqZJRnGkLBvHegQrXjBqPurQTc5/KpmUdxsrq26oE= -google.golang.org/genproto v0.0.0-20190425155659-357c62f0e4bb/go.mod h1:VzzqZJRnGkLBvHegQrXjBqPurQTc5/KpmUdxsrq26oE= -google.golang.org/genproto v0.0.0-20190502173448-54afdca5d873/go.mod h1:VzzqZJRnGkLBvHegQrXjBqPurQTc5/KpmUdxsrq26oE= -google.golang.org/genproto v0.0.0-20190801165951-fa694d86fc64/go.mod h1:DMBHOl98Agz4BDEuKkezgsaosCRResVns1a3J2ZsMNc= -google.golang.org/genproto v0.0.0-20190819201941-24fa4b261c55/go.mod h1:DMBHOl98Agz4BDEuKkezgsaosCRResVns1a3J2ZsMNc= -google.golang.org/genproto v0.0.0-20190911173649-1774047e7e51/go.mod h1:IbNlFCBrqXvoKpeg0TB2l7cyZUmoaFKYIwrEpbDKLA8= -google.golang.org/genproto v0.0.0-20191108220845-16a3f7862a1a/go.mod h1:n3cpQtvxv34hfy77yVDNjmbRyujviMdxYliBSkLhpCc= -google.golang.org/genproto v0.0.0-20191115194625-c23dd37a84c9/go.mod h1:n3cpQtvxv34hfy77yVDNjmbRyujviMdxYliBSkLhpCc= -google.golang.org/genproto v0.0.0-20191216164720-4f79533eabd1/go.mod h1:n3cpQtvxv34hfy77yVDNjmbRyujviMdxYliBSkLhpCc= -google.golang.org/genproto v0.0.0-20191230161307-f3c370f40bfb/go.mod h1:n3cpQtvxv34hfy77yVDNjmbRyujviMdxYliBSkLhpCc= -google.golang.org/genproto v0.0.0-20200115191322-ca5a22157cba/go.mod h1:n3cpQtvxv34hfy77yVDNjmbRyujviMdxYliBSkLhpCc= -google.golang.org/genproto v0.0.0-20200122232147-0452cf42e150/go.mod h1:n3cpQtvxv34hfy77yVDNjmbRyujviMdxYliBSkLhpCc= -google.golang.org/genproto v0.0.0-20200204135345-fa8e72b47b90/go.mod h1:GmwEX6Z4W5gMy59cAlVYjN9JhxgbQH6Gn+gFDQe2lzA= -google.golang.org/genproto v0.0.0-20200212174721-66ed5ce911ce/go.mod h1:55QSHmfGQM9UVYDPBsyGGes0y52j32PQ3BqQfXhyH3c= -google.golang.org/genproto v0.0.0-20200224152610-e50cd9704f63/go.mod h1:55QSHmfGQM9UVYDPBsyGGes0y52j32PQ3BqQfXhyH3c= -google.golang.org/genproto v0.0.0-20200228133532-8c2c7df3a383/go.mod h1:55QSHmfGQM9UVYDPBsyGGes0y52j32PQ3BqQfXhyH3c= -google.golang.org/genproto v0.0.0-20200305110556-506484158171/go.mod h1:55QSHmfGQM9UVYDPBsyGGes0y52j32PQ3BqQfXhyH3c= -google.golang.org/genproto 
v0.0.0-20200312145019-da6875a35672/go.mod h1:55QSHmfGQM9UVYDPBsyGGes0y52j32PQ3BqQfXhyH3c= -google.golang.org/genproto v0.0.0-20200331122359-1ee6d9798940/go.mod h1:55QSHmfGQM9UVYDPBsyGGes0y52j32PQ3BqQfXhyH3c= -google.golang.org/genproto v0.0.0-20200430143042-b979b6f78d84/go.mod h1:55QSHmfGQM9UVYDPBsyGGes0y52j32PQ3BqQfXhyH3c= -google.golang.org/genproto v0.0.0-20200511104702-f5ebc3bea380/go.mod h1:55QSHmfGQM9UVYDPBsyGGes0y52j32PQ3BqQfXhyH3c= -google.golang.org/genproto v0.0.0-20200513103714-09dca8ec2884/go.mod h1:55QSHmfGQM9UVYDPBsyGGes0y52j32PQ3BqQfXhyH3c= -google.golang.org/genproto v0.0.0-20200515170657-fc4c6c6a6587/go.mod h1:YsZOwe1myG/8QRHRsmBRE1LrgQY60beZKjly0O1fX9U= -google.golang.org/genproto v0.0.0-20200526211855-cb27e3aa2013/go.mod h1:NbSheEEYHJ7i3ixzK3sjbqSGDJWnxyFXZblF3eUsNvo= -google.golang.org/genproto v0.0.0-20200618031413-b414f8b61790/go.mod h1:jDfRM7FcilCzHH/e9qn6dsT145K34l5v+OpcnNgKAAA= -google.golang.org/genproto v0.0.0-20200729003335-053ba62fc06f/go.mod h1:FWY/as6DDZQgahTzZj3fqbO1CbirC29ZNUFHwi0/+no= -google.golang.org/genproto v0.0.0-20200804131852-c06518451d9c/go.mod h1:FWY/as6DDZQgahTzZj3fqbO1CbirC29ZNUFHwi0/+no= -google.golang.org/genproto v0.0.0-20200825200019-8632dd797987/go.mod h1:FWY/as6DDZQgahTzZj3fqbO1CbirC29ZNUFHwi0/+no= -google.golang.org/genproto v0.0.0-20200904004341-0bd0a958aa1d/go.mod h1:FWY/as6DDZQgahTzZj3fqbO1CbirC29ZNUFHwi0/+no= -google.golang.org/genproto v0.0.0-20201109203340-2640f1f9cdfb/go.mod h1:FWY/as6DDZQgahTzZj3fqbO1CbirC29ZNUFHwi0/+no= -google.golang.org/genproto v0.0.0-20201201144952-b05cb90ed32e/go.mod h1:FWY/as6DDZQgahTzZj3fqbO1CbirC29ZNUFHwi0/+no= -google.golang.org/genproto v0.0.0-20201210142538-e3217bee35cc/go.mod h1:FWY/as6DDZQgahTzZj3fqbO1CbirC29ZNUFHwi0/+no= -google.golang.org/genproto v0.0.0-20201214200347-8c77b98c765d/go.mod h1:FWY/as6DDZQgahTzZj3fqbO1CbirC29ZNUFHwi0/+no= -google.golang.org/genproto v0.0.0-20210108203827-ffc7fda8c3d7/go.mod h1:FWY/as6DDZQgahTzZj3fqbO1CbirC29ZNUFHwi0/+no= 
-google.golang.org/genproto v0.0.0-20210226172003-ab064af71705/go.mod h1:FWY/as6DDZQgahTzZj3fqbO1CbirC29ZNUFHwi0/+no= -google.golang.org/genproto v0.0.0-20230410155749-daa745c078e1 h1:KpwkzHKEF7B9Zxg18WzOa7djJ+Ha5DzthMyZYQfEn2A= -google.golang.org/genproto v0.0.0-20230410155749-daa745c078e1/go.mod h1:nKE/iIaLqn2bQwXBg8f1g2Ylh6r5MN5CmZvuzZCgsCU= -google.golang.org/grpc v1.19.0/go.mod h1:mqu4LbDTu4XGKhr4mRzUsmM4RtVoemTSY81AxZiDr8c= -google.golang.org/grpc v1.20.1/go.mod h1:10oTOabMzJvdu6/UiuZezV6QK5dSlG84ov/aaiqXj38= -google.golang.org/grpc v1.21.0/go.mod h1:oYelfM1adQP15Ek0mdvEgi9Df8B9CZIaU1084ijfRaM= -google.golang.org/grpc v1.21.1/go.mod h1:oYelfM1adQP15Ek0mdvEgi9Df8B9CZIaU1084ijfRaM= -google.golang.org/grpc v1.23.0/go.mod h1:Y5yQAOtifL1yxbo5wqy6BxZv8vAUGQwXBOALyacEbxg= -google.golang.org/grpc v1.25.1/go.mod h1:c3i+UQWmh7LiEpx4sFZnkU36qjEYZ0imhYfXVyQciAY= -google.golang.org/grpc v1.26.0/go.mod h1:qbnxyOmOxrQa7FizSgH+ReBfzJrCY1pSN7KXBS8abTk= -google.golang.org/grpc v1.27.0/go.mod h1:qbnxyOmOxrQa7FizSgH+ReBfzJrCY1pSN7KXBS8abTk= -google.golang.org/grpc v1.27.1/go.mod h1:qbnxyOmOxrQa7FizSgH+ReBfzJrCY1pSN7KXBS8abTk= -google.golang.org/grpc v1.28.0/go.mod h1:rpkK4SK4GF4Ach/+MFLZUBavHOvF2JJB5uozKKal+60= -google.golang.org/grpc v1.29.1/go.mod h1:itym6AZVZYACWQqET3MqgPpjcuV5QH3BxFS3IjizoKk= -google.golang.org/grpc v1.30.0/go.mod h1:N36X2cJ7JwdamYAgDz+s+rVMFjt3numwzf/HckM8pak= -google.golang.org/grpc v1.31.0/go.mod h1:N36X2cJ7JwdamYAgDz+s+rVMFjt3numwzf/HckM8pak= -google.golang.org/grpc v1.31.1/go.mod h1:N36X2cJ7JwdamYAgDz+s+rVMFjt3numwzf/HckM8pak= -google.golang.org/grpc v1.33.1/go.mod h1:fr5YgcSWrqhRRxogOsw7RzIpsmvOZ6IcH4kBYTpR3n0= -google.golang.org/grpc v1.33.2/go.mod h1:JMHMWHQWaTccqQQlmk3MJZS+GWXOdAesneDmEnv2fbc= -google.golang.org/grpc v1.34.0/go.mod h1:WotjhfgOW/POjDeRt8vscBtXq+2VjORFy659qA51WJ8= -google.golang.org/grpc v1.35.0/go.mod h1:qjiiYl8FncCW8feJPdyg3v6XW24KsRHe+dy9BAGRRjU= -google.golang.org/grpc v1.36.0/go.mod h1:qjiiYl8FncCW8feJPdyg3v6XW24KsRHe+dy9BAGRRjU= 
-google.golang.org/grpc v1.45.0/go.mod h1:lN7owxKUQEqMfSyQikvvk5tf/6zMPsrK+ONuO11+0rQ= -google.golang.org/grpc v1.56.3 h1:8I4C0Yq1EjstUzUJzpcRVbuYA2mODtEmpWiQoN/b2nc= -google.golang.org/grpc v1.56.3/go.mod h1:I9bI3vqKfayGqPUAwGdOSu7kt6oIJLixfffKrpXqQ9s= -google.golang.org/protobuf v0.0.0-20200109180630-ec00e32a8dfd/go.mod h1:DFci5gLYBciE7Vtevhsrf46CRTquxDuWsQurQQe4oz8= -google.golang.org/protobuf v0.0.0-20200221191635-4d8936d0db64/go.mod h1:kwYJMbMJ01Woi6D6+Kah6886xMZcty6N08ah7+eCXa0= -google.golang.org/protobuf v0.0.0-20200228230310-ab0ca4ff8a60/go.mod h1:cfTl7dwQJ+fmap5saPgwCLgHXTUD7jkjRqWcaiX5VyM= -google.golang.org/protobuf v1.20.1-0.20200309200217-e05f789c0967/go.mod h1:A+miEFZTKqfCUM6K7xSMQL9OKL/b6hQv+e19PK+JZNE= -google.golang.org/protobuf v1.21.0/go.mod h1:47Nbq4nVaFHyn7ilMalzfO3qCViNmqZ2kzikPIcrTAo= -google.golang.org/protobuf v1.22.0/go.mod h1:EGpADcykh3NcUnDUJcl1+ZksZNG86OlYog2l/sGQquU= -google.golang.org/protobuf v1.23.0/go.mod h1:EGpADcykh3NcUnDUJcl1+ZksZNG86OlYog2l/sGQquU= -google.golang.org/protobuf v1.23.1-0.20200526195155-81db48ad09cc/go.mod h1:EGpADcykh3NcUnDUJcl1+ZksZNG86OlYog2l/sGQquU= -google.golang.org/protobuf v1.24.0/go.mod h1:r/3tXBNzIEhYS9I1OUVjXDlt8tc493IdKGjtUeSXeh4= -google.golang.org/protobuf v1.25.0/go.mod h1:9JNX74DMeImyA3h4bdi1ymwjUzf21/xIlbajtzgsN7c= -google.golang.org/protobuf v1.26.0-rc.1/go.mod h1:jlhhOSvTdKEhbULTjvd4ARK9grFBp09yW+WbY/TyQbw= -google.golang.org/protobuf v1.26.0/go.mod h1:9q0QmTI4eRPtz6boOQmLYwt+qCgq0jsYwAQnmE0givc= -google.golang.org/protobuf v1.27.1/go.mod h1:9q0QmTI4eRPtz6boOQmLYwt+qCgq0jsYwAQnmE0givc= -google.golang.org/protobuf v1.30.0 h1:kPPoIgf3TsEvrm0PFe15JQ+570QVxYzEvvHqChK+cng= -google.golang.org/protobuf v1.30.0/go.mod h1:HV8QOd/L58Z+nl8r43ehVNZIU/HEI6OcFqwMG9pJV4I= -gopkg.in/alecthomas/kingpin.v2 v2.2.6/go.mod h1:FMv+mEhP44yOT+4EoQTLFTRgOQ1FBLkstjWtayDeSgw= -gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0= -gopkg.in/check.v1 
v1.0.0-20160105164936-4f90aeace3a2/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0= -gopkg.in/check.v1 v1.0.0-20180628173108-788fd7840127/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0= -gopkg.in/check.v1 v1.0.0-20190902080502-41f04d3bba15/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0= -gopkg.in/check.v1 v1.0.0-20201130134442-10cb98267c6c h1:Hei/4ADfdWqJk1ZMxUNpqntNwaWcugrBjAiHlqqRiVk= -gopkg.in/check.v1 v1.0.0-20201130134442-10cb98267c6c/go.mod h1:JHkPIbrfpd72SG/EVd6muEfDQjcINNoR0C8j2r3qZ4Q= -gopkg.in/errgo.v1 v1.0.0-20161222125816-442357a80af5/go.mod h1:u0ALmqvLRxLI95fkdCEWrE6mhWYZW1aMOJHp5YXLHTg= -gopkg.in/errgo.v2 v2.1.0/go.mod h1:hNsd1EY+bozCKY1Ytp96fpM3vjJbqLJn88ws8XvfDNI= -gopkg.in/fsnotify.v1 v1.4.7/go.mod h1:Tz8NjZHkW78fSQdbUxIjBTcgA1z1m8ZHf0WmKUhAMys= -gopkg.in/go-playground/assert.v1 v1.2.1/go.mod h1:9RXL0bg/zibRAgZUYszZSwO/z8Y/a8bDuhia5mkpMnE= -gopkg.in/go-playground/validator.v8 v8.18.2/go.mod h1:RX2a/7Ha8BgOhfk7j780h4/u/RRjR0eouCJSH80/M2Y= -gopkg.in/go-playground/validator.v9 v9.26.0/go.mod h1:+c9/zcJMFNgbLvly1L1V+PpxWdVbfP1avr/N00E2vyQ= -gopkg.in/go-playground/validator.v9 v9.30.0/go.mod h1:+c9/zcJMFNgbLvly1L1V+PpxWdVbfP1avr/N00E2vyQ= -gopkg.in/guregu/null.v3 v3.5.0 h1:xTcasT8ETfMcUHn0zTvIYtQud/9Mx5dJqD554SZct0o= -gopkg.in/guregu/null.v3 v3.5.0/go.mod h1:E4tX2Qe3h7QdL+uZ3a0vqvYwKQsRSQKM5V4YltdgH9Y= -gopkg.in/httprequest.v1 v1.1.1/go.mod h1:/CkavNL+g3qLOrpFHVrEx4NKepeqR4XTZWNj4sGGjz0= -gopkg.in/ini.v1 v1.67.0 h1:Dgnx+6+nfE+IfzjUEISNeydPJh9AXNNsWbGP9KzCsOA= -gopkg.in/ini.v1 v1.67.0/go.mod h1:pNLf8WUiyNEtQjuu5G5vTm06TEv9tsIgeAvK8hOrP4k= -gopkg.in/mgo.v2 v2.0.0-20160818015218-f2b6f6c918c4/go.mod h1:yeKp02qBN3iKW1OzL3MGk2IdtZzaj7SFntXj72NppTA= -gopkg.in/mgo.v2 v2.0.0-20190816093944-a6b53ec6cb22/go.mod h1:yeKp02qBN3iKW1OzL3MGk2IdtZzaj7SFntXj72NppTA= -gopkg.in/natefinch/lumberjack.v2 v2.0.0 h1:1Lc07Kr7qY4U2YPouBjpCLxpiyxIVoxqXgkXLknAOE8= -gopkg.in/natefinch/lumberjack.v2 v2.0.0/go.mod h1:l0ndWWf7gzL7RNwBG7wST/UCcT4T24xpD6X8LsfU/+k= 
-gopkg.in/resty.v1 v1.12.0/go.mod h1:mDo4pnntr5jdWRML875a/NmxYqAlA73dVijT2AXvQQo= -gopkg.in/tomb.v1 v1.0.0-20141024135613-dd632973f1e7/go.mod h1:dt/ZhP58zS4L8KSrWDmTeBkI65Dw0HsyUHuEVlX15mw= -gopkg.in/tomb.v2 v2.0.0-20161208151619-d5d1b5820637/go.mod h1:BHsqpu/nsuzkT5BpiH1EMZPLyqSMM8JbIavyFACoFNk= -gopkg.in/yaml.v1 v1.0.0-20140924161607-9f9df34309c0/go.mod h1:WDnlLJ4WF5VGsH/HVa3CI79GS0ol3YnhVnKP89i0kNg= -gopkg.in/yaml.v2 v2.0.0-20170712054546-1be3d31502d6/go.mod h1:JAlM8MvJe8wmxCU4Bli9HhUf9+ttbYbLASfIpnQbh74= -gopkg.in/yaml.v2 v2.0.0-20170812160011-eb3733d160e7/go.mod h1:JAlM8MvJe8wmxCU4Bli9HhUf9+ttbYbLASfIpnQbh74= -gopkg.in/yaml.v2 v2.2.1/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI= -gopkg.in/yaml.v2 v2.2.2/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI= -gopkg.in/yaml.v2 v2.2.3/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI= -gopkg.in/yaml.v2 v2.2.7/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI= -gopkg.in/yaml.v2 v2.2.8/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI= -gopkg.in/yaml.v2 v2.3.0/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI= -gopkg.in/yaml.v2 v2.4.0 h1:D8xgwECY7CYvx+Y2n4sBz93Jn9JRvxdiyyo8CTfuKaY= -gopkg.in/yaml.v2 v2.4.0/go.mod h1:RDklbk79AGWmwhnvt/jBztapEOGDOx6ZbXqjP6csGnQ= -gopkg.in/yaml.v3 v3.0.0-20200313102051-9f266ea9e77c/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM= -gopkg.in/yaml.v3 v3.0.0-20210107192922-496545a6307b/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM= -gopkg.in/yaml.v3 v3.0.1 h1:fxVm/GzAzEWqLHuvctI91KS9hhNmmWOoWu0XTYJS7CA= -gopkg.in/yaml.v3 v3.0.1/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM= -honnef.co/go/tools v0.0.0-20190102054323-c2f93a96b099/go.mod h1:rf3lG4BRIbNafJWhAfAdb/ePZxsR/4RtNHQocxwk9r4= -honnef.co/go/tools v0.0.0-20190106161140-3f1c8253044a/go.mod h1:rf3lG4BRIbNafJWhAfAdb/ePZxsR/4RtNHQocxwk9r4= -honnef.co/go/tools v0.0.0-20190418001031-e561f6794a2a/go.mod h1:rf3lG4BRIbNafJWhAfAdb/ePZxsR/4RtNHQocxwk9r4= -honnef.co/go/tools 
v0.0.0-20190523083050-ea95bdfd59fc/go.mod h1:rf3lG4BRIbNafJWhAfAdb/ePZxsR/4RtNHQocxwk9r4= -honnef.co/go/tools v0.0.1-2019.2.3/go.mod h1:a3bituU0lyd329TUQxRnasdCoJDkEUEAqEt0JzvZhAg= -honnef.co/go/tools v0.0.1-2020.1.3/go.mod h1:X/FiERA/W4tHapMX5mGpAtMSVEeEUOyHaw9vFzvIQ3k= -honnef.co/go/tools v0.0.1-2020.1.4/go.mod h1:X/FiERA/W4tHapMX5mGpAtMSVEeEUOyHaw9vFzvIQ3k= -howett.net/plist v0.0.0-20181124034731-591f970eefbb/go.mod h1:vMygbs4qMhSZSc4lCUl2OEE+rDiIIJAIdR4m7MiMcm0= -howett.net/plist v1.0.0 h1:7CrbWYbPPO/PyNy38b2EB/+gYbjCe2DXBxgtOOZbSQM= -howett.net/plist v1.0.0/go.mod h1:lqaXoTrLY4hg8tnEzNru53gicrbv7rrk+2xJA/7hw9g= -launchpad.net/gocheck v0.0.0-20140225173054-000000000087/go.mod h1:hj7XX3B/0A+80Vse0e+BUHsHMTEhd0O4cpUHr/e/BUM= -launchpad.net/xmlpath v0.0.0-20130614043138-000000000004/go.mod h1:vqyExLOM3qBx7mvYRkoxjSCF945s0mbe7YynlKYXtsA= -rsc.io/binaryregexp v0.2.0/go.mod h1:qTv7/COck+e2FymRvadv62gMdZztPaShugOCi3I+8D8= -rsc.io/pdf v0.1.1/go.mod h1:n8OzWcQ6Sp37PL01nO98y4iUCRdTGarVfzxY20ICaU4= -rsc.io/quote/v3 v3.1.0/go.mod h1:yEA65RcK8LyAZtP9Kv3t0HmxON59tX3rD+tICJqUlj0= -rsc.io/sampler v1.3.0/go.mod h1:T1hPZKmBbMNahiBKFy5HrXp6adAjACjK9JXDnKaTXpA= diff --git a/infrastructure/sandbox/JITProvisioner/lambda/main.go b/infrastructure/sandbox/JITProvisioner/lambda/main.go deleted file mode 100644 index 61a2c3ce9..000000000 --- a/infrastructure/sandbox/JITProvisioner/lambda/main.go +++ /dev/null @@ -1,443 +0,0 @@ -package main - -import ( - "github.com/akrylysov/algnhsa" - "github.com/gin-contrib/cors" - "github.com/gin-gonic/gin" - flags "github.com/jessevdk/go-flags" - - //"github.com/juju/errors" - "database/sql" - "encoding/json" - "errors" - "fmt" - "log" - "math/rand" - "os" - "strings" - "time" - - "github.com/aws/aws-sdk-go/aws" - "github.com/aws/aws-sdk-go/aws/arn" - "github.com/aws/aws-sdk-go/aws/session" - "github.com/aws/aws-sdk-go/service/dynamodb" - "github.com/aws/aws-sdk-go/service/dynamodb/dynamodbattribute" - 
"github.com/aws/aws-sdk-go/service/secretsmanager" - "github.com/aws/aws-sdk-go/service/sfn" - "github.com/fleetdm/fleet/v4/pkg/spec" - "github.com/fleetdm/fleet/v4/server/fleet" - "github.com/fleetdm/fleet/v4/server/ptr" - "github.com/fleetdm/fleet/v4/server/service" - _ "github.com/go-sql-driver/mysql" - "github.com/loopfz/gadgeto/tonic" - "github.com/wI2L/fizz" - "github.com/wI2L/fizz/openapi" - "go.elastic.co/apm/module/apmgin/v2" - _ "go.elastic.co/apm/v2" -) - -type OptionsStruct struct { - LambdaExecutionEnv string `long:"lambda-execution-environment" env:"AWS_EXECUTION_ENV"` - LifecycleTable string `long:"dynamodb-lifecycle-table" env:"DYNAMODB_LIFECYCLE_TABLE" required:"true"` - LifecycleSFN string `long:"lifecycle-sfn" env:"LIFECYCLE_SFN" required:"true"` - FleetBaseURL string `long:"fleet-base-url" env:"FLEET_BASE_URL" required:"true"` - AuthorizationPSK string `long:"authorization-psk" env:"AUTHORIZATION_PSK" required:"true"` - MysqlSecret string `long:"mysql-secret" env:"MYSQL_SECRET" required:"true"` -} - -var options = OptionsStruct{} - -func applyConfig(c *gin.Context, url, token string) (err error) { - var client *service.Client - if client, err = service.NewClient(url, false, "", ""); err != nil { - log.Print(err) - return - } - client.SetToken(token) - - buf, err := os.ReadFile("standard-query-library.yml") - if err != nil { - log.Print(err) - return - } - specs, err := spec.GroupFromBytes(buf) - if err != nil { - return - } - logf := func(format string, a ...interface{}) { - log.Printf(format, a...) 
- } - err = client.ApplyGroup(c, specs, "", logf, fleet.ApplySpecOptions{}) - if err != nil { - return - } - return -} - -type MysqlSecretEntry struct { - Endpoint string `json:"endpoint"` - Username string `json:"username"` - Password string `json:"password"` -} - -func clearActivitiesTable(c *gin.Context, id string) (err error) { - // Get connection string - svc := secretsmanager.New(session.New()) - sec, err := svc.GetSecretValue(&secretsmanager.GetSecretValueInput{ - SecretId: aws.String(options.MysqlSecret), - }) - if err != nil { - log.Print(err) - return - } - var secretEntry MysqlSecretEntry - if err = json.Unmarshal([]byte(*sec.SecretString), &secretEntry); err != nil { - log.Print(err) - return - } - connectionString := fmt.Sprintf("%s:%s@tcp(%s)/%s", secretEntry.Username, secretEntry.Password, secretEntry.Endpoint, id) - // Connect to db - db, err := sql.Open("mysql", connectionString) - if err != nil { - log.Print(err) - return - } - defer db.Close() - // truncate activities table - _, err = db.ExecContext(c, "truncate activities;") - if err != nil { - log.Print(err) - return - } - return -} - -type LifecycleRecord struct { - ID string - State string - RedisDB int `dynamodbav:"redis_db"` - Token string -} - -func getExpiry(id string) (ret time.Time, err error) { - var execArn arn.ARN - var exec *sfn.DescribeExecutionOutput - var input struct { - WaitTime int `json:"waitTime"` - } - - execArn, err = arn.Parse(options.LifecycleSFN) - if err != nil { - return - } - execArn.Resource = fmt.Sprintf("execution:%s:%s", strings.Split(execArn.Resource, ":")[1], id) - - exec, err = sfn.New(session.New()).DescribeExecution(&sfn.DescribeExecutionInput{ - ExecutionArn: aws.String(execArn.String()), - }) - if err != nil { - return - } - - if err = json.Unmarshal([]byte(*exec.Input), &input); err != nil { - return - } - var dur time.Duration - if dur, err = time.ParseDuration(fmt.Sprintf("%ds", input.WaitTime)); err != nil { - return - } - ret = exec.StartDate.Add(dur) 
- return -} - -func claimFleet(fleet LifecycleRecord, svc *dynamodb.DynamoDB) (err error) { - log.Printf("Claiming instance: %+v", fleet) - // Perform a conditional update to claim the item - input := &dynamodb.UpdateItemInput{ - ConditionExpression: aws.String("#fleet_state = :v1"), - TableName: aws.String(options.LifecycleTable), - Key: map[string]*dynamodb.AttributeValue{ - "ID": { - S: aws.String(fleet.ID), - }, - }, - UpdateExpression: aws.String("set #fleet_state = :v2"), - ExpressionAttributeNames: map[string]*string{"#fleet_state": aws.String("State")}, - ExpressionAttributeValues: map[string]*dynamodb.AttributeValue{ - ":v1": { - S: aws.String("unclaimed"), - }, - ":v2": { - S: aws.String("claimed"), - }, - }, - } - if _, err = svc.UpdateItem(input); err != nil { - return - } - return -} - -func saveToken(fleet LifecycleRecord, svc *dynamodb.DynamoDB) (err error) { - log.Printf("Saving Token: %+v", fleet) - // Perform a conditional update to claim the item - input := &dynamodb.UpdateItemInput{ - TableName: aws.String(options.LifecycleTable), - Key: map[string]*dynamodb.AttributeValue{ - "ID": { - S: aws.String(fleet.ID), - }, - }, - UpdateExpression: aws.String("set #fleet_token = :v1"), - ExpressionAttributeNames: map[string]*string{"#fleet_token": aws.String("Token")}, - ExpressionAttributeValues: map[string]*dynamodb.AttributeValue{ - ":v1": { - S: aws.String(fleet.Token), - }, - }, - } - if _, err = svc.UpdateItem(input); err != nil { - return - } - return -} - -func getToken(id string, svc *dynamodb.DynamoDB) (token string, err error) { - input := &dynamodb.GetItemInput{ - TableName: aws.String(options.LifecycleTable), - Key: map[string]*dynamodb.AttributeValue{"ID": &dynamodb.AttributeValue{ - S: aws.String(id), - }}, - } - - var result *dynamodb.GetItemOutput - if result, err = svc.GetItem(input); err != nil { - return - } - var rec LifecycleRecord - if err = dynamodbattribute.UnmarshalMap(result.Item, &rec); err != nil { - return - } - token = 
rec.Token
-	return
-}
-
-func getFleetInstance() (ret LifecycleRecord, err error) {
-	log.Print("Getting fleet instance")
-	svc := dynamodb.New(session.New())
-	// Loop until we get one
-	for {
-		input := &dynamodb.QueryInput{
-			ExpressionAttributeValues: map[string]*dynamodb.AttributeValue{
-				":v1": {
-					S: aws.String("unclaimed"),
-				},
-			},
-			KeyConditionExpression:   aws.String("#fleet_state = :v1"),
-			TableName:                aws.String(options.LifecycleTable),
-			ExpressionAttributeNames: map[string]*string{"#fleet_state": aws.String("State")},
-			IndexName:                aws.String("FleetState"),
-		}
-
-		var result *dynamodb.QueryOutput
-		if result, err = svc.Query(input); err != nil {
-			return
-		}
-		recs := []LifecycleRecord{}
-		if err = dynamodbattribute.UnmarshalListOfMaps(result.Items, &recs); err != nil {
-			return
-		}
-		ret = recs[rand.Intn(len(recs))]
-		if err = claimFleet(ret, svc); err != nil {
-			log.Print(err)
-			continue
-		}
-		return
-	}
-}
-
-func triggerSFN(id, expiry string) (err error) {
-	var endTime time.Time
-	log.Print("Triggering state machine")
-	if endTime, err = time.Parse(time.RFC3339, expiry); err != nil {
-		return
-	}
-	if int(endTime.Sub(time.Now()).Seconds()) < 0 {
-		return errors.New("Expiry time is in the past")
-	}
-	sfnInStr, err := json.Marshal(struct {
-		InstanceID string `json:"instanceID"`
-		WaitTime   int    `json:"waitTime"`
-	}{
-		InstanceID: id,
-		WaitTime:   int(endTime.Sub(time.Now()).Seconds()),
-	})
-	if err != nil {
-		return
-	}
-	sfnIn := sfn.StartExecutionInput{
-		Input:           aws.String(string(sfnInStr)),
-		Name:            aws.String(id),
-		StateMachineArn: aws.String(options.LifecycleSFN),
-	}
-	_, err = sfn.New(session.New()).StartExecution(&sfnIn)
-	return
-}
-
-type HealthInput struct{}
-type HealthOutput struct {
-	Message string `json:"message" description:"The status of the API." example:"The API is healthy"`
-}
-
-func Health(c *gin.Context, in *HealthInput) (ret *HealthOutput, err error) {
-	ret = &HealthOutput{
-		Message: "Healthy",
-	}
-	return
-}
-
-type NewFleetInput struct {
-	Email             string `json:"email" validate:"required,email"`
-	Name              string `json:"name" validate:"required"`
-	SandboxExpiration string `json:"sandbox_expiration" validate:"required"`
-	Password          string `json:"password" validate:"required"`
-	Authorization     string `header:"Authorization" validate:"required"`
-}
-type NewFleetOutput struct {
-	URL string
-}
-
-func NewFleet(c *gin.Context, in *NewFleetInput) (ret *NewFleetOutput, err error) {
-	if in.Authorization != options.AuthorizationPSK {
-		err = errors.New("Unauthorized")
-		return
-	}
-	ret = &NewFleetOutput{}
-	fleet, err := getFleetInstance()
-	if err != nil {
-		log.Print(err)
-		return
-	}
-	log.Print("Creating fleet client")
-	ret.URL = fmt.Sprintf("https://%s.%s", fleet.ID, options.FleetBaseURL)
-	log.Print(ret.URL)
-	client, err := service.NewClient(ret.URL, true, "", "")
-	if err != nil {
-		log.Print(err)
-		return
-	}
-	log.Print("Creating admin user")
-	var token string
-	if token, err = client.Setup(in.Email, in.Name, in.Password, "Fleet Sandbox"); err != nil {
-		log.Print(err)
-		return
-	}
-	fleet.Token = token
-	log.Print("Triggering SFN to start teardown timer")
-	if err = triggerSFN(fleet.ID, in.SandboxExpiration); err != nil {
-		log.Print(err)
-		return
-	}
-	log.Print("Applying basic config now that we have a user")
-	if err = applyConfig(c, ret.URL, token); err != nil {
-		log.Print(err)
-		return
-	}
-	log.Print("Clearing activities table")
-	if err = clearActivitiesTable(c, fleet.ID); err != nil {
-		log.Print(err)
-		return
-	}
-	log.Print("Saving admin token for addUser")
-	if err = saveToken(fleet, dynamodb.New(session.New())); err != nil {
-		log.Print(err)
-		return
-	}
-	return
-}
-
-type AddUserInput struct {
-	SandboxID     string `path:"SandboxID" validate:"required"`
-	Authorization string `header:"Authorization" validate:"required"`
-	Email         string `json:"email" validate:"required"`
-	Password      string `json:"password" validate:"required"`
-	Name          string `json:"name" validate:"required"`
-}
-
-type AddUserOutput struct{}
-
-func AddUser(c *gin.Context, in *AddUserInput) (ret *AddUserOutput, err error) {
-	if in.Authorization != options.AuthorizationPSK {
-		err = errors.New("Unauthorized")
-		return
-	}
-	client, err := service.NewClient(fmt.Sprintf("https://%s.%s", in.SandboxID, options.FleetBaseURL), true, "", "")
-	if err != nil {
-		log.Print(err)
-		return
-	}
-	svc := dynamodb.New(session.New())
-	token, err := getToken(in.SandboxID, svc)
-	if err != nil {
-		log.Print(err)
-		return
-	}
-	client.SetToken(token)
-	err = client.CreateUser(fleet.UserPayload{
-		Password:                 ptr.String(in.Password),
-		Email:                    ptr.String(in.Email),
-		Name:                     ptr.String(in.Name),
-		SSOEnabled:               &[]bool{false}[0],
-		AdminForcedPasswordReset: &[]bool{false}[0],
-		APIOnly:                  &[]bool{false}[0],
-		GlobalRole:               ptr.String(fleet.RoleObserver),
-		Teams:                    &[]fleet.UserTeam{},
-	})
-	return
-}
-
-type ExpiryInput struct {
-	ID string `query:"id" validate:"required"`
-}
-type ExpiryOutput struct {
-	Timestamp time.Time `json:"timestamp"`
-}
-
-func GetExpiry(c *gin.Context, in *ExpiryInput) (ret *ExpiryOutput, err error) {
-	ret = &ExpiryOutput{}
-	if ret.Timestamp, err = getExpiry(in.ID); err != nil {
-		return
-	}
-	return
-}
-
-func main() {
-	rand.Seed(time.Now().Unix())
-	var err error
-	log.SetFlags(log.LstdFlags | log.Lshortfile)
-	// Get config from environment
-	parser := flags.NewParser(&options, flags.Default)
-	if _, err = parser.Parse(); err != nil {
-		if flagsErr, ok := err.(*flags.Error); ok && flagsErr.Type == flags.ErrHelp {
-			return
-		} else {
-			log.Fatal(err)
-		}
-	}
-
-	r := gin.Default()
-	r.Use(apmgin.Middleware(r))
-	r.Use(cors.Default())
-	f := fizz.NewFromEngine(r)
-	infos := &openapi.Info{
-		Title:       "Fleet Demo JITProvisioner",
-		Description: "Provisions new Fleet instances upon request",
-		Version:     "1.0.0",
-	}
-	f.GET("/openapi.json", nil, f.OpenAPI(infos, "json"))
-	f.GET("/health", nil, tonic.Handler(Health, 200))
-	f.POST("/new", nil, tonic.Handler(NewFleet, 200))
-	f.GET("/expires", nil, tonic.Handler(GetExpiry, 200))
-	f.POST("/addUser/:SandboxID", nil, tonic.Handler(AddUser, 200))
-	algnhsa.ListenAndServe(r, nil)
-}
diff --git a/infrastructure/sandbox/JITProvisioner/main.tf b/infrastructure/sandbox/JITProvisioner/main.tf
deleted file mode 100644
index 37534f370..000000000
--- a/infrastructure/sandbox/JITProvisioner/main.tf
+++ /dev/null
@@ -1,48 +0,0 @@
-terraform {
-  required_providers {
-    docker = {
-      source  = "kreuzwerker/docker"
-      version = "~> 2.16.0"
-    }
-    git = {
-      source  = "paultyng/git"
-      version = "~> 0.1.0"
-    }
-  }
-}
-
-data "aws_region" "current" {}
-
-locals {
-  name      = "jit"
-  full_name = "${var.prefix}-${local.name}"
-}
-
-resource "aws_cloudwatch_log_group" "main" {
-  name              = local.full_name
-  kms_key_id        = var.kms_key.arn
-  retention_in_days = 30
-}
-
-resource "aws_kms_key" "ecr" {
-  deletion_window_in_days = 10
-  enable_key_rotation     = true
-}
-
-resource "aws_ecr_repository" "main" {
-  name                 = var.prefix
-  image_tag_mutability = "IMMUTABLE"
-
-  image_scanning_configuration {
-    scan_on_push = true
-  }
-
-  encryption_configuration {
-    encryption_type = "KMS"
-    kms_key         = aws_kms_key.ecr.arn
-  }
-}
-
-data "git_repository" "main" {
-  path = "${path.module}/../../../"
-}
diff --git a/infrastructure/sandbox/JITProvisioner/variables.tf b/infrastructure/sandbox/JITProvisioner/variables.tf
deleted file mode 100644
index 95b21b449..000000000
--- a/infrastructure/sandbox/JITProvisioner/variables.tf
+++ /dev/null
@@ -1,12 +0,0 @@
-variable "prefix" {}
-variable "dynamodb_table" {}
-variable "vpc" {}
-variable "remote_state" {}
-variable "mysql_secret" {}
-variable "mysql_secret_kms" {}
-variable "eks_cluster" {}
-variable "redis_cluster" {}
-variable "alb_listener" {}
-variable "base_domain" {}
-variable "ecs_cluster" {}
-variable "kms_key" {}
diff --git a/infrastructure/sandbox/Monitoring/.gitignore b/infrastructure/sandbox/Monitoring/.gitignore
deleted file mode 100644
index acab92adf..000000000
--- a/infrastructure/sandbox/Monitoring/.gitignore
+++ /dev/null
@@ -1 +0,0 @@
-.lambda.zip
diff --git a/infrastructure/sandbox/Monitoring/lambda/Dockerfile b/infrastructure/sandbox/Monitoring/lambda/Dockerfile
deleted file mode 100644
index f8fbc99c0..000000000
--- a/infrastructure/sandbox/Monitoring/lambda/Dockerfile
+++ /dev/null
@@ -1,11 +0,0 @@
-FROM golang:1.21.6-alpine@sha256:bb943c432d54b16abd59d0240a4d0bbaae8781925f117a78355b908d197a1da1 AS builder
-WORKDIR /build
-COPY . .
-RUN go get -d -v
-RUN CGO_ENABLED=0 GOOS=linux GOARCH=amd64 go build -ldflags "-extldflags '-static'"
-
-#FROM scratch
-#COPY --from=builder /build/lambda /build/terraform /
-#COPY --from=builder /build/deploy_terraform /deploy_terraform
-#COPY --from=builder /etc/ssl/certs/ca-certificates.crt /etc/ssl/certs/
-ENTRYPOINT ["/build/lambda"]
diff --git a/infrastructure/sandbox/Monitoring/lambda/go.mod b/infrastructure/sandbox/Monitoring/lambda/go.mod
deleted file mode 100644
index f048c9f40..000000000
--- a/infrastructure/sandbox/Monitoring/lambda/go.mod
+++ /dev/null
@@ -1,14 +0,0 @@
-module github.com/fleetdm/fleet/infrastructure/demo/Monitoring/lambda
-
-go 1.21
-
-require (
-	github.com/aws/aws-lambda-go v1.32.1
-	github.com/aws/aws-sdk-go v1.44.50
-	github.com/jessevdk/go-flags v1.5.0
-)
-
-require (
-	github.com/jmespath/go-jmespath v0.4.0 // indirect
-	golang.org/x/sys v0.1.0 // indirect
-)
diff --git a/infrastructure/sandbox/Monitoring/lambda/go.sum b/infrastructure/sandbox/Monitoring/lambda/go.sum
deleted file mode 100644
index 33c18b477..000000000
--- a/infrastructure/sandbox/Monitoring/lambda/go.sum
+++ /dev/null
@@ -1,33 +0,0 @@
-github.com/aws/aws-lambda-go v1.32.1 h1:ls0FU8Mt7ayJszb945zFkUfzxhkQTli8mpJstVcDtCY=
-github.com/aws/aws-lambda-go v1.32.1/go.mod h1:jwFe2KmMsHmffA1X2R09hH6lFzJQxzI8qK17ewzbQMM=
-github.com/aws/aws-sdk-go v1.44.50 h1:dg6nbI+4734bTj1Q6FCQqiIiE+lb8HpGQJqZEvZeMrY=
-github.com/aws/aws-sdk-go v1.44.50/go.mod h1:y4AeaBuwd2Lk+GepC1E9v0qOiTws0MIWAX4oIKwKHZo=
-github.com/davecgh/go-spew v1.1.0/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
-github.com/davecgh/go-spew v1.1.1 h1:vj9j/u1bqnvCEfJOwUhtlOARqs3+rkHYY13jYWTU97c=
-github.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
-github.com/jessevdk/go-flags v1.5.0 h1:1jKYvbxEjfUl0fmqTCOfonvskHHXMjBySTLW4y9LFvc=
-github.com/jessevdk/go-flags v1.5.0/go.mod h1:Fw0T6WPc1dYxT4mKEZRfG5kJhaTDP9pj1c2EWnYs/m4=
-github.com/jmespath/go-jmespath v0.4.0 h1:BEgLn5cpjn8UN1mAw4NjwDrS35OdebyEtFe+9YPoQUg=
-github.com/jmespath/go-jmespath v0.4.0/go.mod h1:T8mJZnbsbmF+m6zOOFylbeCJqk5+pHWvzYPziyZiYoo=
-github.com/jmespath/go-jmespath/internal/testify v1.5.1 h1:shLQSRRSCCPj3f2gpwzGwWFoC7ycTf1rcQZHOlsJ6N8=
-github.com/jmespath/go-jmespath/internal/testify v1.5.1/go.mod h1:L3OGu8Wl2/fWfCI6z80xFu9LTZmf1ZRjMHUOPmWr69U=
-github.com/pkg/errors v0.9.1/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0=
-github.com/pmezard/go-difflib v1.0.0 h1:4DBwDE0NGyQoBHbLQYPwSUPoCMWR5BEzIk/f1lZbAQM=
-github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=
-github.com/stretchr/objx v0.1.0/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME=
-github.com/stretchr/testify v1.7.2 h1:4jaiDzPyXQvSd7D0EjG45355tLlV3VOECpq10pLC+8s=
-github.com/stretchr/testify v1.7.2/go.mod h1:R6va5+xMeoiuVRoj+gSkQ7d3FALtqAAGI1FQKckRals=
-golang.org/x/net v0.0.0-20220127200216-cd36cc0744dd/go.mod h1:CfG3xpIq0wQ8r1q4Su4UZFWDARRcnwPjda9FqA0JpMk=
-golang.org/x/sys v0.0.0-20210320140829-1e4c9ba3b0c4/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
-golang.org/x/sys v0.0.0-20210615035016-665e8c7367d1/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
-golang.org/x/sys v0.0.0-20211216021012-1d35b9e2eb4e/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
-golang.org/x/sys v0.1.0 h1:kunALQeHf1/185U1i0GOB/fy1IPRDDpuoOOqRReG57U=
-golang.org/x/sys v0.1.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
-golang.org/x/term v0.0.0-20210927222741-03fcf44c2211/go.mod h1:jbD1KX2456YbFQfuXm/mYQcufACuNUgVhRMnK/tPxf8=
-golang.org/x/text v0.3.7/go.mod h1:u+2+/6zg+i71rQMx5EYifcz6MCKuco9NR6JIITiCfzQ=
-golang.org/x/tools v0.0.0-20180917221912-90fa682c2a6e/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
-gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
-gopkg.in/yaml.v2 v2.2.8 h1:obN1ZagJSUGI0Ek/LBmuj4SNLPfIny3KsKFopxRdj10=
-gopkg.in/yaml.v2 v2.2.8/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
-gopkg.in/yaml.v3 v3.0.1 h1:fxVm/GzAzEWqLHuvctI91KS9hhNmmWOoWu0XTYJS7CA=
-gopkg.in/yaml.v3 v3.0.1/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=
diff --git a/infrastructure/sandbox/Monitoring/lambda/main.go b/infrastructure/sandbox/Monitoring/lambda/main.go
deleted file mode 100644
index 5a67c6f22..000000000
--- a/infrastructure/sandbox/Monitoring/lambda/main.go
+++ /dev/null
@@ -1,119 +0,0 @@
-package main
-
-import (
-	"context"
-	"github.com/aws/aws-lambda-go/lambda"
-	"github.com/aws/aws-sdk-go/aws"
-	"github.com/aws/aws-sdk-go/aws/session"
-	"github.com/aws/aws-sdk-go/service/cloudwatch"
-	"github.com/aws/aws-sdk-go/service/dynamodb"
-	"github.com/aws/aws-sdk-go/service/dynamodb/dynamodbattribute"
-	flags "github.com/jessevdk/go-flags"
-	"log"
-)
-
-type OptionsStruct struct {
-	LambdaExecutionEnv string `long:"lambda-execution-environment" env:"AWS_EXECUTION_ENV"`
-	LifecycleTable     string `long:"dynamodb-lifecycle-table" env:"DYNAMODB_LIFECYCLE_TABLE" required:"true"`
-}
-
-var options = OptionsStruct{}
-
-type LifecycleRecord struct {
-	State string
-}
-
-func getInstancesCount(c context.Context) (int64, int64, error) {
-	log.Print("getInstancesCount")
-	svc := dynamodb.New(session.New())
-	// Example iterating over at most 3 pages of a Scan operation.
-	var count, unclaimedCount int64
-	err := svc.ScanPagesWithContext(
-		c,
-		&dynamodb.ScanInput{
-			TableName: aws.String(options.LifecycleTable),
-		},
-		func(page *dynamodb.ScanOutput, lastPage bool) bool {
-			count += *page.Count
-			recs := []LifecycleRecord{}
-			if err := dynamodbattribute.UnmarshalListOfMaps(page.Items, &recs); err != nil {
-				log.Print(err)
-				return false
-			}
-			for _, i := range recs {
-				if i.State == "unclaimed" {
-					unclaimedCount++
-				}
-			}
-			return true
-		})
-	if err != nil {
-		return 0, 0, err
-	}
-	return count, unclaimedCount, nil
-}
-
-type NullEvent struct{}
-
-func handler(ctx context.Context, name NullEvent) error {
-	totalCount, unclaimedCount, err := getInstancesCount(ctx)
-	if err != nil {
-		log.Print(err)
-		return err
-	}
-	svc := cloudwatch.New(session.New())
-	log.Printf("Publishing %d, %d", totalCount, unclaimedCount)
-	_, err = svc.PutMetricData(&cloudwatch.PutMetricDataInput{
-		Namespace: aws.String("Fleet/sandbox"),
-		MetricData: []*cloudwatch.MetricDatum{
-			&cloudwatch.MetricDatum{
-				Dimensions: []*cloudwatch.Dimension{
-					&cloudwatch.Dimension{
-						Name:  aws.String("Type"),
-						Value: aws.String("totalCount"),
-					},
-				},
-				MetricName: aws.String("instances"),
-				Value:      aws.Float64(float64(totalCount)),
-				Unit:       aws.String(cloudwatch.StandardUnitCount),
-			},
-			&cloudwatch.MetricDatum{
-				Dimensions: []*cloudwatch.Dimension{
-					&cloudwatch.Dimension{
-						Name:  aws.String("Type"),
-						Value: aws.String("unclaimedCount"),
-					},
-				},
-				MetricName: aws.String("instances"),
-				Value:      aws.Float64(float64(unclaimedCount)),
-				Unit:       aws.String(cloudwatch.StandardUnitCount),
-			},
-		},
-	})
-	if err != nil {
-		log.Print(err)
-		return err
-	}
-	return nil
-}
-
-func main() {
-	var err error
-	log.SetFlags(log.LstdFlags | log.Lshortfile)
-	// Get config from environment
-	parser := flags.NewParser(&options, flags.Default)
-	if _, err = parser.Parse(); err != nil {
-		if flagsErr, ok := err.(*flags.Error); ok && flagsErr.Type == flags.ErrHelp {
-			return
-		} else {
-			log.Fatal(err)
-		}
-	}
-	if options.LambdaExecutionEnv != "" {
-		lambda.Start(handler)
-	} else {
-		if err = handler(context.Background(), NullEvent{}); err != nil {
-			log.Fatal(err)
-		}
-	}
-}
diff --git a/infrastructure/sandbox/Monitoring/main.tf b/infrastructure/sandbox/Monitoring/main.tf
deleted file mode 100644
index a27bf94e6..000000000
--- a/infrastructure/sandbox/Monitoring/main.tf
+++ /dev/null
@@ -1,280 +0,0 @@
-terraform {
-  required_providers {
-    docker = {
-      source  = "kreuzwerker/docker"
-      version = "~> 2.16.0"
-    }
-    git = {
-      source  = "paultyng/git"
-      version = "~> 0.1.0"
-    }
-  }
-}
-
-data "aws_region" "current" {}
-
-locals {
-  full_name = "${var.prefix}-monitoring"
-}
-
-module "notify_slack" {
-  source  = "terraform-aws-modules/notify-slack/aws"
-  version = "5.5.0"
-
-  sns_topic_name = var.prefix
-
-  slack_webhook_url = var.slack_webhook
-  slack_channel     = "#help-p1"
-  slack_username    = "monitoring"
-}
-
-data "aws_iam_policy_document" "lifecycle-lambda-assume-role" {
-  statement {
-    actions = ["sts:AssumeRole"]
-    principals {
-      type        = "Service"
-      identifiers = ["lambda.amazonaws.com"]
-    }
-  }
-}
-
-resource "aws_iam_role_policy_attachment" "lifecycle-lambda-lambda" {
-  role       = aws_iam_role.lifecycle-lambda.id
-  policy_arn = "arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole"
-}
-
-resource "aws_iam_role_policy_attachment" "lifecycle-lambda" {
-  role       = aws_iam_role.lifecycle-lambda.id
-  policy_arn = aws_iam_policy.lifecycle-lambda.arn
-}
-
-resource "aws_iam_policy" "lifecycle-lambda" {
-  name   = "${local.full_name}-lifecycle-lambda"
-  policy = data.aws_iam_policy_document.lifecycle-lambda.json
-}
-
-data "aws_iam_policy_document" "lifecycle-lambda" {
-  statement {
-    actions = [
-      "dynamodb:List*",
-      "dynamodb:DescribeReservedCapacity*",
-      "dynamodb:DescribeLimits",
-      "dynamodb:DescribeTimeToLive"
-    ]
-    resources = ["*"]
-  }
-
-  statement {
-    actions = [
-      "dynamodb:BatchGet*",
-      "dynamodb:DescribeStream",
-      "dynamodb:DescribeTable",
-      "dynamodb:Get*",
-      "dynamodb:Query",
-      "dynamodb:Scan",
-      "dynamodb:BatchWrite*",
-      "dynamodb:CreateTable",
-      "dynamodb:Delete*",
-      "dynamodb:Update*",
-      "dynamodb:PutItem"
-    ]
-    resources = [var.dynamodb_table.arn]
-  }
-
-  statement {
-    actions = [ #tfsec:ignore:aws-iam-no-policy-wildcards
-      "kms:Encrypt*",
-      "kms:Decrypt*",
-      "kms:ReEncrypt*",
-      "kms:GenerateDataKey*",
-      "kms:Describe*"
-    ]
-    resources = [aws_kms_key.ecr.arn, var.kms_key.arn]
-  }
-
-  statement {
-    actions   = ["cloudwatch:PutMetricData"]
-    resources = ["*"]
-  }
-}
-
-resource "aws_iam_role" "lifecycle-lambda" {
-  name = local.full_name
-
-  assume_role_policy = data.aws_iam_policy_document.lifecycle-lambda-assume-role.json
-}
-
-resource "aws_kms_key" "ecr" {
-  deletion_window_in_days = 10
-  enable_key_rotation     = true
-}
-
-resource "aws_ecr_repository" "main" {
-  name                 = local.full_name
-  image_tag_mutability = "IMMUTABLE"
-
-  image_scanning_configuration {
-    scan_on_push = true
-  }
-
-  encryption_configuration {
-    encryption_type = "KMS"
-    kms_key         = aws_kms_key.ecr.arn
-  }
-}
-
-resource "random_uuid" "lifecycle-lambda" {
-  keepers = {
-    lambda = data.archive_file.lifecycle-lambda.output_sha
-  }
-}
-
-data "archive_file" "lifecycle-lambda" {
-  type        = "zip"
-  output_path = "${path.module}/.lambda.zip"
-  source_dir  = "${path.module}/lambda"
-}
-
-data "git_repository" "main" {
-  path = "${path.module}/../../../"
-}
-
-resource "docker_registry_image" "lifecycle-lambda" {
-  name          = "${aws_ecr_repository.main.repository_url}:${data.git_repository.main.branch}-${random_uuid.lifecycle-lambda.result}"
-  keep_remotely = true
-
-  build {
-    context     = "${path.module}/lambda/"
-    pull_parent = true
-    platform    = "linux/amd64"
-  }
-}
-
-resource "aws_cloudwatch_event_rule" "lifecycle" {
-  name_prefix         = local.full_name
-  schedule_expression = "rate(5 minutes)"
-  is_enabled          = true
-}
-
-resource "aws_cloudwatch_event_target" "lifecycle" {
-  rule = aws_cloudwatch_event_rule.lifecycle.name
-  arn  = aws_lambda_function.lifecycle.arn
-}
-
-resource "aws_lambda_function" "lifecycle" {
-  # If the file is not in the current working directory you will need to include a
-  # path.module in the filename.
-  image_uri                      = docker_registry_image.lifecycle-lambda.name
-  package_type                   = "Image"
-  function_name                  = "${local.full_name}-lifecycle-lambda"
-  kms_key_arn                    = var.kms_key.arn
-  role                           = aws_iam_role.lifecycle-lambda.arn
-  reserved_concurrent_executions = -1
-  timeout                        = 10
-  memory_size                    = 512
-  tracing_config {
-    mode = "Active"
-  }
-  environment {
-    variables = {
-      DYNAMODB_LIFECYCLE_TABLE = var.dynamodb_table.id
-    }
-  }
-}
-
-resource "aws_lambda_permission" "lifecycle" {
-  action        = "lambda:InvokeFunction"
-  function_name = aws_lambda_function.lifecycle.function_name
-  principal     = "events.amazonaws.com"
-  source_arn    = aws_cloudwatch_event_rule.lifecycle.arn
-}
-
-resource "aws_cloudwatch_metric_alarm" "totalInstances" {
-  alarm_name          = "${var.prefix}-lifecycle-totalCount"
-  comparison_operator = "GreaterThanThreshold"
-  evaluation_periods  = "1"
-  metric_name         = "instances"
-  namespace           = "Fleet/sandbox"
-  period              = "900"
-  statistic           = "Average"
-  threshold           = "90"
-  alarm_actions       = [module.notify_slack.slack_topic_arn]
-  ok_actions          = [module.notify_slack.slack_topic_arn]
-  treat_missing_data  = "breaching"
-  datapoints_to_alarm = 1
-  dimensions = {
-    Type = "totalCount"
-  }
-}
-
-resource "aws_cloudwatch_metric_alarm" "unclaimed" {
-  alarm_name          = "${var.prefix}-lifecycle-unclaimed"
-  comparison_operator = "LessThanThreshold"
-  evaluation_periods  = "1"
-  metric_name         = "instances"
-  namespace           = "Fleet/sandbox"
-  period              = "900"
-  statistic           = "Average"
-  threshold           = "10"
-  alarm_actions       = [module.notify_slack.slack_topic_arn]
-  ok_actions          = [module.notify_slack.slack_topic_arn]
-  treat_missing_data  = "breaching"
-  datapoints_to_alarm = 1
-  dimensions = {
-    Type = "unclaimedCount"
-  }
-}
-
-resource "aws_cloudwatch_metric_alarm" "lb" {
-  for_each            = toset(["HTTPCode_ELB_5XX_Count", "HTTPCode_Target_5XX_Count"])
-  alarm_name          = "${var.prefix}-lb-${each.key}"
-  comparison_operator = "GreaterThanThreshold"
-  evaluation_periods  = "1"
-  metric_name         = each.key
-  namespace           = "AWS/ApplicationELB"
-  period              = "120"
-  statistic           = "Sum"
-  threshold           = "0"
-  alarm_actions       = [module.notify_slack.slack_topic_arn]
-  ok_actions          = [module.notify_slack.slack_topic_arn]
-  treat_missing_data  = "notBreaching"
-  dimensions = {
-    LoadBalancer = var.lb.arn_suffix
-  }
-}
-
-resource "aws_cloudwatch_metric_alarm" "jitprovisioner" {
-  for_each            = toset(["Errors"])
-  alarm_name          = "${var.prefix}-jitprovisioner-${each.key}"
-  comparison_operator = "GreaterThanThreshold"
-  evaluation_periods  = "1"
-  metric_name         = each.key
-  namespace           = "AWS/Lambda"
-  period              = "120"
-  statistic           = "Sum"
-  threshold           = "0"
-  alarm_actions       = [module.notify_slack.slack_topic_arn]
-  ok_actions          = [module.notify_slack.slack_topic_arn]
-  treat_missing_data  = "notBreaching"
-  dimensions = {
-    FunctionName = var.jitprovisioner.id
-  }
-}
-
-resource "aws_cloudwatch_metric_alarm" "deprovisioner" {
-  for_each            = toset(["ExecutionsFailed"])
-  alarm_name          = "${var.prefix}-deprovisioner-${each.key}"
-  comparison_operator = "GreaterThanThreshold"
-  evaluation_periods  = "1"
-  metric_name         = each.key
-  namespace           = "AWS/States"
-  period              = "120"
-  statistic           = "Sum"
-  threshold           = "0"
-  alarm_actions       = [module.notify_slack.slack_topic_arn]
-  ok_actions          = [module.notify_slack.slack_topic_arn]
-  treat_missing_data  = "notBreaching"
-  dimensions = {
-    StateMachineArn = var.deprovisioner.arn
-  }
-}
diff --git a/infrastructure/sandbox/Monitoring/variables.tf b/infrastructure/sandbox/Monitoring/variables.tf
deleted file mode 100644
index 1885464d8..000000000
--- a/infrastructure/sandbox/Monitoring/variables.tf
+++ /dev/null
@@ -1,7 +0,0 @@
-variable "prefix" {}
-variable "lb" {}
-variable "jitprovisioner" {}
-variable "deprovisioner" {}
-variable "slack_webhook" {}
-variable "dynamodb_table" {}
-variable "kms_key" {}
diff --git a/infrastructure/sandbox/PreProvisioner/.gitignore b/infrastructure/sandbox/PreProvisioner/.gitignore
deleted file mode 100644
index acab92adf..000000000
--- a/infrastructure/sandbox/PreProvisioner/.gitignore
+++ /dev/null
@@ -1 +0,0 @@
-.lambda.zip
diff --git a/infrastructure/sandbox/PreProvisioner/lambda/.gitignore b/infrastructure/sandbox/PreProvisioner/lambda/.gitignore
deleted file mode 100644
index 8e44f229b..000000000
--- a/infrastructure/sandbox/PreProvisioner/lambda/.gitignore
+++ /dev/null
@@ -1,2 +0,0 @@
-lambda
-deploy_terraform/backend.conf
diff --git a/infrastructure/sandbox/PreProvisioner/lambda/Dockerfile b/infrastructure/sandbox/PreProvisioner/lambda/Dockerfile
deleted file mode 100644
index 722d2497c..000000000
--- a/infrastructure/sandbox/PreProvisioner/lambda/Dockerfile
+++ /dev/null
@@ -1,44 +0,0 @@
-FROM rust:latest@sha256:02a53e734724bef4a58d856c694f826aa9e7ea84353516b76d9a6d241e9da60e AS builder
-
-ARG transporter_url=https://itunesconnect.apple.com/WebObjects/iTunesConnect.woa/ra/resources/download/public/Transporter__Linux/bin
-
-RUN cargo install --version 0.16.0 apple-codesign \
-    && curl -sSf $transporter_url -o transporter_install.sh \
-    && sh transporter_install.sh --target transporter --accept --noexec
-
-FROM golang:1.21.6-bullseye@sha256:fa52abd182d334cfcdffdcc934e21fcfbc71c3cde568e606193ae7db045b1b8d
-
-RUN apt-get update \
-    && dpkg --add-architecture i386 \
-    && apt update \
-    && apt install -y --no-install-recommends ca-certificates cpio libxml2 wine wine32 libgtk-3-0 \
-    && rm -rf /var/lib/apt/lists/*
-
-# copy macOS dependencies
-COPY --from=fleetdm/bomutils:latest /usr/bin/mkbom /usr/local/bin/xar /usr/bin/
-COPY --from=fleetdm/bomutils:latest /usr/local/lib /usr/local/lib/
-COPY --from=builder /transporter/itms /usr/local/
-COPY --from=builder /usr/local/cargo/bin/rcodesign /usr/local/bin
-
-# copy Windows dependencies
-COPY --from=fleetdm/wix:latest /home/wine /home/wine
-
-ENV FLEETCTL_NATIVE_TOOLING=1 WINEPREFIX=/home/wine/.wine WINEARCH=win32 PATH="/home/wine/bin:$PATH" WINEDEBUG=-all
-
-RUN apt update; apt install -y curl openssl unzip
-WORKDIR /build
-COPY . .
-RUN go get -d -v
-RUN CGO_ENABLED=0 GOOS=linux GOARCH=amd64 go build -ldflags "-extldflags '-static'"
-RUN curl https://releases.hashicorp.com/terraform/1.1.8/terraform_1.1.8_linux_amd64.zip > terraform.zip
-RUN unzip terraform.zip
-RUN rm terraform.zip
-RUN chmod 644 $(find . -type f)
-RUN chmod 755 $(find . -type d)
-RUN chmod 655 lambda terraform
-
-#FROM scratch
-#COPY --from=builder /build/lambda /build/terraform /
-#COPY --from=builder /build/deploy_terraform /deploy_terraform
-#COPY --from=builder /etc/ssl/certs/ca-certificates.crt /etc/ssl/certs/
-ENTRYPOINT ["/build/lambda"]
diff --git a/infrastructure/sandbox/PreProvisioner/lambda/backend-template.conf b/infrastructure/sandbox/PreProvisioner/lambda/backend-template.conf
deleted file mode 100644
index b60666d9a..000000000
--- a/infrastructure/sandbox/PreProvisioner/lambda/backend-template.conf
+++ /dev/null
@@ -1,6 +0,0 @@
-bucket         = "${remote_state.state_bucket.id}"
-key            = "terraform.tfstate" # This should be set to account_alias/unique_key/terraform.tfstate
-region         = "us-east-2"
-encrypt        = true
-kms_key_id     = "${remote_state.kms_key.id}"
-dynamodb_table = "${remote_state.dynamodb_table.id}"
diff --git a/infrastructure/sandbox/PreProvisioner/lambda/deploy_terraform/.terraform.lock.hcl b/infrastructure/sandbox/PreProvisioner/lambda/deploy_terraform/.terraform.lock.hcl
deleted file mode 100644
index 5626bd0aa..000000000
--- a/infrastructure/sandbox/PreProvisioner/lambda/deploy_terraform/.terraform.lock.hcl
+++ /dev/null
@@ -1,84 +0,0 @@
-# This file is maintained automatically by "terraform init".
-# Manual edits may be lost in future updates.
-
-provider "registry.terraform.io/hashicorp/aws" {
-  version     = "4.10.0"
-  constraints = "~> 4.10.0"
-  hashes = [
-    "h1:S6xGPRL08YEuBdemiYZyIBf/YwM4OCvzVuaiuU6kLjc=",
-    "zh:0a2a7eabfeb7dbb17b7f82aff3fa2ba51e836c15e5be4f5468ea44bd1299b48d",
-    "zh:23409c7205d13d2d68b5528e1c49e0a0455d99bbfec61eb0201142beffaa81f7",
-    "zh:3adad2245d97816f3919778b52c58fb2de130938a3e9081358bfbb72ec478d9a",
-    "zh:5bf100aba6332f24b1ffeae7536d5d489bb907bf774a06b95f2183089eaf1a1a",
-    "zh:63c3a24c0c229a1d3390e6ea2454ba4d8ace9b94e086bee1dbdcf665ae969e15",
-    "zh:6b76f5ffd920f0a750da3a4ff1d00eab18d9cd3731b009aae3df4135613bad4d",
-    "zh:8cd6b1e6b51e8e9bbe2944bb169f113d20d1d72d07ccd1b7b83f40b3c958233e",
-    "zh:9b12af85486a96aedd8d7984b0ff811a4b42e3d88dad1a3fb4c0b580d04fa425",
-    "zh:c5c31f58fb5bd6aebc6c662a4693640ec763cb3399cce0b592101cf24ece1625",
-    "zh:cc485410be43d6ad95d81b9e54cc4d2117aadf9bf5941165a9df26565d9cce42",
-    "zh:cebb89c74b6a3dc6780824b1d1e2a8d16a51e75679e14ad0b830d9f7da1a3a67",
-    "zh:e7dc427189cb491e1f96e295101964415cbf8630395ee51e396d2a811f365237",
-  ]
-}
-
-provider "registry.terraform.io/hashicorp/helm" {
-  version     = "2.5.1"
-  constraints = "2.5.1"
-  hashes = [
-    "h1:NasRPC0qqlpGqcF3dsSoOFu7uc5hM+zJm+okd8FgrnQ=",
-    "zh:140b9748f0ad193a20d69e59d672f3c4eda8a56cede56a92f931bd3af020e2e9",
-    "zh:17ae319466ed6538ad49e011998bb86565fe0e97bc8b9ad7c8dda46a20f90669",
-    "zh:3a8bd723c21ba70e19f0395ed7096fc8e08bfc23366f1c3f06a9107eb37c572c",
-    "zh:3aae3b82adbe6dca52f1a1c8cf51575446e6b0f01f1b1f3b30de578c9af4a933",
-    "zh:3f65221f40148df57d2888e4f31ef3bf430b8c5af41de0db39a2b964e1826d7c",
-    "zh:650c74c4f46f5eb01df11d8392bdb7ebee3bba59ac0721000a6ad731ff0e61e2",
-    "zh:930fb8ab4cd6634472dfd6aa3123f109ef5b32cbe6ef7b4695fae6751353e83f",
-    "zh:ae57cd4b0be4b9ca252bc5d347bc925e35b0ed74d3dcdebf06c11362c1ac3436",
-    "zh:d15b1732a8602b6726eac22628b2f72f72d98b75b9c6aabceec9fd696fda696a",
-    "zh:d730ede1656bd193e2aea5302acec47c4905fe30b96f550196be4a0ed5f41936",
-    "zh:f010d4f9d8cd15936be4df12bf256cb2175ca1dedb728bd3a866c03d2ee7591f",
-    "zh:f569b65999264a9416862bca5cd2a6177d94ccb0424f3a4ef424428912b9cb3c",
-  ]
-}
-
-provider "registry.terraform.io/hashicorp/random" {
-  version     = "3.1.3"
-  constraints = "~> 3.1.2"
-  hashes = [
-    "h1:nLWniS8xhb32qRQy+n4bDPjQ7YWZPVMR3v1vSrx7QyY=",
-    "zh:26e07aa32e403303fc212a4367b4d67188ac965c37a9812e07acee1470687a73",
-    "zh:27386f48e9c9d849fbb5a8828d461fde35e71f6b6c9fc235bc4ae8403eb9c92d",
-    "zh:5f4edda4c94240297bbd9b83618fd362348cadf6bf24ea65ea0e1844d7ccedc0",
-    "zh:646313a907126cd5e69f6a9fafe816e9154fccdc04541e06fed02bb3a8fa2d2e",
-    "zh:7349692932a5d462f8dee1500ab60401594dddb94e9aa6bf6c4c0bd53e91bbb8",
-    "zh:78d5eefdd9e494defcb3c68d282b8f96630502cac21d1ea161f53cfe9bb483b3",
-    "zh:9034daba8d9b32b35930d168f363af04cecb153d5849a7e4a5966c97c5dc956e",
-    "zh:bb81dfca59ef5f949ef39f19ea4f4de25479907abc28cdaa36d12ecd7c0a9699",
-    "zh:bcf7806b99b4c248439ae02c8e21f77aff9fadbc019ce619b929eef09d1221bb",
-    "zh:d708e14d169e61f326535dd08eecd3811cd4942555a6f8efabc37dbff9c6fc61",
-    "zh:dc294e19a46e1cefb9e557a7b789c8dd8f319beca99b8c265181bc633dc434cc",
-    "zh:f9d758ee53c55dc016dd736427b6b0c3c8eb4d0dbbc785b6a3579b0ffedd9e42",
-  ]
-}
-
-provider "registry.terraform.io/petoju/mysql" {
-  version     = "3.0.12"
-  constraints = "3.0.12"
-  hashes = [
-    "h1:HjwoRcnjjg9ZDC/EVzBPbe76s1Ut7VmDA3QwkVCaC5A=",
-    "zh:03e43a5254c6bd1bade161c24b11f019f296efe395710445617ef28d7a75bf73",
-    "zh:05e8949f079246c17fdd1e2dbae8e313551906a13cc4488f3e35548502d477ee",
-    "zh:080e95478021b353c00ab7a7718801815ae49435ce4833520a391dcbd3de1137",
-    "zh:4497661a09ebbde569cec8d86db848ef159c7bbc5fcf21c2602d18e471604f7d",
-    "zh:5b03de967142d8a84710fd75d926f6293ec917685de66457c704cfc64b6bef26",
-    "zh:6a33f8aecd02689d89963554470a9ae704a7ae481ebabc3d7571d589b4febc37",
-    "zh:6e1d3e0acf2e006578ace24a38ba93b98469e0c280fb97acae40b2d2a4ec81cb",
-    "zh:86174e6940a4a66ad26cb88f38f68a17b8d56bf0139bc156d50e2e064a5614ef",
-    "zh:929370d7710e1669b0a3d386f5722280b0ff720185c6f0822432ab4cb1098cce",
-    "zh:9e1c0ed9530ae75c555b0f84cb0430ee03fbceb9f0726bcecc1ae1276d871be7",
-    "zh:bf39753d4e518857a0e149f9a5d9c034a42247114ac10582ccc24713c7b73836",
-    "zh:d3f6240beab52ada658314626cae16089b5a46a91a0573a2e10332bbc8873078",
-    "zh:e66dead39a840833386aebf2131db40b52b5d134792a0a7ec23ef69e2ef4833e",
-    "zh:ea22ce26f6bd4f3a8eba56a9af5ee166343a88e2769571174098f659e0ac64af",
-  ]
-}
diff --git a/infrastructure/sandbox/PreProvisioner/lambda/deploy_terraform/fleet/Chart.yaml b/infrastructure/sandbox/PreProvisioner/lambda/deploy_terraform/fleet/Chart.yaml
deleted file mode 100644
index abc3be200..000000000
--- a/infrastructure/sandbox/PreProvisioner/lambda/deploy_terraform/fleet/Chart.yaml
+++ /dev/null
@@ -1,11 +0,0 @@
-apiVersion: v1
-description: A Helm chart for Fleet
-name: fleet
-keywords:
-  - fleet
-  - osquery
-version: v4.12.0
-home: https://github.com/fleetdm/fleet
-sources:
-  - https://github.com/fleetdm/fleet.git
-appVersion: v4.12.0
diff --git a/infrastructure/sandbox/PreProvisioner/lambda/deploy_terraform/fleet/README.md b/infrastructure/sandbox/PreProvisioner/lambda/deploy_terraform/fleet/README.md
deleted file mode 100644
index 287e94b65..000000000
--- a/infrastructure/sandbox/PreProvisioner/lambda/deploy_terraform/fleet/README.md
+++ /dev/null
@@ -1,56 +0,0 @@
-## Fleet Helm Chart
-
-This directory contains a Helm Chart that makes deploying Fleet on Kubernetes easy.
-
-### Usage
-
-#### 1. Create namespace
-
-This Helm chart optionally provisions a Kubernetes namespace. Alternatively, you can add one with `kubectl create namespace ` or by creating a YAML file containing the namespace and applying it to your cluster.
-
-#### 2. Create the necessary secrets
-
-This Helm chart optionally creates Kubernetes `Secret`s for MySQL and Redis necessary for Fleet to operate. If you manually create them instead, at a minimum, secrets for the MySQL password must be created. For example, if you are deploying into a namespace called `fleet`:
-
-```yaml
----
-kind: Secret
-apiVersion: v1
-metadata:
-  name: mysql
-  namespace: fleet
-stringData:
-  mysql-password: this-is-a-bad-password
-```
-
-If you use Fleet's TLS capabilities, TLS connections to the MySQL server, or AWS access secret keys, additional secrets and keys are needed. The name of each `Secret` must match the value of `secretName` for each section in the `values.yaml` file and the key of each secret must match the related key value from the values file. For example, to configure Fleet's TLS, you would use a Secret like the one below.
-
-```yaml
-kind: Secret
-apiVersion: v1
-metadata:
-  name: fleet
-  namespace: fleet
-stringData:
-  server.cert: |
-    your-pem-encoded-certificate-here
-  server.key: |
-    your-pem-encoded-key-here
-```
-
-Once all of your secrets are configured, use `kubectl apply -f --namespace ` to create them in the cluster.
-
-#### 3. Further Configuration
-
-To configure how Fleet runs, such as specifying the number of Fleet instances to deploy or changing the logger plugin for Fleet, edit the `values.yaml` file to your desired settings.
-
-#### 4. Deploy Fleet
-
-Once the secrets have been created and you have updated the values to match your required configuration, you can deploy with the following command.
-
-```sh
-helm upgrade --install fleet fleet \
-  --namespace \
-  --repo https://fleetdm.github.io/fleet/charts \
-  --values values.yaml
-```
diff --git a/infrastructure/sandbox/PreProvisioner/lambda/deploy_terraform/fleet/templates/cronjobs.yaml b/infrastructure/sandbox/PreProvisioner/lambda/deploy_terraform/fleet/templates/cronjobs.yaml
deleted file mode 100644
index f7992ac30..000000000
--- a/infrastructure/sandbox/PreProvisioner/lambda/deploy_terraform/fleet/templates/cronjobs.yaml
+++ /dev/null
@@ -1,351 +0,0 @@
----
-apiVersion: batch/v1
-kind: CronJob
-metadata:
-  labels:
-    app: fleet
-    chart: fleet
-    heritage: {{ .Release.Service }}
-    release: {{ .Release.Name }}
-  name: {{ .Values.fleetName }}
-  namespace: {{ .Release.Namespace }}
-spec:
-  # Forbiding concurrency prevents runaway costs on failing cronjobs stacking up
-  # see https://docs.google.com/document/d/1-4KmOlgfGEksNZnQo79a9nRLgM_i7ar2qovoZO3s_6g/edit.
-  concurrencyPolicy: Forbid
-  schedule: "{{ .Values.crons.vulnerabilities }}"
-  # EKS Fargate keeps resources running to show the job history.
-  # This saves significantly on compute in AWS.
-  # https://docs.google.com/document/d/1-4KmOlgfGEksNZnQo79a9nRLgM_i7ar2qovoZO3s_6g/edit
-  successfulJobsHistoryLimit: 0
-  jobTemplate:
-    spec:
-      template:
-        spec:
-          restartPolicy: Never
-          containers:
-          - name: {{ .Values.fleetName }}
-            imagePullPolicy: Always
-            command: [/usr/bin/fleet]
-            args: ["vuln_processing"]
-            image: {{ .Values.imageRepo }}:{{ .Values.imageTag }}
-            ports:
-            - name: {{ .Values.fleetName }}
-              containerPort: {{ .Values.fleet.listenPort }}
-            resources:
-              limits:
-                cpu: {{ .Values.resources.limits.cpu }}
-                memory: "2Gi"
-              requests:
-                cpu: {{ .Values.resources.requests.cpu }}
-                memory: "2Gi"
-            env:
-              ## BEGIN FLEET SECTION
-              - name: FLEET_SERVER_SANDBOX_ENABLED
-                value: "1"
-              - name: FLEET_LICENSE_ENFORCE_HOST_LIMIT
-                value: "true"
-              - name: FLEET_VULNERABILITIES_DATABASES_PATH
-                value: /tmp/vuln
-              {{- if ne .Values.packaging.enrollSecret "" }}
-              - name: FLEET_PACKAGING_GLOBAL_ENROLL_SECRET
-                value: "{{ .Values.packaging.enrollSecret }}"
-              - name: FLEET_PACKAGING_S3_BUCKET
-                value: "{{ .Values.packaging.s3.bucket }}"
-              - name: FLEET_PACKAGING_S3_PREFIX
-                value: "{{ .Values.packaging.s3.prefix }}"
-              {{- end }}
-              - name: FLEET_SERVER_ADDRESS
-                value: "0.0.0.0:{{ .Values.fleet.listenPort }}"
-              - name: FLEET_AUTH_BCRYPT_COST
-                value: "{{ .Values.fleet.auth.bcryptCost }}"
-              - name: FLEET_AUTH_SALT_KEY_SIZE
-                value: "{{ .Values.fleet.auth.saltKeySize }}"
-              - name: FLEET_APP_TOKEN_KEY_SIZE
-                value: "{{ .Values.fleet.app.tokenKeySize }}"
-              - name: FLEET_APP_TOKEN_VALIDITY_PERIOD
-                value: "{{ .Values.fleet.app.inviteTokenValidityPeriod }}"
-              - name: FLEET_SESSION_KEY_SIZE
-                value: "{{ .Values.fleet.session.keySize }}"
-              - name: FLEET_SESSION_DURATION
-                value: "{{ .Values.fleet.session.duration }}"
-              - name: FLEET_LOGGING_DEBUG
-                value: "{{ .Values.fleet.logging.debug }}"
-              - name: FLEET_LOGGING_JSON
-                value: "{{ .Values.fleet.logging.json }}"
-              - name: FLEET_LOGGING_DISABLE_BANNER
-                value: "{{ .Values.fleet.logging.disableBanner }}"
-              - name: FLEET_SERVER_TLS
-                value: "{{ .Values.fleet.tls.enabled }}"
-              {{- if .Values.fleet.tls.enabled }}
-              - name: FLEET_SERVER_TLS_COMPATIBILITY
-                value: "{{ .Values.fleet.tls.compatibility }}"
-              - name: FLEET_SERVER_CERT
-                value: "/secrets/tls/{{ .Values.fleet.tls.certSecretKey }}"
-              - name: FLEET_SERVER_KEY
-                value: "/secrets/tls/{{ .Values.fleet.tls.keySecretKey }}"
-              {{- end }}
-              {{- if ne .Values.fleet.carving.s3.bucketName "" }}
-              - name: FLEET_S3_BUCKET
-                value: "{{ .Values.fleet.carving.s3.bucketName }}"
-              - name: FLEET_S3_PREFIX
-                value: "{{ .Values.fleet.carving.s3.prefix }}"
-              {{- if ne .Values.fleet.carving.s3.accessKeyID "" }}
-              - name: FLEET_S3_ACCESS_KEY_ID
-                value: "{{ .Values.fleet.carving.s3.accessKeyID }}"
-              - name: FLEET_S3_SECRET_ACCESS_KEY
-                valueFrom:
-                  secretKeyRef:
-                    name: "{{ .Values.fleet.secretName }}"
-                    key: "{{ .Values.fleet.carving.s3.secretKey }}"
-              {{ else }}
-              - name: FLEET_S3_STS_ASSUME_ROLE_ARN
-                value: "{{ .Values.fleet.carving.s3.stsAssumeRoleARN }}"
-              {{- end }}
-              {{- end }}
-              ## END FLEET SECTION
-              ## BEGIN MYSQL SECTION
-              - name: FLEET_MYSQL_ADDRESS
-                value: "{{ .Values.mysql.address }}"
-              - name: FLEET_MYSQL_DATABASE
-                value: "{{ .Values.mysql.database }}"
-              - name: FLEET_MYSQL_USERNAME
-                value: "{{ .Values.mysql.username }}"
-              - name: FLEET_MYSQL_PASSWORD
-                valueFrom:
-                  secretKeyRef:
-                    name: {{ .Values.mysql.secretName }}
-                    key: {{ .Values.mysql.passwordKey }}
-              - name: FLEET_MYSQL_MAX_OPEN_CONNS
-                value: "{{ .Values.mysql.maxOpenConns }}"
-              - name: FLEET_MYSQL_MAX_IDLE_CONNS
-                value: "{{ .Values.mysql.maxIdleConns }}"
-              - name: FLEET_MYSQL_CONN_MAX_LIFETIME
-                value: "{{ .Values.mysql.connMaxLifetime }}"
-              {{- if .Values.mysql.tls.enabled }}
-              - name: FLEET_MYSQL_TLS_CA
-                value: "/secrets/mysql/{{ .Values.mysql.tls.caCertKey }}"
-              - name: FLEET_MYSQL_TLS_CERT
-                value: "/secrets/mysql/{{ .Values.mysql.tls.certKey }}"
-              - name: FLEET_MYSQL_TLS_KEY
-                value: "/secrets/mysql/{{ .Values.mysql.tls.keyKey }}"
-              - name: 
FLEET_MYSQL_TLS_CONFIG - value: "{{ .Values.mysql.tls.config }}" - - name: FLEET_MYSQL_TLS_SERVER_NAME - value: "{{ .Values.mysql.tls.serverName }}" - {{- end }} - ## END MYSQL SECTION - ## BEGIN REDIS SECTION - - name: FLEET_REDIS_ADDRESS - value: "{{ .Values.redis.address }}" - - name: FLEET_REDIS_DATABASE - value: "{{ .Values.redis.database }}" - {{- if .Values.redis.usePassword }} - - name: FLEET_REDIS_PASSWORD - valueFrom: - secretKeyRef: - name: "{{ .Values.redis.secretName }}" - key: "{{ .Values.redis.passwordKey }}" - {{- end }} - ## END REDIS SECTION - ## BEGIN OSQUERY SECTION - - name: FLEET_OSQUERY_NODE_KEY_SIZE - value: "{{ .Values.osquery.nodeKeySize }}" - - name: FLEET_OSQUERY_LABEL_UPDATE_INTERVAL - value: "{{ .Values.osquery.labelUpdateInterval }}" - - name: FLEET_OSQUERY_DETAIL_UPDATE_INTERVAL - value: "{{ .Values.osquery.detailUpdateInterval }}" - - name: FLEET_OSQUERY_STATUS_LOG_PLUGIN - value: "{{ .Values.osquery.logging.statusPlugin }}" - - name: FLEET_OSQUERY_RESULT_LOG_PLUGIN - value: "{{ .Values.osquery.logging.resultPlugin }}" - {{- if eq .Values.osquery.logging.statusPlugin "filesystem" }} - - name: FLEET_FILESYSTEM_STATUS_LOG_FILE - value: "/logs/{{ .Values.osquery.logging.filesystem.statusLogFile }}" - {{- end }} - {{- if eq .Values.osquery.logging.resultPlugin "filesystem" }} - - name: FLEET_FILESYSTEM_RESULT_LOG_FILE - value: "/logs/{{ .Values.osquery.logging.filesystem.resultLogFile }}" - {{- end }} - {{- if or (eq .Values.osquery.logging.statusPlugin "filesystem") (eq .Values.osquery.logging.resultPlugin "filesystem") }} - - name: FLEET_FILESYSTEM_ENABLE_LOG_ROTATION - value: "{{ .Values.osquery.logging.filesystem.enableRotation }}" - - name: FLEET_FILESYSTEM_ENABLE_LOG_COMPRESSION - value: "{{ .Values.osquery.logging.filesystem.enableCompression }}" - {{- end }} - - {{- if or (eq .Values.osquery.logging.statusPlugin "firehose") (eq .Values.osquery.logging.resultPlugin "firehose") }} - - name: FLEET_FIREHOSE_REGION - value: "{{ 
.Values.osquery.logging.firehose.region }}" - {{- if eq .Values.osquery.logging.statusPlugin "firehose" }} - - name: FLEET_FIREHOSE_STATUS_STREAM - value: "{{ .Values.osquery.logging.firehose.statusStream }}" - {{- end }} - {{- if eq .Values.osquery.logging.resultPlugin "firehose" }} - - name: FLEET_FIREHOSE_RESULT_STREAM - value: "{{ .Values.osquery.logging.firehose.resultStream }}" - {{- end }} - {{- if ne .Values.osquery.logging.firehose.accessKeyID "" }} - - name: FLEET_FIREHOSE_ACCESS_KEY_ID - value: "{{ .Values.osquery.logging.firehose.accessKeyID }}" - - name: FLEET_FIREHOSE_SECRET_ACCESS_KEY - valueFrom: - secretKeyRef: - name: "{{ .Values.osquery.secretName }}" - key: "{{ .Values.osquery.logging.firehose.secretKey }}" - {{ else }} - - name: FLEET_FIREHOSE_STS_ASSUME_ROLE_ARN - value: "{{ .Values.osquery.logging.firehose.stsAssumeRoleARN }}" - {{- end }} - {{- end }} - - {{- if or (eq .Values.osquery.logging.statusPlugin "kinesis") (eq .Values.osquery.logging.resultPlugin "kinesis") }} - - name: FLEET_KINESIS_REGION - value: "{{ .Values.osquery.logging.kinesis.region }}" - {{- if eq .Values.osquery.logging.statusPlugin "kinesis" }} - - name: FLEET_KINESIS_STATUS_STREAM - value: "{{ .Values.osquery.logging.kinesis.statusStream }}" - {{- end }} - {{- if eq .Values.osquery.logging.resultPlugin "kinesis" }} - - name: FLEET_KINESIS_RESULT_STREAM - value: "{{ .Values.osquery.logging.kinesis.resultStream }}" - {{- end }} - {{- if ne .Values.osquery.logging.kinesis.accessKeyID "" }} - - name: FLEET_KINESIS_ACCESS_KEY_ID - value: "{{ .Values.osquery.logging.kinesis.accessKeyID }}" - - name: FLEET_KINESIS_SECRET_ACCESS_KEY - valueFrom: - secretKeyRef: - name: "{{ .Values.osquery.secretName }}" - key: "{{ .Values.osquery.logging.kinesis.secretKey }}" - {{ else }} - - name: FLEET_KINESIS_STS_ASSUME_ROLE_ARN - value: "{{ .Values.osquery.logging.kinesis.stsAssumeRoleARN }}" - {{- end }} - {{- end }} - - {{- if or (eq .Values.osquery.logging.statusPlugin "lambda") (eq 
.Values.osquery.logging.resultPlugin "lambda") }} - - name: FLEET_LAMBDA_REGION - value: "{{ .Values.osquery.logging.lambda.region }}" - {{- if eq .Values.osquery.logging.statusPlugin "lambda" }} - - name: FLEET_LAMBDA_STATUS_FUNCTION - value: "{{ .Values.osquery.logging.lambda.statusFunction }}" - {{- end }} - {{- if eq .Values.osquery.logging.resultPlugin "lambda" }} - - name: FLEET_LAMBDA_RESULT_FUNCTION - value: "{{ .Values.osquery.logging.lambda.resultFunction }}" - {{- end }} - {{- if ne .Values.osquery.logging.lambda.accessKeyID "" }} - - name: FLEET_LAMBDA_ACCESS_KEY_ID - value: "{{ .Values.osquery.logging.lambda.accessKeyID }}" - - name: FLEET_LAMBDA_SECRET_ACCESS_KEY - valueFrom: - secretKeyRef: - name: "{{ .Values.osquery.secretName }}" - key: "{{ .Values.osquery.logging.lambda.secretKey }}" - {{ else }} - - name: FLEET_LAMBDA_STS_ASSUME_ROLE_ARN - value: "{{ .Values.osquery.logging.lambda.stsAssumeRoleARN }}" - {{- end }} - {{- end }} - - - {{- if or (eq .Values.osquery.logging.statusPlugin "pubsub") (eq .Values.osquery.logging.resultPlugin "pubsub") }} - - name: FLEET_PUBSUB_PROJECT - value: "{{ .Values.osquery.logging.pubsub.project }}" - {{- end }} - {{- if eq .Values.osquery.logging.statusPlugin "pubsub" }} - - name: FLEET_PUBSUB_STATUS_TOPIC - value: "{{ .Values.osquery.logging.pubsub.statusTopic }}" - {{- end }} - {{- if eq .Values.osquery.logging.resultPlugin "pubsub" }} - - name: FLEET_PUBSUB_RESULT_TOPIC - value: "{{ .Values.osquery.logging.pubsub.resultTopic }}" - {{- end }} - ## END OSQUERY SECTION - securityContext: - allowPrivilegeEscalation: false - capabilities: - drop: [ALL] - privileged: false - readOnlyRootFilesystem: true - runAsGroup: 3333 - runAsUser: 3333 - runAsNonRoot: true - livenessProbe: - httpGet: - path: /healthz - port: {{ .Values.fleet.listenPort }} - timeoutSeconds: 10 - readinessProbe: - httpGet: - path: /healthz - port: {{ .Values.fleet.listenPort }} - timeoutSeconds: 10 - {{- if or (.Values.fleet.tls.enabled) 
(.Values.mysql.tls.enabled) (eq .Values.osquery.logging.statusPlugin "filesystem") (eq .Values.osquery.logging.resultPlugin "filesystem") }} - volumeMounts: - {{- if .Values.fleet.tls.enabled }} - - name: {{ .Values.fleetName }}-tls - readOnly: true - mountPath: /secrets/tls - {{- end }} - {{- if .Values.mysql.tls.enabled }} - - name: mysql-tls - readOnly: true - mountPath: /secrets/mysql - {{- end }} - {{- if or (eq .Values.osquery.logging.statusPlugin "filesystem") (eq .Values.osquery.logging.resultPlugin "filesystem") }} - - name: osquery-logs - mountPath: /logs - {{- end }} - - name: tmp - mountPath: /tmp - {{- end }} - {{- if .Values.gke.cloudSQL.enableProxy }} - - name: cloudsql-proxy - image: "gcr.io/cloudsql-docker/gce-proxy:{{ .Values.gke.cloudSQL.imageTag }}" - command: - - "/cloud_sql_proxy" - - "-verbose={{ .Values.gke.cloudSQL.verbose}}" - - "-instances={{ .Values.gke.cloudSQL.instanceName }}=tcp:3306" - resources: - limits: - cpu: 0.5 # 500Mhz - memory: 150Mi - requests: - cpu: 0.1 # 100Mhz - memory: 50Mi - securityContext: - allowPrivilegeEscalation: false - capabilities: - drop: [ALL] - privileged: false - readOnlyRootFilesystem: true - runAsGroup: 3333 - runAsUser: 3333 - runAsNonRoot: true - {{- end }} - hostPID: false - hostNetwork: false - hostIPC: false - serviceAccountName: {{ .Values.fleetName }} - {{- if or (.Values.fleet.tls.enabled) (.Values.mysql.tls.enabled) (eq .Values.osquery.logging.statusPlugin "filesystem") (eq .Values.osquery.logging.resultPlugin "filesystem") }} - volumes: - {{- if .Values.fleet.tls.enabled }} - - name: {{ .Values.fleetName }}-tls - secret: - secretName: "{{ .Values.fleet.secretName }}" - {{- end }} - {{- if .Values.mysql.tls.enabled }} - - name: mysql-tls - secret: - secretName: "{{ .Values.mysql.secretName }}" - {{- end }} - {{- if or (eq .Values.osquery.logging.statusPlugin "filesystem") (eq .Values.osquery.logging.resultPlugin "filesystem") }} - - name: osquery-logs - emptyDir: - sizeLimit: "{{ 
.Values.osquery.logging.filesystem.volumeSize }}" - {{- end }} - - name: tmp - emptyDir: - {{- end }} diff --git a/infrastructure/sandbox/PreProvisioner/lambda/deploy_terraform/fleet/templates/deployment.yaml b/infrastructure/sandbox/PreProvisioner/lambda/deploy_terraform/fleet/templates/deployment.yaml deleted file mode 100644 index 137243dd0..000000000 --- a/infrastructure/sandbox/PreProvisioner/lambda/deploy_terraform/fleet/templates/deployment.yaml +++ /dev/null @@ -1,390 +0,0 @@ ---- -apiVersion: apps/v1 -kind: Deployment -metadata: - labels: - app: fleet - chart: fleet - heritage: {{ .Release.Service }} - release: {{ .Release.Name }} - name: {{ .Values.fleetName }} - namespace: {{ .Release.Namespace }} -spec: - replicas: {{ .Values.replicas }} - selector: - matchLabels: - app: fleet - chart: fleet - heritage: {{ .Release.Service }} - release: {{ .Release.Name }} - template: - metadata: -{{- with .Values.podAnnotations }} - annotations: -{{- toYaml . | trim | nindent 8 }} -{{- end }} - labels: -{{- with .Values.podLabels }} -{{- toYaml . 
| trim | nindent 8 }} -{{- end }} - app: fleet - chart: fleet - heritage: {{ .Release.Service }} - release: {{ .Release.Name }} - spec: - containers: - - name: {{ .Values.fleetName }} - imagePullPolicy: Always - command: [/usr/bin/fleet] - args: ["serve"] - image: {{ .Values.imageRepo }}:{{ .Values.imageTag }} - ports: - - name: {{ .Values.fleetName }} - containerPort: {{ .Values.fleet.listenPort }} - resources: - limits: - cpu: {{ .Values.resources.limits.cpu }} - memory: {{ .Values.resources.limits.memory }} - requests: - cpu: {{ .Values.resources.requests.cpu }} - memory: {{ .Values.resources.requests.memory }} - env: - ## BEGIN FLEET SECTION - - name: ELASTIC_APM_SERVER_URL - value: "{{ .Values.apm.url }}" - - name: ELASTIC_APM_SECRET_TOKEN - value: "{{ .Values.apm.token }}" - - name: ELASTIC_APM_SERVICE_NAME - value: "sandbox" - - name: ELASTIC_APM_ENVIRONMENT - value: "{{ .Values.fleetName }}" - - name: FLEET_LOGGING_TRACING_TYPE - value: elasticapm - - name: FLEET_LOGGING_TRACING_ENABLED - value: "true" - - name: FLEET_VULNERABILITIES_DISABLE_SCHEDULE - value: "true" - - name: FLEET_SESSION_DURATION - value: "1y" - - name: FLEET_SERVER_SANDBOX_ENABLED - value: "1" - - name: FLEET_LICENSE_ENFORCE_HOST_LIMIT - value: "true" - - name: FLEET_LICENSE_KEY - value: "{{ .Values.fleet.licenseKey }}" - - name: FLEET_VULNERABILITIES_DATABASES_PATH - value: /tmp/vuln - {{- if ne .Values.packaging.enrollSecret "" }} - - name: FLEET_PACKAGING_GLOBAL_ENROLL_SECRET - value: "{{ .Values.packaging.enrollSecret }}" - - name: FLEET_PACKAGING_S3_BUCKET - value: "{{ .Values.packaging.s3.bucket }}" - - name: FLEET_PACKAGING_S3_PREFIX - value: "{{ .Values.packaging.s3.prefix }}" - {{- end }} - - name: FLEET_SERVER_ADDRESS - value: "0.0.0.0:{{ .Values.fleet.listenPort }}" - - name: FLEET_AUTH_BCRYPT_COST - value: "{{ .Values.fleet.auth.bcryptCost }}" - - name: FLEET_AUTH_SALT_KEY_SIZE - value: "{{ .Values.fleet.auth.saltKeySize }}" - - name: FLEET_APP_TOKEN_KEY_SIZE - value: "{{ 
.Values.fleet.app.tokenKeySize }}" - - name: FLEET_APP_TOKEN_VALIDITY_PERIOD - value: "{{ .Values.fleet.app.inviteTokenValidityPeriod }}" - - name: FLEET_SESSION_KEY_SIZE - value: "{{ .Values.fleet.session.keySize }}" - - name: FLEET_SESSION_DURATION - value: "{{ .Values.fleet.session.duration }}" - - name: FLEET_LOGGING_DEBUG - value: "{{ .Values.fleet.logging.debug }}" - - name: FLEET_LOGGING_JSON - value: "{{ .Values.fleet.logging.json }}" - - name: FLEET_LOGGING_DISABLE_BANNER - value: "{{ .Values.fleet.logging.disableBanner }}" - - name: FLEET_SERVER_TLS - value: "{{ .Values.fleet.tls.enabled }}" - {{- if .Values.fleet.tls.enabled }} - - name: FLEET_SERVER_TLS_COMPATIBILITY - value: "{{ .Values.fleet.tls.compatibility }}" - - name: FLEET_SERVER_CERT - value: "/secrets/tls/{{ .Values.fleet.tls.certSecretKey }}" - - name: FLEET_SERVER_KEY - value: "/secrets/tls/{{ .Values.fleet.tls.keySecretKey }}" - {{- end }} - {{- if ne .Values.fleet.carving.s3.bucketName "" }} - - name: FLEET_S3_BUCKET - value: "{{ .Values.fleet.carving.s3.bucketName }}" - - name: FLEET_S3_PREFIX - value: "{{ .Values.fleet.carving.s3.prefix }}" - {{- if ne .Values.fleet.carving.s3.accessKeyID "" }} - - name: FLEET_S3_ACCESS_KEY_ID - value: "{{ .Values.fleet.carving.s3.accessKeyID }}" - - name: FLEET_S3_SECRET_ACCESS_KEY - valueFrom: - secretKeyRef: - name: "{{ .Values.fleet.secretName }}" - key: "{{ .Values.fleet.carving.s3.secretKey }}" - {{ else }} - - name: FLEET_S3_STS_ASSUME_ROLE_ARN - value: "{{ .Values.fleet.carving.s3.stsAssumeRoleARN }}" - {{- end }} - {{- end }} - ## END FLEET SECTION - ## BEGIN MYSQL SECTION - - name: FLEET_MYSQL_ADDRESS - value: "{{ .Values.mysql.address }}" - - name: FLEET_MYSQL_DATABASE - value: "{{ .Values.mysql.database }}" - - name: FLEET_MYSQL_USERNAME - value: "{{ .Values.mysql.username }}" - - name: FLEET_MYSQL_PASSWORD - valueFrom: - secretKeyRef: - name: {{ .Values.mysql.secretName }} - key: {{ .Values.mysql.passwordKey }} - - name: 
FLEET_MYSQL_MAX_OPEN_CONNS - value: "{{ .Values.mysql.maxOpenConns }}" - - name: FLEET_MYSQL_MAX_IDLE_CONNS - value: "{{ .Values.mysql.maxIdleConns }}" - - name: FLEET_MYSQL_CONN_MAX_LIFETIME - value: "{{ .Values.mysql.connMaxLifetime }}" - {{- if .Values.mysql.tls.enabled }} - - name: FLEET_MYSQL_TLS_CA - value: "/secrets/mysql/{{ .Values.mysql.tls.caCertKey }}" - - name: FLEET_MYSQL_TLS_CERT - value: "/secrets/mysql/{{ .Values.mysql.tls.certKey }}" - - name: FLEET_MYSQL_TLS_KEY - value: "/secrets/mysql/{{ .Values.mysql.tls.keyKey }}" - - name: FLEET_MYSQL_TLS_CONFIG - value: "{{ .Values.mysql.tls.config }}" - - name: FLEET_MYSQL_TLS_SERVER_NAME - value: "{{ .Values.mysql.tls.serverName }}" - {{- end }} - ## END MYSQL SECTION - ## BEGIN REDIS SECTION - - name: FLEET_REDIS_ADDRESS - value: "{{ .Values.redis.address }}" - - name: FLEET_REDIS_DATABASE - value: "{{ .Values.redis.database }}" - {{- if .Values.redis.usePassword }} - - name: FLEET_REDIS_PASSWORD - valueFrom: - secretKeyRef: - name: "{{ .Values.redis.secretName }}" - key: "{{ .Values.redis.passwordKey }}" - {{- end }} - ## END REDIS SECTION - ## BEGIN OSQUERY SECTION - - name: FLEET_OSQUERY_NODE_KEY_SIZE - value: "{{ .Values.osquery.nodeKeySize }}" - - name: FLEET_OSQUERY_LABEL_UPDATE_INTERVAL - value: "{{ .Values.osquery.labelUpdateInterval }}" - - name: FLEET_OSQUERY_DETAIL_UPDATE_INTERVAL - value: "{{ .Values.osquery.detailUpdateInterval }}" - - name: FLEET_OSQUERY_STATUS_LOG_PLUGIN - value: "{{ .Values.osquery.logging.statusPlugin }}" - - name: FLEET_OSQUERY_RESULT_LOG_PLUGIN - value: "{{ .Values.osquery.logging.resultPlugin }}" - {{- if eq .Values.osquery.logging.statusPlugin "filesystem" }} - - name: FLEET_FILESYSTEM_STATUS_LOG_FILE - value: "/logs/{{ .Values.osquery.logging.filesystem.statusLogFile }}" - {{- end }} - {{- if eq .Values.osquery.logging.resultPlugin "filesystem" }} - - name: FLEET_FILESYSTEM_RESULT_LOG_FILE - value: "/logs/{{ .Values.osquery.logging.filesystem.resultLogFile }}" - {{- 
end }} - {{- if or (eq .Values.osquery.logging.statusPlugin "filesystem") (eq .Values.osquery.logging.resultPlugin "filesystem") }} - - name: FLEET_FILESYSTEM_ENABLE_LOG_ROTATION - value: "{{ .Values.osquery.logging.filesystem.enableRotation }}" - - name: FLEET_FILESYSTEM_ENABLE_LOG_COMPRESSION - value: "{{ .Values.osquery.logging.filesystem.enableCompression }}" - {{- end }} - - {{- if or (eq .Values.osquery.logging.statusPlugin "firehose") (eq .Values.osquery.logging.resultPlugin "firehose") }} - - name: FLEET_FIREHOSE_REGION - value: "{{ .Values.osquery.logging.firehose.region }}" - {{- if eq .Values.osquery.logging.statusPlugin "firehose" }} - - name: FLEET_FIREHOSE_STATUS_STREAM - value: "{{ .Values.osquery.logging.firehose.statusStream }}" - {{- end }} - {{- if eq .Values.osquery.logging.resultPlugin "firehose" }} - - name: FLEET_FIREHOSE_RESULT_STREAM - value: "{{ .Values.osquery.logging.firehose.resultStream }}" - {{- end }} - {{- if ne .Values.osquery.logging.firehose.accessKeyID "" }} - - name: FLEET_FIREHOSE_ACCESS_KEY_ID - value: "{{ .Values.osquery.logging.firehose.accessKeyID }}" - - name: FLEET_FIREHOSE_SECRET_ACCESS_KEY - valueFrom: - secretKeyRef: - name: "{{ .Values.osquery.secretName }}" - key: "{{ .Values.osquery.logging.firehose.secretKey }}" - {{ else }} - - name: FLEET_FIREHOSE_STS_ASSUME_ROLE_ARN - value: "{{ .Values.osquery.logging.firehose.stsAssumeRoleARN }}" - {{- end }} - {{- end }} - - {{- if or (eq .Values.osquery.logging.statusPlugin "kinesis") (eq .Values.osquery.logging.resultPlugin "kinesis") }} - - name: FLEET_KINESIS_REGION - value: "{{ .Values.osquery.logging.kinesis.region }}" - {{- if eq .Values.osquery.logging.statusPlugin "kinesis" }} - - name: FLEET_KINESIS_STATUS_STREAM - value: "{{ .Values.osquery.logging.kinesis.statusStream }}" - {{- end }} - {{- if eq .Values.osquery.logging.resultPlugin "kinesis" }} - - name: FLEET_KINESIS_RESULT_STREAM - value: "{{ .Values.osquery.logging.kinesis.resultStream }}" - {{- end }} - {{- 
if ne .Values.osquery.logging.kinesis.accessKeyID "" }} - - name: FLEET_KINESIS_ACCESS_KEY_ID - value: "{{ .Values.osquery.logging.kinesis.accessKeyID }}" - - name: FLEET_KINESIS_SECRET_ACCESS_KEY - valueFrom: - secretKeyRef: - name: "{{ .Values.osquery.secretName }}" - key: "{{ .Values.osquery.logging.kinesis.secretKey }}" - {{ else }} - - name: FLEET_KINESIS_STS_ASSUME_ROLE_ARN - value: "{{ .Values.osquery.logging.kinesis.stsAssumeRoleARN }}" - {{- end }} - {{- end }} - - {{- if or (eq .Values.osquery.logging.statusPlugin "lambda") (eq .Values.osquery.logging.resultPlugin "lambda") }} - - name: FLEET_LAMBDA_REGION - value: "{{ .Values.osquery.logging.lambda.region }}" - {{- if eq .Values.osquery.logging.statusPlugin "lambda" }} - - name: FLEET_LAMBDA_STATUS_FUNCTION - value: "{{ .Values.osquery.logging.lambda.statusFunction }}" - {{- end }} - {{- if eq .Values.osquery.logging.resultPlugin "lambda" }} - - name: FLEET_LAMBDA_RESULT_FUNCTION - value: "{{ .Values.osquery.logging.lambda.resultFunction }}" - {{- end }} - {{- if ne .Values.osquery.logging.lambda.accessKeyID "" }} - - name: FLEET_LAMBDA_ACCESS_KEY_ID - value: "{{ .Values.osquery.logging.lambda.accessKeyID }}" - - name: FLEET_LAMBDA_SECRET_ACCESS_KEY - valueFrom: - secretKeyRef: - name: "{{ .Values.osquery.secretName }}" - key: "{{ .Values.osquery.logging.lambda.secretKey }}" - {{ else }} - - name: FLEET_LAMBDA_STS_ASSUME_ROLE_ARN - value: "{{ .Values.osquery.logging.lambda.stsAssumeRoleARN }}" - {{- end }} - {{- end }} - - - {{- if or (eq .Values.osquery.logging.statusPlugin "pubsub") (eq .Values.osquery.logging.resultPlugin "pubsub") }} - - name: FLEET_PUBSUB_PROJECT - value: "{{ .Values.osquery.logging.pubsub.project }}" - {{- end }} - {{- if eq .Values.osquery.logging.statusPlugin "pubsub" }} - - name: FLEET_PUBSUB_STATUS_TOPIC - value: "{{ .Values.osquery.logging.pubsub.statusTopic }}" - {{- end }} - {{- if eq .Values.osquery.logging.resultPlugin "pubsub" }} - - name: FLEET_PUBSUB_RESULT_TOPIC - 
value: "{{ .Values.osquery.logging.pubsub.resultTopic }}" - {{- end }} - ## END OSQUERY SECTION - securityContext: - allowPrivilegeEscalation: false - capabilities: - drop: [ALL] - privileged: false - readOnlyRootFilesystem: true - runAsGroup: 3333 - runAsUser: 3333 - runAsNonRoot: true - livenessProbe: - httpGet: - path: /healthz - port: {{ .Values.fleet.listenPort }} - timeoutSeconds: 10 - readinessProbe: - httpGet: - path: /healthz - port: {{ .Values.fleet.listenPort }} - timeoutSeconds: 10 - {{- if or (.Values.fleet.tls.enabled) (.Values.mysql.tls.enabled) (eq .Values.osquery.logging.statusPlugin "filesystem") (eq .Values.osquery.logging.resultPlugin "filesystem") }} - volumeMounts: - {{- if .Values.fleet.tls.enabled }} - - name: {{ .Values.fleetName }}-tls - readOnly: true - mountPath: /secrets/tls - {{- end }} - {{- if .Values.mysql.tls.enabled }} - - name: mysql-tls - readOnly: true - mountPath: /secrets/mysql - {{- end }} - {{- if or (eq .Values.osquery.logging.statusPlugin "filesystem") (eq .Values.osquery.logging.resultPlugin "filesystem") }} - - name: osquery-logs - mountPath: /logs - {{- end }} - - name: tmp - mountPath: /tmp - {{- end }} - {{- if .Values.gke.cloudSQL.enableProxy }} - - name: cloudsql-proxy - image: "gcr.io/cloudsql-docker/gce-proxy:{{ .Values.gke.cloudSQL.imageTag }}" - command: - - "/cloud_sql_proxy" - - "-verbose={{ .Values.gke.cloudSQL.verbose}}" - - "-instances={{ .Values.gke.cloudSQL.instanceName }}=tcp:3306" - resources: - limits: - cpu: 0.5 # 500Mhz - memory: 150Mi - requests: - cpu: 0.1 # 100Mhz - memory: 50Mi - securityContext: - allowPrivilegeEscalation: false - capabilities: - drop: [ALL] - privileged: false - readOnlyRootFilesystem: true - runAsGroup: 3333 - runAsUser: 3333 - runAsNonRoot: true - {{- end }} - hostPID: false - hostNetwork: false - hostIPC: false - serviceAccountName: {{ .Values.fleetName }} - {{- if or (.Values.fleet.tls.enabled) (.Values.mysql.tls.enabled) (eq .Values.osquery.logging.statusPlugin 
"filesystem") (eq .Values.osquery.logging.resultPlugin "filesystem") }} - volumes: - {{- if .Values.fleet.tls.enabled }} - - name: {{ .Values.fleetName }}-tls - secret: - secretName: "{{ .Values.fleet.secretName }}" - {{- end }} - {{- if .Values.mysql.tls.enabled }} - - name: mysql-tls - secret: - secretName: "{{ .Values.mysql.secretName }}" - {{- end }} - {{- if or (eq .Values.osquery.logging.statusPlugin "filesystem") (eq .Values.osquery.logging.resultPlugin "filesystem") }} - - name: osquery-logs - emptyDir: - sizeLimit: "{{ .Values.osquery.logging.filesystem.volumeSize }}" - {{- end }} - - name: tmp - emptyDir: - {{- end }} - {{- with .Values.nodeSelector }} - nodeSelector: - {{- toYaml . | nindent 8 }} - {{- end }} - {{- with .Values.affinity }} - affinity: - {{- toYaml . | nindent 8 }} - {{- end }} - {{- with .Values.tolerations }} - tolerations: - {{- toYaml . | nindent 8 }} - {{- end }} diff --git a/infrastructure/sandbox/PreProvisioner/lambda/deploy_terraform/fleet/templates/gke-managedcertificate.yaml b/infrastructure/sandbox/PreProvisioner/lambda/deploy_terraform/fleet/templates/gke-managedcertificate.yaml deleted file mode 100644 index 51c84eb3b..000000000 --- a/infrastructure/sandbox/PreProvisioner/lambda/deploy_terraform/fleet/templates/gke-managedcertificate.yaml +++ /dev/null @@ -1,9 +0,0 @@ -{{- if .Values.gke.ingress.useManagedCertificate }} -apiVersion: networking.gke.io/v1 -kind: ManagedCertificate -metadata: - name: {{ .Values.fleetName }} -spec: - domains: - - {{ .Values.hostName }} -{{- end }} diff --git a/infrastructure/sandbox/PreProvisioner/lambda/deploy_terraform/fleet/templates/ingress.yaml b/infrastructure/sandbox/PreProvisioner/lambda/deploy_terraform/fleet/templates/ingress.yaml deleted file mode 100644 index c72809a8e..000000000 --- a/infrastructure/sandbox/PreProvisioner/lambda/deploy_terraform/fleet/templates/ingress.yaml +++ /dev/null @@ -1,39 +0,0 @@ -{{- if .Values.createIngress }} -apiVersion: networking.k8s.io/v1 -kind: 
Ingress -metadata: -{{- if or .Values.ingressAnnotations .Values.gke.useGKEIngress }} - annotations: -{{- range $key, $value := $.Values.ingressAnnotations }} - {{ $key }}: {{ $value | quote }} -{{- end }} - {{- if .Values.gke.ingress.useGKEIngress }} - kubernetes.io/ingress.class: gce - {{- if .Values.gke.ingress.useManagedCertificate }} - kubernetes.io/ingress.allow-http: "false" - networking.gke.io/managed-certificates: fleet - {{- end }} - {{- end }} -{{- end }} - labels: - app: {{ .Values.fleetName }} - chart: fleet - heritage: {{ .Release.Service }} - release: {{ .Release.Name }} - name: {{ .Values.fleetName }} - namespace: {{ .Release.Namespace }} -spec: - rules: - - host: {{ .Values.hostName }} - http: - paths: - - path: / - # Next line required in k8s 1.19 and not supported in <=1.17 - # pathType: Exact - backend: - service: - name: {{ .Values.fleetName }} - port: - number: 8080 - pathType: Prefix -{{- end }} diff --git a/infrastructure/sandbox/PreProvisioner/lambda/deploy_terraform/fleet/templates/job-migration.yaml b/infrastructure/sandbox/PreProvisioner/lambda/deploy_terraform/fleet/templates/job-migration.yaml deleted file mode 100644 index bf05ff2d4..000000000 --- a/infrastructure/sandbox/PreProvisioner/lambda/deploy_terraform/fleet/templates/job-migration.yaml +++ /dev/null @@ -1,139 +0,0 @@ -{{- if .Values.fleet.autoApplySQLMigrations }} -apiVersion: batch/v1 -kind: Job -metadata: - labels: - app: fleet - chart: fleet - heritage: {{ .Release.Service }} - release: {{ .Release.Name }} - name: {{ .Values.fleetName }}-migration - namespace: {{ .Release.Namespace }} -spec: - # This will clean up the job to prevent excess costs when using - # EKS/Fargate. See - # https://docs.google.com/document/d/1-4KmOlgfGEksNZnQo79a9nRLgM_i7ar2qovoZO3s_6g/edit - ttlSecondsAfterFinished: 100 - template: - metadata: -{{- with .Values.podAnnotations }} - annotations: -{{- toYaml . 
 | trim | nindent 8 }}
-{{- end }}
-      labels:
-        app: fleet
-        chart: fleet
-        heritage: {{ .Release.Service }}
-        release: {{ .Release.Name }}
-    spec:
-      restartPolicy: Never
-      containers:
-        - name: {{ .Values.fleetName }}-migration
-          command: [/usr/bin/fleet]
-          args: ["prepare","db","--no-prompt"]
-          image: {{ .Values.imageRepo }}:{{ .Values.imageTag }}
-          imagePullPolicy: Always
-          resources:
-            limits:
-              cpu: {{ .Values.resources.limits.cpu }}
-              memory: {{ .Values.resources.limits.memory }}
-            requests:
-              cpu: {{ .Values.resources.requests.cpu }}
-              memory: {{ .Values.resources.requests.memory }}
-          env:
-            - name: FLEET_SERVER_ADDRESS
-              value: "0.0.0.0:{{ .Values.fleet.listenPort }}"
-            - name: FLEET_AUTH_BCRYPT_COST
-              value: "{{ .Values.fleet.auth.bcryptCost }}"
-            - name: FLEET_AUTH_SALT_KEY_SIZE
-              value: "{{ .Values.fleet.auth.saltKeySize }}"
-            - name: FLEET_APP_TOKEN_KEY_SIZE
-              value: "{{ .Values.fleet.app.tokenKeySize }}"
-            - name: FLEET_APP_TOKEN_VALIDITY_PERIOD
-              value: "{{ .Values.fleet.app.inviteTokenValidityPeriod }}"
-            - name: FLEET_SESSION_KEY_SIZE
-              value: "{{ .Values.fleet.session.keySize }}"
-            - name: FLEET_SESSION_DURATION
-              value: "{{ .Values.fleet.session.duration }}"
-            - name: FLEET_LOGGING_DEBUG
-              value: "{{ .Values.fleet.logging.debug }}"
-            - name: FLEET_LOGGING_JSON
-              value: "{{ .Values.fleet.logging.json }}"
-            - name: FLEET_LOGGING_DISABLE_BANNER
-              value: "{{ .Values.fleet.logging.disableBanner }}"
-            - name: FLEET_SERVER_TLS
-              value: "{{ .Values.fleet.tls.enabled }}"
-            {{- if .Values.fleet.tls.enabled }}
-            - name: FLEET_SERVER_TLS_COMPATIBILITY
-              value: "{{ .Values.fleet.tls.compatibility }}"
-            - name: FLEET_SERVER_CERT
-              value: "/secrets/tls/{{ .Values.fleet.tls.certSecretKey }}"
-            - name: FLEET_SERVER_KEY
-              value: "/secrets/tls/{{ .Values.fleet.tls.keySecretKey }}"
-            {{- end }}
-            ## END FLEET SECTION
-            ## BEGIN MYSQL SECTION
-            - name: FLEET_MYSQL_ADDRESS
-              value: "{{ .Values.mysql.address }}"
-            - name: FLEET_MYSQL_DATABASE
-              value: "{{ .Values.mysql.database }}"
-            - name: FLEET_MYSQL_USERNAME
-              value: "{{ .Values.mysql.username }}"
-            - name: FLEET_MYSQL_PASSWORD
-              valueFrom:
-                secretKeyRef:
-                  name: {{ .Values.mysql.secretName }}
-                  key: {{ .Values.mysql.passwordKey }}
-            - name: FLEET_MYSQL_MAX_OPEN_CONNS
-              value: "{{ .Values.mysql.maxOpenConns }}"
-            - name: FLEET_MYSQL_MAX_IDLE_CONNS
-              value: "{{ .Values.mysql.maxIdleConns }}"
-            - name: FLEET_MYSQL_CONN_MAX_LIFETIME
-              value: "{{ .Values.mysql.connMaxLifetime }}"
-            {{- if .Values.mysql.tls.enabled }}
-            - name: FLEET_MYSQL_TLS_CA
-              value: "/secrets/mysql/{{ .Values.mysql.tls.caCertKey }}"
-            - name: FLEET_MYSQL_TLS_CERT
-              value: "/secrets/mysql/{{ .Values.mysql.tls.certKey }}"
-            - name: FLEET_MYSQL_TLS_KEY
-              value: "/secrets/mysql/{{ .Values.mysql.tls.keyKey }}"
-            - name: FLEET_MYSQL_TLS_CONFIG
-              value: "{{ .Values.mysql.tls.config }}"
-            - name: FLEET_MYSQL_TLS_SERVER_NAME
-              value: "{{ .Values.mysql.tls.serverName }}"
-            {{- end }}
-            ## END MYSQL SECTION
-          securityContext:
-            allowPrivilegeEscalation: false
-            capabilities:
-              drop: [ALL]
-            privileged: false
-            readOnlyRootFilesystem: true
-            runAsGroup: 3333
-            runAsUser: 3333
-            runAsNonRoot: true
-          volumeMounts:
-            {{- if .Values.mysql.tls.enabled }}
-            - name: mysql-tls
-              readOnly: true
-              mountPath: /secrets/mysql
-            {{- end }}
-      volumes:
-        {{- if .Values.mysql.tls.enabled }}
-        - name: mysql-tls
-          secret:
-            secretName: "{{ .Values.mysql.secretName }}"
-        {{- end }}
-      {{- with .Values.nodeSelector }}
-      nodeSelector:
-        {{- toYaml . | nindent 8 }}
-      {{- end }}
-      {{- with .Values.affinity }}
-      affinity:
-        {{- toYaml . | nindent 8 }}
-      {{- end }}
-      {{- with .Values.tolerations }}
-      tolerations:
-        {{- toYaml . | nindent 8 }}
-      {{- end }}
-{{- end }}
diff --git a/infrastructure/sandbox/PreProvisioner/lambda/deploy_terraform/fleet/templates/namespace.yaml b/infrastructure/sandbox/PreProvisioner/lambda/deploy_terraform/fleet/templates/namespace.yaml
deleted file mode 100644
index f20ba781c..000000000
--- a/infrastructure/sandbox/PreProvisioner/lambda/deploy_terraform/fleet/templates/namespace.yaml
+++ /dev/null
@@ -1,11 +0,0 @@
-{{- if .Values.createNamespace }}
-apiVersion: v1
-kind: Namespace
-metadata:
-  labels:
-    app: fleet
-    chart: fleet
-    heritage: {{ .Release.Service }}
-    release: {{ .Release.Name }}
-  name: {{ .Release.Namespace }}
-{{- end }}
diff --git a/infrastructure/sandbox/PreProvisioner/lambda/deploy_terraform/fleet/templates/rbac.yaml b/infrastructure/sandbox/PreProvisioner/lambda/deploy_terraform/fleet/templates/rbac.yaml
deleted file mode 100644
index affc259d8..000000000
--- a/infrastructure/sandbox/PreProvisioner/lambda/deploy_terraform/fleet/templates/rbac.yaml
+++ /dev/null
@@ -1,42 +0,0 @@
-apiVersion: rbac.authorization.k8s.io/v1
-kind: Role
-metadata:
-  labels:
-    app: fleet
-    chart: fleet
-    heritage: {{ .Release.Service }}
-    release: {{ .Release.Name }}
-  name: {{ .Values.fleetName }}
-  namespace: {{ .Release.Namespace }}
-rules:
-- apiGroups:
-  - core
-  resources:
-  - secrets
-  resourceNames:
-  - {{ .Values.mysql.secretName }}
-  - {{ .Values.redis.secretName }}
-  - {{ .Values.fleet.secretName }}
-  - {{ .Values.osquery.secretName }}
-  verbs:
-  - get
----
-apiVersion: rbac.authorization.k8s.io/v1
-kind: RoleBinding
-metadata:
-  labels:
-    app: fleet
-    chart: fleet
-    heritage: {{ .Release.Service }}
-    release: {{ .Release.Name }}
-  name: {{ .Values.fleetName }}
-  namespace: {{ .Release.Namespace }}
-roleRef:
-  apiGroup: rbac.authorization.k8s.io
-  kind: Role
-  name: {{ .Values.fleetName }}
-subjects:
-- apiGroup: ""
-  kind: ServiceAccount
-  name: {{ .Values.fleetName }}
-  namespace: {{ .Release.Namespace }}
diff --git a/infrastructure/sandbox/PreProvisioner/lambda/deploy_terraform/fleet/templates/sa.yaml b/infrastructure/sandbox/PreProvisioner/lambda/deploy_terraform/fleet/templates/sa.yaml
deleted file mode 100644
index 6282b00a2..000000000
--- a/infrastructure/sandbox/PreProvisioner/lambda/deploy_terraform/fleet/templates/sa.yaml
+++ /dev/null
@@ -1,19 +0,0 @@
-apiVersion: v1
-kind: ServiceAccount
-metadata:
-{{- if or .Values.serviceAccountAnnotations .Values.gke.workloadIdentityEmail }}
-  annotations:
-    {{- with .Values.serviceAccountAnnotations}}
-    {{ toYaml . | trim | indent 2}}
-    {{- end }}
-    {{- if ne .Values.gke.workloadIdentityEmail "" }}
-    iam.gke.io/gcp-service-account: {{ .Values.gke.workloadIdentityEmail }}
-    {{- end }}
-{{- end }}
-  labels:
-    app: fleet
-    chart: fleet
-    heritage: {{ .Release.Service }}
-    release: {{ .Release.Name }}
-  name: {{ .Values.fleetName }}
-  namespace: {{ .Release.Namespace }}
diff --git a/infrastructure/sandbox/PreProvisioner/lambda/deploy_terraform/fleet/templates/secrets.yaml b/infrastructure/sandbox/PreProvisioner/lambda/deploy_terraform/fleet/templates/secrets.yaml
deleted file mode 100644
index 7d7adb4a9..000000000
--- a/infrastructure/sandbox/PreProvisioner/lambda/deploy_terraform/fleet/templates/secrets.yaml
+++ /dev/null
@@ -1,31 +0,0 @@
-{{- if .Values.mysql.createSecret }}
-apiVersion: v1
-kind: Secret
-metadata:
-  labels:
-    app: fleet
-    chart: fleet
-    heritage: {{ .Release.Service }}
-    release: {{ .Release.Name }}
-  name: {{ .Values.mysql.secretName }}
-  namespace: {{ .Release.Namespace }}
-stringData:
-  {{ .Values.mysql.passwordKey }}: {{ .Values.mysql.password | quote }}
-type: Opaque
----
-{{- end }}
-{{- if .Values.redis.createSecret }}
-apiVersion: v1
-kind: Secret
-metadata:
-  labels:
-    app: fleet
-    chart: fleet
-    heritage: {{ .Release.Service }}
-    release: {{ .Release.Name }}
-  name: {{ .Values.redis.secretName }}
-  namespace: {{ .Release.Namespace }}
-stringData:
-  {{ .Values.redis.passwordKey }}: {{ .Values.redis.password }}
-type: Opaque
-{{- end }}
diff --git a/infrastructure/sandbox/PreProvisioner/lambda/deploy_terraform/fleet/templates/service.yaml b/infrastructure/sandbox/PreProvisioner/lambda/deploy_terraform/fleet/templates/service.yaml
deleted file mode 100644
index d4d848b2d..000000000
--- a/infrastructure/sandbox/PreProvisioner/lambda/deploy_terraform/fleet/templates/service.yaml
+++ /dev/null
@@ -1,22 +0,0 @@
-apiVersion: v1
-kind: Service
-metadata:
-  labels:
-    app: fleet
-    chart: fleet
-    heritage: {{ .Release.Service }}
-    release: {{ .Release.Name }}
-  name: {{ .Values.fleetName }}
-  namespace: {{ .Release.Namespace }}
-spec:
-  selector:
-    app: fleet
-    chart: fleet
-    heritage: {{ .Release.Service }}
-    release: {{ .Release.Name }}
-  ports:
-  - name: {{ .Values.fleetName }}
-    port: {{ .Values.fleet.listenPort }}
-  {{- if .Values.gke.ingress.useGKEIngress }}
-  type: NodePort
-  {{- end }}
diff --git a/infrastructure/sandbox/PreProvisioner/lambda/deploy_terraform/fleet/values.yaml b/infrastructure/sandbox/PreProvisioner/lambda/deploy_terraform/fleet/values.yaml
deleted file mode 100644
index 3f8ad4c02..000000000
--- a/infrastructure/sandbox/PreProvisioner/lambda/deploy_terraform/fleet/values.yaml
+++ /dev/null
@@ -1,197 +0,0 @@
-## Section: Kubernetes
-# All settings related to how Fleet is deployed in Kubernetes
-# The name used for deployment/role/sa/etc. Useful for when deploying multiple separate
-# fleet instances into the same Namespace.
-fleetName: fleet
-hostName: fleet.localhost
-replicas: 3 # The number of Fleet instances to deploy
-imageTag: v4.12.0 # Version of Fleet to deploy
-imageRepo: fleetdm/fleet
-createNamespace: false # Whether or not to automatically create the Namespace
-createIngress: true # Whether or not to automatically create an Ingress
-ingressAnnotations: {} # Additional annotations to add to the Ingress
-packaging:
-  enrollSecret: ""
-  s3:
-    bucket: ""
-    prefix: ""
-podLabels: {} # Additional labels to add to the Fleet pod
-podAnnotations: {} # Additional annotations to add to the Fleet pod
-serviceAccountAnnotations: {} # Additional annotations to add to the Fleet service account
-resources:
-  limits:
-    cpu: 1 # 1GHz
-    memory: 2Gi
-  requests:
-    cpu: 1 # 100MHz
-    memory: 2Gi
-
-# Node labels for pod assignment
-# ref: https://kubernetes.io/docs/user-guide/node-selection/
-nodeSelector: {}
-
-# Tolerations for pod assignment
-# ref: https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/
-tolerations: []
-
-# Configurable affinity for pod assignment
-affinity:
-  podAntiAffinity:
-    preferredDuringSchedulingIgnoredDuringExecution:
-    - podAffinityTerm:
-        labelSelector:
-          matchExpressions:
-          - key: app
-            operator: In
-            values:
-            - fleet
-        topologyKey: kubernetes.io/hostname
-      weight: 100
-
-## Section: Fleet
-# All of the settings relating to configuring the Fleet server
-fleet:
-  listenPort: 8080
-  # Name of the Secret resource storing TLS and S3 bucket secrets
-  secretName: fleet
-  licenseKey: ""
-  # Whether or not to run `fleet prepare db` to run SQL migrations before starting Fleet
-  autoApplySQLMigrations: true
-  tls:
-    enabled: true
-    compatibility: modern
-    certSecretKey: server.cert
-    keySecretKey: server.key
-  auth:
-    bcryptCost: 12
-    saltKeySize: 24
-  app:
-    tokenKeySize: 24
-    inviteTokenValidityPeriod: 120h # 5 days
-  session:
-    keySize: 64
-    duration: 2160h # 90 days
-  logging:
-    debug: false
-    json: false
-    disableBanner: false
-  carving:
-    s3:
-      bucketName: ""
-      prefix: ""
-      accessKeyID: ""
-      secretKey: s3-bucket
-      stsAssumeRoleARN: ""
-
-## Section: osquery
-# All of the settings related to osquery's interactions with the Fleet server
-osquery:
-  # Name of the secret resource containing optional secrets for AWS credentials
-  secretName: osquery
-  nodeKeySize: 24
-  labelUpdateInterval: 30m
-  detailUpdateInterval: 30m
-
-  # To change where Fleet stores the logs sent from osquery, set the values below
-  logging:
-    statusPlugin: filesystem
-    resultPlugin: filesystem
-
-  # To configure the filesystem logger, change the values below
-  filesystem:
-    statusLogFile: osquery_status # will be placed in the /logs volume
-    resultLogFile: osquery_result # will be placed in the /logs volume
-    enableRotation: false
-    enableCompression: false
-    volumeSize: 20Gi # the maximum size of the volume
-
-  # To configure the AWS Firehose logger, change the values below
-  firehose:
-    region: ""
-    accessKeyID: ""
-    secretKey: firehose
-    stsAssumeRoleARN: ""
-    statusStream: ""
-    resultStream: ""
-
-  # To configure the AWS Kinesis logger, change the values below
-  kinesis:
-    region: ""
-    accessKeyID: ""
-    secretKey: kinesis
-    stsAssumeRoleARN: ""
-    statusStream: ""
-    resultStream: ""
-
-  # To configure the AWS Lambda logger, change the values below
-  lambda:
-    region: ""
-    accessKeyID: ""
-    secretKey: lambda
-    stsAssumeRoleARN: ""
-    statusFunction: ""
-    resultFunction: ""
-
-  # To configure the GCP PubSub logger, change the values below
-  pubsub:
-    project: ""
-    statusTopic: ""
-    resultTopic: ""
-
-apm:
-  url: ""
-  token: ""
-
-## Section: MySQL
-# All of the connection settings for MySQL
-mysql:
-  createSecret: false
-  # Name of the Secret resource containing MySQL password and TLS secrets
-  secretName: mysql
-  address: 127.0.0.1:3306
-  database: fleet
-  username: fleet
-  # Only needed if creating secret.
-  password: default
-  passwordKey: mysql-password
-  maxOpenConns: 5
-  maxIdleConns: 5
-  connMaxLifetime: 0
-  tls:
-    enabled: false
-    caCertKey: ca.cert
-    certKey: client.cert
-    keyKey: client.key
-    config: ""
-    serverName: ""
-
-## Section: Redis
-# All of the connection settings for Redis
-redis:
-  createSecret: false
-  address: 127.0.0.1:6379
-  database: "0"
-  usePassword: false
-  secretName: redis
-  # Only needed if creating secret.
-  password: default
-  passwordKey: redis-password
-
-## Section: GKE
-# Settings that make running on Google Kubernetes Engine easier
-gke:
-  # The CloudSQL Proxy runs as a container in the Fleet Pod that proxies connections to a Cloud SQL instance
-  cloudSQL:
-    enableProxy: false
-    imageTag: 1.17-alpine
-    verbose: true
-    instanceName: ""
-  # The GKE Ingress requires a few changes that other ingress controllers don't
-  ingress:
-    useGKEIngress: false
-    useManagedCertificate: false
-  # Workload Identity allows the K8s service account to assume the IAM permissions of a GCP service account
-  workloadIdentityEmail: ""
-
-crons:
-  vulnerabilities: "0,15,30,45 * * * *"
diff --git a/infrastructure/sandbox/PreProvisioner/lambda/deploy_terraform/main.tf b/infrastructure/sandbox/PreProvisioner/lambda/deploy_terraform/main.tf
deleted file mode 100644
index 9c84cd519..000000000
--- a/infrastructure/sandbox/PreProvisioner/lambda/deploy_terraform/main.tf
+++ /dev/null
@@ -1,297 +0,0 @@
-terraform {
-  required_providers {
-    aws = {
-      source  = "hashicorp/aws"
-      version = "~> 4.10.0"
-    }
-    random = {
-      source  = "hashicorp/random"
-      version = "~> 3.1.2"
-    }
-    mysql = {
-      source  = "petoju/mysql"
-      version = "3.0.12"
-    }
-    helm = {
-      source  = "hashicorp/helm"
-      version = "2.5.1"
-    }
-  }
-  backend "s3" {}
-}
-
-provider "helm" {
-  kubernetes {
-    host                   = data.aws_eks_cluster.cluster.endpoint
-    token                  = data.aws_eks_cluster_auth.cluster.token
-    cluster_ca_certificate = base64decode(data.aws_eks_cluster.cluster.certificate_authority.0.data)
-  }
-}
-
-data "aws_eks_cluster" "cluster" {
-  name = var.eks_cluster
-}
-
-data "aws_eks_cluster_auth" "cluster" {
-  name = var.eks_cluster
-}
-
-provider "mysql" {
-  endpoint = jsondecode(data.aws_secretsmanager_secret_version.mysql.secret_string)["endpoint"]
-  username = jsondecode(data.aws_secretsmanager_secret_version.mysql.secret_string)["username"]
-  password = jsondecode(data.aws_secretsmanager_secret_version.mysql.secret_string)["password"]
-}
-
-variable "mysql_secret" {}
-variable "eks_cluster" {}
-variable "redis_address" {}
-variable "redis_database" {}
-variable "lifecycle_table" {}
-variable "base_domain" {}
-variable "enroll_secret" {}
-variable "installer_bucket" {}
-variable "installer_bucket_arn" {}
-variable "oidc_provider_arn" {}
-variable "oidc_provider" {}
-variable "kms_key_arn" {}
-variable "ecr_url" {}
-variable "license_key" {}
-variable "apm_url" {}
-variable "apm_token" {}
-
-resource "mysql_user" "main" {
-  user               = terraform.workspace
-  host               = "%"
-  plaintext_password = random_password.db.result
-}
-
-resource "mysql_database" "main" {
-  name = terraform.workspace
-}
-
-resource "mysql_grant" "main" {
-  user       = mysql_user.main.user
-  database   = mysql_database.main.name
-  host       = "%"
-  privileges = ["ALL"]
-}
-
-data "aws_secretsmanager_secret_version" "mysql" {
-  secret_id = var.mysql_secret
-}
-
-resource "random_password" "db" {
-  length = 8
-}
-
-resource "random_integer" "cron_offset" {
-  min = 0
-  max = 14
-}
-
-resource "helm_release" "main" {
-  name  = terraform.workspace
-  chart = "${path.module}/fleet"
-
-  set {
-    name  = "fleetName"
-    value = terraform.workspace
-  }
-
-  set {
-    name  = "mysql.password"
-    value = random_password.db.result
-  }
-
-  set {
-    name  = "mysql.createSecret"
-    value = true
-  }
-
-  set {
-    name  = "mysql.secretName"
-    value = terraform.workspace
-  }
-
-  set {
-    name  = "mysql.username"
-    value = mysql_user.main.user
-  }
-
-  set {
-    name  = "mysql.database"
-    value = terraform.workspace
-  }
-
-  set {
-    name  = "mysql.address"
-    value = jsondecode(data.aws_secretsmanager_secret_version.mysql.secret_string)["endpoint"]
-  }
-
-  set {
-    name  = "fleet.tls.enabled"
-    value = false
-  }
-
-  set {
-    name  = "redis.address"
-    value = var.redis_address
-  }
-
-  set {
-    name  = "redis.database"
-    value = var.redis_database
-  }
-
-  set {
-    name  = "kubernetes.io/ingress.class"
-    value = "nginx"
-  }
-
-  set {
-    name  = "hostName"
-    value = "${terraform.workspace}.${var.base_domain}"
-  }
-
-  set {
-    name  = "ingressAnnotations.kubernetes\\.io/ingress\\.class"
-    value = "haproxy"
-  }
-
-  set {
-    name  = "replicas"
-    value = "1"
-  }
-
-  set {
-    name  = "imageTag"
-    value = "v4.44.1"
-  }
-
-  set {
-    name  = "imageRepo"
-    value = var.ecr_url
-  }
-
-  set {
-    name  = "packaging.enrollSecret"
-    value = var.enroll_secret
-  }
-
-  set {
-    name  = "packaging.s3.bucket"
-    value = var.installer_bucket
-  }
-
-  set {
-    name  = "packaging.s3.prefix"
-    value = terraform.workspace
-  }
-
-  set {
-    name  = "serviceAccountAnnotations.eks\\.amazonaws\\.com/role-arn"
-    value = aws_iam_role.main.arn
-  }
-
-  set {
-    name  = "crons.vulnerabilities"
-    value = "${random_integer.cron_offset.result}\\,${random_integer.cron_offset.result + 15}\\,${random_integer.cron_offset.result + 30}\\,${random_integer.cron_offset.result + 45} * * * *"
-  }
-
-  set {
-    name  = "fleet.licenseKey"
-    value = var.license_key
-  }
-
-  set {
-    name  = "apm.url"
-    value = var.apm_url
-  }
-
-  set {
-    name  = "apm.token"
-    value = var.apm_token
-  }
-
-  set {
-    name  = "resources.limits.memory"
-    value = "512Mi"
-  }
-
-  set {
-    name  = "resources.requests.memory"
-    value = "512Mi"
-  }
-}
-
-data "aws_iam_policy_document" "main" {
-  statement {
-    actions = [
-      "s3:*Object",
-      "s3:ListBucket",
-    ]
-    resources = [
-      var.installer_bucket_arn,
-      "${var.installer_bucket_arn}/${terraform.workspace}/*"
-    ]
-  }
-  statement {
-    actions = [
-      "kms:DescribeKey",
-      "kms:GenerateDataKey",
-      "kms:Decrypt",
-    ]
-    resources = [var.kms_key_arn]
-  }
-}
-
-resource "aws_iam_policy" "main" {
-  name   = terraform.workspace
-  policy = data.aws_iam_policy_document.main.json
-}
-
-resource "aws_iam_role_policy_attachment" "main" {
-  role       = aws_iam_role.main.id
-  policy_arn = aws_iam_policy.main.arn
-}
-
-data "aws_iam_policy_document" "main-assume-role" {
-  statement {
-    principals {
-      type        = "Federated"
-      identifiers = [var.oidc_provider_arn]
-    }
-    actions = ["sts:AssumeRoleWithWebIdentity"]
-    condition {
-      test     = "StringEquals"
-      variable = "${var.oidc_provider}:aud"
-      values   = ["sts.amazonaws.com"]
-    }
-    condition {
-      test     = "StringEquals"
-      variable = "${var.oidc_provider}:sub"
-      values   = ["system:serviceaccount:default:${terraform.workspace}"]
-    }
-  }
-}
-
-resource "aws_iam_role" "main" {
-  name_prefix        = terraform.workspace
-  path               = "/sandbox/"
-  assume_role_policy = data.aws_iam_policy_document.main-assume-role.json
-}
-
-resource "aws_dynamodb_table_item" "main" {
-  table_name = var.lifecycle_table
-  hash_key   = "ID"
-
-  item = < github.com/fleetdm/nanodep v0.1.1-0.20221221202251-71b67ab1da24
-
-replace github.com/micromdm/scep/v2 => github.com/fleetdm/scep/v2 v2.1.1-0.20220729212655-4f19f0a10a03
diff --git a/infrastructure/sandbox/PreProvisioner/lambda/go.sum b/infrastructure/sandbox/PreProvisioner/lambda/go.sum
deleted file mode 100644
index e0ba05038..000000000
--- a/infrastructure/sandbox/PreProvisioner/lambda/go.sum
+++ /dev/null
@@ -1,1077 +0,0 @@
-cloud.google.com/go v0.26.0/go.mod h1:aQUYkXzVsufM+DwF1aE+0xfcU+56JwCaLick0ClmMTw=
-cloud.google.com/go v0.34.0/go.mod h1:aQUYkXzVsufM+DwF1aE+0xfcU+56JwCaLick0ClmMTw=
-cloud.google.com/go v0.38.0/go.mod h1:990N+gfupTy94rShfmMCWGDn0LpTmnzTp2qbd1dvSRU=
-cloud.google.com/go v0.44.1/go.mod h1:iSa0KzasP4Uvy3f1mN/7PiObzGgflwredwwASm/v6AU=
-cloud.google.com/go v0.44.2/go.mod h1:60680Gw3Yr4ikxnPRS/oxxkBccT6SA1yMk63TGekxKY=
-cloud.google.com/go v0.45.1/go.mod h1:RpBamKRgapWJb87xiFSdk4g1CME7QZg3uwTez+TSTjc=
-cloud.google.com/go v0.46.3/go.mod h1:a6bKKbmY7er1mI7TEI4lsAkts/mkhTSZK8w33B4RAg0=
-cloud.google.com/go
v0.50.0/go.mod h1:r9sluTvynVuxRIOHXQEHMFffphuXHOMZMycpNR5e6To= -cloud.google.com/go v0.52.0/go.mod h1:pXajvRH/6o3+F9jDHZWQ5PbGhn+o8w9qiu/CffaVdO4= -cloud.google.com/go v0.53.0/go.mod h1:fp/UouUEsRkN6ryDKNW/Upv/JBKnv6WDthjR6+vze6M= -cloud.google.com/go v0.54.0/go.mod h1:1rq2OEkV3YMf6n/9ZvGWI3GWw0VoqH/1x2nd8Is/bPc= -cloud.google.com/go v0.56.0/go.mod h1:jr7tqZxxKOVYizybht9+26Z/gUq7tiRzu+ACVAMbKVk= -cloud.google.com/go v0.57.0/go.mod h1:oXiQ6Rzq3RAkkY7N6t3TcE6jE+CIBBbA36lwQ1JyzZs= -cloud.google.com/go v0.62.0/go.mod h1:jmCYTdRCQuc1PHIIJ/maLInMho30T/Y0M4hTdTShOYc= -cloud.google.com/go v0.65.0/go.mod h1:O5N8zS7uWy9vkA9vayVHs65eM1ubvY4h553ofrNHObY= -cloud.google.com/go v0.72.0/go.mod h1:M+5Vjvlc2wnp6tjzE102Dw08nGShTscUx2nZMufOKPI= -cloud.google.com/go v0.74.0/go.mod h1:VV1xSbzvo+9QJOxLDaJfTjx5e+MePCpCWwvftOeQmWk= -cloud.google.com/go v0.78.0/go.mod h1:QjdrLG0uq+YwhjoVOLsS1t7TW8fs36kLs4XO5R5ECHg= -cloud.google.com/go v0.79.0/go.mod h1:3bzgcEeQlzbuEAYu4mrWhKqWjmpprinYgKJLgKHnbb8= -cloud.google.com/go v0.81.0/go.mod h1:mk/AM35KwGk/Nm2YSeZbxXdrNK3KZOYHmLkOqC2V6E0= -cloud.google.com/go v0.94.0 h1:QDB2MZHqjTt0hGKnoEWyG/iWykue/lvkLdogLgrg10U= -cloud.google.com/go v0.94.0/go.mod h1:qAlAugsXlC+JWO+Bke5vCtc9ONxjQT3drlTTnAplMW4= -cloud.google.com/go/bigquery v1.0.1/go.mod h1:i/xbL2UlR5RvWAURpBYZTtm/cXjCha9lbfbpx4poX+o= -cloud.google.com/go/bigquery v1.3.0/go.mod h1:PjpwJnslEMmckchkHFfq+HTD2DmtT67aNFKH1/VBDHE= -cloud.google.com/go/bigquery v1.4.0/go.mod h1:S8dzgnTigyfTmLBfrtrhyYhwRxG72rYxvftPBK2Dvzc= -cloud.google.com/go/bigquery v1.5.0/go.mod h1:snEHRnqQbz117VIFhE8bmtwIDY80NLUZUMb4Nv6dBIg= -cloud.google.com/go/bigquery v1.7.0/go.mod h1://okPTzCYNXSlb24MZs83e2Do+h+VXtc4gLoIoXIAPc= -cloud.google.com/go/bigquery v1.8.0/go.mod h1:J5hqkt3O0uAFnINi6JXValWIb1v0goeZM77hZzJN/fQ= -cloud.google.com/go/datastore v1.0.0/go.mod h1:LXYbyblFSglQ5pkeyhO+Qmw7ukd3C+pD7TKLgZqpHYE= -cloud.google.com/go/datastore v1.1.0/go.mod h1:umbIZjpQpHh4hmRpGhH4tLFup+FVzqBi1b3c64qFpCk= 
-cloud.google.com/go/firestore v1.1.0/go.mod h1:ulACoGHTpvq5r8rxGJ4ddJZBZqakUQqClKRT5SZwBmk= -cloud.google.com/go/pubsub v1.0.1/go.mod h1:R0Gpsv3s54REJCy4fxDixWD93lHJMoZTyQ2kNxGRt3I= -cloud.google.com/go/pubsub v1.1.0/go.mod h1:EwwdRX2sKPjnvnqCa270oGRyludottCI76h+R3AArQw= -cloud.google.com/go/pubsub v1.2.0/go.mod h1:jhfEVHT8odbXTkndysNHCcx0awwzvfOlguIAii9o8iA= -cloud.google.com/go/pubsub v1.3.1/go.mod h1:i+ucay31+CNRpDW4Lu78I4xXG+O1r/MAHgjpRVR+TSU= -cloud.google.com/go/pubsub v1.16.0 h1:N2WVmm3vmoBo8+cbBgwACB8ZKUP/YQvG2ujHx47/oXY= -cloud.google.com/go/pubsub v1.16.0/go.mod h1:6A8EfoWZ/lUvCWStKGwAWauJZSiuV0Mkmu6WilK/TxQ= -cloud.google.com/go/storage v1.0.0/go.mod h1:IhtSnM/ZTZV8YYJWCY8RULGVqBDmpoyjwiyrjsg+URw= -cloud.google.com/go/storage v1.5.0/go.mod h1:tpKbwo567HUNpVclU5sGELwQWBDZ8gh0ZeosJ0Rtdos= -cloud.google.com/go/storage v1.6.0/go.mod h1:N7U0C8pVQ/+NIKOBQyamJIeKQKkZ+mxpohlUTyfDhBk= -cloud.google.com/go/storage v1.8.0/go.mod h1:Wv1Oy7z6Yz3DshWRJFhqM/UCfaWIRTdp0RXyy7KQOVs= -cloud.google.com/go/storage v1.10.0/go.mod h1:FLPqc6j+Ki4BU591ie1oL6qBQGu2Bl/tZ9ullr3+Kg0= -dario.cat/mergo v1.0.0 h1:AGCNq9Evsj31mOgNPcLyXc+4PNABt905YmuqPYYpBWk= -dario.cat/mergo v1.0.0/go.mod h1:uNxQE+84aUszobStD9th8a29P2fMDhsBdgRYvZOxGmk= -dmitri.shuralyov.com/gpu/mtl v0.0.0-20190408044501-666a987793e9/go.mod h1:H6x//7gZCb22OMCxBHrMx7a5I7Hp++hsVxbQ4BYO7hU= -github.com/AlekSi/pointer v1.2.0 h1:glcy/gc4h8HnG2Z3ZECSzZ1IX1x2JxRVuDzaJwQE0+w= -github.com/AlekSi/pointer v1.2.0/go.mod h1:gZGfd3dpW4vEc/UlyfKKi1roIqcCgwOIvb0tSNSBle0= -github.com/BurntSushi/toml v0.3.1/go.mod h1:xHWCNGjB5oqiDr8zfno3MHue2Ht5sIBksp03qcyfWMU= -github.com/BurntSushi/xgb v0.0.0-20160522181843-27f122750802/go.mod h1:IVnqGOEym/WlBOVXweHU+Q+/VP0lqqI8lqeDx9IjBqo= -github.com/DataDog/zstd v1.4.5 h1:EndNeuB0l9syBZhut0wns3gV1hL8zX8LIu6ZiVHWLIQ= -github.com/DataDog/zstd v1.4.5/go.mod h1:1jcaCB/ufaK+sKp1NBhlGmpz41jOoPQ35bpF36t7BBo= -github.com/Masterminds/goutils v1.1.0/go.mod h1:8cTjp+g8YejhMuvIA5y2vz3BpJxksy863GQaJW2MFNU= 
-github.com/Masterminds/goutils v1.1.1 h1:5nUrii3FMTL5diU80unEVvNevw1nH4+ZV4DSLVJLSYI= -github.com/Masterminds/goutils v1.1.1/go.mod h1:8cTjp+g8YejhMuvIA5y2vz3BpJxksy863GQaJW2MFNU= -github.com/Masterminds/semver v1.5.0 h1:H65muMkzWKEuNDnfl9d70GUjFniHKHRbFPGBuZ3QEww= -github.com/Masterminds/semver v1.5.0/go.mod h1:MB6lktGJrhw8PrUyiEoblNEGEQ+RzHPF078ddwwvV3Y= -github.com/Masterminds/semver/v3 v3.1.0/go.mod h1:VPu/7SZ7ePZ3QOrcuXROw5FAcLl4a0cBrbBpGY/8hQs= -github.com/Masterminds/semver/v3 v3.1.1 h1:hLg3sBzpNErnxhQtUy/mmLR2I9foDujNK030IGemrRc= -github.com/Masterminds/semver/v3 v3.1.1/go.mod h1:VPu/7SZ7ePZ3QOrcuXROw5FAcLl4a0cBrbBpGY/8hQs= -github.com/Masterminds/sprig v2.22.0+incompatible h1:z4yfnGrZ7netVz+0EDJ0Wi+5VZCSYp4Z0m2dk6cEM60= -github.com/Masterminds/sprig v2.22.0+incompatible/go.mod h1:y6hNFY5UBTIWBxnzTeuNhlNS5hqE0NB0E6fgfo2Br3o= -github.com/Microsoft/go-winio v0.4.14/go.mod h1:qXqCSQ3Xa7+6tgxaGTIe4Kpcdsi+P8jBhyzoq1bpyYA= -github.com/Microsoft/go-winio v0.4.15/go.mod h1:tTuCMEN+UleMWgg9dVx4Hu52b1bJo+59jBh3ajtinzw= -github.com/Microsoft/go-winio v0.5.2/go.mod h1:WpS1mjBmmwHBEWmogvA2mj8546UReBk4v8QkMxJ6pZY= -github.com/Microsoft/go-winio v0.6.1 h1:9/kr64B9VUZrLm5YYwbGtUJnMgqWVOdUAXu6Migciow= -github.com/Microsoft/go-winio v0.6.1/go.mod h1:LRdKpFKfdobln8UmuiYcKPot9D2v6svN5+sAH+4kjUM= -github.com/OneOfOne/xxhash v1.2.2/go.mod h1:HSdplMjZKSmBqAxg5vPj2TmRDmfkzw+cTzAElWljhcU= -github.com/ProtonMail/go-crypto v0.0.0-20230828082145-3c4c8a2d2371 h1:kkhsdkhsCvIsutKu5zLMgWtgh9YxGCNAw8Ad8hjwfYg= -github.com/ProtonMail/go-crypto v0.0.0-20230828082145-3c4c8a2d2371/go.mod h1:EjAoLdwvbIOoOQr3ihjnSoLZRtE8azugULFRteWMNc0= -github.com/ProtonMail/go-mime v0.0.0-20190923161245-9b5a4261663a h1:W6RrgN/sTxg1msqzFFb+G80MFmpjMw61IU+slm+wln4= -github.com/ProtonMail/go-mime v0.0.0-20190923161245-9b5a4261663a/go.mod h1:NYt+V3/4rEeDuaev/zw1zCq8uqVEuPHzDPo3OZrlGJ4= -github.com/ProtonMail/gopenpgp/v2 v2.2.2 h1:u2m7xt+CZWj88qK1UUNBoXeJCFJwJCZ/Ff4ymGoxEXs= -github.com/ProtonMail/gopenpgp/v2 
v2.2.2/go.mod h1:ajUlBGvxMH1UBZnaYO3d1FSVzjiC6kK9XlZYGiDCvpM= -github.com/RobotsAndPencils/buford v0.14.0/go.mod h1:F5FvdB/nkMby8Pge6HFpPHgLOeUZne/iE5wKzvx64Y0= -github.com/WatchBeam/clock v0.0.0-20170901150240-b08e6b4da7ea h1:C9Xwp9fZf9BFJMsTqs8P+4PETXwJPUOuJZwBfVci+4A= -github.com/WatchBeam/clock v0.0.0-20170901150240-b08e6b4da7ea/go.mod h1:N5eJIl14rhNCrE5I3O10HIyhZ1HpjaRHT9WDg1eXxtI= -github.com/aai/gocrypto v0.0.0-20160205191751-93df0c47f8b8/go.mod h1:nE/FnVUmtbP0EbgMVCUtDrm1+86H47QfJIdcmZb+J1s= -github.com/agext/levenshtein v1.2.1/go.mod h1:JEDfjyjHDjOF/1e4FlBE/PkbqA9OfWu2ki2W0IB5558= -github.com/alcortesm/tgz v0.0.0-20161220082320-9c5fe88206d7/go.mod h1:6zEj6s6u/ghQa61ZWa/C2Aw3RkjiTBOix7dkqa1VLIs= -github.com/alecthomas/template v0.0.0-20160405071501-a0175ee3bccc/go.mod h1:LOuyumcjzFXgccqObfd/Ljyb9UuFJ6TxHnclSeseNhc= -github.com/alecthomas/units v0.0.0-20151022065526-2efee857e7cf/go.mod h1:ybxpYRFXyAe+OPACYpWeL0wqObRcbAqCMya13uyzqw0= -github.com/andygrunwald/go-jira v1.16.0 h1:PU7C7Fkk5L96JvPc6vDVIrd99vdPnYudHu4ju2c2ikQ= -github.com/andygrunwald/go-jira v1.16.0/go.mod h1:UQH4IBVxIYWbgagc0LF/k9FRs9xjIiQ8hIcC6HfLwFU= -github.com/anmitsu/go-shlex v0.0.0-20161002113705-648efa622239/go.mod h1:2FmKhYUyUczH0OGQWaF5ceTx0UBShxjsH6f8oGKYe2c= -github.com/anmitsu/go-shlex v0.0.0-20200514113438-38f4b401e2be h1:9AeTilPcZAjCFIImctFaOjnTIavg87rW78vTPkQqLI8= -github.com/anmitsu/go-shlex v0.0.0-20200514113438-38f4b401e2be/go.mod h1:ySMOLuWl6zY27l47sB3qLNK6tF2fkHG55UZxx8oIVo4= -github.com/antihax/optional v1.0.0/go.mod h1:uupD/76wgC+ih3iEmQUL+0Ugr19nfwCT1kdvxnR2qWY= -github.com/apparentlymart/go-dump v0.0.0-20180507223929-23540a00eaa3/go.mod h1:oL81AME2rN47vu18xqj1S1jPIPuN7afo62yKTNn3XMM= -github.com/apparentlymart/go-textseg v1.0.0/go.mod h1:z96Txxhf3xSFMPmb5X/1W05FF/Nj9VFpLOpjS5yuumk= -github.com/armon/circbuf v0.0.0-20150827004946-bbbad097214e/go.mod h1:3U/XgcO3hCbHZ8TKRvWD2dDTCfh9M9ya+I9JpbB7O8o= -github.com/armon/go-metrics v0.0.0-20180917152333-f0300d1749da/go.mod 
h1:Q73ZrmVTwzkszR9V5SSuryQ31EELlFMUz1kKyl939pY= -github.com/armon/go-radix v0.0.0-20180808171621-7fddfc383310/go.mod h1:ufUuZ+zHj4x4TnLV4JWEpy2hxWSpsRywHrMgIH9cCH8= -github.com/armon/go-socks5 v0.0.0-20160902184237-e75332964ef5 h1:0CwZNZbxp69SHPdPJAN/hZIm0C4OItdklCFmMRWYpio= -github.com/armon/go-socks5 v0.0.0-20160902184237-e75332964ef5/go.mod h1:wHh0iHkYZB8zMSxRWpUBQtwG5a7fFgvEO+odwuTv2gs= -github.com/aws/aws-lambda-go v1.29.0 h1:u+sfZkvNBUgt0ZkO8Q/jOMBV22DqMDMbZu04oomM2no= -github.com/aws/aws-lambda-go v1.29.0/go.mod h1:aakqVz9vDHhtbt0U2zegh/z9SI2+rJ+yRREZYNQLmWY= -github.com/aws/aws-sdk-go v1.43.37 h1:kyZ7UjaPZaCik+asF33UFOOYSwr9liDRr/UM/vuw8yY= -github.com/aws/aws-sdk-go v1.43.37/go.mod h1:y4AeaBuwd2Lk+GepC1E9v0qOiTws0MIWAX4oIKwKHZo= -github.com/beorn7/perks v0.0.0-20180321164747-3a771d992973/go.mod h1:Dwedo/Wpr24TaqPxmxbtue+5NUziq4I4S80YR8gNf3Q= -github.com/beorn7/perks v1.0.0/go.mod h1:KWe93zE9D1o94FZ5RNwFwVgaQK1VOXiVxmqh+CedLV8= -github.com/bgentry/speakeasy v0.1.0/go.mod h1:+zsyZBPWlz7T6j88CTgSN5bM796AkVf0kBD4zp0CCIs= -github.com/bketelsen/crypt v0.0.3-0.20200106085610-5cbc8cc4026c/go.mod h1:MKsuJmJgSg28kpZDP6UIiPt0e0Oz0kqKNGyRaWEPv84= -github.com/bketelsen/crypt v0.0.4/go.mod h1:aI6NrJ0pMGgvZKL1iVgXLnfIFJtfV+bKCoqOes/6LfM= -github.com/blakesmith/ar v0.0.0-20190502131153-809d4375e1fb h1:m935MPodAbYS46DG4pJSv7WO+VECIWUQ7OJYSoTrMh4= -github.com/blakesmith/ar v0.0.0-20190502131153-809d4375e1fb/go.mod h1:PkYb9DJNAwrSvRx5DYA+gUcOIgTGVMNkfSCbZM8cWpI= -github.com/bwesterb/go-ristretto v1.2.3/go.mod h1:fUIoIZaG73pV5biE2Blr2xEzDoMj7NFEuV9ekS419A0= -github.com/caarlos0/go-rpmutils v0.2.1-0.20211112020245-2cd62ff89b11 h1:IRrDwVlWQr6kS1U8/EtyA1+EHcc4yl8pndcqXWrEamg= -github.com/caarlos0/go-rpmutils v0.2.1-0.20211112020245-2cd62ff89b11/go.mod h1:je2KZ+LxaCNvCoKg32jtOIULcFogJKcL1ZWUaIBjKj0= -github.com/caarlos0/testfs v0.4.3 h1:q1zEM5hgsssqWanAfevJYYa0So60DdK6wlJeTc/yfUE= -github.com/caarlos0/testfs v0.4.3/go.mod h1:bRN55zgG4XCUVVHZCeU+/Tz1Q6AxEJOEJTliBy+1DMk= 
-github.com/cavaliercoder/go-cpio v0.0.0-20180626203310-925f9528c45e h1:hHg27A0RSSp2Om9lubZpiMgVbvn39bsUmW9U5h0twqc= -github.com/cavaliercoder/go-cpio v0.0.0-20180626203310-925f9528c45e/go.mod h1:oDpT4efm8tSYHXV5tHSdRvBet/b/QzxZ+XyyPehvm3A= -github.com/cenkalti/backoff/v4 v4.1.3 h1:cFAlzYUlVYDysBEH2T5hyJZMh3+5+WCBvSnK6Q8UtC4= -github.com/cenkalti/backoff/v4 v4.1.3/go.mod h1:scbssz8iZGpm3xbr14ovlUdkxfGXNInqkPWOWmG2CLw= -github.com/census-instrumentation/opencensus-proto v0.2.1/go.mod h1:f6KPmirojxKA12rnyqOA5BBL4O983OfeGPqjHWSTneU= -github.com/cespare/xxhash v1.1.0/go.mod h1:XrSqR1VqqWfGrhpAt58auRo0WTKS1nRRg3ghfAqPWnc= -github.com/chzyer/logex v1.1.10/go.mod h1:+Ywpsq7O8HXn0nuIou7OrIPyXbp3wmkHB+jjWRnGsAI= -github.com/chzyer/readline v0.0.0-20180603132655-2972be24d48e/go.mod h1:nSuG5e5PlCu98SY8svDHJxuZscDgtXS6KTTbou5AhLI= -github.com/chzyer/test v0.0.0-20180213035817-a1ea475d72b1/go.mod h1:Q3SI9o4m/ZMnBNeIyt5eFwwo7qiLfzFZmjNmxjkiQlU= -github.com/client9/misspell v0.3.4/go.mod h1:qj6jICC3Q7zFZvVWo7KLAzC3yx5G7kyvSDkc90ppPyw= -github.com/cloudflare/circl v1.3.3 h1:fE/Qz0QdIGqeWfnwq0RE0R7MI51s0M2E4Ga9kq5AEMs= -github.com/cloudflare/circl v1.3.3/go.mod h1:5XYMA4rFBvNIrhs50XuiBJ15vF2pZn4nnUKZrLbUZFA= -github.com/cncf/udpa/go v0.0.0-20191209042840-269d4d468f6f/go.mod h1:M8M6+tZqaGXZJjfX53e64911xZQV5JYwmTeXPW+k8Sc= -github.com/cncf/udpa/go v0.0.0-20200629203442-efcf912fb354/go.mod h1:WmhPx2Nbnhtbo57+VJT5O0JRkEi1Wbu0z5j0R8u5Hbk= -github.com/cncf/udpa/go v0.0.0-20201120205902-5459f2c99403/go.mod h1:WmhPx2Nbnhtbo57+VJT5O0JRkEi1Wbu0z5j0R8u5Hbk= -github.com/coreos/bbolt v1.3.2/go.mod h1:iRUV2dpdMOn7Bo10OQBFzIJO9kkE559Wcmn+qkEiiKk= -github.com/coreos/etcd v3.3.13+incompatible/go.mod h1:uF7uidLiAD3TWHmW31ZFd/JWoc32PjwdhPthX9715RE= -github.com/coreos/go-semver v0.3.0/go.mod h1:nnelYz7RCh+5ahJtPPxZlU+153eP4D4r3EedlOD2RNk= -github.com/coreos/go-systemd v0.0.0-20190321100706-95778dfbb74e/go.mod h1:F5haX7vjVVG0kc13fIWeqUViNPyEJxv/OmvnBo0Yme4= -github.com/coreos/go-systemd/v22 
v22.3.2/go.mod h1:Y58oyj3AT4RCenI/lSvhwexgC+NSVTIJ3seZv2GcEnc=
-github.com/coreos/pkg v0.0.0-20180928190104-399ea9e2e55f/go.mod h1:E3G3o1h8I7cfcXa63jLwjI0eiQQMgzzUDFVpN/nH/eA=
-github.com/cpuguy83/go-md2man/v2 v2.0.0/go.mod h1:maD7wRr/U5Z6m/iR4s+kqSMx2CaBsrgA7czyZG/E6dU=
-github.com/cpuguy83/go-md2man/v2 v2.0.2/go.mod h1:tgQtvFlXSQOSOSIRvRPT7W67SCa46tRHOmNcaadrF8o=
-github.com/creack/pty v1.1.9/go.mod h1:oKZEueFk5CKHvIhNR5MUki03XCEU+Q6VDXinZuGJ33E=
-github.com/cyphar/filepath-securejoin v0.2.4 h1:Ugdm7cg7i6ZK6x3xDF1oEu1nfkyfH53EtKeQYTC3kyg=
-github.com/cyphar/filepath-securejoin v0.2.4/go.mod h1:aPGpWjXOXUn2NCNjFvBE6aRxGGx79pTxQpKOJNYHHl4=
-github.com/davecgh/go-spew v1.1.0/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
-github.com/davecgh/go-spew v1.1.1 h1:vj9j/u1bqnvCEfJOwUhtlOARqs3+rkHYY13jYWTU97c=
-github.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
-github.com/dgrijalva/jwt-go v3.2.0+incompatible/go.mod h1:E3ru+11k8xSBh+hMPgOLZmtrrCbhqsmaPHjLKYnJCaQ=
-github.com/dgryski/go-sip13 v0.0.0-20181026042036-e10d5fee7954/go.mod h1:vAd38F8PWV+bWy6jNmig1y/TA+kYO4g3RSRF0IAv0no=
-github.com/digitalocean/go-smbios v0.0.0-20180907143718-390a4f403a8e h1:vUmf0yezR0y7jJ5pceLHthLaYf4bA5T14B6q39S4q2Q=
-github.com/digitalocean/go-smbios v0.0.0-20180907143718-390a4f403a8e/go.mod h1:YTIHhz/QFSYnu/EhlF2SpU2Uk+32abacUYA5ZPljz1A=
-github.com/elazarl/goproxy v0.0.0-20230808193330-2592e75ae04a h1:mATvB/9r/3gvcejNsXKSkQ6lcIaNec2nyfOdlTBR2lU=
-github.com/elazarl/goproxy v0.0.0-20230808193330-2592e75ae04a/go.mod h1:Ro8st/ElPeALwNFlcTpWmkr6IoMFfkjXAvTHpevnDsM=
-github.com/emirpasic/gods v1.12.0/go.mod h1:YfzfFFoVP/catgzJb4IKIqXjX78Ha8FMSDh3ymbK86o=
-github.com/emirpasic/gods v1.18.1 h1:FXtiHYKDGKCW2KzwZKx0iC0PQmdlorYgdFG9jPXJ1Bc=
-github.com/emirpasic/gods v1.18.1/go.mod h1:8tpGGwCnJ5H4r6BWwaV6OrWmMoPhUl5jm/FMNAnJvWQ=
-github.com/envoyproxy/go-control-plane v0.9.0/go.mod h1:YTl/9mNaCwkRvm6d1a2C3ymFceY/DCBVvsKhRF0iEA4=
-github.com/envoyproxy/go-control-plane v0.9.1-0.20191026205805-5f8ba28d4473/go.mod h1:YTl/9mNaCwkRvm6d1a2C3ymFceY/DCBVvsKhRF0iEA4=
-github.com/envoyproxy/go-control-plane v0.9.4/go.mod h1:6rpuAdCZL397s3pYoYcLgu1mIlRU8Am5FuJP05cCM98=
-github.com/envoyproxy/go-control-plane v0.9.7/go.mod h1:cwu0lG7PUMfa9snN8LXBig5ynNVH9qI8YYLbd1fK2po=
-github.com/envoyproxy/go-control-plane v0.9.9-0.20201210154907-fd9021fe5dad/go.mod h1:cXg6YxExXjJnVBQHBLXeUAgxn2UodCpnH306RInaBQk=
-github.com/envoyproxy/go-control-plane v0.9.9-0.20210217033140-668b12f5399d/go.mod h1:cXg6YxExXjJnVBQHBLXeUAgxn2UodCpnH306RInaBQk=
-github.com/envoyproxy/protoc-gen-validate v0.1.0/go.mod h1:iSmxcyjqTsJpI2R4NaDN7+kN2VEUnK/pcBlmesArF7c=
-github.com/fatih/color v1.7.0/go.mod h1:Zm6kSWBoL9eyXnKyktHP6abPY2pDugNf5KwzbycvMj4=
-github.com/fatih/color v1.12.0 h1:mRhaKNwANqRgUBGKmnI5ZxEk7QXmjQeCcuYFMX2bfcc=
-github.com/fatih/color v1.12.0/go.mod h1:ELkj/draVOlAH/xkhN6mQ50Qd0MPOk5AAr3maGEBuJM=
-github.com/fatih/structs v1.1.0 h1:Q7juDM0QtcnhCpeyLGQKyg4TOIghuNXrkL32pHAUMxo=
-github.com/fatih/structs v1.1.0/go.mod h1:9NiDSp5zOcgEDl+j00MP/WkGVPOlPRLejGD8Ga6PJ7M=
-github.com/fleetdm/fleet/v4 v4.28.0 h1:vz+0JsidQTx5pMeIR9GPb4qCn2vD43lYcID6kTV1X5M=
-github.com/fleetdm/fleet/v4 v4.28.0/go.mod h1:ak7lFtmbW4SAE6jKYbcoXqQ1ydG+Fbesu+t2LMFkAwM=
-github.com/fleetdm/nanodep v0.1.1-0.20221221202251-71b67ab1da24 h1:XhczaxKV3J4NjztroidSnYKyq5xtxF+amBYdBWeik58=
-github.com/fleetdm/nanodep v0.1.1-0.20221221202251-71b67ab1da24/go.mod h1:QzQrCUTmSr9HotzKZAcfmy+czbEGK8Mq26hA+0DN4ag=
-github.com/fleetdm/scep/v2 v2.1.1-0.20220729212655-4f19f0a10a03 h1:oW24iL1GfGWW1VhyWrVdw7VdnyvFePyzR88zvfnTqCo=
-github.com/fleetdm/scep/v2 v2.1.1-0.20220729212655-4f19f0a10a03/go.mod h1:PajjVSF3LaELUh847MlOtanfqrF8R2DOO4oS3NSPemI=
-github.com/flynn/go-shlex v0.0.0-20150515145356-3f9db97f8568/go.mod h1:xEzjJPgXI435gkrCt3MPfRiAkVrwSbHsst4LCFVfpJc=
-github.com/fsnotify/fsnotify v1.4.7/go.mod h1:jwhsz4b93w/PPRr/qN1Yymfu8t87LnFCMoQvtojpjFo=
-github.com/fsnotify/fsnotify v1.4.9/go.mod h1:znqG4EE+3YCdAaPaxE2ZRY/06pZUdp0tY4IgpuI1SZQ=
-github.com/fsnotify/fsnotify v1.5.4 h1:jRbGcIw6P2Meqdwuo0H1p6JVLbL5DHKAKlYndzMwVZI=
-github.com/fsnotify/fsnotify v1.5.4/go.mod h1:OVB6XrOHzAwXMpEM7uPOzcehqUV2UqJxmVXmkdnm1bU=
-github.com/getsentry/sentry-go v0.12.0 h1:era7g0re5iY13bHSdN/xMkyV+5zZppjRVQhZrXCaEIk=
-github.com/getsentry/sentry-go v0.12.0/go.mod h1:NSap0JBYWzHND8oMbyi0+XZhUalc1TBdRL1M71JZW2c=
-github.com/ghodss/yaml v1.0.0 h1:wQHKEahhL6wmXdzwWG11gIVCkOv05bNOh+Rxn0yngAk=
-github.com/ghodss/yaml v1.0.0/go.mod h1:4dBDuWmgqj2HViK6kFavaiC9ZROes6MMH2rRYeMEF04=
-github.com/gliderlabs/ssh v0.2.2/go.mod h1:U7qILu1NlMHj9FlMhZLlkCdDnU1DBEAqr0aevW3Awn0=
-github.com/gliderlabs/ssh v0.3.5 h1:OcaySEmAQJgyYcArR+gGGTHCyE7nvhEMTlYY+Dp8CpY=
-github.com/gliderlabs/ssh v0.3.5/go.mod h1:8XB4KraRrX39qHhT6yxPsHedjA08I/uBVwj4xC+/+z4=
-github.com/go-git/gcfg v1.5.0/go.mod h1:5m20vg6GwYabIxaOonVkTdrILxQMpEShl1xiMF4ua+E=
-github.com/go-git/gcfg v1.5.1-0.20230307220236-3a3c6141e376 h1:+zs/tPmkDkHx3U66DAb0lQFJrpS6731Oaa12ikc+DiI=
-github.com/go-git/gcfg v1.5.1-0.20230307220236-3a3c6141e376/go.mod h1:an3vInlBmSxCcxctByoQdvwPiA7DTK7jaaFDBTtu0ic=
-github.com/go-git/go-billy/v5 v5.0.0/go.mod h1:pmpqyWchKfYfrkb/UVH4otLvyi/5gJlGI4Hb3ZqZ3W0=
-github.com/go-git/go-billy/v5 v5.5.0 h1:yEY4yhzCDuMGSv83oGxiBotRzhwhNr8VZyphhiu+mTU=
-github.com/go-git/go-billy/v5 v5.5.0/go.mod h1:hmexnoNsr2SJU1Ju67OaNz5ASJY3+sHgFRpCtpDCKow=
-github.com/go-git/go-git-fixtures/v4 v4.0.2-0.20200613231340-f56387b50c12/go.mod h1:m+ICp2rF3jDhFgEZ/8yziagdT1C+ZpZcrJjappBCDSw=
-github.com/go-git/go-git-fixtures/v4 v4.3.2-0.20231010084843-55a94097c399 h1:eMje31YglSBqCdIqdhKBW8lokaMrL3uTkpGYlE2OOT4=
-github.com/go-git/go-git-fixtures/v4 v4.3.2-0.20231010084843-55a94097c399/go.mod h1:1OCfN199q1Jm3HZlxleg+Dw/mwps2Wbk9frAWm+4FII=
-github.com/go-git/go-git/v5 v5.2.0/go.mod h1:kh02eMX+wdqqxgNMEyq8YgwlIOsDOa9homkUq1PoTMs=
-github.com/go-git/go-git/v5 v5.11.0 h1:XIZc1p+8YzypNr34itUfSvYJcv+eYdTnTvOZ2vD3cA4=
-github.com/go-git/go-git/v5 v5.11.0/go.mod h1:6GFcX2P3NM7FPBfpePbpLd21XxsgdAt+lKqXmCUiUCY=
-github.com/go-gl/glfw v0.0.0-20190409004039-e6da0acd62b1/go.mod h1:vR7hzQXu2zJy9AVAgeJqvqgH9Q5CA+iKCZ2gyEVpxRU=
-github.com/go-gl/glfw/v3.3/glfw v0.0.0-20191125211704-12ad95a8df72/go.mod h1:tQ2UAYgL5IevRw8kRxooKSPJfGvJ9fJQFa0TUsXzTg8=
-github.com/go-gl/glfw/v3.3/glfw v0.0.0-20200222043503-6f7a984d4dc4/go.mod h1:tQ2UAYgL5IevRw8kRxooKSPJfGvJ9fJQFa0TUsXzTg8=
-github.com/go-kit/kit v0.7.0/go.mod h1:xBxKIO96dXMWWy0MnWVtmwkA9/13aqxPnvrjFYMA2as=
-github.com/go-kit/kit v0.8.0/go.mod h1:xBxKIO96dXMWWy0MnWVtmwkA9/13aqxPnvrjFYMA2as=
-github.com/go-kit/kit v0.12.0 h1:e4o3o3IsBfAKQh5Qbbiqyfu97Ku7jrO/JbohvztANh4=
-github.com/go-kit/kit v0.12.0/go.mod h1:lHd+EkCZPIwYItmGDDRdhinkzX2A1sj+M9biaEaizzs=
-github.com/go-kit/log v0.2.0 h1:7i2K3eKTos3Vc0enKCfnVcgHh2olr/MyfboYq7cAcFw=
-github.com/go-kit/log v0.2.0/go.mod h1:NwTd00d/i8cPZ3xOwwiv2PO5MOcx78fFErGNcVmBjv0=
-github.com/go-logfmt/logfmt v0.3.0/go.mod h1:Qt1PoO58o5twSAckw1HlFXLmHsOX5/0LbT9GBnD5lWE=
-github.com/go-logfmt/logfmt v0.4.0/go.mod h1:3RMwSq7FuexP4Kalkev3ejPJsZTpXXBr9+V4qmtdjCk=
-github.com/go-logfmt/logfmt v0.5.1 h1:otpy5pqBCBZ1ng9RQ0dPu4PN7ba75Y/aA+UpowDyNVA=
-github.com/go-logfmt/logfmt v0.5.1/go.mod h1:WYhtIu8zTZfxdn5+rREduYbwxfcBr/Vr6KEVveWlfTs=
-github.com/go-ole/go-ole v1.2.6 h1:/Fpf6oFPoeFik9ty7siob0G6Ke8QvQEuVcuChpwXzpY=
-github.com/go-ole/go-ole v1.2.6/go.mod h1:pprOEPIfldk/42T2oK7lQ4v4JSDwmV0As9GaiUsvbm0=
-github.com/go-sql-driver/mysql v1.4.1/go.mod h1:zAC/RDZ24gD3HViQzih4MyKcchzm+sOG5ZlKdlhCg5w=
-github.com/go-sql-driver/mysql v1.6.0/go.mod h1:DCzpHaOWr8IXmIStZouvnhqoel9Qv2LBy8hT2VhHyBg=
-github.com/go-stack/stack v1.7.0/go.mod h1:v0f6uXyyMGvRgIKkXu+yp6POWl0qKG85gN/melR3HDY=
-github.com/go-stack/stack v1.8.0/go.mod h1:v0f6uXyyMGvRgIKkXu+yp6POWl0qKG85gN/melR3HDY=
-github.com/go-test/deep v1.0.3/go.mod h1:wGDj63lr65AM2AQyKZd/NYHGb0R+1RLqB8NKt3aSFNA=
-github.com/gobwas/glob v0.2.3 h1:A4xDbljILXROh+kObIiy5kIaPYD8e96x1tgBhUI5J+Y=
-github.com/gobwas/glob v0.2.3/go.mod h1:d3Ez4x06l9bZtSvzIay5+Yzi0fmZzPgnTbPcKjJAkT8=
-github.com/godbus/dbus/v5 v5.0.4/go.mod h1:xhWf0FNVPg57R7Z0UbKHbJfkEywrmjJnf7w5xrFpKfA=
-github.com/gogo/protobuf v1.1.1/go.mod h1:r8qH/GZQm5c6nD/R0oafs1akxWv10x8SbQlK7atdtwQ=
-github.com/gogo/protobuf v1.2.1/go.mod h1:hp+jE20tsWTFYpLwKvXlhS1hjn+gTNwPg2I6zVXpSg4=
-github.com/gogo/protobuf v1.3.2/go.mod h1:P1XiOD3dCwIKUDQYPy72D8LYyHL2YPYrpS2s69NZV8Q=
-github.com/golang-jwt/jwt/v4 v4.4.2 h1:rcc4lwaZgFMCZ5jxF9ABolDcIHdBytAFgqFPbSJQAYs=
-github.com/golang-jwt/jwt/v4 v4.4.2/go.mod h1:m21LjoU+eqJr34lmDMbreY2eSTRJ1cv77w39/MY0Ch0=
-github.com/golang/glog v0.0.0-20160126235308-23def4e6c14b/go.mod h1:SBH7ygxi8pfUlaOkMMuAQtPIUF8ecWP5IEl/CR7VP2Q=
-github.com/golang/groupcache v0.0.0-20190129154638-5b532d6fd5ef/go.mod h1:cIg4eruTrX1D+g88fzRXU5OdNfaM+9IcxsU14FzY7Hc=
-github.com/golang/groupcache v0.0.0-20190702054246-869f871628b6/go.mod h1:cIg4eruTrX1D+g88fzRXU5OdNfaM+9IcxsU14FzY7Hc=
-github.com/golang/groupcache v0.0.0-20191227052852-215e87163ea7/go.mod h1:cIg4eruTrX1D+g88fzRXU5OdNfaM+9IcxsU14FzY7Hc=
-github.com/golang/groupcache v0.0.0-20200121045136-8c9f03a8e57e/go.mod h1:cIg4eruTrX1D+g88fzRXU5OdNfaM+9IcxsU14FzY7Hc=
-github.com/golang/groupcache v0.0.0-20210331224755-41bb18bfe9da h1:oI5xCqsCo564l8iNU+DwB5epxmsaqB+rhGL0m5jtYqE=
-github.com/golang/groupcache v0.0.0-20210331224755-41bb18bfe9da/go.mod h1:cIg4eruTrX1D+g88fzRXU5OdNfaM+9IcxsU14FzY7Hc=
-github.com/golang/mock v1.1.1/go.mod h1:oTYuIxOrZwtPieC+H1uAHpcLFnEyAGVDL/k47Jfbm0A=
-github.com/golang/mock v1.2.0/go.mod h1:oTYuIxOrZwtPieC+H1uAHpcLFnEyAGVDL/k47Jfbm0A=
-github.com/golang/mock v1.3.1/go.mod h1:sBzyDLLjw3U8JLTeZvSv8jJB+tU5PVekmnlKIyFUx0Y=
-github.com/golang/mock v1.4.0/go.mod h1:UOMv5ysSaYNkG+OFQykRIcU/QvvxJf3p21QfJ2Bt3cw=
-github.com/golang/mock v1.4.1/go.mod h1:UOMv5ysSaYNkG+OFQykRIcU/QvvxJf3p21QfJ2Bt3cw=
-github.com/golang/mock v1.4.3/go.mod h1:UOMv5ysSaYNkG+OFQykRIcU/QvvxJf3p21QfJ2Bt3cw=
-github.com/golang/mock v1.4.4/go.mod h1:l3mdAwkq5BuhzHwde/uurv3sEJeZMXNpwsxVWU71h+4=
-github.com/golang/mock v1.5.0/go.mod h1:CWnOUgYIOo4TcNZ0wHX3YZCqsaM1I1Jvs6v3mP3KVu8=
-github.com/golang/mock v1.6.0/go.mod h1:p6yTPP+5HYm5mzsMV8JkE6ZKdX+/wYM6Hr+LicevLPs=
-github.com/golang/protobuf v1.1.0/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U=
-github.com/golang/protobuf v1.2.0/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U=
-github.com/golang/protobuf v1.3.1/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U=
-github.com/golang/protobuf v1.3.2/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U=
-github.com/golang/protobuf v1.3.3/go.mod h1:vzj43D7+SQXF/4pzW/hwtAqwc6iTitCiVSaWz5lYuqw=
-github.com/golang/protobuf v1.3.4/go.mod h1:vzj43D7+SQXF/4pzW/hwtAqwc6iTitCiVSaWz5lYuqw=
-github.com/golang/protobuf v1.3.5/go.mod h1:6O5/vntMXwX2lRkT1hjjk0nAC1IDOTvTlVgjlRvqsdk=
-github.com/golang/protobuf v1.4.0-rc.1/go.mod h1:ceaxUfeHdC40wWswd/P6IGgMaK3YpKi5j83Wpe3EHw8=
-github.com/golang/protobuf v1.4.0-rc.1.0.20200221234624-67d41d38c208/go.mod h1:xKAWHe0F5eneWXFV3EuXVDTCmh+JuBKY0li0aMyXATA=
-github.com/golang/protobuf v1.4.0-rc.2/go.mod h1:LlEzMj4AhA7rCAGe4KMBDvJI+AwstrUpVNzEA03Pprs=
-github.com/golang/protobuf v1.4.0-rc.4.0.20200313231945-b860323f09d0/go.mod h1:WU3c8KckQ9AFe+yFwt9sWVRKCVIyN9cPHBJSNnbL67w=
-github.com/golang/protobuf v1.4.0/go.mod h1:jodUvKwWbYaEsadDk5Fwe5c77LiNKVO9IDvqG2KuDX0=
-github.com/golang/protobuf v1.4.1/go.mod h1:U8fpvMrcmy5pZrNK1lt4xCsGvpyWQ/VVv6QDs8UjoX8=
-github.com/golang/protobuf v1.4.2/go.mod h1:oDoupMAO8OvCJWAcko0GGGIgR6R6ocIYbsSw735rRwI=
-github.com/golang/protobuf v1.4.3/go.mod h1:oDoupMAO8OvCJWAcko0GGGIgR6R6ocIYbsSw735rRwI=
-github.com/golang/protobuf v1.5.0/go.mod h1:FsONVRAS9T7sI+LIUmWTfcYkHO4aIWwzhcaSAoJOfIk=
-github.com/golang/protobuf v1.5.1/go.mod h1:DopwsBzvsk0Fs44TXzsVbJyPhcCPeIwnvohx4u74HPM=
-github.com/golang/protobuf v1.5.2 h1:ROPKBNFfQgOUMifHyP+KYbvpjbdoFNs+aK7DXlji0Tw=
-github.com/golang/protobuf v1.5.2/go.mod h1:XVQd3VNwM+JqD3oG2Ue2ip4fOMUkwXdXDdiuN0vRsmY=
-github.com/gomodule/oauth1 v0.2.0 h1:/nNHAD99yipOEspQFbAnNmwGTZ1UNXiD/+JLxwx79fo=
-github.com/gomodule/oauth1 v0.2.0/go.mod h1:4r/a8/3RkhMBxJQWL5qzbOEcaQmNPIkNoI7P8sXeI08=
-github.com/gomodule/redigo v1.8.9 h1:Sl3u+2BI/kk+VEatbj0scLdrFhjPmbxOc1myhDP41ws=
-github.com/gomodule/redigo v1.8.9/go.mod h1:7ArFNvsTjH8GMMzB4uy1snslv2BwmginuMs06a1uzZE=
-github.com/google/btree v0.0.0-20180813153112-4030bb1f1f0c/go.mod h1:lNA+9X1NB3Zf8V7Ke586lFgjr2dZNuvo3lPJSGZ5JPQ=
-github.com/google/btree v1.0.0/go.mod h1:lNA+9X1NB3Zf8V7Ke586lFgjr2dZNuvo3lPJSGZ5JPQ=
-github.com/google/go-cmp v0.2.0/go.mod h1:oXzfMopK8JAjlY9xF4vHSVASa0yLyX7SntLO5aqRK0M=
-github.com/google/go-cmp v0.3.0/go.mod h1:8QqcDgzrUqlUb/G2PQTWiueGozuR1884gddMywk6iLU=
-github.com/google/go-cmp v0.3.1/go.mod h1:8QqcDgzrUqlUb/G2PQTWiueGozuR1884gddMywk6iLU=
-github.com/google/go-cmp v0.4.0/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
-github.com/google/go-cmp v0.4.1/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
-github.com/google/go-cmp v0.5.0/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
-github.com/google/go-cmp v0.5.1/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
-github.com/google/go-cmp v0.5.2/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
-github.com/google/go-cmp v0.5.3/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
-github.com/google/go-cmp v0.5.4/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
-github.com/google/go-cmp v0.5.5/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
-github.com/google/go-cmp v0.5.6/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
-github.com/google/go-cmp v0.5.8/go.mod h1:17dUlkBOakJ0+DkrSSNjCkIjxS6bF9zb3elmeNGIjoY=
-github.com/google/go-cmp v0.6.0 h1:ofyhxvXcZhMsU5ulbFiLKl/XBFqE1GSq7atu8tAmTRI=
-github.com/google/go-cmp v0.6.0/go.mod h1:17dUlkBOakJ0+DkrSSNjCkIjxS6bF9zb3elmeNGIjoY=
-github.com/google/go-querystring v1.1.0 h1:AnCroh3fv4ZBgVIf1Iwtovgjaw/GiKJo8M8yD/fhyJ8=
-github.com/google/go-querystring v1.1.0/go.mod h1:Kcdr2DB4koayq7X8pmAG4sNG59So17icRSOU623lUBU=
-github.com/google/gofuzz v1.0.0/go.mod h1:dBl0BpW6vV/+mYPU4Po3pmUjxk6FQPldtuIdl/M65Eg=
-github.com/google/gofuzz v1.2.0 h1:xRy4A+RhZaiKjJ1bPfwQ8sedCA+YS2YcCHW6ec7JMi0=
-github.com/google/gofuzz v1.2.0/go.mod h1:dBl0BpW6vV/+mYPU4Po3pmUjxk6FQPldtuIdl/M65Eg=
-github.com/google/martian v2.1.0+incompatible/go.mod h1:9I4somxYTbIHy5NJKHRl3wXiIaQGbYVAs8BPL6v8lEs=
-github.com/google/martian/v3 v3.0.0/go.mod h1:y5Zk1BBys9G+gd6Jrk0W3cC1+ELVxBWuIGO+w/tUAp0=
-github.com/google/martian/v3 v3.1.0/go.mod h1:y5Zk1BBys9G+gd6Jrk0W3cC1+ELVxBWuIGO+w/tUAp0=
-github.com/google/pprof v0.0.0-20181206194817-3ea8567a2e57/go.mod h1:zfwlbNMJ+OItoe0UupaVj+oy1omPYYDuagoSzA8v9mc=
-github.com/google/pprof v0.0.0-20190515194954-54271f7e092f/go.mod h1:zfwlbNMJ+OItoe0UupaVj+oy1omPYYDuagoSzA8v9mc=
-github.com/google/pprof v0.0.0-20191218002539-d4f498aebedc/go.mod h1:ZgVRPoUq/hfqzAqh7sHMqb3I9Rq5C59dIz2SbBwJ4eM=
-github.com/google/pprof v0.0.0-20200212024743-f11f1df84d12/go.mod h1:ZgVRPoUq/hfqzAqh7sHMqb3I9Rq5C59dIz2SbBwJ4eM=
-github.com/google/pprof v0.0.0-20200229191704-1ebb73c60ed3/go.mod h1:ZgVRPoUq/hfqzAqh7sHMqb3I9Rq5C59dIz2SbBwJ4eM=
-github.com/google/pprof v0.0.0-20200430221834-fc25d7d30c6d/go.mod h1:ZgVRPoUq/hfqzAqh7sHMqb3I9Rq5C59dIz2SbBwJ4eM=
-github.com/google/pprof v0.0.0-20200708004538-1a94d8640e99/go.mod h1:ZgVRPoUq/hfqzAqh7sHMqb3I9Rq5C59dIz2SbBwJ4eM=
-github.com/google/pprof v0.0.0-20201023163331-3e6fc7fc9c4c/go.mod h1:kpwsk12EmLew5upagYY7GY0pfYCcupk39gWOCRROcvE=
-github.com/google/pprof v0.0.0-20201203190320-1bf35d6f28c2/go.mod h1:kpwsk12EmLew5upagYY7GY0pfYCcupk39gWOCRROcvE=
-github.com/google/pprof v0.0.0-20210122040257-d980be63207e/go.mod h1:kpwsk12EmLew5upagYY7GY0pfYCcupk39gWOCRROcvE=
-github.com/google/pprof v0.0.0-20210226084205-cbba55b83ad5/go.mod h1:kpwsk12EmLew5upagYY7GY0pfYCcupk39gWOCRROcvE=
-github.com/google/renameio v0.1.0/go.mod h1:KWCgfxg9yswjAJkECMjeO8J8rahYeXnNhOm40UhjYkI=
-github.com/google/rpmpack v0.0.0-20210518075352-dc539ef4f2ea h1:Fv9Ni1vIq9+Gv4Sm0Xq+NnPYcnsMbdNhJ4Cu4rkbPBM=
-github.com/google/rpmpack v0.0.0-20210518075352-dc539ef4f2ea/go.mod h1:+y9lKiqDhR4zkLl+V9h4q0rdyrYVsWWm6LLCQP33DIk=
-github.com/google/uuid v0.0.0-20161128191214-064e2069ce9c/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo=
-github.com/google/uuid v1.1.2/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo=
-github.com/google/uuid v1.3.0 h1:t6JiXgmwXMjEs8VusXIJk2BXHsn+wx8BZdTaoZ5fu7I=
-github.com/google/uuid v1.3.0/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo=
-github.com/googleapis/gax-go/v2 v2.0.4/go.mod h1:0Wqv26UfaUD9n4G6kQubkQ+KchISgw+vpHVxEJEs9eg=
-github.com/googleapis/gax-go/v2 v2.0.5/go.mod h1:DWXyrwAJ9X0FpwwEdw+IPEYBICEFu5mhpdKc/us6bOk=
-github.com/googleapis/gax-go/v2 v2.1.0 h1:6DWmvNpomjL1+3liNSZbVns3zsYzzCjm6pRBO1tLeso=
-github.com/googleapis/gax-go/v2 v2.1.0/go.mod h1:Q3nei7sK6ybPYH7twZdmQpAd1MKb7pfu6SK+H1/DsU0=
-github.com/gopherjs/gopherjs v0.0.0-20181017120253-0766667cb4d1 h1:EGx4pi6eqNxGaHF6qqu48+N2wcFQ5qg5FXgOdqsJ5d8=
-github.com/gopherjs/gopherjs v0.0.0-20181017120253-0766667cb4d1/go.mod h1:wJfORRmW1u3UXTncJ5qlYoELFm8eSnnEO6hX4iZ3EWY=
-github.com/goreleaser/chglog v0.1.2 h1:tdzAb/ILeMnphzI9zQ7Nkq+T8R9qyXli8GydD8plFRY=
-github.com/goreleaser/chglog v0.1.2/go.mod h1:tTZsFuSZK4epDXfjMkxzcGbrIOXprf0JFp47BjIr3B8=
-github.com/goreleaser/fileglob v1.2.0 h1:OErqbdzeg/eibfDGPHDQDN8jL5u1jNyxA5IQzNPLLoU=
-github.com/goreleaser/fileglob v1.2.0/go.mod h1:rFyb2pXaK3YdnYnSjn6lifw0h2Q6s8OfOsx6I6bXkKE=
-github.com/goreleaser/nfpm/v2 v2.10.0 h1:SshT2D1MTzCifmjaagQA+5XW9Iq+qvXUavrgP0HvmWg=
-github.com/goreleaser/nfpm/v2 v2.10.0/go.mod h1:Bj/ztLvdnBnEgMae0fl/bLF6By1+yFFKeL97WiS6ZJg=
-github.com/gorilla/mux v1.7.3/go.mod h1:1lud6UwP+6orDFRuTfBEV8e9/aOM/c4fVVCaMa2zaAs=
-github.com/gorilla/websocket v1.4.2 h1:+/TMaTYc4QFitKJxsQ7Yye35DkWvkdLcvGKqM+x0Ufc=
-github.com/gorilla/websocket v1.4.2/go.mod h1:YR8l580nyteQvAITg2hZ9XVh4b55+EU/adAjf1fMHhE=
-github.com/groob/plist v0.0.0-20220217120414-63fa881b19a5 h1:saaSiB25B1wgaxrshQhurfPKUGJ4It3OxNJUy0rdOjU=
-github.com/groob/plist v0.0.0-20220217120414-63fa881b19a5/go.mod h1:itkABA+w2cw7x5nYUS/pLRef6ludkZKOigbROmCTaFw=
-github.com/grpc-ecosystem/go-grpc-middleware v1.0.0/go.mod h1:FiyG127CGDf3tlThmgyCl78X/SZQqEOJBCDaAfeWzPs=
-github.com/grpc-ecosystem/go-grpc-prometheus v1.2.0/go.mod h1:8NvIoxWQoOIhqOTXgfV/d3M/q6VIi02HzZEHgUlZvzk=
-github.com/grpc-ecosystem/grpc-gateway v1.9.0/go.mod h1:vNeuVxBJEsws4ogUvrchl83t/GYV9WGTSLVdBhOQFDY=
-github.com/grpc-ecosystem/grpc-gateway v1.16.0/go.mod h1:BDjrQk3hbvj6Nolgz8mAMFbcEtjT1g+wF4CSlocrBnw=
-github.com/hashicorp/consul/api v1.1.0/go.mod h1:VmuI/Lkw1nC05EYQWNKwWGbkg+FbDBtguAZLlVdkD9Q=
-github.com/hashicorp/consul/sdk v0.1.1/go.mod h1:VKf9jXwCTEY1QZP2MOLRhb5i/I/ssyNV1vwHyQBF0x8=
-github.com/hashicorp/errwrap v1.0.0/go.mod h1:YH+1FKiLXxHSkmPseP+kNlulaMuP3n2brvKWEqk/Jc4=
-github.com/hashicorp/errwrap v1.1.0 h1:OxrOeh75EUXMY8TBjag2fzXGZ40LB6IKw45YeGUDY2I=
-github.com/hashicorp/errwrap v1.1.0/go.mod h1:YH+1FKiLXxHSkmPseP+kNlulaMuP3n2brvKWEqk/Jc4=
-github.com/hashicorp/go-cleanhttp v0.5.1/go.mod h1:JpRdi6/HCYpAwUzNwuwqhbovhLtngrth3wmdIIUrZ80=
-github.com/hashicorp/go-cleanhttp v0.5.2 h1:035FKYIWjmULyFRBKPs8TBQoi0x6d9G4xc9neXJWAZQ=
-github.com/hashicorp/go-cleanhttp v0.5.2/go.mod h1:kO/YDlP8L1346E6Sodw+PrpBSV4/SoxCXGY6BqNFT48=
-github.com/hashicorp/go-hclog v0.9.2/go.mod h1:5CU+agLiy3J7N7QjHK5d05KxGsuXiQLrjA0H7acj2lQ=
-github.com/hashicorp/go-hclog v0.9.3-0.20191025211905-234833755cb2/go.mod h1:whpDNt7SSdeAju8AWKIWsul05p54N/39EeqMAyrmvFQ=
-github.com/hashicorp/go-hclog v0.16.2 h1:K4ev2ib4LdQETX5cSZBG0DVLk1jwGqSPXBjdah3veNs=
-github.com/hashicorp/go-hclog v0.16.2/go.mod h1:whpDNt7SSdeAju8AWKIWsul05p54N/39EeqMAyrmvFQ=
-github.com/hashicorp/go-immutable-radix v1.0.0/go.mod h1:0y9vanUI8NX6FsYoO3zeMjhV/C5i9g4Q3DwcSNZ4P60=
-github.com/hashicorp/go-msgpack v0.5.3/go.mod h1:ahLV/dePpqEmjfWmKiqvPkv/twdG7iPBM1vqhUKIvfM=
-github.com/hashicorp/go-multierror v1.0.0/go.mod h1:dHtQlpGsu+cZNNAkkCN/P3hoUDHhCYQXV3UM06sGGrk=
-github.com/hashicorp/go-multierror v1.1.1 h1:H5DkEtf6CXdFp0N0Em5UCwQpXMWke8IA0+lD48awMYo=
-github.com/hashicorp/go-multierror v1.1.1/go.mod h1:iw975J/qwKPdAO1clOe2L8331t/9/fmwbPZ6JB6eMoM=
-github.com/hashicorp/go-retryablehttp v0.6.3/go.mod h1:vAew36LZh98gCBJNLH42IQ1ER/9wtLZZ8meHqQvEYWY=
-github.com/hashicorp/go-retryablehttp v0.6.8 h1:92lWxgpa+fF3FozM4B3UZtHZMJX8T5XT+TFdCxsPyWs=
-github.com/hashicorp/go-retryablehttp v0.6.8/go.mod h1:vAew36LZh98gCBJNLH42IQ1ER/9wtLZZ8meHqQvEYWY=
-github.com/hashicorp/go-rootcerts v1.0.0/go.mod h1:K6zTfqpRlCUIjkwsN4Z+hiSfzSTQa6eBIzfwKfwNnHU=
-github.com/hashicorp/go-sockaddr v1.0.0/go.mod h1:7Xibr9yA9JjQq1JpNB2Vw7kxv8xerXegt+ozgdvDeDU=
-github.com/hashicorp/go-syslog v1.0.0/go.mod h1:qPfqrKkXGihmCqbJM2mZgkZGvKG1dFdvsLplgctolz4=
-github.com/hashicorp/go-uuid v1.0.0/go.mod h1:6SBZvOh/SIDV7/2o3Jml5SYk/TvGqwFJ/bN7x4byOro=
-github.com/hashicorp/go-uuid v1.0.1/go.mod h1:6SBZvOh/SIDV7/2o3Jml5SYk/TvGqwFJ/bN7x4byOro=
-github.com/hashicorp/go.net v0.0.1/go.mod h1:hjKkEWcCURg++eb33jQU7oqQcI9XDCnUzHA0oac0k90=
-github.com/hashicorp/golang-lru v0.5.0/go.mod h1:/m3WP610KZHVQ1SGc6re/UDhFvYD7pJ4Ao+sR/qLZy8=
-github.com/hashicorp/golang-lru v0.5.1/go.mod h1:/m3WP610KZHVQ1SGc6re/UDhFvYD7pJ4Ao+sR/qLZy8=
-github.com/hashicorp/hcl v1.0.0 h1:0Anlzjpi4vEasTeNFn2mLJgTSwt0+6sfsiTG8qcWGx4=
-github.com/hashicorp/hcl v1.0.0/go.mod h1:E5yfLk+7swimpb2L/Alb/PJmXilQ/rhwaUYs4T20WEQ=
-github.com/hashicorp/hcl/v2 v2.0.0/go.mod h1:oVVDG71tEinNGYCxinCYadcmKU9bglqW9pV3txagJ90=
-github.com/hashicorp/logutils v1.0.0/go.mod h1:QIAnNjmIWmVIIkWDTG1z5v++HQmx9WQRO+LraFDTW64=
-github.com/hashicorp/mdns v1.0.0/go.mod h1:tL+uN++7HEJ6SQLQ2/p+z2pH24WQKWjBPkE0mNTz8vQ=
-github.com/hashicorp/memberlist v0.1.3/go.mod h1:ajVTdAv/9Im8oMAAj5G31PhhMCZJV2pPBoIllUwCN7I=
-github.com/hashicorp/serf v0.8.2/go.mod h1:6hOLApaqBFA1NXqRQAsxw9QxuDEvNxSQRwA/JwenrHc=
-github.com/hectane/go-acl v0.0.0-20190604041725-da78bae5fc95 h1:S4qyfL2sEm5Budr4KVMyEniCy+PbS55651I/a+Kn/NQ=
-github.com/hectane/go-acl v0.0.0-20190604041725-da78bae5fc95/go.mod h1:QiyDdbZLaJ/mZP4Zwc9g2QsfaEA4o7XvvgZegSci5/E=
-github.com/huandu/xstrings v1.3.2 h1:L18LIDzqlW6xN2rEkpdV8+oL/IXWJ1APd+vsdYy4Wdw=
-github.com/huandu/xstrings v1.3.2/go.mod h1:y5/lhBue+AyNmUVz9RLU9xbLR0o4KIIExikq4ovT0aE=
-github.com/ianlancetaylor/demangle v0.0.0-20181102032728-5e5cf60278f6/go.mod h1:aSSvb/t6k1mPoxDqO4vJh6VOCGPwU4O0C2/Eqndh1Sc=
-github.com/ianlancetaylor/demangle v0.0.0-20200824232613-28f6c0f3b639/go.mod h1:aSSvb/t6k1mPoxDqO4vJh6VOCGPwU4O0C2/Eqndh1Sc=
-github.com/igm/sockjs-go/v3 v3.0.0 h1:4wLoB9WCnQ8RI87cmqUH778ACDFVmRpkKRCWBeuc+Ww=
-github.com/igm/sockjs-go/v3 v3.0.0/go.mod h1:UqchsOjeagIBFHvd+RZpLaVRbCwGilEC08EDHsD1jYE=
-github.com/imdario/mergo v0.3.9/go.mod h1:2EnlNZ0deacrJVfApfmtdGgDfMuh/nq6Ok1EcJh5FfA=
-github.com/imdario/mergo v0.3.11/go.mod h1:jmQim1M+e3UYxmgPu/WyfjB3N3VflVyUjjjwH0dnCYA=
-github.com/imdario/mergo v0.3.12 h1:b6R2BslTbIEToALKP7LxUvijTsNI9TAe80pLWN2g/HU=
-github.com/imdario/mergo v0.3.12/go.mod h1:jmQim1M+e3UYxmgPu/WyfjB3N3VflVyUjjjwH0dnCYA=
-github.com/inconshreveable/mousetrap v1.0.0 h1:Z8tu5sraLXCXIcARxBp/8cbvlwVa7Z1NHg9XEKhtSvM=
-github.com/inconshreveable/mousetrap v1.0.0/go.mod h1:PxqpIevigyE2G7u3NXJIT2ANytuPF1OarO4DADm73n8=
-github.com/jbenet/go-context v0.0.0-20150711004518-d14ea06fba99 h1:BQSFePA1RWJOlocH6Fxy8MmwDt+yVQYULKfN0RoTN8A=
-github.com/jbenet/go-context v0.0.0-20150711004518-d14ea06fba99/go.mod h1:1lJo3i6rXxKeerYnT8Nvf0QmHCRC1n8sfWVwXF2Frvo=
-github.com/jessevdk/go-flags v1.4.0/go.mod h1:4FA24M0QyGHXBuZZK/XkWh8h0e1EYbRYJSGM75WSRxI=
-github.com/jessevdk/go-flags v1.5.0 h1:1jKYvbxEjfUl0fmqTCOfonvskHHXMjBySTLW4y9LFvc=
-github.com/jessevdk/go-flags v1.5.0/go.mod h1:Fw0T6WPc1dYxT4mKEZRfG5kJhaTDP9pj1c2EWnYs/m4=
-github.com/jmespath/go-jmespath v0.4.0 h1:BEgLn5cpjn8UN1mAw4NjwDrS35OdebyEtFe+9YPoQUg=
-github.com/jmespath/go-jmespath v0.4.0/go.mod h1:T8mJZnbsbmF+m6zOOFylbeCJqk5+pHWvzYPziyZiYoo=
-github.com/jmespath/go-jmespath/internal/testify v1.5.1 h1:shLQSRRSCCPj3f2gpwzGwWFoC7ycTf1rcQZHOlsJ6N8=
-github.com/jmespath/go-jmespath/internal/testify v1.5.1/go.mod h1:L3OGu8Wl2/fWfCI6z80xFu9LTZmf1ZRjMHUOPmWr69U=
-github.com/jmoiron/sqlx v0.0.0-20180406164412-2aeb6a910c2b/go.mod h1:IiEW3SEiiErVyFdH8NTuWjSifiEQKUoyK3LNqr2kCHU=
-github.com/jonboulle/clockwork v0.1.0/go.mod h1:Ii8DK3G1RaLaWxj9trq07+26W01tbo22gdxWY5EU2bo=
-github.com/json-iterator/go v1.1.6/go.mod h1:+SdeFBvtyEkXs7REEP0seUULqWtbJapLOCVDaaPEHmU=
-github.com/json-iterator/go v1.1.11/go.mod h1:KdQUCv79m/52Kvf8AW2vK1V8akMuk1QjK/uOdHXbAo4=
-github.com/jstemmer/go-junit-report v0.0.0-20190106144839-af01ea7f8024/go.mod h1:6v2b51hI/fHJwM22ozAgKL4VKDeJcHhJFhtBdhmNjmU=
-github.com/jstemmer/go-junit-report v0.9.1/go.mod h1:Brl9GWCQeLvo8nXZwPNNblvFj/XSXhF0NWZEnDohbsk=
-github.com/jtolds/gls v4.20.0+incompatible h1:xdiiI2gbIgH/gLH7ADydsJ1uDOEzR8yvV7C0MuV77Wo=
-github.com/jtolds/gls v4.20.0+incompatible/go.mod h1:QJZ7F/aHp+rZTRtaJ1ow/lLfFfVYBRgL+9YlvaHOwJU=
-github.com/julienschmidt/httprouter v1.2.0/go.mod h1:SYymIcj16QtmaHHD7aYtjjsJG7VTCxuUUipMqKk8s4w=
-github.com/kevinburke/ssh_config v0.0.0-20190725054713-01f96b0aa0cd/go.mod h1:CT57kijsi8u/K/BOFA39wgDQJ9CxiF4nAY/ojJ6r6mM=
-github.com/kevinburke/ssh_config v0.0.0-20201106050909-4977a11b4351/go.mod h1:CT57kijsi8u/K/BOFA39wgDQJ9CxiF4nAY/ojJ6r6mM=
-github.com/kevinburke/ssh_config v1.2.0 h1:x584FjTGwHzMwvHx18PXxbBVzfnxogHaAReU4gf13a4=
-github.com/kevinburke/ssh_config v1.2.0/go.mod h1:CT57kijsi8u/K/BOFA39wgDQJ9CxiF4nAY/ojJ6r6mM=
-github.com/kisielk/errcheck v1.1.0/go.mod h1:EZBBE59ingxPouuu3KfxchcWSUPOHkagtvWXihfKN4Q=
-github.com/kisielk/errcheck v1.5.0/go.mod h1:pFxgyoBC7bSaBwPgfKdkLd5X25qrDl4LWUI2bnpBCr8=
-github.com/kisielk/gotool v1.0.0/go.mod h1:XhKaO+MFFWcvkIS/tQcRk01m1F5IRFswLeQ+oQHNcck=
-github.com/klauspost/compress v1.15.0 h1:xqfchp4whNFxn5A4XFyyYtitiWI8Hy5EW59jEwcyL6U=
-github.com/klauspost/compress v1.15.0/go.mod h1:/3/Vjq9QcHkK5uEr5lBEmyoZ1iFhe47etQ6QUkpK6sk=
-github.com/kolide/kit v0.0.0-20191023141830-6312ecc11c23 h1:7rykD5+Wf11u+03TOsunGbg7f4gZEBgS0gwIRR+Han4=
-github.com/kolide/kit v0.0.0-20191023141830-6312ecc11c23/go.mod h1:OYYulo9tUqRadRLwB0+LE914sa1ui2yL7OrcU3Q/1XY=
-github.com/konsorten/go-windows-terminal-sequences v1.0.1/go.mod h1:T0+1ngSBFLxvqU3pZ+m/2kptfBszLMUkC4ZK/EgS/cQ=
-github.com/kr/fs v0.1.0/go.mod h1:FFnZGqtBN9Gxj7eW1uZ42v5BccTP0vu6NEaFoC2HwRg=
-github.com/kr/logfmt v0.0.0-20140226030751-b84e30acd515/go.mod h1:+0opPa2QZZtGFBFZlji/RkVcI2GknAs/DXo4wKdlNEc=
-github.com/kr/pretty v0.1.0/go.mod h1:dAy3ld7l9f0ibDNOQOHHMYYIIbhfbHSm3C4ZsoJORNo=
-github.com/kr/pretty v0.3.1 h1:flRD4NNwYAUpkphVc1HcthR4KEIFJ65n8Mw5qdRn3LE=
-github.com/kr/pretty v0.3.1/go.mod h1:hoEshYVHaxMs3cyo3Yncou5ZscifuDolrwPKZanG3xk=
-github.com/kr/pty v1.1.1/go.mod h1:pFQYn66WHrOpPYNljwOMqo10TkYh1fy3cYio2l3bCsQ=
-github.com/kr/text v0.1.0/go.mod h1:4Jbv+DJW3UT/LiOwJeYQe1efqtUx/iVham/4vfdArNI=
-github.com/kr/text v0.2.0 h1:5Nx0Ya0ZqY2ygV366QzturHI13Jq95ApcVaJBhpS+AY=
-github.com/kr/text v0.2.0/go.mod h1:eLer722TekiGuMkidMxC/pM04lWEeraHUUmBw8l2grE=
-github.com/kylelemons/godebug v0.0.0-20170820004349-d65d576e9348/go.mod h1:B69LEHPfb2qLo0BaaOLcbitczOKLWTsrBG9LczfCD4k=
-github.com/lib/pq v1.0.0/go.mod h1:5WUZQaWbwv1U+lTReE5YruASi9Al49XbQIvNi/34Woo=
-github.com/lufia/plan9stats v0.0.0-20211012122336-39d0f177ccd0 h1:6E+4a0GO5zZEnZ81pIr0yLvtUWk2if982qA3F3QD6H4=
-github.com/lufia/plan9stats v0.0.0-20211012122336-39d0f177ccd0/go.mod h1:zJYVVT2jmtg6P3p1VtQj7WsuWi/y4VnjVBn7F8KPB3I=
-github.com/magiconair/properties v1.8.1/go.mod h1:PppfXfuXeibc/6YijjN8zIbojt8czPbwD3XqdrwzmxQ=
-github.com/magiconair/properties v1.8.4/go.mod h1:y3VJvCyxH9uVvJTWEGAELF3aiYNyPKd5NZ3oSwXrF60=
-github.com/magiconair/properties v1.8.5 h1:b6kJs+EmPFMYGkow9GiUyCyOvIwYetYJ3fSaWak/Gls=
-github.com/magiconair/properties v1.8.5/go.mod h1:y3VJvCyxH9uVvJTWEGAELF3aiYNyPKd5NZ3oSwXrF60=
-github.com/mattn/go-colorable v0.0.9/go.mod h1:9vuHe8Xs5qXnSaW/c/ABM9alt+Vo+STaOChaDxuIBZU=
-github.com/mattn/go-colorable v0.1.4/go.mod h1:U0ppj6V5qS13XJ6of8GYAs25YV2eR4EVcfRqFIhoBtE=
-github.com/mattn/go-colorable v0.1.8/go.mod h1:u6P/XSegPjTcexA+o6vUJrdnUu04hMope9wVRipJSqc=
-github.com/mattn/go-colorable v0.1.11 h1:nQ+aFkoE2TMGc0b68U2OKSexC+eq46+XwZzWXHRmPYs=
-github.com/mattn/go-colorable v0.1.11/go.mod h1:u5H1YNBxpqRaxsYJYSkiCWKzEfiAb1Gb520KVy5xxl4=
-github.com/mattn/go-isatty v0.0.3/go.mod h1:M+lRXTBqGeGNdLjl/ufCoiOlB5xdOkqRJdNxMWT7Zi4=
-github.com/mattn/go-isatty v0.0.8/go.mod h1:Iq45c/XA43vh69/j3iqttzPXn0bhXyGjM0Hdxcsrc5s=
-github.com/mattn/go-isatty v0.0.10/go.mod h1:qgIWMr58cqv1PHHyhnkY9lrL7etaEgOFcMEpPG5Rm84=
-github.com/mattn/go-isatty v0.0.12/go.mod h1:cbi8OIDigv2wuxKPP5vlRcQ1OAZbq2CE4Kysco4FUpU=
-github.com/mattn/go-isatty v0.0.14 h1:yVuAays6BHfxijgZPzw+3Zlu5yQgKGP2/hcQbHb7S9Y=
-github.com/mattn/go-isatty v0.0.14/go.mod h1:7GGIvUiUoEMVVmxf/4nioHXj79iQHKdU27kJ6hsGG94=
-github.com/mattn/go-sqlite3 v1.10.0/go.mod h1:FPy6KqzDD04eiIsT53CuJW3U88zkxoIYsOqkbpncsNc=
-github.com/matttproud/golang_protobuf_extensions v1.0.1/go.mod h1:D8He9yQNgCq6Z5Ld7szi9bcBfOoFv/3dc6xSMkL2PC0=
-github.com/micromdm/nanomdm v0.3.0 h1:njAC9+sQy9SpgyZhyVAJYzhRD7dt4pv7m9Z8wlUIY2o=
-github.com/micromdm/nanomdm v0.3.0/go.mod h1:03+qFjfaTE6Ye9QvrHfhCKgqjSVSeWzdfNHXCIFRrLg=
-github.com/miekg/dns v1.0.14/go.mod h1:W1PPwlIAgtquWBMBEV9nkV9Cazfe8ScdGz/Lj7v3Nrg=
-github.com/mitchellh/cli v1.0.0/go.mod h1:hNIlj7HEI86fIcpObd7a0FcrxTWetlwJDGcceTlRvqc=
-github.com/mitchellh/copystructure v1.0.0/go.mod h1:SNtv71yrdKgLRyLFxmLdkAbkKEFWgYaq1OVrnRcwhnw=
-github.com/mitchellh/copystructure v1.2.0 h1:vpKXTN4ewci03Vljg/q9QvCGUDttBOGBIa15WveJJGw=
-github.com/mitchellh/copystructure v1.2.0/go.mod h1:qLl+cE2AmVv+CoeAwDPye/v+N2HKCj9FbZEVFJRxO9s=
-github.com/mitchellh/go-homedir v1.0.0/go.mod h1:SfyaCUpYCn1Vlf4IUYiD9fPX4A5wJrkLzIz1N1q0pr0=
-github.com/mitchellh/go-homedir v1.1.0/go.mod h1:SfyaCUpYCn1Vlf4IUYiD9fPX4A5wJrkLzIz1N1q0pr0=
-github.com/mitchellh/go-ps v1.0.0 h1:i6ampVEEF4wQFF+bkYfwYgY+F/uYJDktmvLPf7qIgjc=
-github.com/mitchellh/go-ps v1.0.0/go.mod h1:J4lOc8z8yJs6vUwklHw2XEIiT4z4C40KtWVN3nvg8Pg=
-github.com/mitchellh/go-testing-interface v1.0.0/go.mod h1:kRemZodwjscx+RGhAo8eIhFbs2+BFgRtFPeD/KE+zxI=
-github.com/mitchellh/go-wordwrap v0.0.0-20150314170334-ad45545899c7/go.mod h1:ZXFpozHsX6DPmq2I0TCekCxypsnAUbP2oI0UX1GXzOo=
-github.com/mitchellh/gon v0.2.3 h1:fObN7hD14VacGG++t27GzTW6opP0lwI7TsgTPL55wBo=
-github.com/mitchellh/gon v0.2.3/go.mod h1:Ua18ZhqjZHg8VyqZo8kNHAY331ntV6nNJ9mT3s2mIo8=
-github.com/mitchellh/gox v0.4.0/go.mod h1:Sd9lOJ0+aimLBi73mGofS1ycjY8lL3uZM3JPS42BGNg=
-github.com/mitchellh/iochan v1.0.0/go.mod h1:JwYml1nuB7xOzsp52dPpHFffvOCDupsG0QubkSMEySY=
-github.com/mitchellh/mapstructure v0.0.0-20160808181253-ca63d7c062ee/go.mod h1:FVVH3fgwuzCH5S8UJGiWEs2h04kUh9fWfEaFds41c1Y=
-github.com/mitchellh/mapstructure v1.1.2/go.mod h1:FVVH3fgwuzCH5S8UJGiWEs2h04kUh9fWfEaFds41c1Y=
-github.com/mitchellh/mapstructure v1.3.3/go.mod h1:bFUtVrKA4DC2yAKiSyO/QUcy7e+RRV2QTWOzhPopBRo=
-github.com/mitchellh/mapstructure v1.4.1/go.mod h1:bFUtVrKA4DC2yAKiSyO/QUcy7e+RRV2QTWOzhPopBRo=
-github.com/mitchellh/mapstructure v1.4.2 h1:6h7AQ0yhTcIsmFmnAwQls75jp2Gzs4iB8W7pjMO+rqo=
-github.com/mitchellh/mapstructure v1.4.2/go.mod h1:bFUtVrKA4DC2yAKiSyO/QUcy7e+RRV2QTWOzhPopBRo=
-github.com/mitchellh/reflectwalk v1.0.0/go.mod h1:mSTlrgnPZtwu0c4WaC2kGObEpuNDbx0jmZXqmk4esnw=
-github.com/mitchellh/reflectwalk v1.0.1/go.mod h1:mSTlrgnPZtwu0c4WaC2kGObEpuNDbx0jmZXqmk4esnw=
-github.com/mitchellh/reflectwalk v1.0.2 h1:G2LzWKi524PWgd3mLHV8Y5k7s6XUvT0Gef6zxSIeXaQ=
-github.com/mitchellh/reflectwalk v1.0.2/go.mod h1:mSTlrgnPZtwu0c4WaC2kGObEpuNDbx0jmZXqmk4esnw=
-github.com/modern-go/concurrent v0.0.0-20180228061459-e0a39a4cb421/go.mod h1:6dJC0mAP4ikYIbvyc7fijjWJddQyLn8Ig3JB5CqoB9Q=
-github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd/go.mod h1:6dJC0mAP4ikYIbvyc7fijjWJddQyLn8Ig3JB5CqoB9Q=
-github.com/modern-go/reflect2 v0.0.0-20180701023420-4b7aa43c6742/go.mod h1:bx2lNnkwVCuqBIxFjflWJWanXIb3RllmbCylyMrvgv0=
-github.com/modern-go/reflect2 v1.0.1/go.mod h1:bx2lNnkwVCuqBIxFjflWJWanXIb3RllmbCylyMrvgv0=
-github.com/mwitkow/go-conntrack v0.0.0-20161129095857-cc309e4a2223/go.mod h1:qRWi+5nqEBWmkhHvq77mSJWrCKwh8bxhgT7d/eI7P4U=
-github.com/niemeyer/pretty v0.0.0-20200227124842-a10e7caefd8e/go.mod h1:zD1mROLANZcx1PVRCS0qkT7pwLkGfwJo4zjcN/Tysno=
-github.com/nukosuke/go-zendesk v0.13.1 h1:EdYpn+FxROLguADEJK5reOHcpysM8wyWPOWO96SIc0A=
-github.com/nukosuke/go-zendesk v0.13.1/go.mod h1:86Cg7RhSvPfOqZOtQXteJEV9yIQVQsy2HVDk++Yf3jA=
-github.com/oklog/run v1.1.0 h1:GEenZ1cK0+q0+wsJew9qUg/DyD8k3JzYsZAi5gYi2mA=
-github.com/oklog/run v1.1.0/go.mod h1:sVPdnTZT1zYwAJeCMu2Th4T21pA3FPOQRfWjQlk7DVU=
-github.com/oklog/ulid v0.3.0/go.mod h1:CirwcVhetQ6Lv90oh/F+FBtV6XMibvdAFo93nm5qn4U=
-github.com/oklog/ulid v1.3.1/go.mod h1:CirwcVhetQ6Lv90oh/F+FBtV6XMibvdAFo93nm5qn4U=
-github.com/onsi/gomega v1.27.10 h1:naR28SdDFlqrG6kScpT8VWpu1xWY5nJRCF3XaYyBjhI=
-github.com/onsi/gomega v1.27.10/go.mod h1:RsS8tutOdbdgzbPtzzATp12yT7kM5I5aElG3evPbQ0M=
-github.com/opencensus-integrations/ocsql v0.1.1/go.mod h1:ozPYpNVBHZsX33jfoQPO5TlI5lqh0/3R36kirEqJKAM=
-github.com/oschwald/geoip2-golang v1.8.0 h1:KfjYB8ojCEn/QLqsDU0AzrJ3R5Qa9vFlx3z6SLNcKTs=
-github.com/oschwald/geoip2-golang v1.8.0/go.mod h1:R7bRvYjOeaoenAp9sKRS8GX5bJWcZ0laWO5+DauEktw=
-github.com/oschwald/maxminddb-golang v1.10.0 h1:Xp1u0ZhqkSuopaKmk1WwHtjF0H9Hd9181uj2MQ5Vndg=
-github.com/oschwald/maxminddb-golang v1.10.0/go.mod h1:Y2ELenReaLAZ0b400URyGwvYxHV1dLIxBuyOsyYjHK0=
-github.com/pascaldekloe/goe v0.0.0-20180627143212-57f6aae5913c/go.mod h1:lzWF7FIEvWOWxwDKqyGYQf6ZUaNfKdP144TG7ZOy1lc=
-github.com/pelletier/go-toml v1.2.0/go.mod h1:5z9KED0ma1S8pY6P1sdut58dfprrGBbd/94hg7ilaic=
-github.com/pelletier/go-toml v1.8.1/go.mod h1:T2/BmBdy8dvIRq1a/8aqjN41wvWlN4lrapLU/GW4pbc=
-github.com/pelletier/go-toml v1.9.3 h1:zeC5b1GviRUyKYd6OJPvBU/mcVDVoL1OhT17FCt5dSQ=
-github.com/pelletier/go-toml v1.9.3/go.mod h1:u1nR/EPcESfeI/szUZKdtJ0xRNbUoANCkoOuaOx1Y+c=
-github.com/pjbgf/sha1cd v0.3.0 h1:4D5XXmUUBUl/xQ6IjCkEAbqXskkq/4O7LmGn0AqMDs4=
-github.com/pjbgf/sha1cd v0.3.0/go.mod h1:nZ1rrWOcGJ5uZgEEVL1VUM9iRQiZvWdbZjkKyFzPPsI=
-github.com/pkg/errors v0.8.0/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0=
-github.com/pkg/errors v0.8.1/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0=
-github.com/pkg/errors v0.9.1 h1:FEBLx1zS214owpjy7qsBeixbURkuhQAwrK5UwLGTwt4=
-github.com/pkg/errors v0.9.1/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0=
-github.com/pkg/sftp v1.10.1/go.mod h1:lYOWFsE0bwd1+KfKJaKeuokY15vzFx25BLbzYYoAxZI=
-github.com/pmezard/go-difflib v1.0.0 h1:4DBwDE0NGyQoBHbLQYPwSUPoCMWR5BEzIk/f1lZbAQM=
-github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=
-github.com/posener/complete v1.1.1/go.mod h1:em0nMJCgc9GFtwrmVmEMR/ZL6WyhyjMBndrE9hABlRI=
-github.com/power-devops/perfstat v0.0.0-20210106213030-5aafc221ea8c h1:ncq/mPwQF4JjgDlrVEn3C11VoGHZN7m8qihwgMEtzYw=
-github.com/power-devops/perfstat v0.0.0-20210106213030-5aafc221ea8c/go.mod h1:OmDBASR4679mdNQnz2pUhc2G8CO2JrUAVFDRBDP/hJE=
-github.com/prometheus/client_golang v0.9.1/go.mod h1:7SWBe2y4D6OKWSNQJUaRYU/AaXPKyh/dDVn+NZz0KFw=
-github.com/prometheus/client_golang v0.9.3/go.mod h1:/TN21ttK/J9q6uSwhBd54HahCDft0ttaMvbicHlPoso=
-github.com/prometheus/client_model v0.0.0-20180712105110-5c3871d89910/go.mod h1:MbSGuTsp3dbXC40dX6PRTWyKYBIrTGTE9sqQNg2J8bo=
-github.com/prometheus/client_model v0.0.0-20190129233127-fd36f4220a90/go.mod h1:xMI15A0UPsDsEKsMN9yxemIoYk6Tm2C1GtYGdfGttqA=
-github.com/prometheus/client_model v0.0.0-20190812154241-14fe0d1b01d4/go.mod h1:xMI15A0UPsDsEKsMN9yxemIoYk6Tm2C1GtYGdfGttqA=
-github.com/prometheus/common v0.0.0-20181113130724-41aa239b4cce/go.mod h1:daVV7qP5qjZbuso7PdcryaAu0sAZbrN9i7WWcTMWvro=
-github.com/prometheus/common v0.4.0/go.mod h1:TNfzLD0ON7rHzMJeJkieUDPYmFC7Snx/y86RQel1bk4=
-github.com/prometheus/procfs v0.0.0-20181005140218-185b4288413d/go.mod h1:c3At6R/oaqEKCNdg8wHV1ftS6bRYblBhIjjI8uT2IGk=
-github.com/prometheus/procfs v0.0.0-20190507164030-5867b95ac084/go.mod h1:TjEm7ze935MbeOT/UhFTIMYKhuLP4wbCsTZCD3I8kEA=
-github.com/prometheus/tsdb v0.7.1/go.mod h1:qhTCs0VvXwvX/y3TZrWD7rabWM+ijKTux40TwIPHuXU=
-github.com/rogpeppe/fastuuid v0.0.0-20150106093220-6724a57986af/go.mod h1:XWv6SoW27p1b0cqNHllgS5HIMJraePCO15w5zCzIWYg=
-github.com/rogpeppe/fastuuid v1.2.0/go.mod h1:jVj6XXZzXRy/MSR5jhDC/2q6DgLz+nrA6LYCDYWNEvQ=
-github.com/rogpeppe/go-internal v1.3.0/go.mod h1:M8bDsm7K2OlrFYOpmOWEs/qY81heoFRclV5y23lUDJ4=
-github.com/rogpeppe/go-internal v1.11.0 h1:cWPaGQEPrBb5/AsnsZesgZZ9yb1OQ+GOISoDNXVBh4M=
-github.com/rogpeppe/go-internal v1.11.0/go.mod h1:ddIwULY96R17DhadqLgMfk9H9tvdUzkipdSkR5nkCZA=
-github.com/rs/xid v1.2.1/go.mod h1:+uKXf+4Djp6Md1KODXJxgGQPKngRmWyn10oCKFzNHOQ=
-github.com/rs/zerolog v1.20.0 h1:38k9hgtUBdxFwE34yS8rTHmHBa4eN16E4DJlv177LNs=
-github.com/rs/zerolog v1.20.0/go.mod h1:IzD0RJ65iWH0w97OQQebJEvTZYvsCUm9WVLWBQrJRjo=
-github.com/russross/blackfriday/v2 v2.0.1/go.mod h1:+Rmxgy9KzJVeS9/2gXHxylqXiyQDYRxCVz55jmeOWTM=
-github.com/russross/blackfriday/v2 v2.1.0/go.mod h1:+Rmxgy9KzJVeS9/2gXHxylqXiyQDYRxCVz55jmeOWTM=
-github.com/ryanuber/columnize v0.0.0-20160712163229-9b3edd62028f/go.mod h1:sm1tb6uqfes/u+d4ooFouqFdy9/2g9QGwK3SQygK0Ts=
-github.com/sean-/seed v0.0.0-20170313163322-e2103e2c3529/go.mod h1:DxrIzT+xaE7yg65j358z/aeFdxmN0P9QXhEzd20vsDc=
-github.com/sebdah/goldie v1.0.0 h1:9GNhIat69MSlz/ndaBg48vl9dF5fI+NBB6kfOxgfkMc=
-github.com/sebdah/goldie v1.0.0/go.mod h1:jXP4hmWywNEwZzhMuv2ccnqTSFpuq8iyQhtQdkkZBH4=
-github.com/secure-systems-lab/go-securesystemslib v0.4.0 h1:b23VGrQhTA8cN2CbBw7/FulN9fTtqYUdS5+Oxzt+DUE=
-github.com/secure-systems-lab/go-securesystemslib v0.4.0/go.mod h1:FGBZgq2tXWICsxWQW1msNf49F0Pf2Op5Htayx335Qbs=
-github.com/sergi/go-diff v1.0.0/go.mod h1:0CfEIISq7TuYL3j771MWULgwwjU+GofnZX9QAmXWZgo=
-github.com/sergi/go-diff v1.1.0/go.mod h1:STckp+ISIX8hZLjrqAeVduY0gWCT9IjLuqbuNXdaHfM=
-github.com/sergi/go-diff v1.2.0 h1:XU+rvMAioB0UC3q1MFrIQy4Vo5/4VsRDQQXHsEya6xQ=
-github.com/sergi/go-diff v1.2.0/go.mod h1:STckp+ISIX8hZLjrqAeVduY0gWCT9IjLuqbuNXdaHfM=
-github.com/shirou/gopsutil/v3 v3.22.8 h1:a4s3hXogo5mE2PfdfJIonDbstO/P+9JszdfhAHSzD9Y=
-github.com/shirou/gopsutil/v3 v3.22.8/go.mod h1:s648gW4IywYzUfE/KjXxUsqrqx/T2xO5VqOXxONeRfI=
-github.com/shurcooL/sanitized_anchor_name v1.0.0/go.mod h1:1NzhyTcUVG4SuEtjjoZeVRXNmyL/1OwPU0+IJeTBvfc=
-github.com/sirupsen/logrus v1.2.0/go.mod h1:LxeOpSwHxABJmUn/MG1IvRgCAasNZTLOkJPxbbu5VWo=
-github.com/sirupsen/logrus v1.4.1/go.mod h1:ni0Sbl8bgC9z8RoU9G6nDWqqs/fq4eDPysMBDgk/93Q=
-github.com/sirupsen/logrus v1.7.0/go.mod h1:yWOB1SBYBC5VeMP7gHvWumXLIWorT60ONWic61uBYv0=
-github.com/sirupsen/logrus v1.9.0 h1:trlNQbNUG3OdDrDil03MCb1H2o9nJ1x4/5LYw7byDE0=
-github.com/sirupsen/logrus v1.9.0/go.mod h1:naHLuLoDiP4jHNo9R0sCBMtWGeIprob74mVsIT4qYEQ=
-github.com/skeema/knownhosts v1.2.1 h1:SHWdIUa82uGZz+F+47k8SY4QhhI291cXCpopT1lK2AQ=
-github.com/skeema/knownhosts v1.2.1/go.mod h1:xYbVRSPxqBZFrdmDyMmsOs+uX1UZC3nTN3ThzgDxUwo=
-github.com/smartystreets/assertions v0.0.0-20180927180507-b2de0cb4f26d/go.mod h1:OnSkiWE9lh6wB0YB77sQom3nweQdgAjqCqsofrRNTgc=
-github.com/smartystreets/assertions v1.0.0 h1:UVQPSSmc3qtTi+zPPkCXvZX9VvW/xT/NsRvKfwY81a8=
-github.com/smartystreets/assertions v1.0.0/go.mod h1:kHHU4qYBaI3q23Pp3VPrmWhuIUrLW/7eUrw0BU5VaoM=
-github.com/smartystreets/goconvey v1.6.4 h1:fv0U8FUIMPNf1L9lnHLvLhgicrIVChEkdzIKYqbNC9s=
-github.com/smartystreets/goconvey v1.6.4/go.mod h1:syvi0/a8iFYH4r/RixwvyeAJjdLS9QV7WQ/tjFTllLA=
-github.com/soheilhy/cmux v0.1.4/go.mod h1:IM3LyeVVIOuxMH7sFAkER9+bJ4dT7Ms6E4xg4kGIyLM=
-github.com/spaolacci/murmur3 v0.0.0-20180118202830-f09979ecbc72/go.mod h1:JwIasOWyU6f++ZhiEuf87xNszmSA2myDM2Kzu9HwQUA=
-github.com/spf13/afero v1.1.2/go.mod h1:j4pytiNVoe2o6bmDsKpLACNPDBIoEAkihy7loJ1B0CQ=
-github.com/spf13/afero v1.4.1/go.mod h1:Ai8FlHk4v/PARR026UzYexafAt9roJ7LcLMAmO6Z93I=
-github.com/spf13/afero v1.6.0 h1:xoax2sJ2DT8S8xA2paPFjDCScCNeWsg75VG0DLRreiY=
-github.com/spf13/afero v1.6.0/go.mod h1:Ai8FlHk4v/PARR026UzYexafAt9roJ7LcLMAmO6Z93I=
-github.com/spf13/cast v1.3.0/go.mod h1:Qx5cxh0v+4UWYiBimWS+eyWzqEqokIECu5etghLkUJE=
-github.com/spf13/cast v1.3.1 h1:nFm6S0SMdyzrzcmThSipiEubIDy8WEXKNZ0UOgiRpng=
-github.com/spf13/cast v1.3.1/go.mod h1:Qx5cxh0v+4UWYiBimWS+eyWzqEqokIECu5etghLkUJE=
-github.com/spf13/cobra v1.1.1/go.mod h1:WnodtKOvamDL/PwE2M4iKs8aMDBZ5Q5klgD3qfVJQMI=
-github.com/spf13/cobra v1.5.0 h1:X+jTBEBqF0bHN+9cSMgmfuvv2VHJ9ezmFNf9Y/XstYU=
-github.com/spf13/cobra v1.5.0/go.mod h1:dWXEIy2H428czQCjInthrTRUg7yKbok+2Qi/yBIJoUM=
-github.com/spf13/jwalterweatherman v1.0.0/go.mod h1:cQK4TGJAtQXfYWX+Ddv3mKDzgVb68N+wFjFa4jdeBTo=
-github.com/spf13/jwalterweatherman v1.1.0 h1:ue6voC5bR5F8YxI5S67j9i582FU4Qvo2bmqnqMYADFk=
-github.com/spf13/jwalterweatherman v1.1.0/go.mod h1:aNWZUN0dPAAO/Ljvb5BEdw96iTZ0EXowPYD95IqWIGo=
-github.com/spf13/pflag v1.0.2/go.mod h1:DYY7MBk1bdzusC3SYhjObp+wFpr4gzcvqqNjLnInEg4=
-github.com/spf13/pflag v1.0.3/go.mod h1:DYY7MBk1bdzusC3SYhjObp+wFpr4gzcvqqNjLnInEg4=
-github.com/spf13/pflag v1.0.5 h1:iy+VFUOCP1a+8yFto/drg2CJ5u0yRoB7fZw3DKv/JXA=
-github.com/spf13/pflag v1.0.5/go.mod h1:McXfInJRrz4CZXVZOBLb0bTZqETkiAhM9Iw0y3An2Bg=
-github.com/spf13/viper v1.7.0/go.mod h1:8WkrPz2fc9jxqZNCJI/76HCieCp4Q8HaLFoCha5qpdg=
-github.com/spf13/viper v1.7.1/go.mod h1:8WkrPz2fc9jxqZNCJI/76HCieCp4Q8HaLFoCha5qpdg=
-github.com/spf13/viper v1.8.1 h1:Kq1fyeebqsBfbjZj4EL7gj2IO0mMaiyjYUWcUsl2O44=
-github.com/spf13/viper v1.8.1/go.mod h1:o0Pch8wJ9BVSWGQMbra6iw0oQ5oktSIBaujf1rJH9Ns=
-github.com/stretchr/objx v0.1.0/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME=
-github.com/stretchr/objx v0.1.1/go.mod
h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME= -github.com/stretchr/objx v0.4.0/go.mod h1:YvHI0jy2hoMjB+UWwv71VJQ9isScKT/TqJzVSSt89Yw= -github.com/stretchr/testify v1.2.1/go.mod h1:a8OnRcib4nhh0OaRAV+Yts87kKdq0PP7pXfy6kDkUVs= -github.com/stretchr/testify v1.2.2/go.mod h1:a8OnRcib4nhh0OaRAV+Yts87kKdq0PP7pXfy6kDkUVs= -github.com/stretchr/testify v1.3.0/go.mod h1:M5WIy9Dh21IEIfnGCwXGc5bZfKNJtfHm1UVUgZn+9EI= -github.com/stretchr/testify v1.4.0/go.mod h1:j7eGeouHqKxXV5pUuKE4zz7dFj8WfuZ+81PSLYec5m4= -github.com/stretchr/testify v1.5.1/go.mod h1:5W2xD1RspED5o8YsWQXVCued0rvSQ+mT+I5cxcmMvtA= -github.com/stretchr/testify v1.6.1/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/h/Wwjteg= -github.com/stretchr/testify v1.7.0/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/h/Wwjteg= -github.com/stretchr/testify v1.7.1/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/h/Wwjteg= -github.com/stretchr/testify v1.8.0/go.mod h1:yNjHg4UonilssWZ8iaSj1OCr/vHnekPRkoO+kdMU+MU= -github.com/stretchr/testify v1.8.4 h1:CcVxjf3Q8PM0mHUKJCdn+eZZtm5yQwehR5yeSVQQcUk= -github.com/stretchr/testify v1.8.4/go.mod h1:sz/lmYIOXD/1dqDmKjjqLyZ2RngseejIcXlSw2iwfAo= -github.com/subosito/gotenv v1.2.0 h1:Slr1R9HxAlEKefgq5jn9U+DnETlIUa6HfgEzj0g5d7s= -github.com/subosito/gotenv v1.2.0/go.mod h1:N0PQaV/YGNqwC0u51sEeR/aUtSLEXKX9iv69rRypqCw= -github.com/theupdateframework/go-tuf v0.5.0 h1:aQ7i9CBw4q9QEZifCaW6G8qGQwoN23XGaZkOA+F50z4= -github.com/theupdateframework/go-tuf v0.5.0/go.mod h1:vAqWV3zEs89byeFsAYoh/Q14vJTgJkHwnnRCWBBBINY= -github.com/tklauser/go-sysconf v0.3.10 h1:IJ1AZGZRWbY8T5Vfk04D9WOA5WSejdflXxP03OUqALw= -github.com/tklauser/go-sysconf v0.3.10/go.mod h1:C8XykCvCb+Gn0oNCWPIlcb0RuglQTYaQ2hGm7jmxEFk= -github.com/tklauser/numcpus v0.4.0 h1:E53Dm1HjH1/R2/aoCtXtPgzmElmn51aOkhCFSuZq//o= -github.com/tklauser/numcpus v0.4.0/go.mod h1:1+UI3pD8NW14VMwdgJNJ1ESk2UnwhAnz5hMwiKKqXCQ= -github.com/tmc/grpc-websocket-proxy v0.0.0-20190109142713-0ad062ec5ee5/go.mod h1:ncp9v5uamzpCO7NfCPTXjqaC+bZgJeR0sMTm6dMHP7U= 
-github.com/trivago/tgo v1.0.7 h1:uaWH/XIy9aWYWpjm2CU3RpcqZXmX2ysQ9/Go+d9gyrM= -github.com/trivago/tgo v1.0.7/go.mod h1:w4dpD+3tzNIIiIfkWWa85w5/B77tlvdZckQ+6PkFnhc= -github.com/ulikunitz/xz v0.5.7/go.mod h1:nbz6k7qbPmH4IRqmfOplQw/tblSgqTqBwxkY0oWt/14= -github.com/ulikunitz/xz v0.5.10 h1:t92gobL9l3HE202wg3rlk19F6X+JOxl9BBrCCMYEYd8= -github.com/ulikunitz/xz v0.5.10/go.mod h1:nbz6k7qbPmH4IRqmfOplQw/tblSgqTqBwxkY0oWt/14= -github.com/vmihailenco/msgpack v3.3.3+incompatible/go.mod h1:fy3FlTQTDXWkZ7Bh6AcGMlsjHatGryHQYUTf1ShIgkk= -github.com/xanzy/ssh-agent v0.2.1/go.mod h1:mLlQY/MoOhWBj+gOGMQkOeiEvkx+8pJSI+0Bx9h2kr4= -github.com/xanzy/ssh-agent v0.3.0/go.mod h1:3s9xbODqPuuhK9JV1R321M/FlMZSBvE5aY6eAcqrDh0= -github.com/xanzy/ssh-agent v0.3.3 h1:+/15pJfg/RsTxqYcX6fHqOXZwwMP+2VyYWJeWM2qQFM= -github.com/xanzy/ssh-agent v0.3.3/go.mod h1:6dzNDKs0J9rVPHPhaGCukekBHKqfl+L3KghI1Bc68Uw= -github.com/xi2/xz v0.0.0-20171230120015-48954b6210f8 h1:nIPpBwaJSVYIxUFsDv3M8ofmx9yWTog9BfvIu0q41lo= -github.com/xi2/xz v0.0.0-20171230120015-48954b6210f8/go.mod h1:HUYIGzjTL3rfEspMxjDjgmT5uz5wzYJKVo23qUhYTos= -github.com/xiang90/probing v0.0.0-20190116061207-43a291ad63a2/go.mod h1:UETIi67q53MR2AWcXfiuqkDkRtnGDLqkBTpCHuJHxtU= -github.com/yuin/goldmark v1.1.25/go.mod h1:3hX8gzYuyVAZsxl0MRgGTJEmQBFcNTphYh9decYSb74= -github.com/yuin/goldmark v1.1.27/go.mod h1:3hX8gzYuyVAZsxl0MRgGTJEmQBFcNTphYh9decYSb74= -github.com/yuin/goldmark v1.1.32/go.mod h1:3hX8gzYuyVAZsxl0MRgGTJEmQBFcNTphYh9decYSb74= -github.com/yuin/goldmark v1.2.1/go.mod h1:3hX8gzYuyVAZsxl0MRgGTJEmQBFcNTphYh9decYSb74= -github.com/yuin/goldmark v1.3.5/go.mod h1:mwnBkeHKe2W/ZEtQ+71ViKU8L12m81fl3OWwC1Zlc8k= -github.com/yuin/goldmark v1.4.13/go.mod h1:6yULJ656Px+3vBD8DxQVa3kxgyrAnzto9xy5taEt/CY= -github.com/yusufpapurcu/wmi v1.2.2 h1:KBNDSne4vP5mbSWnJbO+51IMOXJB67QiYCSBrubbPRg= -github.com/yusufpapurcu/wmi v1.2.2/go.mod h1:SBZ9tNy3G9/m5Oi98Zks0QjeHVDvuK0qfxQmPyzfmi0= -github.com/zclconf/go-cty v1.1.0/go.mod 
h1:xnAOWiHeOqg2nWS62VtQ7pbOu17FtxJNW8RLEih+O3s= -go.etcd.io/bbolt v1.3.2/go.mod h1:IbVyRI1SCnLcuJnV2u8VeU0CEYM7e686BmAb1XKL+uU= -go.etcd.io/etcd/api/v3 v3.5.0/go.mod h1:cbVKeC6lCfl7j/8jBhAK6aIYO9XOjdptoxU/nLQcPvs= -go.etcd.io/etcd/client/pkg/v3 v3.5.0/go.mod h1:IJHfcCEKxYu1Os13ZdwCwIUTUVGYTSAM3YSwc9/Ac1g= -go.etcd.io/etcd/client/v2 v2.305.0/go.mod h1:h9puh54ZTgAKtEbut2oe9P4L/oqKCVB6xsXlzd7alYQ= -go.mozilla.org/pkcs7 v0.0.0-20210826202110-33d05740a352 h1:CCriYyAfq1Br1aIYettdHZTy8mBTIPo7We18TuO/bak= -go.mozilla.org/pkcs7 v0.0.0-20210826202110-33d05740a352/go.mod h1:SNgMg+EgDFwmvSmLRTNKC5fegJjB7v23qTQ0XLGUNHk= -go.opencensus.io v0.21.0/go.mod h1:mSImk1erAIZhrmZN+AvHh14ztQfjbGwt4TtuofqLduU= -go.opencensus.io v0.22.0/go.mod h1:+kGneAE2xo2IficOXnaByMWTGM9T73dGwxeWcUqIpI8= -go.opencensus.io v0.22.1/go.mod h1:Ap50jQcDJrx6rB6VgeeFPtuPIf3wMRvRfrfYDO6+BmA= -go.opencensus.io v0.22.2/go.mod h1:yxeiOL68Rb0Xd1ddK5vPZ/oVn4vY4Ynel7k9FzqtOIw= -go.opencensus.io v0.22.3/go.mod h1:yxeiOL68Rb0Xd1ddK5vPZ/oVn4vY4Ynel7k9FzqtOIw= -go.opencensus.io v0.22.4/go.mod h1:yxeiOL68Rb0Xd1ddK5vPZ/oVn4vY4Ynel7k9FzqtOIw= -go.opencensus.io v0.22.5/go.mod h1:5pWMHQbX5EPX2/62yrJeAkowc+lfs/XD7Uxpq3pI6kk= -go.opencensus.io v0.23.0 h1:gqCw0LfLxScz8irSi8exQc7fyQ0fKQU/qnC/X8+V/1M= -go.opencensus.io v0.23.0/go.mod h1:XItmlyltB5F7CS4xOC1DcqMoFqwtC6OG2xF7mCv7P7E= -go.uber.org/atomic v1.4.0/go.mod h1:gD2HeocX3+yG+ygLZcrzQJaqmWj9AIm7n08wl/qW/PE= -go.uber.org/atomic v1.7.0/go.mod h1:fEN4uk6kAWBTFdckzkM89CLk9XfWZrxpCo0nPH17wJc= -go.uber.org/multierr v1.1.0/go.mod h1:wR5kodmAFQ0UK8QlbwjlSNy0Z68gJhDJUG5sjR94q/0= -go.uber.org/multierr v1.6.0/go.mod h1:cdWPpRnG4AhwMwsgIHip0KRBQjJy5kYEpYjJxpXp9iU= -go.uber.org/zap v1.10.0/go.mod h1:vwi/ZaCAaUcBkycHslxD9B2zi4UTXhF60s6SWpuDF0Q= -go.uber.org/zap v1.17.0/go.mod h1:MXVU+bhUf/A7Xi2HNOnopQOrmycQ5Ih87HtOu4q5SSo= -golang.org/x/crypto v0.0.0-20180904163835-0709b304e793/go.mod h1:6SG95UA2DQfeDnfUPMdvaQW0Q7yPrPDi9nlGo2tz2b4= -golang.org/x/crypto 
v0.0.0-20181029021203-45a5f77698d3/go.mod h1:6SG95UA2DQfeDnfUPMdvaQW0Q7yPrPDi9nlGo2tz2b4= -golang.org/x/crypto v0.0.0-20190219172222-a4c6cb3142f2/go.mod h1:6SG95UA2DQfeDnfUPMdvaQW0Q7yPrPDi9nlGo2tz2b4= -golang.org/x/crypto v0.0.0-20190308221718-c2843e01d9a2/go.mod h1:djNgcEr1/C05ACkg1iLfiJU5Ep61QUkGW8qpdssI0+w= -golang.org/x/crypto v0.0.0-20190426145343-a29dc8fdc734/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI= -golang.org/x/crypto v0.0.0-20190510104115-cbcb75029529/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI= -golang.org/x/crypto v0.0.0-20190605123033-f99c8df09eb5/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI= -golang.org/x/crypto v0.0.0-20190820162420-60c769a6c586/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI= -golang.org/x/crypto v0.0.0-20191011191535-87dc89f01550/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI= -golang.org/x/crypto v0.0.0-20200302210943-78000ba7a073/go.mod h1:LzIPMQfyMNhhGPhUkYOs5KpL4U8rLKemX1yGLhDgUto= -golang.org/x/crypto v0.0.0-20200622213623-75b288015ac9/go.mod h1:LzIPMQfyMNhhGPhUkYOs5KpL4U8rLKemX1yGLhDgUto= -golang.org/x/crypto v0.0.0-20201016220609-9e8e0b390897/go.mod h1:LzIPMQfyMNhhGPhUkYOs5KpL4U8rLKemX1yGLhDgUto= -golang.org/x/crypto v0.0.0-20210921155107-089bfa567519/go.mod h1:GvvjBRRGRdwPK5ydBHafDWAxML/pGHZbMvKqRZ5+Abc= -golang.org/x/crypto v0.0.0-20220622213112-05595931fe9d/go.mod h1:IxCIyHEi3zRg3s0A5j5BB6A9Jmi73HwBIUl50j+osU4= -golang.org/x/crypto v0.3.1-0.20221117191849-2c476679df9a/go.mod h1:hebNnKkNXi2UzZN1eVRvBB7co0a+JxK6XbPiWVs/3J4= -golang.org/x/crypto v0.7.0/go.mod h1:pYwdfH91IfpZVANVyUOhSIPZaFoJGxTFbZhFTx+dXZU= -golang.org/x/crypto v0.17.0 h1:r8bRNjWL3GshPW3gkd+RpvzWrZAwPS49OmTGZ/uhM4k= -golang.org/x/crypto v0.17.0/go.mod h1:gCAAfMLgwOJRpTjQ2zCCt2OcSfYMTeZVSRtQlPC7Nq4= -golang.org/x/exp v0.0.0-20190121172915-509febef88a4/go.mod h1:CJ0aWSM057203Lf6IL+f9T1iT9GByDxfZKAQTCR3kQA= -golang.org/x/exp v0.0.0-20190306152737-a1d7652674e8/go.mod h1:CJ0aWSM057203Lf6IL+f9T1iT9GByDxfZKAQTCR3kQA= 
-golang.org/x/exp v0.0.0-20190510132918-efd6b22b2522/go.mod h1:ZjyILWgesfNpC6sMxTJOJm9Kp84zZh5NQWvqDGG3Qr8= -golang.org/x/exp v0.0.0-20190829153037-c13cbed26979/go.mod h1:86+5VVa7VpoJ4kLfm080zCjGlMRFzhUhsZKEZO7MGek= -golang.org/x/exp v0.0.0-20191030013958-a1ab85dbe136/go.mod h1:JXzH8nQsPlswgeRAPE3MuO9GYsAcnJvJ4vnMwN/5qkY= -golang.org/x/exp v0.0.0-20191129062945-2f5052295587/go.mod h1:2RIsYlXP63K8oxa1u096TMicItID8zy7Y6sNkU49FU4= -golang.org/x/exp v0.0.0-20191227195350-da58074b4299/go.mod h1:2RIsYlXP63K8oxa1u096TMicItID8zy7Y6sNkU49FU4= -golang.org/x/exp v0.0.0-20200119233911-0405dc783f0a/go.mod h1:2RIsYlXP63K8oxa1u096TMicItID8zy7Y6sNkU49FU4= -golang.org/x/exp v0.0.0-20200207192155-f17229e696bd/go.mod h1:J/WKrq2StrnmMY6+EHIKF9dgMWnmCNThgcyBT1FY9mM= -golang.org/x/exp v0.0.0-20200224162631-6cc2880d07d6/go.mod h1:3jZMyOhIsHpP37uCMkUooju7aAi5cS1Q23tOzKc+0MU= -golang.org/x/image v0.0.0-20190227222117-0694c2d4d067/go.mod h1:kZ7UVZpmo3dzQBMxlp+ypCbDeSB+sBbTgSJuh5dn5js= -golang.org/x/image v0.0.0-20190802002840-cff245a6509b/go.mod h1:FeLwcggjj3mMvU+oOTbSwawSJRM1uh48EjtB4UJZlP0= -golang.org/x/lint v0.0.0-20181026193005-c67002cb31c3/go.mod h1:UVdnD1Gm6xHRNCYTkRU2/jEulfH38KcIWyp/GAMgvoE= -golang.org/x/lint v0.0.0-20190227174305-5b3e6a55c961/go.mod h1:wehouNa3lNwaWXcvxsM5YxQ5yQlVC4a0KAMCusXpPoU= -golang.org/x/lint v0.0.0-20190301231843-5614ed5bae6f/go.mod h1:UVdnD1Gm6xHRNCYTkRU2/jEulfH38KcIWyp/GAMgvoE= -golang.org/x/lint v0.0.0-20190313153728-d0100b6bd8b3/go.mod h1:6SW0HCj/g11FgYtHlgUYUwCkIfeOF89ocIRzGO/8vkc= -golang.org/x/lint v0.0.0-20190409202823-959b441ac422/go.mod h1:6SW0HCj/g11FgYtHlgUYUwCkIfeOF89ocIRzGO/8vkc= -golang.org/x/lint v0.0.0-20190909230951-414d861bb4ac/go.mod h1:6SW0HCj/g11FgYtHlgUYUwCkIfeOF89ocIRzGO/8vkc= -golang.org/x/lint v0.0.0-20190930215403-16217165b5de/go.mod h1:6SW0HCj/g11FgYtHlgUYUwCkIfeOF89ocIRzGO/8vkc= -golang.org/x/lint v0.0.0-20191125180803-fdd1cda4f05f/go.mod h1:5qLYkcX4OjUUV8bRuDixDT3tpyyb+LUpUlRWLxfhWrs= -golang.org/x/lint 
v0.0.0-20200130185559-910be7a94367/go.mod h1:3xt1FjdF8hUf6vQPIChWIBhFzV8gjjsPE/fR3IyQdNY= -golang.org/x/lint v0.0.0-20200302205851-738671d3881b/go.mod h1:3xt1FjdF8hUf6vQPIChWIBhFzV8gjjsPE/fR3IyQdNY= -golang.org/x/lint v0.0.0-20201208152925-83fdc39ff7b5/go.mod h1:3xt1FjdF8hUf6vQPIChWIBhFzV8gjjsPE/fR3IyQdNY= -golang.org/x/lint v0.0.0-20210508222113-6edffad5e616/go.mod h1:3xt1FjdF8hUf6vQPIChWIBhFzV8gjjsPE/fR3IyQdNY= -golang.org/x/mobile v0.0.0-20190312151609-d3739f865fa6/go.mod h1:z+o9i4GpDbdi3rU15maQ/Ox0txvL9dWGYEHz965HBQE= -golang.org/x/mobile v0.0.0-20190719004257-d2bd2a29d028/go.mod h1:E/iHnbuqvinMTCcRqshq8CkpyQDoeVncDDYHnLhea+o= -golang.org/x/mod v0.0.0-20190513183733-4bf6d317e70e/go.mod h1:mXi4GBBbnImb6dmsKGUJ2LatrhH/nqhxcFungHvyanc= -golang.org/x/mod v0.1.0/go.mod h1:0QHyrYULN0/3qlju5TqG8bIK38QM8yzMo5ekMj3DlcY= -golang.org/x/mod v0.1.1-0.20191105210325-c90efee705ee/go.mod h1:QqPTAvyqsEbceGzBzNggFXnrqF1CaUcvgkdR5Ot7KZg= -golang.org/x/mod v0.1.1-0.20191107180719-034126e5016b/go.mod h1:QqPTAvyqsEbceGzBzNggFXnrqF1CaUcvgkdR5Ot7KZg= -golang.org/x/mod v0.2.0/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA= -golang.org/x/mod v0.3.0/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA= -golang.org/x/mod v0.4.0/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA= -golang.org/x/mod v0.4.1/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA= -golang.org/x/mod v0.4.2/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA= -golang.org/x/mod v0.6.0-dev.0.20220419223038-86c51ed26bb4/go.mod h1:jJ57K6gSWd91VN4djpZkiMVwK6gcyfeH4XE8wZrZaV4= -golang.org/x/mod v0.8.0/go.mod h1:iBbtSCu2XBx23ZKBPSOrRkjjQPZFPuis4dIYUhu/chs= -golang.org/x/mod v0.12.0 h1:rmsUpXtvNzj340zd98LZ4KntptpfRHwpFOHG188oHXc= -golang.org/x/mod v0.12.0/go.mod h1:iBbtSCu2XBx23ZKBPSOrRkjjQPZFPuis4dIYUhu/chs= -golang.org/x/net v0.0.0-20180724234803-3673e40ba225/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4= -golang.org/x/net v0.0.0-20180811021610-c39426892332/go.mod 
h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4= -golang.org/x/net v0.0.0-20180826012351-8a410e7b638d/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4= -golang.org/x/net v0.0.0-20181023162649-9b4f9f5ad519/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4= -golang.org/x/net v0.0.0-20181114220301-adae6a3d119a/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4= -golang.org/x/net v0.0.0-20181201002055-351d144fa1fc/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4= -golang.org/x/net v0.0.0-20181220203305-927f97764cc3/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4= -golang.org/x/net v0.0.0-20190108225652-1e06a53dbb7e/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4= -golang.org/x/net v0.0.0-20190213061140-3a22650c66bd/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4= -golang.org/x/net v0.0.0-20190311183353-d8887717615a/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg= -golang.org/x/net v0.0.0-20190404232315-eb5bcb51f2a3/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg= -golang.org/x/net v0.0.0-20190501004415-9ce7a6920f09/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg= -golang.org/x/net v0.0.0-20190503192946-f4e77d36d62c/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg= -golang.org/x/net v0.0.0-20190603091049-60506f45cf65/go.mod h1:HSz+uSET+XFnRR8LxR5pz3Of3rY3CfYBVs4xY44aLks= -golang.org/x/net v0.0.0-20190620200207-3b0461eec859/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s= -golang.org/x/net v0.0.0-20190628185345-da137c7871d7/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s= -golang.org/x/net v0.0.0-20190724013045-ca1201d0de80/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s= -golang.org/x/net v0.0.0-20190827160401-ba9fcec4b297/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s= -golang.org/x/net v0.0.0-20191009170851-d66e71096ffb/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s= -golang.org/x/net v0.0.0-20191209160850-c0dbc17a3553/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s= 
-golang.org/x/net v0.0.0-20200114155413-6afb5195e5aa/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s= -golang.org/x/net v0.0.0-20200202094626-16171245cfb2/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s= -golang.org/x/net v0.0.0-20200222125558-5a598a2470a0/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s= -golang.org/x/net v0.0.0-20200226121028-0de0cce0169b/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s= -golang.org/x/net v0.0.0-20200301022130-244492dfa37a/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s= -golang.org/x/net v0.0.0-20200324143707-d3edc9973b7e/go.mod h1:qpuaurCH72eLCgpAm/N6yyVIVM9cpaDIP3A8BGJEC5A= -golang.org/x/net v0.0.0-20200501053045-e0ff5e5a1de5/go.mod h1:qpuaurCH72eLCgpAm/N6yyVIVM9cpaDIP3A8BGJEC5A= -golang.org/x/net v0.0.0-20200506145744-7e3656a0809f/go.mod h1:qpuaurCH72eLCgpAm/N6yyVIVM9cpaDIP3A8BGJEC5A= -golang.org/x/net v0.0.0-20200513185701-a91f0712d120/go.mod h1:qpuaurCH72eLCgpAm/N6yyVIVM9cpaDIP3A8BGJEC5A= -golang.org/x/net v0.0.0-20200520182314-0ba52f642ac2/go.mod h1:qpuaurCH72eLCgpAm/N6yyVIVM9cpaDIP3A8BGJEC5A= -golang.org/x/net v0.0.0-20200625001655-4c5254603344/go.mod h1:/O7V0waA8r7cgGh81Ro3o1hOxt32SMVPicZroKQ2sZA= -golang.org/x/net v0.0.0-20200707034311-ab3426394381/go.mod h1:/O7V0waA8r7cgGh81Ro3o1hOxt32SMVPicZroKQ2sZA= -golang.org/x/net v0.0.0-20200822124328-c89045814202/go.mod h1:/O7V0waA8r7cgGh81Ro3o1hOxt32SMVPicZroKQ2sZA= -golang.org/x/net v0.0.0-20201021035429-f5854403a974/go.mod h1:sp8m0HH+o8qH0wwXwYZr8TS3Oi6o0r6Gce1SSxlDquU= -golang.org/x/net v0.0.0-20201031054903-ff519b6c9102/go.mod h1:sp8m0HH+o8qH0wwXwYZr8TS3Oi6o0r6Gce1SSxlDquU= -golang.org/x/net v0.0.0-20201110031124-69a78807bb2b/go.mod h1:sp8m0HH+o8qH0wwXwYZr8TS3Oi6o0r6Gce1SSxlDquU= -golang.org/x/net v0.0.0-20201209123823-ac852fbbde11/go.mod h1:m0MpNAwzfU5UDzcl9v0D8zg8gWTRqZa9RBIspLL5mdg= -golang.org/x/net v0.0.0-20210119194325-5f4716e94777/go.mod h1:m0MpNAwzfU5UDzcl9v0D8zg8gWTRqZa9RBIspLL5mdg= -golang.org/x/net 
v0.0.0-20210226172049-e18ecbb05110/go.mod h1:m0MpNAwzfU5UDzcl9v0D8zg8gWTRqZa9RBIspLL5mdg= -golang.org/x/net v0.0.0-20210316092652-d523dce5a7f4/go.mod h1:RBQZq4jEuRlivfhVLdyRGr576XBO4/greRjx4P4O3yc= -golang.org/x/net v0.0.0-20210405180319-a5a99cb37ef4/go.mod h1:p54w0d4576C0XHj96bSt6lcn1PtDYWL6XObtHCRCNQM= -golang.org/x/net v0.0.0-20211112202133-69e39bad7dc2/go.mod h1:9nx3DQGgdP8bBQD5qxJ1jj9UTztislL4KSBs9R2vV5Y= -golang.org/x/net v0.0.0-20220127200216-cd36cc0744dd/go.mod h1:CfG3xpIq0wQ8r1q4Su4UZFWDARRcnwPjda9FqA0JpMk= -golang.org/x/net v0.0.0-20220722155237-a158d28d115b/go.mod h1:XRhObCWvk6IyKnWLug+ECip1KBveYUHfp+8e9klMJ9c= -golang.org/x/net v0.2.0/go.mod h1:KqCZLdyyvdV855qA2rE3GC2aiw5xGR5TEjj8smXukLY= -golang.org/x/net v0.6.0/go.mod h1:2Tu9+aMcznHK/AK1HMvgo6xiTLG5rD5rZLDS+rp2Bjs= -golang.org/x/net v0.8.0/go.mod h1:QVkue5JL9kW//ek3r6jTKnTFis1tRmNAW2P1shuFdJc= -golang.org/x/net v0.19.0 h1:zTwKpTd2XuCqf8huc7Fo2iSy+4RHPd10s4KzeTnVr1c= -golang.org/x/net v0.19.0/go.mod h1:CfAk/cbD4CthTvqiEl8NpboMuiuOYsAr/7NOjZJtv1U= -golang.org/x/oauth2 v0.0.0-20180821212333-d2e6202438be/go.mod h1:N/0e6XlmueqKjAGxoOufVs8QHGRruUQn6yWY3a++T0U= -golang.org/x/oauth2 v0.0.0-20190226205417-e64efc72b421/go.mod h1:gOpvHmFTYa4IltrdGE7lF6nIHvwfUNPOp7c8zoXwtLw= -golang.org/x/oauth2 v0.0.0-20190604053449-0f29369cfe45/go.mod h1:gOpvHmFTYa4IltrdGE7lF6nIHvwfUNPOp7c8zoXwtLw= -golang.org/x/oauth2 v0.0.0-20191202225959-858c2ad4c8b6/go.mod h1:gOpvHmFTYa4IltrdGE7lF6nIHvwfUNPOp7c8zoXwtLw= -golang.org/x/oauth2 v0.0.0-20200107190931-bf48bf16ab8d/go.mod h1:gOpvHmFTYa4IltrdGE7lF6nIHvwfUNPOp7c8zoXwtLw= -golang.org/x/oauth2 v0.0.0-20200902213428-5d25da1a8d43/go.mod h1:KelEdhl1UZF7XfJ4dDtk6s++YSgaE7mD/BuKKDLBl4A= -golang.org/x/oauth2 v0.0.0-20201109201403-9fd604954f58/go.mod h1:KelEdhl1UZF7XfJ4dDtk6s++YSgaE7mD/BuKKDLBl4A= -golang.org/x/oauth2 v0.0.0-20201208152858-08078c50e5b5/go.mod h1:KelEdhl1UZF7XfJ4dDtk6s++YSgaE7mD/BuKKDLBl4A= -golang.org/x/oauth2 v0.0.0-20210218202405-ba52d332ba99/go.mod 
h1:KelEdhl1UZF7XfJ4dDtk6s++YSgaE7mD/BuKKDLBl4A= -golang.org/x/oauth2 v0.0.0-20210220000619-9bb904979d93/go.mod h1:KelEdhl1UZF7XfJ4dDtk6s++YSgaE7mD/BuKKDLBl4A= -golang.org/x/oauth2 v0.0.0-20210313182246-cd4f82c27b84/go.mod h1:KelEdhl1UZF7XfJ4dDtk6s++YSgaE7mD/BuKKDLBl4A= -golang.org/x/oauth2 v0.0.0-20210402161424-2e8d93401602/go.mod h1:KelEdhl1UZF7XfJ4dDtk6s++YSgaE7mD/BuKKDLBl4A= -golang.org/x/oauth2 v0.0.0-20220223155221-ee480838109b h1:clP8eMhB30EHdc0bd2Twtq6kgU7yl5ub2cQLSdrv1Dg= -golang.org/x/oauth2 v0.0.0-20220223155221-ee480838109b/go.mod h1:DAh4E804XQdzx2j+YRIaUnCqCV2RuMz24cGBJ5QYIrc= -golang.org/x/sync v0.0.0-20180314180146-1d60e4601c6f/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM= -golang.org/x/sync v0.0.0-20181108010431-42b317875d0f/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM= -golang.org/x/sync v0.0.0-20181221193216-37e7f081c4d4/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM= -golang.org/x/sync v0.0.0-20190227155943-e225da77a7e6/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM= -golang.org/x/sync v0.0.0-20190423024810-112230192c58/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM= -golang.org/x/sync v0.0.0-20190911185100-cd5d95a43a6e/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM= -golang.org/x/sync v0.0.0-20200317015054-43a5402ce75a/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM= -golang.org/x/sync v0.0.0-20200625203802-6e8e738ad208/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM= -golang.org/x/sync v0.0.0-20201020160332-67f06af15bc9/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM= -golang.org/x/sync v0.0.0-20201207232520-09787c993a3a/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM= -golang.org/x/sync v0.0.0-20210220032951-036812b2e83c/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM= -golang.org/x/sync v0.0.0-20220722155255-886fb9371eb4/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM= -golang.org/x/sync v0.1.0/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM= 
-golang.org/x/sync v0.3.0 h1:ftCYgMx6zT/asHUrPw8BLLscYtGznsLAnjq5RH9P66E= -golang.org/x/sync v0.3.0/go.mod h1:FU7BRWz2tNW+3quACPkgCx/L+uEAv1htQ0V83Z9Rj+Y= -golang.org/x/sys v0.0.0-20180823144017-11551d06cbcc/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY= -golang.org/x/sys v0.0.0-20180830151530-49385e6e1522/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY= -golang.org/x/sys v0.0.0-20180905080454-ebe1bf3edb33/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY= -golang.org/x/sys v0.0.0-20181026203630-95b1ffbd15a5/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY= -golang.org/x/sys v0.0.0-20181107165924-66b7b1311ac8/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY= -golang.org/x/sys v0.0.0-20181116152217-5ac8a444bdc5/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY= -golang.org/x/sys v0.0.0-20190215142949-d0b11bdaac8a/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY= -golang.org/x/sys v0.0.0-20190221075227-b4e8571b14e0/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY= -golang.org/x/sys v0.0.0-20190222072716-a9d3bda3a223/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY= -golang.org/x/sys v0.0.0-20190312061237-fead79001313/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= -golang.org/x/sys v0.0.0-20190412213103-97732733099d/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= -golang.org/x/sys v0.0.0-20190502145724-3ef323f4f1fd/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= -golang.org/x/sys v0.0.0-20190502175342-a43fa875dd82/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= -golang.org/x/sys v0.0.0-20190507160741-ecd444e8653b/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= -golang.org/x/sys v0.0.0-20190529164535-6a60838ec259/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= -golang.org/x/sys v0.0.0-20190606165138-5da285871e9c/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= -golang.org/x/sys v0.0.0-20190624142023-c5567b49c5d0/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= 
-golang.org/x/sys v0.0.0-20190726091711-fc99dfbffb4e/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= -golang.org/x/sys v0.0.0-20190904154756-749cb33beabd/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= -golang.org/x/sys v0.0.0-20190916202348-b4ddaad3f8a3/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= -golang.org/x/sys v0.0.0-20191001151750-bb3f8db39f24/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= -golang.org/x/sys v0.0.0-20191005200804-aed5e4c7ecf9/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= -golang.org/x/sys v0.0.0-20191008105621-543471e840be/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= -golang.org/x/sys v0.0.0-20191026070338-33540a1f6037/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= -golang.org/x/sys v0.0.0-20191204072324-ce4227a45e2e/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= -golang.org/x/sys v0.0.0-20191228213918-04cbcbbfeed8/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= -golang.org/x/sys v0.0.0-20200113162924-86b910548bc1/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= -golang.org/x/sys v0.0.0-20200116001909-b77594299b42/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= -golang.org/x/sys v0.0.0-20200122134326-e047566fdf82/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= -golang.org/x/sys v0.0.0-20200202164722-d101bd2416d5/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= -golang.org/x/sys v0.0.0-20200212091648-12a6c2dcc1e4/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= -golang.org/x/sys v0.0.0-20200223170610-d5e6a3e2c0ae/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= -golang.org/x/sys v0.0.0-20200302150141-5c8b2ff67527/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= -golang.org/x/sys v0.0.0-20200323222414-85ca7c5b95cd/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= -golang.org/x/sys v0.0.0-20200331124033-c3d80250170d/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= -golang.org/x/sys 
v0.0.0-20200501052902-10377860bb8e/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= -golang.org/x/sys v0.0.0-20200511232937-7e40ca221e25/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= -golang.org/x/sys v0.0.0-20200515095857-1151b9dac4a9/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= -golang.org/x/sys v0.0.0-20200523222454-059865788121/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= -golang.org/x/sys v0.0.0-20200803210538-64077c9b5642/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= -golang.org/x/sys v0.0.0-20200905004654-be1d3432aa8f/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= -golang.org/x/sys v0.0.0-20200930185726-fdedc70b468f/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= -golang.org/x/sys v0.0.0-20201109165425-215b40eba54c/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= -golang.org/x/sys v0.0.0-20201119102817-f84b799fce68/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= -golang.org/x/sys v0.0.0-20201201145000-ef89a241ccb3/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= -golang.org/x/sys v0.0.0-20201204225414-ed752295db88/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= -golang.org/x/sys v0.0.0-20210104204734-6f8348627aad/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= -golang.org/x/sys v0.0.0-20210119212857-b64e53b001e4/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= -golang.org/x/sys v0.0.0-20210124154548-22da62e12c0c/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= -golang.org/x/sys v0.0.0-20210220050731-9a76102bfb43/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= -golang.org/x/sys v0.0.0-20210305230114-8fe3ee5dd75b/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= -golang.org/x/sys v0.0.0-20210315160823-c6e025ad8005/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= -golang.org/x/sys v0.0.0-20210320140829-1e4c9ba3b0c4/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= -golang.org/x/sys v0.0.0-20210330210617-4fbd30eecc44/go.mod 
h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= -golang.org/x/sys v0.0.0-20210403161142-5e06dd20ab57/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= -golang.org/x/sys v0.0.0-20210423082822-04245dca01da/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= -golang.org/x/sys v0.0.0-20210510120138-977fb7262007/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= -golang.org/x/sys v0.0.0-20210615035016-665e8c7367d1/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= -golang.org/x/sys v0.0.0-20210630005230-0f9fa26af87c/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= -golang.org/x/sys v0.0.0-20210927094055-39ccf1dd6fa6/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= -golang.org/x/sys v0.0.0-20211216021012-1d35b9e2eb4e/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= -golang.org/x/sys v0.0.0-20220128215802-99c3d69c2c27/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= -golang.org/x/sys v0.0.0-20220330033206-e17cdc41300f/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= -golang.org/x/sys v0.0.0-20220412211240-33da011f77ad/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= -golang.org/x/sys v0.0.0-20220520151302-bc2c85ada10a/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= -golang.org/x/sys v0.0.0-20220715151400-c0bba94af5f8/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= -golang.org/x/sys v0.0.0-20220722155257-8c9f86f7a55f/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= -golang.org/x/sys v0.2.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= -golang.org/x/sys v0.3.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= -golang.org/x/sys v0.5.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= -golang.org/x/sys v0.6.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= -golang.org/x/sys v0.15.0 h1:h48lPFYpsTvQJZF4EKyI4aLHaev3CxivZmv7yZig9pc= -golang.org/x/sys v0.15.0/go.mod h1:/VUhepiaJMQUp4+oa/7Zr1D23ma6VTLIYjOOTFZPUcA= -golang.org/x/term v0.0.0-20201126162022-7de9c90e9dd1/go.mod 
h1:bj7SfCRtBDWHUb9snDiAeCFNEtKQo2Wmx5Cou7ajbmo= -golang.org/x/term v0.0.0-20210220032956-6a3ed077a48d/go.mod h1:bj7SfCRtBDWHUb9snDiAeCFNEtKQo2Wmx5Cou7ajbmo= -golang.org/x/term v0.0.0-20210927222741-03fcf44c2211/go.mod h1:jbD1KX2456YbFQfuXm/mYQcufACuNUgVhRMnK/tPxf8= -golang.org/x/term v0.2.0/go.mod h1:TVmDHMZPmdnySmBfhjOoOdhjzdE1h4u1VwSiw2l1Nuc= -golang.org/x/term v0.5.0/go.mod h1:jMB1sMXY+tzblOD4FWmEbocvup2/aLOaQEp7JmGp78k= -golang.org/x/term v0.6.0/go.mod h1:m6U89DPEgQRMq3DNkDClhWw02AUbt2daBVO4cn4Hv9U= -golang.org/x/term v0.15.0 h1:y/Oo/a/q3IXu26lQgl04j/gjuBDOBlx7X6Om1j2CPW4= -golang.org/x/term v0.15.0/go.mod h1:BDl952bC7+uMoWR75FIrCDx79TPU9oHkTZ9yRbYOrX0= -golang.org/x/text v0.0.0-20170915032832-14c0d48ead0c/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ= -golang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ= -golang.org/x/text v0.3.1-0.20180807135948-17ff2d5776d2/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ= -golang.org/x/text v0.3.2/go.mod h1:bEr9sfX3Q8Zfm5fL9x+3itogRgK3+ptLWKqgva+5dAk= -golang.org/x/text v0.3.3/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ= -golang.org/x/text v0.3.4/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ= -golang.org/x/text v0.3.5/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ= -golang.org/x/text v0.3.6/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ= -golang.org/x/text v0.3.7/go.mod h1:u+2+/6zg+i71rQMx5EYifcz6MCKuco9NR6JIITiCfzQ= -golang.org/x/text v0.4.0/go.mod h1:mrYo+phRRbMaCq/xk9113O4dZlRixOauAjOtrjsXDZ8= -golang.org/x/text v0.7.0/go.mod h1:mrYo+phRRbMaCq/xk9113O4dZlRixOauAjOtrjsXDZ8= -golang.org/x/text v0.8.0/go.mod h1:e1OnstbJyHTd6l/uOt8jFFHp6TRDWZR/bV3emEE/zU8= -golang.org/x/text v0.14.0 h1:ScX5w1eTa3QqT8oi6+ziP7dTV1S2+ALU0bI+0zXKWiQ= -golang.org/x/text v0.14.0/go.mod h1:18ZOQIKpY8NJVqYksKHtTdi31H5itFRjB5/qKTNYzSU= -golang.org/x/time v0.0.0-20181108054448-85acf8d2951c/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ= -golang.org/x/time 
v0.0.0-20190308202827-9d24e82272b4/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ= -golang.org/x/time v0.0.0-20191024005414-555d28b269f0/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ= -golang.org/x/tools v0.0.0-20180221164845-07fd8470d635/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ= -golang.org/x/tools v0.0.0-20180917221912-90fa682c2a6e/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ= -golang.org/x/tools v0.0.0-20190114222345-bf090417da8b/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ= -golang.org/x/tools v0.0.0-20190226205152-f727befe758c/go.mod h1:9Yl7xja0Znq3iFh3HoIrodX9oNMXvdceNzlUR8zjMvY= -golang.org/x/tools v0.0.0-20190311212946-11955173bddd/go.mod h1:LCzVGOaR6xXOjkQ3onu1FJEFr0SW1gC7cKk1uF8kGRs= -golang.org/x/tools v0.0.0-20190312151545-0bb0c0a6e846/go.mod h1:LCzVGOaR6xXOjkQ3onu1FJEFr0SW1gC7cKk1uF8kGRs= -golang.org/x/tools v0.0.0-20190312170243-e65039ee4138/go.mod h1:LCzVGOaR6xXOjkQ3onu1FJEFr0SW1gC7cKk1uF8kGRs= -golang.org/x/tools v0.0.0-20190328211700-ab21143f2384/go.mod h1:LCzVGOaR6xXOjkQ3onu1FJEFr0SW1gC7cKk1uF8kGRs= -golang.org/x/tools v0.0.0-20190425150028-36563e24a262/go.mod h1:RgjU9mgBXZiqYHBnxXauZ1Gv1EHHAz9KjViQ78xBX0Q= -golang.org/x/tools v0.0.0-20190506145303-2d16b83fe98c/go.mod h1:RgjU9mgBXZiqYHBnxXauZ1Gv1EHHAz9KjViQ78xBX0Q= -golang.org/x/tools v0.0.0-20190524140312-2c0ae7006135/go.mod h1:RgjU9mgBXZiqYHBnxXauZ1Gv1EHHAz9KjViQ78xBX0Q= -golang.org/x/tools v0.0.0-20190606124116-d0a3d012864b/go.mod h1:/rFqwRUd4F7ZHNgwSSTFct+R/Kf4OFW1sUzUTQQTgfc= -golang.org/x/tools v0.0.0-20190621195816-6e04913cbbac/go.mod h1:/rFqwRUd4F7ZHNgwSSTFct+R/Kf4OFW1sUzUTQQTgfc= -golang.org/x/tools v0.0.0-20190628153133-6cdbf07be9d0/go.mod h1:/rFqwRUd4F7ZHNgwSSTFct+R/Kf4OFW1sUzUTQQTgfc= -golang.org/x/tools v0.0.0-20190816200558-6889da9d5479/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo= -golang.org/x/tools v0.0.0-20190828213141-aed303cbaa74/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo= -golang.org/x/tools 
v0.0.0-20190911174233-4f2ddba30aff/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo= -golang.org/x/tools v0.0.0-20191012152004-8de300cfc20a/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo= -golang.org/x/tools v0.0.0-20191112195655-aa38f8e97acc/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo= -golang.org/x/tools v0.0.0-20191113191852-77e3bb0ad9e7/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo= -golang.org/x/tools v0.0.0-20191115202509-3a792d9c32b2/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo= -golang.org/x/tools v0.0.0-20191119224855-298f0cb1881e/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo= -golang.org/x/tools v0.0.0-20191125144606-a911d9008d1f/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo= -golang.org/x/tools v0.0.0-20191130070609-6e064ea0cf2d/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo= -golang.org/x/tools v0.0.0-20191216173652-a0e659d51361/go.mod h1:TB2adYChydJhpapKDTa4BR/hXlZSLoq2Wpct/0txZ28= -golang.org/x/tools v0.0.0-20191227053925-7b8e75db28f4/go.mod h1:TB2adYChydJhpapKDTa4BR/hXlZSLoq2Wpct/0txZ28= -golang.org/x/tools v0.0.0-20200117161641-43d50277825c/go.mod h1:TB2adYChydJhpapKDTa4BR/hXlZSLoq2Wpct/0txZ28= -golang.org/x/tools v0.0.0-20200122220014-bf1340f18c4a/go.mod h1:TB2adYChydJhpapKDTa4BR/hXlZSLoq2Wpct/0txZ28= -golang.org/x/tools v0.0.0-20200130002326-2f3ba24bd6e7/go.mod h1:TB2adYChydJhpapKDTa4BR/hXlZSLoq2Wpct/0txZ28= -golang.org/x/tools v0.0.0-20200204074204-1cc6d1ef6c74/go.mod h1:TB2adYChydJhpapKDTa4BR/hXlZSLoq2Wpct/0txZ28= -golang.org/x/tools v0.0.0-20200207183749-b753a1ba74fa/go.mod h1:TB2adYChydJhpapKDTa4BR/hXlZSLoq2Wpct/0txZ28= -golang.org/x/tools v0.0.0-20200212150539-ea181f53ac56/go.mod h1:TB2adYChydJhpapKDTa4BR/hXlZSLoq2Wpct/0txZ28= -golang.org/x/tools v0.0.0-20200224181240-023911ca70b2/go.mod h1:TB2adYChydJhpapKDTa4BR/hXlZSLoq2Wpct/0txZ28= -golang.org/x/tools v0.0.0-20200227222343-706bc42d1f0d/go.mod h1:TB2adYChydJhpapKDTa4BR/hXlZSLoq2Wpct/0txZ28= -golang.org/x/tools 
v0.0.0-20200304193943-95d2e580d8eb/go.mod h1:o4KQGtdN14AW+yjsvvwRTJJuXz8XRtIHtEnmAXLyFUw= -golang.org/x/tools v0.0.0-20200312045724-11d5b4c81c7d/go.mod h1:o4KQGtdN14AW+yjsvvwRTJJuXz8XRtIHtEnmAXLyFUw= -golang.org/x/tools v0.0.0-20200331025713-a30bf2db82d4/go.mod h1:Sl4aGygMT6LrqrWclx+PTx3U+LnKx/seiNR+3G19Ar8= -golang.org/x/tools v0.0.0-20200501065659-ab2804fb9c9d/go.mod h1:EkVYQZoAsY45+roYkvgYkIh4xh/qjgUK9TdY2XT94GE= -golang.org/x/tools v0.0.0-20200512131952-2bc93b1c0c88/go.mod h1:EkVYQZoAsY45+roYkvgYkIh4xh/qjgUK9TdY2XT94GE= -golang.org/x/tools v0.0.0-20200515010526-7d3b6ebf133d/go.mod h1:EkVYQZoAsY45+roYkvgYkIh4xh/qjgUK9TdY2XT94GE= -golang.org/x/tools v0.0.0-20200618134242-20370b0cb4b2/go.mod h1:EkVYQZoAsY45+roYkvgYkIh4xh/qjgUK9TdY2XT94GE= -golang.org/x/tools v0.0.0-20200619180055-7c47624df98f/go.mod h1:EkVYQZoAsY45+roYkvgYkIh4xh/qjgUK9TdY2XT94GE= -golang.org/x/tools v0.0.0-20200729194436-6467de6f59a7/go.mod h1:njjCfa9FT2d7l9Bc6FUM5FLjQPp3cFF28FI3qnDFljA= -golang.org/x/tools v0.0.0-20200804011535-6c149bb5ef0d/go.mod h1:njjCfa9FT2d7l9Bc6FUM5FLjQPp3cFF28FI3qnDFljA= -golang.org/x/tools v0.0.0-20200825202427-b303f430e36d/go.mod h1:njjCfa9FT2d7l9Bc6FUM5FLjQPp3cFF28FI3qnDFljA= -golang.org/x/tools v0.0.0-20200904185747-39188db58858/go.mod h1:Cj7w3i3Rnn0Xh82ur9kSqwfTHTeVxaDqrfMjpcNT6bE= -golang.org/x/tools v0.0.0-20201110124207-079ba7bd75cd/go.mod h1:emZCQorbCU4vsT4fOWvOPXz4eW1wZW4PmDk9uLelYpA= -golang.org/x/tools v0.0.0-20201201161351-ac6f37ff4c2a/go.mod h1:emZCQorbCU4vsT4fOWvOPXz4eW1wZW4PmDk9uLelYpA= -golang.org/x/tools v0.0.0-20201208233053-a543418bbed2/go.mod h1:emZCQorbCU4vsT4fOWvOPXz4eW1wZW4PmDk9uLelYpA= -golang.org/x/tools v0.0.0-20210105154028-b0ab187a4818/go.mod h1:emZCQorbCU4vsT4fOWvOPXz4eW1wZW4PmDk9uLelYpA= -golang.org/x/tools v0.0.0-20210106214847-113979e3529a/go.mod h1:emZCQorbCU4vsT4fOWvOPXz4eW1wZW4PmDk9uLelYpA= -golang.org/x/tools v0.1.0/go.mod h1:xkSsbof2nBLbhDlRMhhhyNLN/zl3eTqcnHD5viDpcZ0= -golang.org/x/tools v0.1.1/go.mod 
h1:o0xws9oXOQQZyjljx8fwUC0k7L1pTE6eaCbjGeHmOkk= -golang.org/x/tools v0.1.2/go.mod h1:o0xws9oXOQQZyjljx8fwUC0k7L1pTE6eaCbjGeHmOkk= -golang.org/x/tools v0.1.12/go.mod h1:hNGJHUnrk76NpqgfD5Aqm5Crs+Hm0VOH/i9J2+nxYbc= -golang.org/x/tools v0.6.0/go.mod h1:Xwgl3UAJ/d3gWutnCtw505GrjyAbvKui8lOU390QaIU= -golang.org/x/tools v0.13.0 h1:Iey4qkscZuv0VvIt8E0neZjtPVQFSc870HQ448QgEmQ= -golang.org/x/tools v0.13.0/go.mod h1:HvlwmtVNQAhOuCjW7xxvovg8wbNq7LwfXh/k7wXUl58= -golang.org/x/xerrors v0.0.0-20190717185122-a985d3407aa7/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0= -golang.org/x/xerrors v0.0.0-20191011141410-1b5146add898/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0= -golang.org/x/xerrors v0.0.0-20191204190536-9bdfabe68543/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0= -golang.org/x/xerrors v0.0.0-20200804184101-5ec99f83aff1/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0= -google.golang.org/api v0.4.0/go.mod h1:8k5glujaEP+g9n7WNsDg8QP6cUVNI86fCNMcbazEtwE= -google.golang.org/api v0.7.0/go.mod h1:WtwebWUNSVBH/HAw79HIFXZNqEvBhG+Ra+ax0hx3E3M= -google.golang.org/api v0.8.0/go.mod h1:o4eAsZoiT+ibD93RtjEohWalFOjRDx6CVaqeizhEnKg= -google.golang.org/api v0.9.0/go.mod h1:o4eAsZoiT+ibD93RtjEohWalFOjRDx6CVaqeizhEnKg= -google.golang.org/api v0.13.0/go.mod h1:iLdEw5Ide6rF15KTC1Kkl0iskquN2gFfn9o9XIsbkAI= -google.golang.org/api v0.14.0/go.mod h1:iLdEw5Ide6rF15KTC1Kkl0iskquN2gFfn9o9XIsbkAI= -google.golang.org/api v0.15.0/go.mod h1:iLdEw5Ide6rF15KTC1Kkl0iskquN2gFfn9o9XIsbkAI= -google.golang.org/api v0.17.0/go.mod h1:BwFmGc8tA3vsd7r/7kR8DY7iEEGSU04BFxCo5jP/sfE= -google.golang.org/api v0.18.0/go.mod h1:BwFmGc8tA3vsd7r/7kR8DY7iEEGSU04BFxCo5jP/sfE= -google.golang.org/api v0.19.0/go.mod h1:BwFmGc8tA3vsd7r/7kR8DY7iEEGSU04BFxCo5jP/sfE= -google.golang.org/api v0.20.0/go.mod h1:BwFmGc8tA3vsd7r/7kR8DY7iEEGSU04BFxCo5jP/sfE= -google.golang.org/api v0.22.0/go.mod h1:BwFmGc8tA3vsd7r/7kR8DY7iEEGSU04BFxCo5jP/sfE= -google.golang.org/api v0.24.0/go.mod 
h1:lIXQywCXRcnZPGlsd8NbLnOjtAoL6em04bJ9+z0MncE= -google.golang.org/api v0.28.0/go.mod h1:lIXQywCXRcnZPGlsd8NbLnOjtAoL6em04bJ9+z0MncE= -google.golang.org/api v0.29.0/go.mod h1:Lcubydp8VUV7KeIHD9z2Bys/sm/vGKnG1UHuDBSrHWM= -google.golang.org/api v0.30.0/go.mod h1:QGmEvQ87FHZNiUVJkT14jQNYJ4ZJjdRF23ZXz5138Fc= -google.golang.org/api v0.35.0/go.mod h1:/XrVsuzM0rZmrsbjJutiuftIzeuTQcEeaYcSk/mQ1dg= -google.golang.org/api v0.36.0/go.mod h1:+z5ficQTmoYpPn8LCUNVpK5I7hwkpjbcgqA7I34qYtE= -google.golang.org/api v0.40.0/go.mod h1:fYKFpnQN0DsDSKRVRcQSDQNtqWPfM9i+zNPxepjRCQ8= -google.golang.org/api v0.41.0/go.mod h1:RkxM5lITDfTzmyKFPt+wGrCJbVfniCr2ool8kTBzRTU= -google.golang.org/api v0.43.0/go.mod h1:nQsDGjRXMo4lvh5hP0TKqF244gqhGcr/YSIykhUk/94= -google.golang.org/api v0.44.0/go.mod h1:EBOGZqzyhtvMDoxwS97ctnh0zUmYY6CxqXsc1AvkYD8= -google.golang.org/api v0.56.0 h1:08F9XVYTLOGeSQb3xI9C0gXMuQanhdGed0cWFhDozbI= -google.golang.org/api v0.56.0/go.mod h1:38yMfeP1kfjsl8isn0tliTjIb1rJXcQi4UXlbqivdVE= -google.golang.org/appengine v1.1.0/go.mod h1:EbEs0AVv82hx2wNQdGPgUI5lhzA/G0D9YwlJXL52JkM= -google.golang.org/appengine v1.4.0/go.mod h1:xpcJRLb0r/rnEns0DIKYYv+WjYCduHsrkT7/EB5XEv4= -google.golang.org/appengine v1.5.0/go.mod h1:xpcJRLb0r/rnEns0DIKYYv+WjYCduHsrkT7/EB5XEv4= -google.golang.org/appengine v1.6.1/go.mod h1:i06prIuMbXzDqacNJfV5OdTW448YApPu5ww/cMBSeb0= -google.golang.org/appengine v1.6.2/go.mod h1:i06prIuMbXzDqacNJfV5OdTW448YApPu5ww/cMBSeb0= -google.golang.org/appengine v1.6.5/go.mod h1:8WjMMxjGQR8xUklV/ARdw2HLXBOI7O7uCIDZVag1xfc= -google.golang.org/appengine v1.6.6/go.mod h1:8WjMMxjGQR8xUklV/ARdw2HLXBOI7O7uCIDZVag1xfc= -google.golang.org/appengine v1.6.7 h1:FZR1q0exgwxzPzp/aF+VccGrSfxfPpkBqjIIEq3ru6c= -google.golang.org/appengine v1.6.7/go.mod h1:8WjMMxjGQR8xUklV/ARdw2HLXBOI7O7uCIDZVag1xfc= -google.golang.org/genproto v0.0.0-20180817151627-c66870c02cf8/go.mod h1:JiN7NxoALGmiZfu7CAH4rXhgtRTLTxftemlI0sWmxmc= -google.golang.org/genproto v0.0.0-20190307195333-5fe7a883aa19/go.mod 
h1:VzzqZJRnGkLBvHegQrXjBqPurQTc5/KpmUdxsrq26oE= -google.golang.org/genproto v0.0.0-20190418145605-e7d98fc518a7/go.mod h1:VzzqZJRnGkLBvHegQrXjBqPurQTc5/KpmUdxsrq26oE= -google.golang.org/genproto v0.0.0-20190425155659-357c62f0e4bb/go.mod h1:VzzqZJRnGkLBvHegQrXjBqPurQTc5/KpmUdxsrq26oE= -google.golang.org/genproto v0.0.0-20190502173448-54afdca5d873/go.mod h1:VzzqZJRnGkLBvHegQrXjBqPurQTc5/KpmUdxsrq26oE= -google.golang.org/genproto v0.0.0-20190801165951-fa694d86fc64/go.mod h1:DMBHOl98Agz4BDEuKkezgsaosCRResVns1a3J2ZsMNc= -google.golang.org/genproto v0.0.0-20190819201941-24fa4b261c55/go.mod h1:DMBHOl98Agz4BDEuKkezgsaosCRResVns1a3J2ZsMNc= -google.golang.org/genproto v0.0.0-20190911173649-1774047e7e51/go.mod h1:IbNlFCBrqXvoKpeg0TB2l7cyZUmoaFKYIwrEpbDKLA8= -google.golang.org/genproto v0.0.0-20191108220845-16a3f7862a1a/go.mod h1:n3cpQtvxv34hfy77yVDNjmbRyujviMdxYliBSkLhpCc= -google.golang.org/genproto v0.0.0-20191115194625-c23dd37a84c9/go.mod h1:n3cpQtvxv34hfy77yVDNjmbRyujviMdxYliBSkLhpCc= -google.golang.org/genproto v0.0.0-20191216164720-4f79533eabd1/go.mod h1:n3cpQtvxv34hfy77yVDNjmbRyujviMdxYliBSkLhpCc= -google.golang.org/genproto v0.0.0-20191230161307-f3c370f40bfb/go.mod h1:n3cpQtvxv34hfy77yVDNjmbRyujviMdxYliBSkLhpCc= -google.golang.org/genproto v0.0.0-20200115191322-ca5a22157cba/go.mod h1:n3cpQtvxv34hfy77yVDNjmbRyujviMdxYliBSkLhpCc= -google.golang.org/genproto v0.0.0-20200122232147-0452cf42e150/go.mod h1:n3cpQtvxv34hfy77yVDNjmbRyujviMdxYliBSkLhpCc= -google.golang.org/genproto v0.0.0-20200204135345-fa8e72b47b90/go.mod h1:GmwEX6Z4W5gMy59cAlVYjN9JhxgbQH6Gn+gFDQe2lzA= -google.golang.org/genproto v0.0.0-20200212174721-66ed5ce911ce/go.mod h1:55QSHmfGQM9UVYDPBsyGGes0y52j32PQ3BqQfXhyH3c= -google.golang.org/genproto v0.0.0-20200224152610-e50cd9704f63/go.mod h1:55QSHmfGQM9UVYDPBsyGGes0y52j32PQ3BqQfXhyH3c= -google.golang.org/genproto v0.0.0-20200228133532-8c2c7df3a383/go.mod h1:55QSHmfGQM9UVYDPBsyGGes0y52j32PQ3BqQfXhyH3c= -google.golang.org/genproto 
v0.0.0-20200305110556-506484158171/go.mod h1:55QSHmfGQM9UVYDPBsyGGes0y52j32PQ3BqQfXhyH3c= -google.golang.org/genproto v0.0.0-20200312145019-da6875a35672/go.mod h1:55QSHmfGQM9UVYDPBsyGGes0y52j32PQ3BqQfXhyH3c= -google.golang.org/genproto v0.0.0-20200331122359-1ee6d9798940/go.mod h1:55QSHmfGQM9UVYDPBsyGGes0y52j32PQ3BqQfXhyH3c= -google.golang.org/genproto v0.0.0-20200430143042-b979b6f78d84/go.mod h1:55QSHmfGQM9UVYDPBsyGGes0y52j32PQ3BqQfXhyH3c= -google.golang.org/genproto v0.0.0-20200511104702-f5ebc3bea380/go.mod h1:55QSHmfGQM9UVYDPBsyGGes0y52j32PQ3BqQfXhyH3c= -google.golang.org/genproto v0.0.0-20200513103714-09dca8ec2884/go.mod h1:55QSHmfGQM9UVYDPBsyGGes0y52j32PQ3BqQfXhyH3c= -google.golang.org/genproto v0.0.0-20200515170657-fc4c6c6a6587/go.mod h1:YsZOwe1myG/8QRHRsmBRE1LrgQY60beZKjly0O1fX9U= -google.golang.org/genproto v0.0.0-20200526211855-cb27e3aa2013/go.mod h1:NbSheEEYHJ7i3ixzK3sjbqSGDJWnxyFXZblF3eUsNvo= -google.golang.org/genproto v0.0.0-20200618031413-b414f8b61790/go.mod h1:jDfRM7FcilCzHH/e9qn6dsT145K34l5v+OpcnNgKAAA= -google.golang.org/genproto v0.0.0-20200729003335-053ba62fc06f/go.mod h1:FWY/as6DDZQgahTzZj3fqbO1CbirC29ZNUFHwi0/+no= -google.golang.org/genproto v0.0.0-20200804131852-c06518451d9c/go.mod h1:FWY/as6DDZQgahTzZj3fqbO1CbirC29ZNUFHwi0/+no= -google.golang.org/genproto v0.0.0-20200825200019-8632dd797987/go.mod h1:FWY/as6DDZQgahTzZj3fqbO1CbirC29ZNUFHwi0/+no= -google.golang.org/genproto v0.0.0-20200904004341-0bd0a958aa1d/go.mod h1:FWY/as6DDZQgahTzZj3fqbO1CbirC29ZNUFHwi0/+no= -google.golang.org/genproto v0.0.0-20201109203340-2640f1f9cdfb/go.mod h1:FWY/as6DDZQgahTzZj3fqbO1CbirC29ZNUFHwi0/+no= -google.golang.org/genproto v0.0.0-20201201144952-b05cb90ed32e/go.mod h1:FWY/as6DDZQgahTzZj3fqbO1CbirC29ZNUFHwi0/+no= -google.golang.org/genproto v0.0.0-20201210142538-e3217bee35cc/go.mod h1:FWY/as6DDZQgahTzZj3fqbO1CbirC29ZNUFHwi0/+no= -google.golang.org/genproto v0.0.0-20201214200347-8c77b98c765d/go.mod h1:FWY/as6DDZQgahTzZj3fqbO1CbirC29ZNUFHwi0/+no= 
-google.golang.org/genproto v0.0.0-20210222152913-aa3ee6e6a81c/go.mod h1:FWY/as6DDZQgahTzZj3fqbO1CbirC29ZNUFHwi0/+no= -google.golang.org/genproto v0.0.0-20210303154014-9728d6b83eeb/go.mod h1:FWY/as6DDZQgahTzZj3fqbO1CbirC29ZNUFHwi0/+no= -google.golang.org/genproto v0.0.0-20210310155132-4ce2db91004e/go.mod h1:FWY/as6DDZQgahTzZj3fqbO1CbirC29ZNUFHwi0/+no= -google.golang.org/genproto v0.0.0-20210319143718-93e7006c17a6/go.mod h1:FWY/as6DDZQgahTzZj3fqbO1CbirC29ZNUFHwi0/+no= -google.golang.org/genproto v0.0.0-20210402141018-6c239bbf2bb1/go.mod h1:9lPAdzaEmUacj36I+k7YKbEc5CXzPIeORRgDAUOu28A= -google.golang.org/genproto v0.0.0-20210602131652-f16073e35f0c/go.mod h1:UODoCrxHCcBojKKwX1terBiRUaqAsFqJiF615XL43r0= -google.golang.org/genproto v0.0.0-20220107163113-42d7afdf6368 h1:Et6SkiuvnBn+SgrSYXs/BrUpGB4mbdwt4R3vaPIlicA= -google.golang.org/genproto v0.0.0-20220107163113-42d7afdf6368/go.mod h1:5CzLGKJ67TSI2B9POpiiyGha0AjJvZIUgRMt1dSmuhc= -google.golang.org/grpc v1.19.0/go.mod h1:mqu4LbDTu4XGKhr4mRzUsmM4RtVoemTSY81AxZiDr8c= -google.golang.org/grpc v1.20.1/go.mod h1:10oTOabMzJvdu6/UiuZezV6QK5dSlG84ov/aaiqXj38= -google.golang.org/grpc v1.21.1/go.mod h1:oYelfM1adQP15Ek0mdvEgi9Df8B9CZIaU1084ijfRaM= -google.golang.org/grpc v1.23.0/go.mod h1:Y5yQAOtifL1yxbo5wqy6BxZv8vAUGQwXBOALyacEbxg= -google.golang.org/grpc v1.25.1/go.mod h1:c3i+UQWmh7LiEpx4sFZnkU36qjEYZ0imhYfXVyQciAY= -google.golang.org/grpc v1.26.0/go.mod h1:qbnxyOmOxrQa7FizSgH+ReBfzJrCY1pSN7KXBS8abTk= -google.golang.org/grpc v1.27.0/go.mod h1:qbnxyOmOxrQa7FizSgH+ReBfzJrCY1pSN7KXBS8abTk= -google.golang.org/grpc v1.27.1/go.mod h1:qbnxyOmOxrQa7FizSgH+ReBfzJrCY1pSN7KXBS8abTk= -google.golang.org/grpc v1.28.0/go.mod h1:rpkK4SK4GF4Ach/+MFLZUBavHOvF2JJB5uozKKal+60= -google.golang.org/grpc v1.29.1/go.mod h1:itym6AZVZYACWQqET3MqgPpjcuV5QH3BxFS3IjizoKk= -google.golang.org/grpc v1.30.0/go.mod h1:N36X2cJ7JwdamYAgDz+s+rVMFjt3numwzf/HckM8pak= -google.golang.org/grpc v1.31.0/go.mod h1:N36X2cJ7JwdamYAgDz+s+rVMFjt3numwzf/HckM8pak= 
-google.golang.org/grpc v1.31.1/go.mod h1:N36X2cJ7JwdamYAgDz+s+rVMFjt3numwzf/HckM8pak= -google.golang.org/grpc v1.33.1/go.mod h1:fr5YgcSWrqhRRxogOsw7RzIpsmvOZ6IcH4kBYTpR3n0= -google.golang.org/grpc v1.33.2/go.mod h1:JMHMWHQWaTccqQQlmk3MJZS+GWXOdAesneDmEnv2fbc= -google.golang.org/grpc v1.34.0/go.mod h1:WotjhfgOW/POjDeRt8vscBtXq+2VjORFy659qA51WJ8= -google.golang.org/grpc v1.35.0/go.mod h1:qjiiYl8FncCW8feJPdyg3v6XW24KsRHe+dy9BAGRRjU= -google.golang.org/grpc v1.36.0/go.mod h1:qjiiYl8FncCW8feJPdyg3v6XW24KsRHe+dy9BAGRRjU= -google.golang.org/grpc v1.36.1/go.mod h1:qjiiYl8FncCW8feJPdyg3v6XW24KsRHe+dy9BAGRRjU= -google.golang.org/grpc v1.38.0/go.mod h1:NREThFqKR1f3iQ6oBuvc5LadQuXVGo9rkm5ZGrQdJfM= -google.golang.org/grpc v1.49.0 h1:WTLtQzmQori5FUH25Pq4WT22oCsv8USpQ+F6rqtsmxw= -google.golang.org/grpc v1.49.0/go.mod h1:ZgQEeidpAuNRZ8iRrlBKXZQP1ghovWIVhdJRyCDK+GI= -google.golang.org/protobuf v0.0.0-20200109180630-ec00e32a8dfd/go.mod h1:DFci5gLYBciE7Vtevhsrf46CRTquxDuWsQurQQe4oz8= -google.golang.org/protobuf v0.0.0-20200221191635-4d8936d0db64/go.mod h1:kwYJMbMJ01Woi6D6+Kah6886xMZcty6N08ah7+eCXa0= -google.golang.org/protobuf v0.0.0-20200228230310-ab0ca4ff8a60/go.mod h1:cfTl7dwQJ+fmap5saPgwCLgHXTUD7jkjRqWcaiX5VyM= -google.golang.org/protobuf v1.20.1-0.20200309200217-e05f789c0967/go.mod h1:A+miEFZTKqfCUM6K7xSMQL9OKL/b6hQv+e19PK+JZNE= -google.golang.org/protobuf v1.21.0/go.mod h1:47Nbq4nVaFHyn7ilMalzfO3qCViNmqZ2kzikPIcrTAo= -google.golang.org/protobuf v1.22.0/go.mod h1:EGpADcykh3NcUnDUJcl1+ZksZNG86OlYog2l/sGQquU= -google.golang.org/protobuf v1.23.0/go.mod h1:EGpADcykh3NcUnDUJcl1+ZksZNG86OlYog2l/sGQquU= -google.golang.org/protobuf v1.23.1-0.20200526195155-81db48ad09cc/go.mod h1:EGpADcykh3NcUnDUJcl1+ZksZNG86OlYog2l/sGQquU= -google.golang.org/protobuf v1.24.0/go.mod h1:r/3tXBNzIEhYS9I1OUVjXDlt8tc493IdKGjtUeSXeh4= -google.golang.org/protobuf v1.25.0/go.mod h1:9JNX74DMeImyA3h4bdi1ymwjUzf21/xIlbajtzgsN7c= -google.golang.org/protobuf v1.26.0-rc.1/go.mod 
h1:jlhhOSvTdKEhbULTjvd4ARK9grFBp09yW+WbY/TyQbw= -google.golang.org/protobuf v1.26.0/go.mod h1:9q0QmTI4eRPtz6boOQmLYwt+qCgq0jsYwAQnmE0givc= -google.golang.org/protobuf v1.28.1 h1:d0NfwRgPtno5B1Wa6L2DAG+KivqkdutMf1UhdNx175w= -google.golang.org/protobuf v1.28.1/go.mod h1:HV8QOd/L58Z+nl8r43ehVNZIU/HEI6OcFqwMG9pJV4I= -gopkg.in/alecthomas/kingpin.v2 v2.2.6/go.mod h1:FMv+mEhP44yOT+4EoQTLFTRgOQ1FBLkstjWtayDeSgw= -gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0= -gopkg.in/check.v1 v1.0.0-20180628173108-788fd7840127/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0= -gopkg.in/check.v1 v1.0.0-20190902080502-41f04d3bba15/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0= -gopkg.in/check.v1 v1.0.0-20200227125254-8fa46927fb4f/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0= -gopkg.in/check.v1 v1.0.0-20201130134442-10cb98267c6c h1:Hei/4ADfdWqJk1ZMxUNpqntNwaWcugrBjAiHlqqRiVk= -gopkg.in/check.v1 v1.0.0-20201130134442-10cb98267c6c/go.mod h1:JHkPIbrfpd72SG/EVd6muEfDQjcINNoR0C8j2r3qZ4Q= -gopkg.in/errgo.v2 v2.1.0/go.mod h1:hNsd1EY+bozCKY1Ytp96fpM3vjJbqLJn88ws8XvfDNI= -gopkg.in/guregu/null.v3 v3.5.0 h1:xTcasT8ETfMcUHn0zTvIYtQud/9Mx5dJqD554SZct0o= -gopkg.in/guregu/null.v3 v3.5.0/go.mod h1:E4tX2Qe3h7QdL+uZ3a0vqvYwKQsRSQKM5V4YltdgH9Y= -gopkg.in/ini.v1 v1.51.0/go.mod h1:pNLf8WUiyNEtQjuu5G5vTm06TEv9tsIgeAvK8hOrP4k= -gopkg.in/ini.v1 v1.62.0 h1:duBzk771uxoUuOlyRLkHsygud9+5lrlGjdFBb4mSKDU= -gopkg.in/ini.v1 v1.62.0/go.mod h1:pNLf8WUiyNEtQjuu5G5vTm06TEv9tsIgeAvK8hOrP4k= -gopkg.in/natefinch/lumberjack.v2 v2.0.0 h1:1Lc07Kr7qY4U2YPouBjpCLxpiyxIVoxqXgkXLknAOE8= -gopkg.in/natefinch/lumberjack.v2 v2.0.0/go.mod h1:l0ndWWf7gzL7RNwBG7wST/UCcT4T24xpD6X8LsfU/+k= -gopkg.in/resty.v1 v1.12.0/go.mod h1:mDo4pnntr5jdWRML875a/NmxYqAlA73dVijT2AXvQQo= -gopkg.in/warnings.v0 v0.1.2 h1:wFXVbFY8DY5/xOe1ECiWdKCzZlxgshcYVNkBHstARME= -gopkg.in/warnings.v0 v0.1.2/go.mod h1:jksf8JmL6Qr/oQM2OXTHunEvvTAsrWBLb6OOjuVWRNI= -gopkg.in/yaml.v2 
v2.0.0-20170812160011-eb3733d160e7/go.mod h1:JAlM8MvJe8wmxCU4Bli9HhUf9+ttbYbLASfIpnQbh74= -gopkg.in/yaml.v2 v2.2.1/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI= -gopkg.in/yaml.v2 v2.2.2/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI= -gopkg.in/yaml.v2 v2.2.3/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI= -gopkg.in/yaml.v2 v2.2.4/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI= -gopkg.in/yaml.v2 v2.2.8/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI= -gopkg.in/yaml.v2 v2.3.0/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI= -gopkg.in/yaml.v2 v2.4.0 h1:D8xgwECY7CYvx+Y2n4sBz93Jn9JRvxdiyyo8CTfuKaY= -gopkg.in/yaml.v2 v2.4.0/go.mod h1:RDklbk79AGWmwhnvt/jBztapEOGDOx6ZbXqjP6csGnQ= -gopkg.in/yaml.v3 v3.0.0-20200313102051-9f266ea9e77c/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM= -gopkg.in/yaml.v3 v3.0.0-20210107192922-496545a6307b/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM= -gopkg.in/yaml.v3 v3.0.1 h1:fxVm/GzAzEWqLHuvctI91KS9hhNmmWOoWu0XTYJS7CA= -gopkg.in/yaml.v3 v3.0.1/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM= -honnef.co/go/tools v0.0.0-20190102054323-c2f93a96b099/go.mod h1:rf3lG4BRIbNafJWhAfAdb/ePZxsR/4RtNHQocxwk9r4= -honnef.co/go/tools v0.0.0-20190106161140-3f1c8253044a/go.mod h1:rf3lG4BRIbNafJWhAfAdb/ePZxsR/4RtNHQocxwk9r4= -honnef.co/go/tools v0.0.0-20190418001031-e561f6794a2a/go.mod h1:rf3lG4BRIbNafJWhAfAdb/ePZxsR/4RtNHQocxwk9r4= -honnef.co/go/tools v0.0.0-20190523083050-ea95bdfd59fc/go.mod h1:rf3lG4BRIbNafJWhAfAdb/ePZxsR/4RtNHQocxwk9r4= -honnef.co/go/tools v0.0.1-2019.2.3/go.mod h1:a3bituU0lyd329TUQxRnasdCoJDkEUEAqEt0JzvZhAg= -honnef.co/go/tools v0.0.1-2020.1.3/go.mod h1:X/FiERA/W4tHapMX5mGpAtMSVEeEUOyHaw9vFzvIQ3k= -honnef.co/go/tools v0.0.1-2020.1.4/go.mod h1:X/FiERA/W4tHapMX5mGpAtMSVEeEUOyHaw9vFzvIQ3k= -howett.net/plist v0.0.0-20181124034731-591f970eefbb h1:jhnBjNi9UFpfpl8YZhA9CrOqpnJdvzuiHsl/dnxl11M= -howett.net/plist v0.0.0-20181124034731-591f970eefbb/go.mod 
h1:vMygbs4qMhSZSc4lCUl2OEE+rDiIIJAIdR4m7MiMcm0= -rsc.io/binaryregexp v0.2.0/go.mod h1:qTv7/COck+e2FymRvadv62gMdZztPaShugOCi3I+8D8= -rsc.io/quote/v3 v3.1.0/go.mod h1:yEA65RcK8LyAZtP9Kv3t0HmxON59tX3rD+tICJqUlj0= -rsc.io/sampler v1.3.0/go.mod h1:T1hPZKmBbMNahiBKFy5HrXp6adAjACjK9JXDnKaTXpA= diff --git a/infrastructure/sandbox/PreProvisioner/lambda/main.go b/infrastructure/sandbox/PreProvisioner/lambda/main.go deleted file mode 100644 index 29c53cf3f..000000000 --- a/infrastructure/sandbox/PreProvisioner/lambda/main.go +++ /dev/null @@ -1,345 +0,0 @@ -package main - -import ( - "context" - "fmt" - "github.com/aws/aws-lambda-go/lambda" - "github.com/aws/aws-sdk-go/aws" - "github.com/aws/aws-sdk-go/aws/session" - "github.com/aws/aws-sdk-go/service/dynamodb" - "github.com/aws/aws-sdk-go/service/dynamodb/dynamodbattribute" - "github.com/fleetdm/fleet/v4/orbit/pkg/packaging" - "github.com/fleetdm/fleet/v4/server" - "github.com/fleetdm/fleet/v4/server/config" - "github.com/fleetdm/fleet/v4/server/datastore/s3" - "github.com/fleetdm/fleet/v4/server/fleet" - "github.com/google/uuid" - flags "github.com/jessevdk/go-flags" - "log" - "math/rand" - "os" - "os/exec" - "path/filepath" - "time" -) - -type OptionsStruct struct { - LambdaExecutionEnv string `long:"lambda-execution-environment" env:"AWS_EXECUTION_ENV"` - LifecycleTable string `long:"dynamodb-lifecycle-table" env:"DYNAMODB_LIFECYCLE_TABLE" required:"true"` - MaxInstances int64 `long:"max-instances" env:"MAX_INSTANCES" required:"true"` - QueuedInstances int64 `long:"queued-instances" env:"QUEUED_INSTANCES" required:"true"` - FleetBaseURL string `long:"fleet-base-url" env:"FLEET_BASE_URL" required:"true"` - InstallerBucket string `long:"installer-bucket" env:"INSTALLER_BUCKET" required:"true"` - MacOSDevIDCertificateContent string `long:"macos-dev-id-certificate-content" env:"MACOS_DEV_ID_CERTIFICATE_CONTENT" required:"true"` - AppStoreConnectAPIKeyID string `long:"app-store-connect-api-key-id" 
env:"APP_STORE_CONNECT_API_KEY_ID" required:"true"` - AppStoreConnectAPIKeyIssuer string `long:"app-store-connect-api-key-issuer" env:"APP_STORE_CONNECT_API_KEY_ISSUER" required:"true"` - AppStoreConnectAPIKeyContent string `long:"app-store-connect-api-key-content" env:"APP_STORE_CONNECT_API_KEY_CONTENT" required:"true"` -} - -var options = OptionsStruct{} - -func FinishFleet(instanceID string) (err error) { - log.Printf("Finishing instance: %s", instanceID) - svc := dynamodb.New(session.New()) - // Perform a conditional update to claim the item - input := &dynamodb.UpdateItemInput{ - ConditionExpression: aws.String("#fleet_state = :v1"), - TableName: aws.String(options.LifecycleTable), - Key: map[string]*dynamodb.AttributeValue{ - "ID": { - S: aws.String(instanceID), - }, - }, - UpdateExpression: aws.String("set #fleet_state = :v2"), - ExpressionAttributeNames: map[string]*string{"#fleet_state": aws.String("State")}, - ExpressionAttributeValues: map[string]*dynamodb.AttributeValue{ - ":v1": { - S: aws.String("provisioned"), - }, - ":v2": { - S: aws.String("unclaimed"), - }, - }, - } - if _, err = svc.UpdateItem(input); err != nil { - return - } - return -} - -func buildPackages(instanceID, enrollSecret string) (err error) { - funcs := []func(packaging.Options) (string, error){ - packaging.BuildPkg, - packaging.BuildDeb, - packaging.BuildRPM, - packaging.BuildMSI, - } - pkgopts := packaging.Options{ - FleetURL: fmt.Sprintf("https://%s.%s", instanceID, options.FleetBaseURL), - EnrollSecret: enrollSecret, - UpdateURL: "https://tuf.fleetctl.com", - Identifier: "com.fleetdm.orbit", - StartService: true, - NativeTooling: true, - OrbitChannel: "stable", - OsquerydChannel: "stable", - DesktopChannel: "stable", - OrbitUpdateInterval: 15 * time.Minute, - Notarize: true, - MacOSDevIDCertificateContent: options.MacOSDevIDCertificateContent, - AppStoreConnectAPIKeyID: options.AppStoreConnectAPIKeyID, - AppStoreConnectAPIKeyIssuer: options.AppStoreConnectAPIKeyIssuer, - 
AppStoreConnectAPIKeyContent: options.AppStoreConnectAPIKeyContent, - } - store, err := s3.NewInstallerStore(config.S3Config{ - Bucket: options.InstallerBucket, - Prefix: instanceID, - }) - - // Build non-desktop - for _, buildFunc := range funcs { - var filename string - filename, err = buildFunc(pkgopts) - if err != nil { - log.Print(err) - return - } - var r *os.File - r, err = os.Open(filename) - defer r.Close() - if err != nil { - return err - } - _, err = store.Put(context.Background(), fleet.Installer{ - EnrollSecret: enrollSecret, - Kind: filepath.Ext(filename)[1:], - Desktop: pkgopts.Desktop, - Content: r, - }) - if err != nil { - return - } - } - - // Build desktop - pkgopts.Desktop = true - for _, buildFunc := range funcs { - var filename string - filename, err = buildFunc(pkgopts) - if err != nil { - log.Print(err) - return - } - var r *os.File - r, err = os.Open(filename) - defer r.Close() - if err != nil { - return err - } - _, err = store.Put(context.Background(), fleet.Installer{ - EnrollSecret: enrollSecret, - Kind: filepath.Ext(filename)[1:], - Desktop: pkgopts.Desktop, - Content: r, - }) - if err != nil { - return - } - } - return FinishFleet(instanceID) -} - -type LifecycleRecord struct { - ID string - State string -} - -func getInstancesCount() (int64, int64, error) { - log.Print("getInstancesCount") - svc := dynamodb.New(session.New()) - // Example iterating over at most 3 pages of a Scan operation. 
- var count, unclaimedCount int64 - err := svc.ScanPages( - &dynamodb.ScanInput{ - TableName: aws.String(options.LifecycleTable), - }, - func(page *dynamodb.ScanOutput, lastPage bool) bool { - log.Print(page) - count += *page.Count - recs := []LifecycleRecord{} - if err := dynamodbattribute.UnmarshalListOfMaps(page.Items, &recs); err != nil { - log.Print(err) - return false - } - for _, i := range recs { - if i.State == "unclaimed" { - unclaimedCount++ - } - } - return true - }) - if err != nil { - return 0, 0, err - } - return count, unclaimedCount, nil -} - -type NullEvent struct{} - -func min(a, b int64) int64 { - // I really have to implement this myself? - if a < b { - return a - } - return b -} - -func runCmd(args []string) error { - cmd := exec.Cmd{ - Path: "/build/terraform", - Dir: "/build/deploy_terraform", - Stdout: os.Stdout, - Stderr: os.Stderr, - Args: append([]string{"/build/terraform"}, args...), - } - log.Printf("%+v\n", cmd) - return cmd.Run() -} - -func initTerraform() error { - err := runCmd([]string{ - "init", - "-backend-config=backend.conf", - }) - return err -} - -func runTerraform(workspace string, redis_database int, enrollSecret string) error { - err := runCmd([]string{ - "workspace", - "new", - workspace, - }) - if err != nil { - return err - } - err = runCmd([]string{ - "apply", - "-auto-approve", - "-no-color", - "-var", - fmt.Sprintf("redis_database=%d", redis_database), - "-var", - fmt.Sprintf("enroll_secret=%s", enrollSecret), - }) - return err -} - -func idExists(id int) (bool, error) { - svc := dynamodb.New(session.New()) - input := &dynamodb.QueryInput{ - ExpressionAttributeValues: map[string]*dynamodb.AttributeValue{ - ":v1": { - N: aws.String(fmt.Sprintf("%d", id)), - }, - }, - KeyConditionExpression: aws.String("redis_db = :v1"), - TableName: aws.String(options.LifecycleTable), - IndexName: aws.String("RedisDatabases"), - } - - result, err := svc.Query(input) - if err != nil { - return false, err - } - return *result.Count != 
0, nil -} - -func getRedisDatabase() (int, error) { - for { - ret := rand.Intn(65536) - exists, err := idExists(ret) - if err != nil { - return 0, err - } - if !exists { - return ret, nil - } - } -} - -func handler(ctx context.Context, name NullEvent) error { - // check if we need to do anything - totalCount, unclaimedCount, err := getInstancesCount() - if err != nil { - return err - } - if totalCount >= options.MaxInstances { - return nil - } - if unclaimedCount >= options.QueuedInstances { - return nil - } - has_init := false - // deploy terraform to initialize everything - // If there's an error during spinup, the program exits, so it either makes progress or fails completely, never running forever - for min(options.MaxInstances-totalCount, options.QueuedInstances-unclaimedCount) > 0 { - if !has_init { - has_init = true - if err := initTerraform(); err != nil { - return err - } - } - redisDatabase, err := getRedisDatabase() - if err != nil { - return err - } - enrollSecret, err := server.GenerateRandomText(fleet.EnrollSecretDefaultLength) - if err != nil { - return err - } - instanceID := fmt.Sprintf("t%s", uuid.New().String()[:8]) - // This should fail if the instance id we pick already exists since it will collide with the primary key in dynamodb - // This also actually puts the claim in place - if err := runTerraform(instanceID, redisDatabase, enrollSecret); err != nil { - return err - } - if err = buildPackages(instanceID, enrollSecret); err != nil { - return err - } - - // Refresh the count variables - totalCount, unclaimedCount, err = getInstancesCount() - if err != nil { - return err - } - if totalCount >= options.MaxInstances { - return nil - } - if unclaimedCount >= options.QueuedInstances { - return nil - } - } - return nil -} - -func main() { - var err error - log.SetFlags(log.LstdFlags | log.Lshortfile) - // Get config from environment - parser := flags.NewParser(&options, flags.Default) - if _, err = parser.Parse(); err != nil { - if flagsErr, ok := 
err.(*flags.Error); ok && flagsErr.Type == flags.ErrHelp { - return - } else { - log.Fatal(err) - } - } - if options.LambdaExecutionEnv == "AWS_Lambda_go1.x" { - lambda.Start(handler) - } else { - if err = handler(context.Background(), NullEvent{}); err != nil { - log.Fatal(err) - } - } -} diff --git a/infrastructure/sandbox/PreProvisioner/main.tf b/infrastructure/sandbox/PreProvisioner/main.tf deleted file mode 100644 index f196c7931..000000000 --- a/infrastructure/sandbox/PreProvisioner/main.tf +++ /dev/null @@ -1,414 +0,0 @@ -terraform { - required_providers { - docker = { - source = "kreuzwerker/docker" - version = "~> 2.16.0" - } - git = { - source = "paultyng/git" - version = "~> 0.1.0" - } - } -} - -data "aws_region" "current" {} - -data "aws_caller_identity" "current" {} - -locals { - name = "preprovisioner" - full_name = "${var.prefix}-${local.name}" -} - -resource "aws_cloudwatch_log_group" "main" { - name = local.full_name - kms_key_id = var.kms_key.arn - retention_in_days = 30 -} - -data "aws_iam_policy_document" "events-assume-role" { - statement { - actions = ["sts:AssumeRole"] - principals { - type = "Service" - identifiers = ["events.amazonaws.com"] - } - } -} - -resource "aws_iam_role_policy_attachment" "events" { - role = aws_iam_role.events.id - policy_arn = aws_iam_policy.events.arn -} - -resource "aws_iam_policy" "events" { - name = "${local.full_name}-events" - policy = data.aws_iam_policy_document.events.json -} - -data "aws_iam_policy_document" "events" { - statement { - actions = ["ecs:RunTask"] - resources = [replace(aws_ecs_task_definition.main.arn, "/:\\d+$/", ":*"), replace(aws_ecs_task_definition.main.arn, "/:\\d+$/", "")] - condition { - test = "ArnLike" - variable = "ecs:cluster" - values = [var.ecs_cluster.arn] - } - } - statement { - actions = ["iam:PassRole"] - resources = ["*"] - condition { - test = "StringLike" - variable = "iam:PassedToService" - values = ["ecs-tasks.amazonaws.com"] - } - } -} - -resource "aws_iam_role" 
"events" { - name = "${local.full_name}-events" - path = "/service-role/" - - assume_role_policy = data.aws_iam_policy_document.events-assume-role.json -} - -data "aws_iam_policy_document" "lambda-assume-role" { - statement { - actions = ["sts:AssumeRole"] - principals { - type = "Service" - identifiers = ["ecs-tasks.amazonaws.com"] - } - } -} - - -resource "aws_iam_role_policy_attachment" "lambda" { - role = aws_iam_role.lambda.id - policy_arn = aws_iam_policy.lambda.arn -} - -resource "aws_iam_role_policy_attachment" "lambda-ecs" { - role = aws_iam_role.lambda.id - policy_arn = "arn:aws:iam::aws:policy/AmazonEKSFargatePodExecutionRolePolicy" -} - -resource "aws_iam_policy" "lambda" { - name = "${var.prefix}-lambda" - policy = data.aws_iam_policy_document.lambda.json -} - -data "aws_iam_policy_document" "lambda" { - statement { - actions = [ - "dynamodb:List*", - "dynamodb:DescribeReservedCapacity*", - "dynamodb:DescribeLimits", - "dynamodb:DescribeTimeToLive" - ] - resources = ["*"] - } - - statement { - actions = [ - "dynamodb:BatchGet*", - "dynamodb:DescribeStream", - "dynamodb:DescribeTable", - "dynamodb:Get*", - "dynamodb:Query", - "dynamodb:Scan", - "dynamodb:BatchWrite*", - "dynamodb:CreateTable", - "dynamodb:Delete*", - "dynamodb:Update*", - "dynamodb:PutItem" - ] - resources = [var.dynamodb_table.arn] - } - - statement { - actions = [ #tfsec:ignore:aws-iam-no-policy-wildcards - "kms:Encrypt*", - "kms:Decrypt*", - "kms:ReEncrypt*", - "kms:GenerateDataKey*", - "kms:Describe*" - ] - resources = [aws_kms_key.ecr.arn, var.kms_key.arn] - } - - statement { - actions = [ - "s3:*Object", - "s3:ListBucket", - ] - resources = [ - var.installer_bucket.arn, - "${var.installer_bucket.arn}/*" - ] - } - - statement { - actions = ["secretsmanager:GetSecretValue"] - resources = [aws_secretsmanager_secret.apple-signing-secrets.arn] - } - - # TODO: limit this, this is for terraform - statement { - actions = ["*"] - resources = ["*"] - } -} - -resource "aws_iam_role" "lambda" 
{ - name = local.full_name - - assume_role_policy = data.aws_iam_policy_document.lambda-assume-role.json -} - -output "lambda_role" { - value = aws_iam_role.lambda -} - -resource "aws_security_group" "lambda" { - name = local.full_name - description = "security group for ${local.full_name}" - vpc_id = var.vpc.vpc_id - - egress { - description = "egress to all" - from_port = 0 - to_port = 0 - protocol = "-1" - cidr_blocks = ["0.0.0.0/0"] - ipv6_cidr_blocks = ["::/0"] - } -} - -data "aws_eks_cluster" "cluster" { - name = var.eks_cluster.eks_cluster_id -} - -resource "aws_secretsmanager_secret" "apple-signing-secrets" { - name = "${local.full_name}-apple-signing-secrets" - kms_key_id = var.kms_key.id - recovery_window_in_days = 0 -} - -data "aws_secretsmanager_secret_version" "apple-signing-secrets" { - secret_id = aws_secretsmanager_secret.apple-signing-secrets.id -} - -resource "aws_ecs_task_definition" "main" { - family = local.full_name - network_mode = "awsvpc" - requires_compatibilities = ["FARGATE"] - execution_role_arn = aws_iam_role.lambda.arn - task_role_arn = aws_iam_role.lambda.arn - cpu = 1024 - memory = 4096 - container_definitions = jsonencode( - [ - { - name = local.name - image = docker_registry_image.main.name - mountPoints = [] - volumesFrom = [] - essential = true - networkMode = "awsvpc" - logConfiguration = { - logDriver = "awslogs" - options = { - awslogs-group = aws_cloudwatch_log_group.main.name - awslogs-region = data.aws_region.current.name - awslogs-stream-prefix = local.full_name - } - }, - environment = concat([ - { - name = "TF_VAR_mysql_secret" - value = var.mysql_secret.id - }, - { - name = "TF_VAR_mysql_cluster_name" - value = var.eks_cluster.eks_cluster_id - }, - { - name = "TF_VAR_eks_cluster" - value = var.eks_cluster.eks_cluster_id - }, - { - name = "DYNAMODB_LIFECYCLE_TABLE" - value = var.dynamodb_table.id - }, - { - name = "TF_VAR_lifecycle_table" - value = var.dynamodb_table.id - }, - { - name = "TF_VAR_base_domain" - value = 
var.base_domain - }, - { - name = "MAX_INSTANCES" - value = "500" - }, - { - name = "QUEUED_INSTANCES" - value = "20" - }, - { - name = "TF_VAR_redis_address" - value = "${var.redis_cluster.primary_endpoint_address}:6379" - }, - { - name = "FLEET_BASE_URL" - value = var.base_domain - }, - { - name = "INSTALLER_BUCKET" - value = var.installer_bucket.id - }, - { - name = "TF_VAR_installer_bucket" - value = var.installer_bucket.id - }, - { - name = "TF_VAR_installer_bucket_arn" - value = var.installer_bucket.arn - }, - { - name = "TF_VAR_oidc_provider_arn" - value = var.oidc_provider_arn - }, - { - name = "TF_VAR_oidc_provider" - value = var.oidc_provider - }, - { - name = "TF_VAR_kms_key_arn" - value = var.kms_key.arn - }, - { - name = "TF_VAR_ecr_url" - value = var.ecr.repository_url - }, - { - name = "TF_VAR_license_key" - value = var.license_key - }, - { - name = "TF_VAR_apm_url" - value = var.apm_url - }, - { - name = "TF_VAR_apm_token" - value = var.apm_token - }, - ]), - secrets = concat([ - { - name = "MACOS_DEV_ID_CERTIFICATE_CONTENT" - valueFrom = "${aws_secretsmanager_secret.apple-signing-secrets.arn}:MACOS_DEV_ID_CERTIFICATE_CONTENT::" - }, - { - name = "APP_STORE_CONNECT_API_KEY_ID" - valueFrom = "${aws_secretsmanager_secret.apple-signing-secrets.arn}:APP_STORE_CONNECT_API_KEY_ID::" - }, - { - name = "APP_STORE_CONNECT_API_KEY_ISSUER" - valueFrom = "${aws_secretsmanager_secret.apple-signing-secrets.arn}:APP_STORE_CONNECT_API_KEY_ISSUER::" - }, - { - name = "APP_STORE_CONNECT_API_KEY_CONTENT" - valueFrom = "${aws_secretsmanager_secret.apple-signing-secrets.arn}:APP_STORE_CONNECT_API_KEY_CONTENT::" - } - ]) - } - ]) - lifecycle { - create_before_destroy = true - } -} - -resource "aws_kms_key" "ecr" { - deletion_window_in_days = 10 - enable_key_rotation = true -} - -resource "aws_ecr_repository" "main" { - name = "${var.prefix}-lambda" - image_tag_mutability = "IMMUTABLE" - - image_scanning_configuration { - scan_on_push = true - } - - 
encryption_configuration { - encryption_type = "KMS" - kms_key = aws_kms_key.ecr.arn - } -} - -resource "random_uuid" "main" { - keepers = { - lambda = data.archive_file.main.output_sha - } -} - -resource "local_file" "backend-config" { - content = templatefile("${path.module}/lambda/backend-template.conf", - { - remote_state = var.remote_state - }) - filename = "${path.module}/lambda/deploy_terraform/backend.conf" -} - -data "archive_file" "main" { - type = "zip" - output_path = "${path.module}/.lambda.zip" - source_dir = "${path.module}/lambda" -} - -data "git_repository" "main" { - path = "${path.module}/../../../" -} - -resource "docker_registry_image" "main" { - name = "${aws_ecr_repository.main.repository_url}:${data.git_repository.main.branch}-${random_uuid.main.result}" - keep_remotely = true - - build { - context = "${path.module}/lambda/" - pull_parent = true - platform = "linux/amd64" - } - - depends_on = [ - local_file.backend-config - ] -} - -resource "aws_cloudwatch_event_rule" "main" { - name_prefix = var.prefix - schedule_expression = "rate(1 hour)" - is_enabled = true -} - -resource "aws_cloudwatch_event_target" "main" { - rule = aws_cloudwatch_event_rule.main.name - arn = var.ecs_cluster.arn - role_arn = aws_iam_role.events.arn - ecs_target { - task_count = 1 - task_definition_arn = aws_ecs_task_definition.main.arn - launch_type = "FARGATE" - network_configuration { - subnets = var.vpc.private_subnets - security_groups = [aws_security_group.lambda.id] - assign_public_ip = false - } - } -} diff --git a/infrastructure/sandbox/PreProvisioner/outputs.tf b/infrastructure/sandbox/PreProvisioner/outputs.tf deleted file mode 100644 index 45508667d..000000000 --- a/infrastructure/sandbox/PreProvisioner/outputs.tf +++ /dev/null @@ -1,3 +0,0 @@ -output "lambda_security_group" { - value = aws_security_group.lambda -} diff --git a/infrastructure/sandbox/PreProvisioner/variables.tf b/infrastructure/sandbox/PreProvisioner/variables.tf deleted file mode 100644 
index 64e858588..000000000 --- a/infrastructure/sandbox/PreProvisioner/variables.tf +++ /dev/null @@ -1,17 +0,0 @@ -variable "prefix" {} -variable "dynamodb_table" {} -variable "vpc" {} -variable "remote_state" {} -variable "mysql_secret" {} -variable "eks_cluster" {} -variable "redis_cluster" {} -variable "base_domain" {} -variable "ecs_cluster" {} -variable "kms_key" {} -variable "installer_bucket" {} -variable "oidc_provider_arn" {} -variable "oidc_provider" {} -variable "ecr" {} -variable "license_key" {} -variable "apm_url" {} -variable "apm_token" {} diff --git a/infrastructure/sandbox/SharedInfrastructure/alb.tf b/infrastructure/sandbox/SharedInfrastructure/alb.tf deleted file mode 100644 index 54fa5949d..000000000 --- a/infrastructure/sandbox/SharedInfrastructure/alb.tf +++ /dev/null @@ -1,248 +0,0 @@ -resource "aws_lb" "main" { - name = var.prefix - internal = false - load_balancer_type = "application" - security_groups = [aws_security_group.lb.id] - subnets = var.vpc.public_subnets - enable_deletion_protection = true - - access_logs { - bucket = module.s3_bucket_for_logs.s3_bucket_id - prefix = var.prefix - enabled = true - } -} - -output "lb" { - value = aws_lb.main -} - -resource "aws_security_group" "lb" { - name = "${var.prefix}-lb" - vpc_id = var.vpc.vpc_id - description = "${var.prefix}-lb" - - ingress { - from_port = 80 - to_port = 80 - protocol = "tcp" - cidr_blocks = ["0.0.0.0/0"] - } - - ingress { - from_port = 443 - to_port = 443 - protocol = "tcp" - cidr_blocks = ["0.0.0.0/0"] - } - - egress { - from_port = 0 - to_port = 0 - protocol = "-1" - cidr_blocks = ["0.0.0.0/0"] - ipv6_cidr_blocks = ["::/0"] - } -} - -resource "aws_lb_listener" "main" { - load_balancer_arn = aws_lb.main.arn - port = "443" - protocol = "HTTPS" - ssl_policy = "ELBSecurityPolicy-TLS-1-2-Ext-2018-06" - certificate_arn = aws_acm_certificate.main.arn - - default_action { - type = "forward" - target_group_arn = aws_lb_target_group.eks.arn - } -} - -resource "aws_lb_listener" 
"redirect" { - load_balancer_arn = aws_lb.main.arn - port = "80" - protocol = "HTTP" - - default_action { - type = "redirect" - redirect { - port = "443" - protocol = "HTTPS" - status_code = "HTTP_301" - } - } -} - -output "alb_listener" { - value = aws_lb_listener.main -} - -resource "aws_acm_certificate" "main" { - domain_name = "*.${var.base_domain}" - subject_alternative_names = [var.base_domain] - validation_method = "DNS" - - lifecycle { - create_before_destroy = true - } -} - -resource "aws_acm_certificate_validation" "main" { - certificate_arn = aws_acm_certificate.main.arn - validation_record_fqdns = [for r in cloudflare_record.cert : r.hostname] -} - -data "cloudflare_zone" "main" { - name = "fleetdm.com" -} - -resource "cloudflare_record" "cert" { - for_each = { for o in aws_acm_certificate.main.domain_validation_options.* : o.resource_record_name => o... } - zone_id = data.cloudflare_zone.main.id - name = replace(each.value[0].resource_record_name, ".fleetdm.com.", "") - type = each.value[0].resource_record_type - value = replace(each.value[0].resource_record_value, "/.$/", "") - ttl = 1 - proxied = false -} - -resource "cloudflare_record" "main" { - zone_id = data.cloudflare_zone.main.id - name = local.env_specific[data.aws_caller_identity.current.account_id]["dns_name"] - type = "CNAME" - value = aws_lb.main.dns_name - proxied = false -} - -resource "cloudflare_record" "wildcard" { - zone_id = data.cloudflare_zone.main.id - name = "*.${local.env_specific[data.aws_caller_identity.current.account_id]["dns_name"]}" - type = "CNAME" - value = aws_lb.main.dns_name - proxied = false -} - -module "s3_bucket_for_logs" { - source = "terraform-aws-modules/s3-bucket/aws" - version = "3.6.0" - - bucket = "${var.prefix}-alb-logs" - acl = "log-delivery-write" - - # Allow deletion of non-empty bucket - force_destroy = true - - attach_elb_log_delivery_policy = true # Required for ALB logs - attach_lb_log_delivery_policy = true # Required for ALB/NLB logs - 
attach_deny_insecure_transport_policy = true - attach_require_latest_tls_policy = true - block_public_acls = true - block_public_policy = true - ignore_public_acls = true - restrict_public_buckets = true - server_side_encryption_configuration = { - rule = { - apply_server_side_encryption_by_default = { - kms_master_key_id = var.kms_key.arn - sse_algorithm = "aws:kms" - } - } - } - lifecycle_rule = [ - { - id = "log" - enabled = true - - transition = [ - { - days = 30 - storage_class = "ONEZONE_IA" - } - ] - expiration = { - days = 90 - expired_object_delete_marker = true - } - noncurrent_version_expiration = { - newer_noncurrent_versions = 5 - days = 30 - } - } - ] -} - -output "access_logs_s3_bucket" { - value = module.s3_bucket_for_logs -} - -resource "aws_athena_database" "logs" { - name = replace("${var.prefix}-alb-logs", "-", "_") - bucket = module.athena-s3-bucket.s3_bucket_id -} - -module "athena-s3-bucket" { - source = "terraform-aws-modules/s3-bucket/aws" - version = "3.6.0" - - bucket = "${var.prefix}-alb-logs-athena" - acl = "log-delivery-write" - - # Allow deletion of non-empty bucket - force_destroy = true - - attach_elb_log_delivery_policy = true # Required for ALB logs - attach_lb_log_delivery_policy = true # Required for ALB/NLB logs - attach_deny_insecure_transport_policy = true - attach_require_latest_tls_policy = true - block_public_acls = true - block_public_policy = true - ignore_public_acls = true - restrict_public_buckets = true - server_side_encryption_configuration = { - rule = { - apply_server_side_encryption_by_default = { - kms_master_key_id = var.kms_key.arn - sse_algorithm = "aws:kms" - } - } - } - lifecycle_rule = [ - { - id = "log" - enabled = true - - transition = [ - { - days = 30 - storage_class = "ONEZONE_IA" - } - ] - expiration = { - days = 90 - expired_object_delete_marker = true - } - noncurrent_version_expiration = { - newer_noncurrent_versions = 5 - days = 30 - } - } - ] -} - -resource "aws_athena_workgroup" "logs" { - name 
= "${var.prefix}-logs" - - configuration { - enforce_workgroup_configuration = true - publish_cloudwatch_metrics_enabled = true - - result_configuration { - output_location = "s3://${module.athena-s3-bucket.s3_bucket_id}/output/" - - encryption_configuration { - encryption_option = "SSE_KMS" - kms_key_arn = var.kms_key.arn - } - } - } -} diff --git a/infrastructure/sandbox/SharedInfrastructure/eks.tf b/infrastructure/sandbox/SharedInfrastructure/eks.tf deleted file mode 100644 index 11a916145..000000000 --- a/infrastructure/sandbox/SharedInfrastructure/eks.tf +++ /dev/null @@ -1,429 +0,0 @@ -provider "kubernetes" { - experiments { - manifest_resource = true - } - host = data.aws_eks_cluster.cluster.endpoint - cluster_ca_certificate = base64decode(data.aws_eks_cluster.cluster.certificate_authority.0.data) - token = data.aws_eks_cluster_auth.cluster.token -} - -provider "helm" { - kubernetes { - host = data.aws_eks_cluster.cluster.endpoint - token = data.aws_eks_cluster_auth.cluster.token - cluster_ca_certificate = base64decode(data.aws_eks_cluster.cluster.certificate_authority.0.data) - } -} - -provider "kubectl" { - host = data.aws_eks_cluster.cluster.endpoint - cluster_ca_certificate = base64decode(data.aws_eks_cluster.cluster.certificate_authority.0.data) - token = data.aws_eks_cluster_auth.cluster.token - load_config_file = false - apply_retry_count = 5 -} - -locals { - cluster_version = "1.23" - account_role_mapping = { - # Add nonprod or other deployed accounts here - 411315989055 = "AWSReservedSSO_SandboxProdAdmins_9ccaa4f25c2eada0" - 968703308407 = "AWSReservedSSO_SandboxDevAdmins_6cfa1b6052653825" - } - # Role generated by SSO but needs admin to EKS - # This hack is needed because "aws_iam_role" returns an unusable ARN for EKS on SSO roles.
- sandbox_sso_role = { - id = local.account_role_mapping[data.aws_caller_identity.current.account_id] - arn = join("", ["arn:aws:iam::", data.aws_caller_identity.current.account_id, ":role/", local.account_role_mapping[data.aws_caller_identity.current.account_id]]) - } - env_specific = { - 411315989055 = { - "dns_name" = "sandbox", - } - 968703308407 = { - "dns_name" = "sandbox-dev", - } - } -} - -output "eks_cluster" { - value = module.aws-eks-accelerator-for-terraform -} - -terraform { - required_providers { - kubectl = { - source = "gavinbunney/kubectl" - version = "1.14.0" - } - cloudflare = { - source = "cloudflare/cloudflare" - version = "4.11.0" - } - } -} - -data "aws_caller_identity" "current" {} - -data "aws_iam_role" "admin" { - name = "admin" -} - -resource "aws_iam_policy" "fluentbit_logs" { - name = "${var.prefix}-fluentbit" - policy = data.aws_iam_policy_document.fluentbit_logs.json -} - -data "aws_iam_policy_document" "fluentbit_logs" { - statement { - actions = [ - "logs:CreateLogStream", - "logs:CreateLogGroup", - "logs:DescribeLogStreams", - "logs:PutLogEvents", - ] - resources = ["*"] - } -} - -module "aws-eks-accelerator-for-terraform" { - source = "github.com/aws-ia/terraform-aws-eks-blueprints.git?ref=v4.32.1" - cluster_name = var.prefix - - # EKS Cluster VPC and Subnets - vpc_id = var.vpc.vpc_id - private_subnet_ids = var.vpc.private_subnets - - # EKS CONTROL PLANE VARIABLES - cluster_version = local.cluster_version - - # EKS MANAGED NODE GROUPS - managed_node_groups = { - mg_4 = { - node_group_name = "managed-ondemand" - instance_types = ["t3.medium"] - subnet_ids = var.vpc.private_subnets - max_size = 20 - min_size = 20 - } - } - - map_roles = [for i in concat(var.eks_allowed_roles, [data.aws_iam_role.admin, local.sandbox_sso_role]) : { - rolearn = i.arn - username = i.id - groups = ["system:masters"] - }] - - fargate_profiles = { - default = { - additional_iam_policies = [aws_iam_policy.ecr.arn, aws_iam_policy.fluentbit_logs.arn] - 
fargate_profile_name = "default" - fargate_profile_namespaces = [ - { - namespace = "default" - } - ] - subnet_ids = flatten([var.vpc.private_subnets]) - } - } -} - -output "oidc_provider_arn" { - value = module.aws-eks-accelerator-for-terraform.eks_oidc_provider_arn -} - -output "oidc_provider" { - value = module.aws-eks-accelerator-for-terraform.oidc_provider -} - -data "aws_eks_cluster" "cluster" { - name = module.aws-eks-accelerator-for-terraform.eks_cluster_id -} - -data "aws_eks_cluster_auth" "cluster" { - name = module.aws-eks-accelerator-for-terraform.eks_cluster_id -} - -module "kubernetes-addons" { - source = "github.com/aws-ia/terraform-aws-eks-blueprints.git//modules/kubernetes-addons?ref=v4.32.1" - - eks_cluster_id = module.aws-eks-accelerator-for-terraform.eks_cluster_id - eks_cluster_endpoint = module.aws-eks-accelerator-for-terraform.eks_cluster_endpoint - eks_cluster_version = local.cluster_version - eks_oidc_provider = module.aws-eks-accelerator-for-terraform.eks_oidc_issuer_url - eks_worker_security_group_id = module.aws-eks-accelerator-for-terraform.worker_node_security_group_id - - # EKS Managed Add-ons - enable_amazon_eks_vpc_cni = true - amazon_eks_vpc_cni_config = { - addon_version = "v1.11.5-eksbuild.1" - } - enable_amazon_eks_coredns = true - amazon_eks_coredns_config = { - addon_version = "v1.8.7-eksbuild.7" - } - enable_amazon_eks_kube_proxy = true - amazon_eks_kube_proxy_config = { - addon_version = "v1.23.17-eksbuild.2" - } - enable_amazon_eks_aws_ebs_csi_driver = true - - #K8s Add-ons - enable_aws_load_balancer_controller = true - enable_metrics_server = false - enable_cluster_autoscaler = true - enable_vpa = true - enable_prometheus = false - enable_ingress_nginx = false - enable_aws_for_fluentbit = false - enable_argocd = false - enable_fargate_fluentbit = true - enable_argo_rollouts = false - enable_kubernetes_dashboard = false - enable_yunikorn = false - - #depends_on = 
[module.aws-eks-accelerator-for-terraform.managed_node_groups] -} - -resource "helm_release" "haproxy_ingress" { - name = "haproxy-ingress-controller" - namespace = "kube-system" - - repository = "https://haproxy-ingress.github.io/charts" - chart = "haproxy-ingress" - - set { - name = "controller.hostNetwork" - value = "true" - } - - set { - name = "controller.kind" - value = "DaemonSet" - } - - set { - name = "controller.service.type" - value = "NodePort" - } - - set { - name = "controller.defaultBackendService" - value = "kube-system/default-redirect" - } -} - -resource "aws_lb_target_group" "eks" { - name = var.prefix - port = 80 - protocol = "HTTP" - vpc_id = var.vpc.vpc_id - health_check { - matcher = "302" - } -} - -resource "kubernetes_manifest" "targetgroupbinding" { - manifest = { - "apiVersion" = "elbv2.k8s.aws/v1beta1" - "kind" = "TargetGroupBinding" - "metadata" = { - "name" = "haproxy" - "namespace" = "kube-system" - } - "spec" = { - "targetGroupARN" = aws_lb_target_group.eks.arn - "serviceRef" = { - "name" = helm_release.haproxy_ingress.name - "port" = 80 - } - "targetType" = "instance" - "networking" = { - "ingress" = [{ - "from" = [{ - "securityGroup" = { - "groupID" = aws_security_group.lb.id - } - }] - "ports" = [{ - "protocol" = "TCP" - }] - }] - } - } - } -} - -resource "kubernetes_service" "redirect" { - metadata { - name = "default-redirect" - namespace = "kube-system" - } - - spec { - selector = { - app = kubernetes_deployment.redirect.metadata.0.labels.app - } - port { - port = 80 - name = "http" - } - } -} - -resource "kubernetes_deployment" "redirect" { - metadata { - name = "default-redirect" - namespace = "kube-system" - labels = { - app = "default-redirect" - } - } - - spec { - replicas = 1 - - selector { - match_labels = { - app = "default-redirect" - } - } - - template { - metadata { - labels = { - app = "default-redirect" - } - } - - spec { - container { - image = "nginx:1.25.2" - name = "nginx" - - port { - name = "http" - 
container_port = 80 - } - - resources { - limits = { - cpu = "0.5" - memory = "512Mi" - } - requests = { - cpu = "250m" - memory = "50Mi" - } - } - - volume_mount { - mount_path = "/etc/nginx" - read_only = true - name = "nginx-conf" - } - } - volume { - name = "nginx-conf" - config_map { - name = "default-redirect-config" - items { - key = "nginx.conf" - path = "nginx.conf" - } - } - } - } - } - } -} - -resource "kubernetes_config_map" "redirect" { - metadata { - name = "default-redirect-config" - namespace = "kube-system" - } - - data = { - "nginx.conf" = <<-EOT - user nginx; - worker_processes 1; - error_log /dev/stderr; - events { - worker_connections 10240; - } - http { - log_format main - 'remote_addr:$remote_addr\t' - 'time_local:$time_local\t' - 'method:$request_method\t' - 'uri:$request_uri\t' - 'host:$host\t' - 'status:$status\t' - 'bytes_sent:$body_bytes_sent\t' - 'referer:$http_referer\t' - 'useragent:$http_user_agent\t' - 'forwardedfor:$http_x_forwarded_for\t' - 'request_time:$request_time'; - access_log /dev/stderr main; - server { - listen 80; - server_name _; - location / { - return 302 https://fleetdm.com/try-fleet/sandbox-expired; - } - } - } - EOT - } -} - -resource "aws_iam_policy" "ecr" { - name = "${var.prefix}-ecr" - policy = data.aws_iam_policy_document.ecr.json -} - -data "aws_iam_policy_document" "ecr" { - statement { - actions = [ - "ecr:BatchCheckLayerAvailability", - "ecr:BatchGetImage", - "ecr:GetDownloadUrlForLayer", - "ecr:GetAuthorizationToken" - ] - resources = ["*"] - } - statement { - actions = [ #tfsec:ignore:aws-iam-no-policy-wildcards - "kms:Encrypt*", - "kms:Decrypt*", - "kms:ReEncrypt*", - "kms:GenerateDataKey*", - "kms:Describe*" - ] - resources = [aws_kms_key.ecr.arn] - } -} - -resource "aws_ecr_repository" "main" { - name = "${var.prefix}-eks" - image_tag_mutability = "IMMUTABLE" - - image_scanning_configuration { - scan_on_push = true - } - - encryption_configuration { - encryption_type = "KMS" - kms_key = 
aws_kms_key.ecr.arn - } -} - -output "ecr" { - value = aws_ecr_repository.main -} - -resource "aws_kms_key" "ecr" { - deletion_window_in_days = 10 - enable_key_rotation = true -} diff --git a/infrastructure/sandbox/SharedInfrastructure/rds.tf b/infrastructure/sandbox/SharedInfrastructure/rds.tf deleted file mode 100644 index 8d97bea0a..000000000 --- a/infrastructure/sandbox/SharedInfrastructure/rds.tf +++ /dev/null @@ -1,99 +0,0 @@ -resource "random_password" "database_password" { - length = 16 - special = false -} - -resource "aws_kms_key" "main" { - description = "${var.prefix}-${random_pet.db_secret_postfix.id}" - deletion_window_in_days = 10 - enable_key_rotation = true -} - -resource "random_pet" "db_secret_postfix" { - length = 1 -} - -resource "aws_secretsmanager_secret" "database_password_secret" { - name = "/fleet/database/password/master-2-${random_pet.db_secret_postfix.id}" - kms_key_id = aws_kms_key.main.id -} - -resource "aws_secretsmanager_secret_version" "database_password_secret_version" { - secret_id = aws_secretsmanager_secret.database_password_secret.id - secret_string = random_password.database_password.result -} - -resource "aws_secretsmanager_secret" "mysql" { - name = "/fleet/database/password/mysql-${random_pet.db_secret_postfix.id}" - kms_key_id = aws_kms_key.main.id -} - -output "mysql_secret" { - value = aws_secretsmanager_secret.mysql -} - -output "mysql_secret_kms" { - value = aws_kms_key.main -} - -resource "aws_secretsmanager_secret_version" "mysql" { - secret_id = aws_secretsmanager_secret.mysql.id - secret_string = jsonencode({ - endpoint = module.main.cluster_endpoint - username = module.main.cluster_master_username - password = module.main.cluster_master_password - }) -} - -module "main" { - source = "terraform-aws-modules/rds-aurora/aws" - version = "7.6.0" - - name = var.prefix - engine = "aurora-mysql" - engine_version = "5.7.mysql_aurora.2.11.3" - engine_mode = "serverless" - - storage_encrypted = true - master_username = 
"fleet" - master_password = random_password.database_password.result - create_random_password = false - enable_http_endpoint = false - performance_insights_enabled = true - - vpc_id = var.vpc.vpc_id - subnets = var.vpc.database_subnets - create_security_group = true - allowed_security_groups = var.allowed_security_groups - allowed_cidr_blocks = ["10.0.0.0/8"] - kms_key_id = aws_kms_key.main.arn - performance_insights_kms_key_id = aws_kms_key.main.arn - - monitoring_interval = 60 - - apply_immediately = true - skip_final_snapshot = true - - db_parameter_group_name = aws_db_parameter_group.main.id - db_cluster_parameter_group_name = aws_rds_cluster_parameter_group.main.id - - scaling_configuration = { - auto_pause = true - min_capacity = 32 - max_capacity = 64 - seconds_until_auto_pause = 300 - timeout_action = "ForceApplyCapacityChange" - } -} - -resource "aws_db_parameter_group" "main" { - name = "${var.prefix}-aurora-db-mysql-parameter-group" - family = "aurora-mysql5.7" - description = "${var.prefix}-aurora-db-mysql-parameter-group" -} - -resource "aws_rds_cluster_parameter_group" "main" { - name = "${var.prefix}-aurora-mysql-cluster-parameter-group" - family = "aurora-mysql5.7" - description = "${var.prefix}-aurora-mysql-cluster-parameter-group" -} diff --git a/infrastructure/sandbox/SharedInfrastructure/redis.tf b/infrastructure/sandbox/SharedInfrastructure/redis.tf deleted file mode 100644 index 617619cf6..000000000 --- a/infrastructure/sandbox/SharedInfrastructure/redis.tf +++ /dev/null @@ -1,55 +0,0 @@ -resource "aws_elasticache_replication_group" "main" { - preferred_cache_cluster_azs = ["us-east-2a", "us-east-2b", "us-east-2c"] - engine = "redis" - parameter_group_name = aws_elasticache_parameter_group.main.id - subnet_group_name = var.vpc.elasticache_subnet_group_name - security_group_ids = [aws_security_group.redis.id] - replication_group_id = var.prefix - num_cache_clusters = 3 - node_type = "cache.m6g.large" - engine_version = "5.0.6" - port = "6379" - 
snapshot_retention_limit = 0 - automatic_failover_enabled = true - at_rest_encryption_enabled = false #tfsec:ignore:aws-elasticache-enable-at-rest-encryption - transit_encryption_enabled = false #tfsec:ignore:aws-elasticache-enable-in-transit-encryption - apply_immediately = true - description = var.prefix - -} - -resource "aws_elasticache_parameter_group" "main" { #tfsec:ignore:aws-vpc-add-description-to-security-group-rule - name = var.prefix - family = "redis5.0" - - parameter { - name = "client-output-buffer-limit-pubsub-hard-limit" - value = "0" - } - parameter { - name = "client-output-buffer-limit-pubsub-soft-limit" - value = "0" - } - - parameter { - name = "databases" - value = "65536" - } -} - -resource "aws_security_group" "redis" { #tfsec:ignore:aws-cloudwatch-log-group-customer-key tfsec:ignore:aws-vpc-add-description-to-security-group - name = "${var.prefix}-redis" - vpc_id = var.vpc.vpc_id - description = "${var.prefix}-redis" - - ingress { - from_port = 6379 - to_port = 6379 - protocol = "TCP" - cidr_blocks = var.vpc.private_subnets_cidr_blocks - } -} - -output "redis_cluster" { - value = aws_elasticache_replication_group.main -} diff --git a/infrastructure/sandbox/SharedInfrastructure/s3.tf b/infrastructure/sandbox/SharedInfrastructure/s3.tf deleted file mode 100644 index 8f8708fc3..000000000 --- a/infrastructure/sandbox/SharedInfrastructure/s3.tf +++ /dev/null @@ -1,25 +0,0 @@ -resource "aws_s3_bucket" "installers" { - bucket = "${var.prefix}-installers" -} - -resource "aws_s3_bucket_public_access_block" "installers" { - bucket = aws_s3_bucket.installers.id - - block_public_acls = true - block_public_policy = true -} - -resource "aws_s3_bucket_server_side_encryption_configuration" "installers" { - bucket = aws_s3_bucket.installers.id - - rule { - apply_server_side_encryption_by_default { - kms_master_key_id = var.kms_key.arn - sse_algorithm = "aws:kms" - } - } -} - -output "installer_bucket" { - value = aws_s3_bucket.installers -} diff --git
a/infrastructure/sandbox/SharedInfrastructure/variables.tf b/infrastructure/sandbox/SharedInfrastructure/variables.tf deleted file mode 100644 index 0f8d71a8d..000000000 --- a/infrastructure/sandbox/SharedInfrastructure/variables.tf +++ /dev/null @@ -1,15 +0,0 @@ -variable "prefix" {} - -variable "allowed_security_groups" { - type = list(string) - default = [] -} - -variable "eks_allowed_roles" { - type = list(any) - default = [] -} - -variable "vpc" {} -variable "base_domain" {} -variable "kms_key" {} diff --git a/infrastructure/sandbox/backend-prod.conf b/infrastructure/sandbox/backend-prod.conf deleted file mode 100644 index 16e358957..000000000 --- a/infrastructure/sandbox/backend-prod.conf +++ /dev/null @@ -1,8 +0,0 @@ -bucket = "fleet-terraform-state20220408141538466600000002" -key = "fleet-cloud-sandbox-prod/sandbox/terraform.tfstate" # This should be set to account_alias/unique_key/terraform.tfstate -workspace_key_prefix = "fleet-cloud-sandbox-prod" # This should be set to the account alias -region = "us-east-2" -encrypt = true -kms_key_id = "9f98a443-ffd7-4dbe-a9c3-37df89b2e42a" -dynamodb_table = "tf-remote-state-lock" -role_arn = "arn:aws:iam::353365949058:role/terraform-fleet-cloud-sandbox-prod" diff --git a/infrastructure/sandbox/main.tf b/infrastructure/sandbox/main.tf deleted file mode 100644 index d6f1e2df1..000000000 --- a/infrastructure/sandbox/main.tf +++ /dev/null @@ -1,303 +0,0 @@ -terraform { - required_providers { - aws = { - source = "hashicorp/aws" - version = "~> 5.10.0" - } - docker = { - source = "kreuzwerker/docker" - version = "~> 2.16.0" - } - git = { - source = "paultyng/git" - version = "~> 0.1.0" - } - random = { - source = "hashicorp/random" - version = "~> 3.5.1" - } - cloudflare = { - source = "cloudflare/cloudflare" - version = "~> 4.11.0" - } - } - backend "s3" {} -} - -provider "aws" { - region = "us-east-2" - default_tags { - tags = { - environment = "fleet-demo-${terraform.workspace}" - terraform = 
"https://github.com/fleetdm/fleet/tree/main/infrastructure/sandbox" - state = "s3://fleet-terraform-state20220408141538466600000002/${local.env_specific[data.aws_caller_identity.current.account_id]["state_name"]}/sandbox/terraform.tfstate" - } - } -} -provider "aws" { - alias = "replica" - region = "us-west-1" - default_tags { - tags = { - environment = "fleet-demo-${terraform.workspace}" - terraform = "https://github.com/fleetdm/fleet/tree/main/infrastructure/sandbox" - state = "s3://fleet-terraform-state20220408141538466600000002/${local.env_specific[data.aws_caller_identity.current.account_id]["state_name"]}/sandbox/terraform.tfstate" - } - } -} - -provider "aws" { - alias = "tmp" - region = "us-east-2" -} - -provider "cloudflare" {} - -provider "random" {} - -data "aws_ecr_authorization_token" "token" {} -provider "docker" { - # Configuration options - registry_auth { - address = "${data.aws_caller_identity.current.account_id}.dkr.ecr.us-east-2.amazonaws.com" - username = data.aws_ecr_authorization_token.token.user_name - password = data.aws_ecr_authorization_token.token.password - } -} - -provider "git" {} - -data "aws_caller_identity" "current" { - provider = aws.tmp -} - -data "git_repository" "tf" { - path = "${path.module}/../../" -} - -locals { - env_specific = { - 411315989055 = { - "state_name" = "fleet-cloud-sandbox-prod" - "prefix" = "sandbox-prod", - "base_domain" = "sandbox.fleetdm.com", - "subnet" = "11", - }, - 968703308407 = { - "state_name" = "fleet-cloud-sandbox-dev" - "prefix" = "sandbox-dev", - "base_domain" = "sandbox-dev.fleetdm.com", - "subnet" = "13", - }, - } - prefix = local.env_specific[data.aws_caller_identity.current.account_id]["prefix"] - base_domain = local.env_specific[data.aws_caller_identity.current.account_id]["base_domain"] -} - -data "aws_iam_policy_document" "kms" { - statement { - actions = ["kms:*"] - principals { - type = "AWS" - identifiers = ["arn:aws:iam::${data.aws_caller_identity.current.account_id}:root"] - } - 
resources = ["*"] - } - statement { - actions = [ - "kms:Encrypt*", - "kms:Decrypt*", - "kms:ReEncrypt*", - "kms:GenerateDataKey*", - "kms:Describe*", - ] - resources = ["*"] - principals { - type = "Service" - # TODO hard coded region - identifiers = ["logs.us-east-2.amazonaws.com"] - } - } -} - -resource "aws_kms_key" "main" { - policy = data.aws_iam_policy_document.kms.json - enable_key_rotation = true -} - -module "vpc" { - source = "terraform-aws-modules/vpc/aws" - version = "5.1.1" - - name = local.prefix - cidr = "10.${local.env_specific[data.aws_caller_identity.current.account_id]["subnet"]}.0.0/16" - - # TODO hard coded AZs - azs = ["us-east-2a", "us-east-2b", "us-east-2c"] - private_subnets = [ - "10.${local.env_specific[data.aws_caller_identity.current.account_id]["subnet"]}.16.0/20", - "10.${local.env_specific[data.aws_caller_identity.current.account_id]["subnet"]}.32.0/20", - "10.${local.env_specific[data.aws_caller_identity.current.account_id]["subnet"]}.48.0/20", - ] - public_subnets = [ - "10.${local.env_specific[data.aws_caller_identity.current.account_id]["subnet"]}.128.0/24", - "10.${local.env_specific[data.aws_caller_identity.current.account_id]["subnet"]}.129.0/24", - "10.${local.env_specific[data.aws_caller_identity.current.account_id]["subnet"]}.130.0/24", - ] - database_subnets = [ - "10.${local.env_specific[data.aws_caller_identity.current.account_id]["subnet"]}.131.0/24", - "10.${local.env_specific[data.aws_caller_identity.current.account_id]["subnet"]}.132.0/24", - "10.${local.env_specific[data.aws_caller_identity.current.account_id]["subnet"]}.133.0/24", - ] - elasticache_subnets = [ - "10.${local.env_specific[data.aws_caller_identity.current.account_id]["subnet"]}.134.0/24", - "10.${local.env_specific[data.aws_caller_identity.current.account_id]["subnet"]}.135.0/24", - "10.${local.env_specific[data.aws_caller_identity.current.account_id]["subnet"]}.136.0/24", - ] - - create_database_subnet_group = false - 
create_database_subnet_route_table = true - - create_elasticache_subnet_group = true - create_elasticache_subnet_route_table = true - - enable_vpn_gateway = false - one_nat_gateway_per_az = false - - single_nat_gateway = true - enable_nat_gateway = true - - manage_default_network_acl = false - manage_default_route_table = false - manage_default_security_group = false -} - -module "shared-infrastructure" { - source = "./SharedInfrastructure" - prefix = local.prefix - vpc = module.vpc - allowed_security_groups = [module.pre-provisioner.lambda_security_group.id] - eks_allowed_roles = [module.pre-provisioner.lambda_role, module.jit-provisioner.deprovisioner_role] - base_domain = local.base_domain - kms_key = aws_kms_key.main -} - -module "pre-provisioner" { - source = "./PreProvisioner" - prefix = local.prefix - vpc = module.vpc - kms_key = aws_kms_key.main - dynamodb_table = aws_dynamodb_table.lifecycle-table - remote_state = module.remote_state - mysql_secret = module.shared-infrastructure.mysql_secret - eks_cluster = module.shared-infrastructure.eks_cluster - redis_cluster = module.shared-infrastructure.redis_cluster - ecs_cluster = aws_ecs_cluster.main - base_domain = local.base_domain - installer_bucket = module.shared-infrastructure.installer_bucket - oidc_provider_arn = module.shared-infrastructure.oidc_provider_arn - oidc_provider = module.shared-infrastructure.oidc_provider - ecr = module.shared-infrastructure.ecr - license_key = var.license_key - apm_url = var.apm_url - apm_token = var.apm_token -} - -module "jit-provisioner" { - source = "./JITProvisioner" - prefix = local.prefix - vpc = module.vpc - kms_key = aws_kms_key.main - dynamodb_table = aws_dynamodb_table.lifecycle-table - remote_state = module.remote_state - mysql_secret = module.shared-infrastructure.mysql_secret - mysql_secret_kms = module.shared-infrastructure.mysql_secret_kms - eks_cluster = module.shared-infrastructure.eks_cluster - redis_cluster = module.shared-infrastructure.redis_cluster - 
alb_listener = module.shared-infrastructure.alb_listener - ecs_cluster = aws_ecs_cluster.main - base_domain = local.base_domain -} - -module "monitoring" { - source = "./Monitoring" - prefix = local.prefix - slack_webhook = var.slack_webhook - kms_key = aws_kms_key.main - lb = module.shared-infrastructure.lb - jitprovisioner = module.jit-provisioner.jitprovisioner - deprovisioner = module.jit-provisioner.deprovisioner - dynamodb_table = aws_dynamodb_table.lifecycle-table -} - -module "data" { - source = "./Data" - prefix = "${local.prefix}-data" - vpc = module.vpc - access_logs_s3_bucket = module.shared-infrastructure.access_logs_s3_bucket - kms_key = aws_kms_key.main -} - -resource "aws_dynamodb_table" "lifecycle-table" { - name = "${local.prefix}-lifecycle" - billing_mode = "PAY_PER_REQUEST" - hash_key = "ID" - - server_side_encryption { - enabled = true - kms_key_arn = aws_kms_key.main.arn - } - point_in_time_recovery { - enabled = true - } - - attribute { - name = "ID" - type = "S" - } - - attribute { - name = "State" - type = "S" - } - - attribute { - name = "redis_db" - type = "N" - } - - global_secondary_index { - name = "RedisDatabases" - hash_key = "redis_db" - projection_type = "KEYS_ONLY" - } - global_secondary_index { - name = "FleetState" - hash_key = "State" - projection_type = "ALL" - } -} - -module "remote_state" { - source = "nozaq/remote-state-s3-backend/aws" - tags = {} - - providers = { - aws = aws - aws.replica = aws.replica - } -} - -resource "aws_ecs_cluster" "main" { - name = local.prefix - - setting { - name = "containerInsights" - value = "enabled" - } -} - -variable "slack_webhook" {} -variable "license_key" {} -variable "apm_url" {} -variable "apm_token" {} diff --git a/infrastructure/sandbox/readme.md b/infrastructure/sandbox/readme.md deleted file mode 100644 index 3507828e6..000000000 --- a/infrastructure/sandbox/readme.md +++ /dev/null @@ -1,125 +0,0 @@ -## Terraform for the Fleet Demo Environment -This folder holds the 
infrastructure code for Fleet's demo environment.
-
-This readme itself is intended for infrastructure developers. If you aren't an infrastructure developer, please see https://sandbox.fleetdm.com/openapi.json for documentation.
-
-### Instance state machine
-```
-provisioned -> unclaimed -> claimed -> [destroyed]
-```
-provisioned means an instance was "terraform apply'ed" but no installers were generated.
-unclaimed means it's ready for a customer. claimed means it's already in use by a customer. [destroyed] isn't a state you'll see in DynamoDB, but it means that everything has been torn down.
-
-### Bugs
-1. module.shared-infrastructure.kubernetes_manifest.targetgroupbinding is sometimes buggy; if it causes issues, just comment it out
-1. On a fresh apply, module.shared-infrastructure.aws_acm_certificate.main will have to be targeted first, then a normal apply can follow
-1. If errors happen, see if applying again will fix it
-1. There is a secret for Apple signing whose values are not provided by this code. If you destroy/apply this secret, it will have to be filled in manually.
-
-### Environment Access
-#### AWS SSO Console
-1. You will need to be in the group "AWS Sandbox Prod Admins" in the Fleet Google Workspace
-1. From Google Apps, select "AWS SSO"
-1. Under "AWS Account" select "Fleet Cloud Sandbox Prod"
-1. Choose "Management console" under "SandboxProdAdmins"
-
-#### AWS CLI Access
-1. Add the following to your `~/.aws/config`:
-   ```
-   [profile sandbox_prod]
-   region = us-east-2
-   sso_start_url = https://d-9a671703a6.awsapps.com/start
-   sso_region = us-east-2
-   sso_account_id = 411315989055
-   sso_role_name = SandboxProdAdmins
-   ```
-1. Log in to SSO on the CLI via `aws sso login --profile=sandbox_prod`
-1. To automatically use this profile, `export AWS_PROFILE=sandbox_prod`
-1. 
For more help with AWS SSO Configuration see https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-sso.html - -#### VPN Access -You will need to be in the proper group in the Fleet Google Workspace to access this environment. Access to this environment will "just work" once added. - -#### Database Access -If you need to access the MySQL database backing Fleet Cloud Sandbox, do the following: - -1. Obtain database hostname - ```bash - aws rds describe-db-clusters --filter Name=db-cluster-id,Values=sandbox-prod --query "DBClusters[0].Endpoint" --output=text - ``` -1. Obtain database master username - ```bash - aws rds describe-db-clusters --filter Name=db-cluster-id,Values=sandbox-prod --query "DBClusters[0].MasterUsername" --output=text - ``` -1. Obtain database master password secret name (terraform adds a secret pet name, so we can obtain it from state data) - ```bash - terraform show -json | jq -r '.values.root_module.child_modules[].resources | flatten | .[] | select(.address == "module.shared-infrastructure.aws_secretsmanager_secret.database_password_secret").values.name' - ``` -1. Obtain database master password - ```bash - aws secretsmanager get-secret-value --secret-id "$(terraform show -json | jq -r '.values.root_module.child_modules[].resources | flatten | .[] | select(.address == "module.shared-infrastructure.aws_secretsmanager_secret.database_password_secret").values.name')" --query "SecretString" --output text - ``` -1. TL;DR -- Put it all together to get into MySQL. Just copy-paste the part below if you just want the credentials without understanding where they come from. 
-   ```bash
-   DBPASSWORD="$(aws secretsmanager get-secret-value --secret-id "$(terraform show -json | jq -r '.values.root_module.child_modules[].resources | flatten | .[] | select(.address == "module.shared-infrastructure.aws_secretsmanager_secret.database_password_secret").values.name')" --query "SecretString" --output text)"
-   read DBHOST DBUSER <<<"$(aws rds describe-db-clusters --filter Name=db-cluster-id,Values=sandbox-prod --query "DBClusters[0].[Endpoint,MasterUsername]" --output=text)"
-   mysql -h"${DBHOST}" -u"${DBUSER}" -p"${DBPASSWORD}"
-   ```
-
-### Maintenance commands
-#### Refresh Fleet instances
-```bash
-for i in $(aws dynamodb scan --table-name sandbox-prod-lifecycle | jq -r '.Items[] | select(.State.S == "unclaimed") | .ID.S'); do helm uninstall $i; aws dynamodb delete-item --table-name sandbox-prod-lifecycle --key "{\"ID\": {\"S\": \"${i}\"}}"; done
-```
-
-#### Cleanup instances that are running but not tracked
-```bash
-for i in $((aws dynamodb scan --table-name sandbox-prod-lifecycle | jq -r '.Items[] | .ID.S'; aws dynamodb scan --table-name sandbox-prod-lifecycle | jq -r '.Items[] | .ID.S'; helm list | tail -n +2 | cut -f 1) | sort | uniq -u); do helm uninstall $i; done
-```
-
-#### Cleanup instances that failed to provision
-```bash
-for i in $(aws dynamodb scan --table-name sandbox-prod-lifecycle | jq -r '.Items[] | select(.State.S == "provisioned") | .ID.S'); do helm uninstall $i; aws dynamodb delete-item --table-name sandbox-prod-lifecycle --key "{\"ID\": {\"S\": \"${i}\"}}"; done
-```
-
-#### Cleanup untracked instances fully
-This needs to be run in the deprovisioner terraform directory!
-```bash
-for i in $((aws dynamodb scan --table-name sandbox-prod-lifecycle | jq -r '.Items[] | .ID.S'; aws dynamodb scan --table-name sandbox-prod-lifecycle | jq -r '.Items[] | .ID.S'; terraform workspace list | sed 's/ //g' | grep -v '.*default' | sed '/^$/d') | sort | uniq -u); do (terraform workspace select $i && terraform apply -destroy -auto-approve && terraform workspace select default && terraform workspace delete $i); [ $? = 0 ] || break; done
-```
-
-#### Useful scripts
-
-1. [tools/upgrade_ecr_ecs.sh](tools/upgrade_ecr_ecs.sh) - Updates the ECR repo with the `FLEET_VERSION` specified and re-runs terraform to ensure the ECS PreProvisioner task uses it in the helm charts.
-1. [tools/upgrade_unclaimed.sh](tools/upgrade_unclaimed.sh) - With the changes applied above, this script will replace unclaimed instances with ones upgraded to the new `FLEET_VERSION`.
-
-
-### Runbooks
-#### 5xx errors
-If you are seeing 5xx errors, find out which instance it's from via the saved query here: https://us-east-2.console.aws.amazon.com/athena/home?region=us-east-2#/query-editor
-Make sure you set the workgroup to sandbox-prod-logs, otherwise you won't be able to see the saved query.
-
-You can also see errors via the target groups here: https://us-east-2.console.aws.amazon.com/ec2/v2/home?region=us-east-2#TargetGroups:
-
-#### Fleet Logs
-Fleet logs can be accessed via kubectl. Set up kubectl by following these instructions: https://docs.aws.amazon.com/eks/latest/userguide/create-kubeconfig.html#create-kubeconfig-automatically
-Examples:
-```bash
-# Obtain kubeconfig
-aws eks update-kubeconfig --region us-east-2 --name sandbox-prod
-# List pods (we currently use the default namespace)
-kubectl get pods # Search in there for the one you want. There will be 2 instances + a migrations one
-# Obtain logs from all pods for the release. You can also use `--previous` to obtain logs from a previous pod crash if desired.
-kubectl logs -l release=
-```
-We do not use eksctl since we use terraform-managed resources.
-
-#### Database debugging
-Database debugging is accessed through the RDS console: https://us-east-2.console.aws.amazon.com/rds/home?region=us-east-2#database:id=sandbox-prod;is-cluster=true
-Currently only database metrics are available, because Performance Insights is not available for serverless RDS.
-
-If you need to access a specific database for any reason (such as to obtain an email address to reach out in case of an issue), the database name is the same as the instance ID. Using the database access method above, you could use the following example to obtain said email address:
-
-```bash
-mysql -h"${DBHOST}" -u"${DBUSER}" -p"${DBPASSWORD}" -D"" <<<"SELECT email FROM users;"
-```
diff --git a/infrastructure/sandbox/tools/cleanup_failed.sh b/infrastructure/sandbox/tools/cleanup_failed.sh
deleted file mode 100755
index 9613ca32e..000000000
--- a/infrastructure/sandbox/tools/cleanup_failed.sh
+++ /dev/null
@@ -1,54 +0,0 @@
-#!/bin/bash
-
-set -e
-set -x
-
-function is_in() {
-  ITEM="${1}"
-  LIST="${2}"
-  for VALUE in ${LIST}; do
-    if [ "${ITEM}" = "${VALUE}" ]; then
-      return 0
-    fi
-  done
-  return 1
-}
-
-pushd "$(dirname ${0})/../JITProvisioner/deprovisioner/deploy_terraform"
-
-terraform init -backend-config=backend.conf
-
-export TF_VAR_eks_cluster="sandbox-prod"
-export TF_VAR_mysql_secret="arn:aws:secretsmanager:us-east-2:411315989055:secret:/fleet/database/password/mysql-boxer-QGmEeA"
-
-terraform workspace select default
-
-FAILED_EXECUTIONS="$(aws stepfunctions list-executions --state-machine-arn arn:aws:states:us-east-2:411315989055:stateMachine:sandbox-prod | jq -r '.executions[] | select(.status=="FAILED") | .name' | awk -F- '{ print $1 }')"
-
-EXISTING_WORKSPACES="$(terraform workspace list | grep -v default | awk '{ print $1 }')"
-
-TO_DELETE="$( (echo "${FAILED_EXECUTIONS:?}"; echo "${EXISTING_WORKSPACES:?}") | sort | uniq -d)"
-
-set +x
-echo "You must 
be connected to the VPN to continue."
-echo "To Delete: $(wc -l <<<"${TO_DELETE:?}")"
-echo "Failed Executions: $(wc -l <<<"${FAILED_EXECUTIONS:?}")"
-echo "Existing Workspaces: $(wc -l <<<"${EXISTING_WORKSPACES}")"
-echo "Press ENTER to continue, CTRL+C to abort"
-read
-set -x
-
-for INSTANCE in ${TO_DELETE:?}; do
-  if ! is_in "${INSTANCE}" "${EXISTING_WORKSPACES}"; then
-    echo "${INSTANCE} is not in the existing workspaces, continuing."
-    continue
-  fi
-  terraform workspace select ${INSTANCE:?}
-  echo "Destroying ${INSTANCE:?}"
-  terraform apply -destroy -auto-approve
-  terraform workspace select default
-  echo "Deleting Workspace ${INSTANCE:?}"
-  terraform workspace delete ${INSTANCE:?}
-done
-
-popd
diff --git a/infrastructure/sandbox/tools/upgrade_ecr_ecs.sh b/infrastructure/sandbox/tools/upgrade_ecr_ecs.sh
deleted file mode 100755
index dab23dbc5..000000000
--- a/infrastructure/sandbox/tools/upgrade_ecr_ecs.sh
+++ /dev/null
@@ -1,88 +0,0 @@
-#!/bin/bash
-
-set -e
-
-function check_for_variable() {
-  VARNAME="${1:?}"
-  if [ -z "${!VARNAME}" ]; then
-    echo -n "Please enter the value for ${VARNAME:?}
-=> "
-    read ${VARNAME}
-    export ${VARNAME}
-  fi
-}
-
-# Note that this cannot currently run on Darwin ARM64, but maybe
-# someday.
-
-case "$(uname)" in
-  Darwin)
-    SED=gsed
-    ;;
-  Linux)
-    SED=sed
-    ;;
-  *)
-    echo "Unknown operating system; unable to continue"
-    exit 1
-    ;;
-esac
-
-# TF_VAR_slack_webhook is redundant, but let's provide a common
-# interface. Since we tag Sandcastle builds separately from the
-# released version, we need the separate ECR_IMAGE_VERSION
-# variable. 
-
-EXPECTED_VARIABLES=(
-  TF_VAR_slack_webhook
-  TF_VAR_apm_token
-  TF_VAR_apm_url
-  TF_VAR_license_key
-  CLOUDFLARE_API_TOKEN
-  FLEET_VERSION
-  ECR_IMAGE_VERSION
-)
-
-for VARIABLE in ${EXPECTED_VARIABLES[@]}; do
-  check_for_variable "${VARIABLE:?}"
-done
-
-FLEET_ECR_REPO="411315989055.dkr.ecr.us-east-2.amazonaws.com"
-FLEET_ECR_IMAGE="${FLEET_ECR_REPO:?}/sandbox-prod-eks:${ECR_IMAGE_VERSION:?}"
-FLEET_DOCKERHUB_IMAGE="fleetdm/fleet:${FLEET_VERSION:?}"
-
-pushd "$(dirname ${0})/.."
-
-
-# Docker Prereqs
-
-aws ecr get-login-password | docker login --username AWS --password-stdin "${FLEET_ECR_REPO:?}"
-
-docker pull "${FLEET_DOCKERHUB_IMAGE:?}"
-docker tag "${FLEET_DOCKERHUB_IMAGE:?}" "${FLEET_ECR_IMAGE:?}"
-docker push "${FLEET_ECR_IMAGE:?}"
-
-# Update the terraform to deploy FLEET_VERSION. Requires gsed on Darwin!
-# This assumes the ECR_IMAGE_VERSION matches "fleet-${ECR_IMAGE_VERSION}".
-# If this is not correct for any reason, this will fail. Manually correct
-# and apply.
-
-${SED:?} -i '/name = "imageTag"/!b;n;c\ value = "'${ECR_IMAGE_VERSION:?}'"' PreProvisioner/lambda/deploy_terraform/main.tf
-${SED:?} -i 's/^\( fleet_tag = \).*/\1"'${ECR_IMAGE_VERSION:?}'"/g' JITProvisioner/jitprovisioner.tf
-
-# Before running terraform, clean up the deprovisioner just in case
-rm -rf ./JITProvisioner/deprovisioner/deploy_terraform/.terraform
-
-terraform init --backend-config=backend-prod.conf
-
-terraform apply
-
-cat <<-EOTEXT
-  Script complete. Please note this script updated PreProvisioner/lambda/deploy_terraform/main.tf
-  in order to start using the new version of Fleet.
-
-  Please ensure your changes are committed to the repo! 
-EOTEXT - -popd - - diff --git a/infrastructure/sandbox/tools/upgrade_unclaimed.sh b/infrastructure/sandbox/tools/upgrade_unclaimed.sh deleted file mode 100755 index 0cfebbc6e..000000000 --- a/infrastructure/sandbox/tools/upgrade_unclaimed.sh +++ /dev/null @@ -1,82 +0,0 @@ -#!/bin/bash - -set -e - -function get_unclaimed_instances() { - aws dynamodb scan --table-name sandbox-prod-lifecycle | jq -r '.Items[] | select(.State.S == "unclaimed") | .ID.S' | sort -} - -function purge_instances() { - INSTANCES="${1}" - for INSTANCE in ${INSTANCES}; do - # set -e should force this to abort on any error - terraform workspace select "${INSTANCE:?}" - terraform apply -destroy -auto-approve - terraform workspace select default - terraform workspace delete "${INSTANCE:?}" - done -} - -function provision_new_instances() { - echo "Running ${PREPROVISIONER_TASK_DEFINITION_ARN:?}" - TASK_ARN="$(aws ecs run-task --region us-east-2 --cluster sandbox-prod --task-definition "${PREPROVISIONER_TASK_DEFINITION_ARN:?}" --launch-type FARGATE --network-configuration 'awsvpcConfiguration={subnets="subnet-055269a06c5204d20",securityGroups="sg-0f7fb24be3617d79c"}' | jq -r '.tasks[0].taskArn')" - while : ; do - # Wait at least 60 seconds before checking on status to allow - # time for it to spin up in FARGATE. - sleep 60 - TASK_STATUS="$(aws ecs describe-tasks --tasks "${TASK_ARN:?}" --cluster sandbox-prod | jq -r '.tasks[0].desiredStatus')" - echo "${TASK_ARN:?} status is currently ${TASK_STATUS:?}" - if [ "${TASK_STATUS:?}" = "STOPPED" ]; then - break - fi - done -} - -cat <<-EOWARN - WARNING: - - You must be logged into the AWS CLI _and_ the VPN for this to work! - - Please note that in order to upgrade the running image or the included standard - query library, the terraform updating the task definition should be run prior - to running this script! You will also need to push the appropriate fleetdm/fleet - image to ECR. - - Press ENTER to continue or CTRL+C to abort. 
-EOWARN -read - -pushd "$(dirname "${0}")/../JITProvisioner/deprovisioner/deploy_terraform" - -export TF_VAR_eks_cluster="sandbox-prod" -export TF_VAR_mysql_secret="arn:aws:secretsmanager:us-east-2:411315989055:secret:/fleet/database/password/mysql-boxer-QGmEeA" - -terraform init -backend-config=backend.conf - -# This should probably be calculated rather than static at some point. -EXPECTED_UNCLAIMED_INSTANCES=10 -PREPROVISIONER_TASK_DEFINITION_ARN="$(aws ecs list-task-definitions | jq -r '.taskDefinitionArns[] | select(contains("sandbox-prod-preprovisioner"))' | tail -n1)" -UNCLAIMED_INSTANCES="$(get_unclaimed_instances)" -UNCLAIMED_ARRAY=( ${UNCLAIMED_INSTANCES} ) - -HALF_ROUND_DOWN="${UNCLAIMED_ARRAY[@]::$((${#UNCLAIMED_ARRAY[@]} / 2))}" - -purge_instances "${HALF_ROUND_DOWN:?}" - -provision_new_instances - -# If something went wrong, don't let us continue with way too few unclaimed instances -NEW_UNCLAIMED="$(get_unclaimed_instances | wc -w)" -if [ ${NEW_UNCLAIMED:?} -lt ${EXPECTED_UNCLAIMED_INSTANCES:?} ]; then - echo "Only ${NEW_UNCLAIMED:?} instances found, ${EXPECTED_UNCLAIMED_INSTANCES:?} expected. Press ENTER to continue or CTRL-C to abort." - read -fi - -# Get a fresh unclaimed as close to runtime as possible to reduce risk of deleting a claimed instance. -REMAINING_UNCLAIMED="$(comm -12 <(get_unclaimed_instances) <(echo "${UNCLAIMED_INSTANCES:?}"))" - -purge_instances "${REMAINING_UNCLAIMED:?}" - -provision_new_instances - -popd diff --git a/orbit/changes/16423-reduce-orbit-logging b/orbit/changes/16423-reduce-orbit-logging new file mode 100644 index 000000000..74c89295b --- /dev/null +++ b/orbit/changes/16423-reduce-orbit-logging @@ -0,0 +1 @@ +* Reduce error logs when orbit cannot connect to Fleet. 
diff --git a/orbit/changes/issue-16794-upgrade-go-to-1.21.7 b/orbit/changes/issue-16794-upgrade-go-to-1.21.7 new file mode 100644 index 000000000..ea86e8885 --- /dev/null +++ b/orbit/changes/issue-16794-upgrade-go-to-1.21.7 @@ -0,0 +1 @@ +- upgrade go version to 1.21.7 diff --git a/orbit/cmd/orbit/orbit.go b/orbit/cmd/orbit/orbit.go index e0c1ab61c..1fd3d79ba 100644 --- a/orbit/cmd/orbit/orbit.go +++ b/orbit/cmd/orbit/orbit.go @@ -782,6 +782,14 @@ func main() { enrollSecret, fleetClientCertificate, orbitHostInfo, + &service.OnGetConfigErrFuncs{ + DebugErrFunc: func(err error) { + log.Debug().Err(err).Msg("get config") + }, + OnNetErrFunc: func(err error) { + log.Info().Err(err).Msg("network error") + }, + }, ) if err != nil { return fmt.Errorf("error new orbit client: %w", err) @@ -1054,6 +1062,14 @@ func main() { enrollSecret, fleetClientCertificate, orbitHostInfo, + &service.OnGetConfigErrFuncs{ + DebugErrFunc: func(err error) { + log.Debug().Err(err).Msg("get config") + }, + OnNetErrFunc: func(err error) { + log.Info().Err(err).Msg("network error") + }, + }, ) if err != nil { return fmt.Errorf("new client for capabilities checker: %w", err) @@ -1563,7 +1579,7 @@ func (f *capabilitiesChecker) execute() error { // Do an initial ping to store the initial capabilities if needed if len(f.client.GetServerCapabilities()) == 0 { if err := f.client.Ping(); err != nil { - logging.LogErrIfEnvNotSet(constant.SilenceEnrollLogErrorEnvVar, err, "pinging the server") + logging.LogErrIfEnvNotSetDebug(constant.SilenceEnrollLogErrorEnvVar, err, "pinging the server") } } @@ -1573,7 +1589,7 @@ func (f *capabilitiesChecker) execute() error { oldCapabilities := f.client.GetServerCapabilities() // ping the server to get the latest capabilities if err := f.client.Ping(); err != nil { - logging.LogErrIfEnvNotSet(constant.SilenceEnrollLogErrorEnvVar, err, "pinging the server") + logging.LogErrIfEnvNotSetDebug(constant.SilenceEnrollLogErrorEnvVar, err, "pinging the server") continue } 
newCapabilities := f.client.GetServerCapabilities() diff --git a/orbit/pkg/logging/logging.go b/orbit/pkg/logging/logging.go index f557827ae..e5e9836a1 100644 --- a/orbit/pkg/logging/logging.go +++ b/orbit/pkg/logging/logging.go @@ -3,13 +3,24 @@ package logging import ( "os" + "github.com/rs/zerolog" "github.com/rs/zerolog/log" ) -// LogErrIfEnvNotSet logs if the environment variable is not set to "1". +// LogErrIfEnvNotSet logs an info error if the environment variable is not set to "1". func LogErrIfEnvNotSet(envVarName string, err error, message string) { + LogErrIfEnvNotSetWithEvent(envVarName, err, message, log.Info()) +} + +// LogErrIfEnvNotSetDebug logs a debug error if the environment variable is not set to "1". +func LogErrIfEnvNotSetDebug(envVarName string, err error, message string) { + LogErrIfEnvNotSetWithEvent(envVarName, err, message, log.Debug()) +} + +// LogErrIfEnvNotSetWithEvent logs if the environment variable is not set to "1". +func LogErrIfEnvNotSetWithEvent(envVarName string, err error, message string, event *zerolog.Event) { actualValue := os.Getenv(envVarName) if actualValue != "1" { - log.Info().Err(err).Msg(message) + event.Err(err).Msg(message) } } diff --git a/orbit/pkg/update/disk_encryption.go b/orbit/pkg/update/disk_encryption.go index ffd85fd95..e922cfdfd 100644 --- a/orbit/pkg/update/disk_encryption.go +++ b/orbit/pkg/update/disk_encryption.go @@ -22,7 +22,7 @@ func ApplyDiskEncryptionRunnerMiddleware(f OrbitConfigFetcher) *DiskEncryptionRu func (d *DiskEncryptionRunner) GetConfig() (*fleet.OrbitConfig, error) { cfg, err := d.fetcher.GetConfig() if err != nil { - log.Info().Err(err).Msg("calling GetConfig from DiskEncryptionFetcher") + log.Debug().Err(err).Msg("calling GetConfig from DiskEncryptionFetcher") return nil, err } diff --git a/orbit/pkg/update/nudge.go b/orbit/pkg/update/nudge.go index a6c168d41..b02e075ba 100644 --- a/orbit/pkg/update/nudge.go +++ b/orbit/pkg/update/nudge.go @@ -69,7 +69,7 @@ func (n 
*NudgeConfigFetcher) GetConfig() (*fleet.OrbitConfig, error) { log.Debug().Msg("running nudge config fetcher middleware") cfg, err := n.Fetcher.GetConfig() if err != nil { - log.Info().Err(err).Msg("calling GetConfig from NudgeConfigFetcher") + log.Debug().Err(err).Msg("calling GetConfig from NudgeConfigFetcher") return nil, err } diff --git a/pkg/mdm/mdmtest/apple.go b/pkg/mdm/mdmtest/apple.go index 21814a589..0f15c7c45 100644 --- a/pkg/mdm/mdmtest/apple.go +++ b/pkg/mdm/mdmtest/apple.go @@ -23,15 +23,15 @@ import ( "github.com/fleetdm/fleet/v4/pkg/fleethttp" apple_mdm "github.com/fleetdm/fleet/v4/server/mdm/apple" "github.com/fleetdm/fleet/v4/server/mdm/nanomdm/mdm" + "github.com/fleetdm/fleet/v4/server/mdm/scep/cryptoutil/x509util" + "github.com/fleetdm/fleet/v4/server/mdm/scep/scep" + scepserver "github.com/fleetdm/fleet/v4/server/mdm/scep/server" "github.com/go-kit/kit/log" kitlog "github.com/go-kit/kit/log" httptransport "github.com/go-kit/kit/transport/http" "github.com/google/uuid" "github.com/groob/plist" micromdm "github.com/micromdm/micromdm/mdm/mdm" - "github.com/micromdm/scep/v2/cryptoutil/x509util" - "github.com/micromdm/scep/v2/scep" - scepserver "github.com/micromdm/scep/v2/server" "go.mozilla.org/pkcs7" ) diff --git a/schema/osquery_fleet_schema.json b/schema/osquery_fleet_schema.json index 5c20ff377..cb2ae7e24 100644 --- a/schema/osquery_fleet_schema.json +++ b/schema/osquery_fleet_schema.json @@ -1,20 +1,20 @@ [ { "name": "account_policy_data", - "description": "Additional macOS user account data from the AccountPolicy section of OpenDirectory.", + "description": "Additional macOS user account data from the AccountPolicy section of [OpenDirectory](https://en.wikipedia.org/wiki/Apple_Open_Directory), the identity provider used by Apple.", "url": "https://fleetdm.com/tables/account_policy_data", "platforms": [ "darwin" ], "evented": false, "cacheable": false, - "notes": "", + "notes": "- The values in this OpenDirectory table are related to account 
creation. In the past, it was fairly common to use OpenDirectory so that users could have a home folder (`~`) on a server, then log in and get that folder wherever they were. (These days, this use case is more uncommon.)\n- To determine who is logged in to the Mac, or for example, to check the record name versus the computer's \"short name\", consider using the data in [the DSCL table](https://fleetdm.com/tables/dscl).",
     "examples": "Query the creation date of user accounts. You could also query the date of the last failed login attempt or password change.\n```\nSELECT strftime('%Y-%m-%d %H:%M:%S',creation_time,'unixepoch') AS creationdate FROM account_policy_data;\n```\n\nSee each user's last password set date and number of failed logins since last successful login to detect any intrusion attempts.\n```\nSELECT u.username, u.uid, strftime('%Y-%m-%dT%H:%M:%S', a.password_last_set_time, 'unixepoch') AS password_last_set_time, a.failed_login_count, strftime('%Y-%m-%dT%H:%M:%S', a.failed_login_timestamp, 'unixepoch') AS failed_login_timestamp FROM account_policy_data AS a CROSS JOIN users AS u USING (uid) ORDER BY password_last_set_time ASC;\n```",
     "columns": [
       {
         "name": "uid",
-        "description": "User ID",
-        "type": "bigint",
+        "description": "[User ID](https://superuser.com/a/1108201)",
+        "type": "BIGINT",
         "notes": "",
         "hidden": false,
         "required": false,
diff --git a/schema/tables/account_policy_data.yml b/schema/tables/account_policy_data.yml
index 5fbd708fa..d1abf269a 100644
--- a/schema/tables/account_policy_data.yml
+++ b/schema/tables/account_policy_data.yml
@@ -1,5 +1,14 @@
 name: account_policy_data
-description: Additional macOS user account data from the AccountPolicy section of [OpenDirectory](https://en.wikipedia.org/wiki/Apple_Open_Directory).
+description: Additional macOS user account data from the AccountPolicy section of [OpenDirectory](https://en.wikipedia.org/wiki/Apple_Open_Directory), the identity provider used by Apple. 
+columns:
+  - name: uid
+    description: "[User ID](https://superuser.com/a/1108201)"
+    type: BIGINT
+    required: false
+notes: >-
+  - The values in this OpenDirectory table are related to account creation. In the past, it was fairly common to use OpenDirectory so that users could have a home folder (`~`) on a server, then log in and get that folder wherever they were. (These days, this use case is more uncommon.)
+
+  - To determine who is logged in to the Mac, or for example, to check the record name versus the computer's "short name", consider using the data in [the DSCL table](https://fleetdm.com/tables/dscl).
 examples: >-
   Query the creation date of user accounts. You could also query the date of the last failed login attempt or password change.
diff --git a/server/config/config.go b/server/config/config.go
index 58f520cd0..fa3220784 100644
--- a/server/config/config.go
+++ b/server/config/config.go
@@ -18,6 +18,7 @@ import (
 	"testing"
 	"time"
 
+	"github.com/fleetdm/fleet/v4/server/mdm/nanomdm/cryptoutil"
 	nanodep_client "github.com/micromdm/nanodep/client"
 	"github.com/micromdm/nanodep/tokenpki"
 	"github.com/spf13/cast"
@@ -594,6 +595,20 @@ func (m *MDMConfig) AppleAPNs() (cert *tls.Certificate, pemCert, pemKey []byte, 
 	return m.appleAPNs, m.appleAPNsPEMCert, m.appleAPNsPEMKey, nil
 }
 
+func (m *MDMConfig) AppleAPNsTopic() (string, error) {
+	apnsCert, _, _, err := m.AppleAPNs()
+	if err != nil {
+		return "", fmt.Errorf("parsing APNs certificates: %w", err)
+	}
+
+	mdmPushCertTopic, err := cryptoutil.TopicFromCert(apnsCert.Leaf)
+	if err != nil {
+		return "", fmt.Errorf("extracting topic from APNs certificate: %w", err)
+	}
+
+	return mdmPushCertTopic, nil
+}
+
 // AppleSCEP returns the parsed and validated TLS certificate for Apple SCEP.
 // It parses and validates it if it hasn't been done yet. 
func (m *MDMConfig) AppleSCEP() (cert *tls.Certificate, pemCert, pemKey []byte, err error) { diff --git a/server/contexts/ctxerr/ctxerr.go b/server/contexts/ctxerr/ctxerr.go index fcdec4feb..f644ce733 100644 --- a/server/contexts/ctxerr/ctxerr.go +++ b/server/contexts/ctxerr/ctxerr.go @@ -23,6 +23,7 @@ import ( "github.com/fleetdm/fleet/v4/server/contexts/host" "github.com/fleetdm/fleet/v4/server/contexts/viewer" "github.com/fleetdm/fleet/v4/server/fleet" + "github.com/getsentry/sentry-go" "go.elastic.co/apm/v2" ) @@ -73,6 +74,16 @@ func (e *FleetError) StackTrace() *runtime.Frames { return runtime.CallersFrames(st) } +// StackFrames implements the reflection-based method that Sentry's Go SDK +// uses to look for a stack trace. It abuses the internals a bit, as it uses +// the name that sentry looks for, but returns the []uintptr slice (which works +// because of how they handle the returned value via reflection). A cleaner +// approach would be if they used an interface detection like APM does. +// https://github.com/getsentry/sentry-go/blob/master/stacktrace.go#L44-L49 +func (e *FleetError) StackFrames() []uintptr { + return e.stack.(stack) // outside of tests, e.stack is always a stack type +} + // LogFields implements fleet.ErrWithLogFields, so attached error data can be // logged along with the error func (e *FleetError) LogFields() []any { @@ -295,8 +306,37 @@ func Handle(ctx context.Context, err error) { // (the one from the initial New/Wrap call). 
cause = ferr } + + // send to elastic APM apm.CaptureError(ctx, cause).Send() + // if Sentry is configured, capture the error there + if sentryClient := sentry.CurrentHub().Client(); sentryClient != nil { + // sentry is configured, add contextual information if available + v, _ := viewer.FromContext(ctx) + h, _ := host.FromContext(ctx) + + if v.User != nil || h != nil { + // we have a viewer (user) or a host in the context, use this to + // enrich the error with more context + ctxHub := sentry.CurrentHub().Clone() + if v.User != nil { + ctxHub.ConfigureScope(func(scope *sentry.Scope) { + scope.SetTag("email", v.User.Email) + scope.SetTag("user_id", fmt.Sprint(v.User.ID)) + }) + } else if h != nil { + ctxHub.ConfigureScope(func(scope *sentry.Scope) { + scope.SetTag("hostname", h.Hostname) + scope.SetTag("host_id", fmt.Sprint(h.ID)) + }) + } + ctxHub.CaptureException(cause) + } else { + sentry.CaptureException(cause) + } + } + if eh := fromContext(ctx); eh != nil { eh.Store(err) } diff --git a/server/contexts/ctxerr/stack_test.go b/server/contexts/ctxerr/stack_test.go index 4b155f66e..ed65f774f 100644 --- a/server/contexts/ctxerr/stack_test.go +++ b/server/contexts/ctxerr/stack_test.go @@ -2,13 +2,21 @@ package ctxerr import ( "context" + "encoding/json" "errors" "fmt" "io" + "net/http" + "net/http/httptest" + "net/url" + "path/filepath" "regexp" + "slices" "strings" "testing" + "time" + "github.com/getsentry/sentry-go" "github.com/stretchr/testify/require" "go.elastic.co/apm/v2" "go.elastic.co/apm/v2/apmtest" @@ -157,17 +165,6 @@ func TestElasticStack(t *testing.T) { } for _, c := range cases { t.Run(c.desc, func(t *testing.T) { - checkStack := func(stack, contains []string) { - stackStr := strings.Join(stack, "\n") - lastIx := -1 - for _, want := range contains { - ix := strings.Index(stackStr, want) - require.True(t, ix > -1, "expected stack %v to contain %q", stackStr, want) - require.True(t, ix > lastIx, "expected %q to be after last check in %v", want, 
stackStr) - lastIx = ix - } - } - err := c.chain() require.Error(t, err) var ferr *FleetError @@ -192,8 +189,8 @@ func TestElasticStack(t *testing.T) { } } - checkStack(causeStack, c.causeStackContains) - checkStack(leafStack, c.leafStackContains) + checkStack(t, causeStack, c.causeStackContains) + checkStack(t, leafStack, c.leafStackContains) // run in a test APM transaction, recording the sent events _, _, apmErrs := apmtest.NewRecordingTracer().WithTransaction(func(ctx context.Context) { @@ -219,7 +216,216 @@ func TestElasticStack(t *testing.T) { for _, st := range apmErr.Exception.Stacktrace { apmStack = append(apmStack, st.Module+"."+st.Function+" ("+st.File+":"+fmt.Sprint(st.Line)+")") } - checkStack(apmStack, c.causeStackContains) + checkStack(t, apmStack, c.causeStackContains) }) } } + +func TestSentryStack(t *testing.T) { + ctx := context.Background() + + var wrap = errors.New("wrap") + errFn := func(fn func() error) error { // func1 + if err := fn(); err != nil { + if err == wrap { + return Wrap(ctx, err, "wrapped") + } + return err + } + return nil + } + + type sentryPayload struct { + Exceptions []*sentry.Exception `json:"exception"` // json field name is singular + } + + cases := []struct { + desc string + chain func() error + causeStackContains []string + leafStackContains []string + }{ + { + desc: "depth 2, wrap in errFn", + chain: func() error { + // gets wrapped in errFn, so top of the stack is func1 + return errFn(func() error { return wrap }) + }, + causeStackContains: []string{ + "/ctxerr.TestSentryStack.func1 ", + }, + }, + { + desc: "depth 2, wrap immediately", + chain: func() error { + // gets wrapped immediately when returned, so top of the stack is funcX.1 + return errFn(func() error { return Wrap(ctx, wrap) }) + }, + causeStackContains: []string{ + "/ctxerr.TestSentryStack.func3.1 ", + "/ctxerr.TestSentryStack.func1 ", // errFn + }, + }, + { + desc: "depth 3, ctxerr.New", + chain: func() error { + // gets wrapped directly in the call to 
New, so top of the stack is X.1.1 + return errFn(func() error { return func() error { return New(ctx, "new") }() }) + }, + causeStackContains: []string{ + "/ctxerr.TestSentryStack.func4.1.1 ", + "/ctxerr.TestSentryStack.func4.1 ", + "/ctxerr.TestSentryStack.func1 ", // errFn + }, + }, + { + desc: "depth 4, ctxerr.New", + chain: func() error { + // stacked capture in New, so top of the stack is X.1.1.1 + return errFn(func() error { + return func() error { + return func() error { + return New(ctx, "new") + }() + }() + }) + }, + causeStackContains: []string{ + "/ctxerr.TestSentryStack.func5.1.1.1 ", + "/ctxerr.TestSentryStack.func5.1.1 ", + "/ctxerr.TestSentryStack.func5.1 ", + "/ctxerr.TestSentryStack.func1 ", // errFn + }, + }, + { + desc: "depth 4, ctxerr.New always wrapped", + chain: func() error { + // stacked capture in New, so top of the stack is X.1.1.1 + return errFn(func() error { + return Wrap(ctx, func() error { + return Wrap(ctx, func() error { + return New(ctx, "new") + }()) + }()) + }) + }, + causeStackContains: []string{ + "/ctxerr.TestSentryStack.func6.1.1.1 ", + "/ctxerr.TestSentryStack.func6.1.1 ", + "/ctxerr.TestSentryStack.func6.1 ", + "/ctxerr.TestSentryStack.func1 ", // errFn + }, + leafStackContains: []string{ + // only a single stack trace is collected when wrapping another + // FleetError. 
+ "/ctxerr.TestSentryStack.func6.1 ", + }, + }, + { + desc: "depth 4, wrapped only at the end", + chain: func() error { + return errFn(func() error { + return Wrap(ctx, func() error { + return func() error { + return io.EOF + }() + }()) + }) + }, + causeStackContains: []string{ + // since it wraps a non-FleetError, the full stack is collected + "/ctxerr.TestSentryStack.func7.1 ", + "/ctxerr.TestSentryStack.func1 ", // errFn + }, + }, + } + for _, c := range cases { + t.Run(c.desc, func(t *testing.T) { + err := c.chain() + require.Error(t, err) + var ferr *FleetError + require.ErrorAs(t, err, &ferr) + + leafStack := ferr.Stack() + cause := FleetCause(err) + causeStack := cause.Stack() + + // if the fleet root error != fleet leaf error, then separate leaf + + // cause stacks must be provided. + if cause != ferr { + require.True(t, len(c.causeStackContains) > 0) + require.True(t, len(c.leafStackContains) > 0) + } else { + // otherwise use the same stack expectations for both + if len(c.causeStackContains) == 0 { + c.causeStackContains = c.leafStackContains + } + if len(c.leafStackContains) == 0 { + c.leafStackContains = c.causeStackContains + } + } + + checkStack(t, causeStack, c.causeStackContains) + checkStack(t, leafStack, c.leafStackContains) + + // start an HTTP server that Sentry will send the event to + var payload sentryPayload + srv := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) { + b, err := io.ReadAll(r.Body) + require.NoError(t, err) + err = json.Unmarshal(b, &payload) + require.NoError(t, err) + w.WriteHeader(200) + })) + defer srv.Close() + + // a "project ID" is required, which is the path portion + parsedURL, err := url.Parse(srv.URL + "/testproject") + require.NoError(t, err) + parsedURL.User = url.User("test") + err = sentry.Init(sentry.ClientOptions{Dsn: parsedURL.String()}) + require.NoError(t, err) + + // best-effort un-configure of Sentry on exit + t.Cleanup(func() { + sentry.CurrentHub().BindClient(nil) + }) 
+ + eventID := sentry.CaptureException(cause) + require.NotNil(t, eventID) + require.True(t, sentry.Flush(2*time.Second), "failed to flush Sentry events in time") + require.True(t, len(payload.Exceptions) >= 1) // the wrapped errors are exploded into separate exceptions in the slice + + // since we capture the FleetCause error, the last entry in the exceptions + // must be a FleetError and contain the stacktrace we're looking for. + rootCapturedErr := payload.Exceptions[len(payload.Exceptions)-1] + require.Equal(t, "*ctxerr.FleetError", rootCapturedErr.Type) + + // format the stack trace the same way we do in ctxerr + var stack []string + for _, st := range rootCapturedErr.Stacktrace.Frames { + filename := st.Filename + if filename == "" { + // get it from abspath + filename = filepath.Base(st.AbsPath) + } + stack = append(stack, st.Module+"."+st.Function+" ("+filename+":"+fmt.Sprint(st.Lineno)+")") + } + + // for some reason, Sentry reverses the stack trace + slices.Reverse(stack) + checkStack(t, stack, c.causeStackContains) + }) + } +} + +func checkStack(t *testing.T, stack, contains []string) { + stackStr := strings.Join(stack, "\n") + lastIx := -1 + for _, want := range contains { + ix := strings.Index(stackStr, want) + require.True(t, ix > -1, "expected stack %v to contain %q", stackStr, want) + require.True(t, ix > lastIx, "expected %q to be after last check in %v", want, stackStr) + lastIx = ix + } +} diff --git a/server/datastore/mysql/mdm.go b/server/datastore/mysql/mdm.go index 6ba1c16d5..73ab3a4d3 100644 --- a/server/datastore/mysql/mdm.go +++ b/server/datastore/mysql/mdm.go @@ -953,3 +953,80 @@ func (ds *Datastore) MDMDeleteEULA(ctx context.Context, token string) error { } return nil } + +func (ds *Datastore) GetHostCertAssociationsToExpire(ctx context.Context, expiryDays, limit int) ([]fleet.SCEPIdentityAssociation, error) { + // TODO(roberto): this is not good because we don't have any indexes on + // h.uuid, due to time constraints, I'm assuming that 
this + // function is called with a relatively low amount of shas + // + // Note that we use GROUP BY because we can't guarantee unique entries + // based on uuid in the hosts table. + stmt, args, err := sqlx.In( + `SELECT + h.uuid as host_uuid, + ncaa.sha256 as sha256, + COALESCE(MAX(hm.fleet_enroll_ref), '') as enroll_reference + FROM + nano_cert_auth_associations ncaa + LEFT JOIN hosts h ON h.uuid = ncaa.id + LEFT JOIN host_mdm hm ON hm.host_id = h.id + WHERE + cert_not_valid_after BETWEEN '0000-00-00' AND DATE_ADD(CURDATE(), INTERVAL ? DAY) + AND renew_command_uuid IS NULL + GROUP BY + host_uuid, ncaa.sha256, cert_not_valid_after + ORDER BY cert_not_valid_after ASC + LIMIT ? + `, expiryDays, limit) + if err != nil { + return nil, ctxerr.Wrap(ctx, err, "building sqlx.In query") + } + + var uuids []fleet.SCEPIdentityAssociation + if err := sqlx.SelectContext(ctx, ds.reader(ctx), &uuids, stmt, args...); err != nil { + if err == sql.ErrNoRows { + return nil, nil + } + return nil, ctxerr.Wrap(ctx, err, "get identity certs close to expiry") + } + return uuids, nil +} + +func (ds *Datastore) SetCommandForPendingSCEPRenewal(ctx context.Context, assocs []fleet.SCEPIdentityAssociation, cmdUUID string) error { + if len(assocs) == 0 { + return nil + } + + var sb strings.Builder + args := make([]any, len(assocs)*3) + for i, assoc := range assocs { + sb.WriteString("(?, ?, ?),") + args[i*3] = assoc.HostUUID + args[i*3+1] = assoc.SHA256 + args[i*3+2] = cmdUUID + } + + stmt := fmt.Sprintf(` + INSERT INTO nano_cert_auth_associations (id, sha256, renew_command_uuid) VALUES %s + ON DUPLICATE KEY UPDATE + renew_command_uuid = VALUES(renew_command_uuid) + `, strings.TrimSuffix(sb.String(), ",")) + + return ds.withTx(ctx, func(tx sqlx.ExtContext) error { + res, err := tx.ExecContext(ctx, stmt, args...) 
+ if err != nil { + return fmt.Errorf("failed to update cert associations: %w", err) + } + + // NOTE: we can't use insertOnDuplicateDidInsert because the + LastInsertId check only works for tables that have an + auto-incrementing primary key. See notes in that function + and insertOnDuplicateDidUpdate to understand the mechanism. + affected, _ := res.RowsAffected() + if affected == 1 { + return errors.New("this function can only be used to update existing associations") + } + + return nil + }) +} diff --git a/server/datastore/mysql/mdm_test.go b/server/datastore/mysql/mdm_test.go index 614a068ce..51f8202ba 100644 --- a/server/datastore/mysql/mdm_test.go +++ b/server/datastore/mysql/mdm_test.go @@ -2,19 +2,25 @@ package mysql import ( "context" + "crypto/x509" + "crypto/x509/pkix" "fmt" "sort" "strconv" "testing" + "time" "github.com/fleetdm/fleet/v4/server/fleet" mdm_types "github.com/fleetdm/fleet/v4/server/mdm" + apple_mdm "github.com/fleetdm/fleet/v4/server/mdm/apple" "github.com/fleetdm/fleet/v4/server/mdm/apple/mobileconfig" "github.com/fleetdm/fleet/v4/server/mdm/nanomdm/mdm" + "github.com/fleetdm/fleet/v4/server/mdm/nanomdm/service/certauth" "github.com/fleetdm/fleet/v4/server/ptr" "github.com/fleetdm/fleet/v4/server/test" "github.com/google/uuid" "github.com/jmoiron/sqlx" + "github.com/micromdm/nanodep/tokenpki" "github.com/stretchr/testify/require" ) @@ -35,6 +41,8 @@ func TestMDMShared(t *testing.T) { {"TestBatchSetProfileLabelAssociations", testBatchSetProfileLabelAssociations}, {"TestBatchSetProfilesTransactionError", testBatchSetMDMProfilesTransactionError}, {"TestMDMEULA", testMDMEULA}, + {"TestGetHostCertAssociationsToExpire", testSCEPRenewalHelpers}, + {"TestSCEPRenewalHelpers", testSCEPRenewalHelpers}, } for _, c := range cases { @@ -3130,3 +3138,133 @@ func testMDMEULA(t *testing.T, ds *Datastore) { err = ds.MDMInsertEULA(ctx, eula) require.NoError(t, err) } + +func testSCEPRenewalHelpers(t *testing.T, ds *Datastore) { + ctx =
context.Background() + testCert, testKey, err := apple_mdm.NewSCEPCACertKey() + require.NoError(t, err) + testCertPEM := tokenpki.PEMCertificate(testCert.Raw) + testKeyPEM := tokenpki.PEMRSAPrivateKey(testKey) + scepDepot, err := ds.NewSCEPDepot(testCertPEM, testKeyPEM) + require.NoError(t, err) + + nanoStorage, err := ds.NewMDMAppleMDMStorage(testCertPEM, testKeyPEM) + require.NoError(t, err) + + var i int + setHost := func(notAfter time.Time) *fleet.Host { + i++ + h, err := ds.NewHost(ctx, &fleet.Host{ + Hostname: fmt.Sprintf("test-host%d-name", i), + OsqueryHostID: ptr.String(fmt.Sprintf("osquery-%d", i)), + NodeKey: ptr.String(fmt.Sprintf("nodekey-%d", i)), + UUID: fmt.Sprintf("test-uuid-%d", i), + Platform: "darwin", + }) + require.NoError(t, err) + + // create a cert + association + serial, err := scepDepot.Serial() + require.NoError(t, err) + cert := &x509.Certificate{ + SerialNumber: serial, + Subject: pkix.Name{ + CommonName: "FleetDM Identity", + }, + NotAfter: notAfter, + // use the host UUID, just to make sure they're + // different from each other, we don't care about the + // DER contents here + Raw: []byte(h.UUID)} + err = scepDepot.Put(cert.Subject.CommonName, cert) + require.NoError(t, err) + req := mdm.Request{ + EnrollID: &mdm.EnrollID{ID: h.UUID}, + Context: ctx, + } + certHash := certauth.HashCert(cert) + err = nanoStorage.AssociateCertHash(&req, certHash, notAfter) + require.NoError(t, err) + nanoEnroll(t, ds, h, false) + return h + } + + // certs expired at least 1 year ago + h1 := setHost(time.Now().AddDate(-1, -1, 0)) + h2 := setHost(time.Now().AddDate(-1, 0, 0)) + // cert that expires in 1 month + h3 := setHost(time.Now().AddDate(0, 1, 0)) + // cert that expires in 1 year + h4 := setHost(time.Now().AddDate(1, 0, 0)) + + // list assocs that expire in the next 10 days + assocs, err := ds.GetHostCertAssociationsToExpire(ctx, 10, 100) + require.NoError(t, err) + require.Len(t, assocs, 2) + require.Equal(t, h1.UUID, assocs[0].HostUUID) +
require.Equal(t, h2.UUID, assocs[1].HostUUID) + + // list certs that expire in the next 1000 days with limit = 1 + assocs, err = ds.GetHostCertAssociationsToExpire(ctx, 1000, 1) + require.NoError(t, err) + require.Len(t, assocs, 1) + require.Equal(t, h1.UUID, assocs[0].HostUUID) + + // list certs that expire in the next 50 days + assocs, err = ds.GetHostCertAssociationsToExpire(ctx, 50, 100) + require.NoError(t, err) + require.Len(t, assocs, 3) + require.Equal(t, h1.UUID, assocs[0].HostUUID) + require.Equal(t, h2.UUID, assocs[1].HostUUID) + require.Equal(t, h3.UUID, assocs[2].HostUUID) + + // list certs that expire in the next 1000 days + assocs, err = ds.GetHostCertAssociationsToExpire(ctx, 1000, 100) + require.NoError(t, err) + require.Len(t, assocs, 4) + require.Equal(t, h1.UUID, assocs[0].HostUUID) + require.Equal(t, h2.UUID, assocs[1].HostUUID) + require.Equal(t, h3.UUID, assocs[2].HostUUID) + require.Equal(t, h4.UUID, assocs[3].HostUUID) + + checkSCEPRenew := func(assoc fleet.SCEPIdentityAssociation, want *string) { + var got *string + ExecAdhocSQL(t, ds, func(q sqlx.ExtContext) error { + return sqlx.GetContext(ctx, q, &got, `SELECT renew_command_uuid FROM nano_cert_auth_associations WHERE id = ?`, assoc.HostUUID) + }) + require.EqualValues(t, want, got) + } + + // insert dummy nano commands + ExecAdhocSQL(t, ds, func(q sqlx.ExtContext) error { + _, err = q.ExecContext(ctx, ` + INSERT INTO nano_commands (command_uuid, request_type, command) + VALUES ('foo', 'foo', ' 0: level.Info(kitlog.With(d.logger)).Log("msg", fmt.Sprintf("added %d new mdm device(s) to pending hosts", n)) case n == 0: @@ -736,6 +743,21 @@ func GenerateEnrollmentProfileMobileconfig(orgName, fleetURL, scepChallenge, top return buf.Bytes(), nil } +func AddEnrollmentRefToFleetURL(fleetURL, reference string) (string, error) { + if reference == "" { + return fleetURL, nil + } + + u, err := url.Parse(fleetURL) + if err != nil { + return "", fmt.Errorf("parsing configured server URL: %w", err) + } 
+ q := u.Query() + q.Add(mobileconfig.FleetEnrollReferenceKey, reference) + u.RawQuery = q.Encode() + return u.String(), nil +} + // ProfileBimap implements bidirectional mapping for profiles, and utility // functions to generate those mappings based on frequently used operations. type ProfileBimap struct { diff --git a/server/mdm/apple/apple_mdm_test.go b/server/mdm/apple/apple_mdm_test.go index 944d9617e..8a05171a9 100644 --- a/server/mdm/apple/apple_mdm_test.go +++ b/server/mdm/apple/apple_mdm_test.go @@ -10,6 +10,7 @@ import ( "time" "github.com/fleetdm/fleet/v4/server/fleet" + "github.com/fleetdm/fleet/v4/server/mdm/apple/mobileconfig" "github.com/fleetdm/fleet/v4/server/mock" nanodep_mock "github.com/fleetdm/fleet/v4/server/mock/nanodep" "github.com/go-kit/log" @@ -133,6 +134,54 @@ func TestDEPService(t *testing.T) { }) } +func TestAddEnrollmentRefToFleetURL(t *testing.T) { + const ( + baseFleetURL = "https://example.com" + reference = "enroll-ref" + ) + + tests := []struct { + name string + fleetURL string + reference string + expectedOutput string + expectError bool + }{ + { + name: "empty Reference", + fleetURL: baseFleetURL, + reference: "", + expectedOutput: baseFleetURL, + expectError: false, + }, + { + name: "valid URL and Reference", + fleetURL: baseFleetURL, + reference: reference, + expectedOutput: baseFleetURL + "?" 
+ mobileconfig.FleetEnrollReferenceKey + "=" + reference, + expectError: false, + }, + { + name: "invalid URL", + fleetURL: "://invalid-url", + reference: reference, + expectError: true, + }, + } + + for _, tc := range tests { + t.Run(tc.name, func(t *testing.T) { + output, err := AddEnrollmentRefToFleetURL(tc.fleetURL, tc.reference) + if tc.expectError { + require.Error(t, err) + } else { + require.NoError(t, err) + require.Equal(t, tc.expectedOutput, output) + } + }) + } +} + type notFoundError struct{} func (e notFoundError) IsNotFound() bool { return true } diff --git a/server/mdm/apple/cert.go b/server/mdm/apple/cert.go index 28ee1231a..0baa243ea 100644 --- a/server/mdm/apple/cert.go +++ b/server/mdm/apple/cert.go @@ -13,8 +13,8 @@ import ( "os" "strings" + "github.com/fleetdm/fleet/v4/server/mdm/scep/depot" "github.com/micromdm/nanodep/tokenpki" - "github.com/micromdm/scep/v2/depot" ) const ( @@ -145,7 +145,6 @@ func NewSCEPCACertKey() (*x509.Certificate, *rsa.PrivateKey, error) { // NEWDEPKeyPairPEM generates a new public key certificate and private key for downloading the Apple DEP token. // The public key is returned as a PEM encoded certificate. 
func NewDEPKeyPairPEM() ([]byte, []byte, error) { - // Note, Apple doesn't check the expiry key, cert, err := tokenpki.SelfSignedRSAKeypair(depCertificateCommonName, depCertificateExpiryDays) if err != nil { diff --git a/server/mdm/nanomdm/service/certauth/certauth.go b/server/mdm/nanomdm/service/certauth/certauth.go index 0e206e88b..e7d8e0247 100644 --- a/server/mdm/nanomdm/service/certauth/certauth.go +++ b/server/mdm/nanomdm/service/certauth/certauth.go @@ -97,7 +97,7 @@ func New(next service.CheckinAndCommandService, storage storage.CertAuthStore, o return certAuth } -func hashCert(cert *x509.Certificate) string { +func HashCert(cert *x509.Certificate) string { hashed := sha256.Sum256(cert.Raw) b := make([]byte, len(hashed)) copy(b, hashed[:]) @@ -112,7 +112,7 @@ func (s *CertAuth) associateNewEnrollment(r *mdm.Request) error { return err } logger := ctxlog.Logger(r.Context, s.logger) - hash := hashCert(r.Certificate) + hash := HashCert(r.Certificate) if hasHash, err := s.storage.HasCertHash(r, hash); err != nil { return err } else if hasHash { @@ -137,7 +137,7 @@ func (s *CertAuth) associateNewEnrollment(r *mdm.Request) error { } } } - if err := s.storage.AssociateCertHash(r, hash); err != nil { + if err := s.storage.AssociateCertHash(r, hash, r.Certificate.NotAfter); err != nil { return err } logger.Info( @@ -157,7 +157,7 @@ func (s *CertAuth) validateAssociateExistingEnrollment(r *mdm.Request) error { return err } logger := ctxlog.Logger(r.Context, s.logger) - hash := hashCert(r.Certificate) + hash := HashCert(r.Certificate) if isAssoc, err := s.storage.IsCertHashAssociated(r, hash); err != nil { return err } else if isAssoc { @@ -211,7 +211,7 @@ func (s *CertAuth) validateAssociateExistingEnrollment(r *mdm.Request) error { if s.warnOnly { return nil } - if err := s.storage.AssociateCertHash(r, hash); err != nil { + if err := s.storage.AssociateCertHash(r, hash, r.Certificate.NotAfter); err != nil { return err } logger.Info( diff --git 
a/server/mdm/nanomdm/storage/allmulti/certauth.go b/server/mdm/nanomdm/storage/allmulti/certauth.go index 0b4c76a60..242f771f7 100644 --- a/server/mdm/nanomdm/storage/allmulti/certauth.go +++ b/server/mdm/nanomdm/storage/allmulti/certauth.go @@ -1,6 +1,8 @@ package allmulti import ( + "time" + "github.com/fleetdm/fleet/v4/server/mdm/nanomdm/mdm" "github.com/fleetdm/fleet/v4/server/mdm/nanomdm/storage" ) @@ -26,9 +28,9 @@ func (ms *MultiAllStorage) IsCertHashAssociated(r *mdm.Request, hash string) (bo return val.(bool), err } -func (ms *MultiAllStorage) AssociateCertHash(r *mdm.Request, hash string) error { +func (ms *MultiAllStorage) AssociateCertHash(r *mdm.Request, hash string, certNotValidAfter time.Time) error { _, err := ms.execStores(r.Context, func(s storage.AllStorage) (interface{}, error) { - return nil, s.AssociateCertHash(r, hash) + return nil, s.AssociateCertHash(r, hash, certNotValidAfter) }) return err } diff --git a/server/mdm/nanomdm/storage/file/certauth.go b/server/mdm/nanomdm/storage/file/certauth.go index ad9f1d412..a065e7592 100644 --- a/server/mdm/nanomdm/storage/file/certauth.go +++ b/server/mdm/nanomdm/storage/file/certauth.go @@ -6,6 +6,7 @@ import ( "os" "path" "strings" + "time" "github.com/fleetdm/fleet/v4/server/mdm/nanomdm/mdm" ) @@ -52,7 +53,7 @@ func (s *FileStorage) IsCertHashAssociated(r *mdm.Request, hash string) (bool, e return strings.ToLower(string(b)) == strings.ToLower(hash), nil } -func (s *FileStorage) AssociateCertHash(r *mdm.Request, hash string) error { +func (s *FileStorage) AssociateCertHash(r *mdm.Request, hash string, _ time.Time) error { f, err := os.OpenFile( path.Join(s.path, CertAuthAssociationsFilename), os.O_APPEND|os.O_CREATE|os.O_WRONLY, diff --git a/server/mdm/nanomdm/storage/mysql/certauth.go b/server/mdm/nanomdm/storage/mysql/certauth.go index 77c3af72f..b89602265 100644 --- a/server/mdm/nanomdm/storage/mysql/certauth.go +++ b/server/mdm/nanomdm/storage/mysql/certauth.go @@ -3,6 +3,7 @@ package mysql 
import ( "context" "strings" + "time" "github.com/fleetdm/fleet/v4/server/mdm/nanomdm/mdm" ) @@ -38,14 +39,18 @@ func (s *MySQLStorage) IsCertHashAssociated(r *mdm.Request, hash string) (bool, ) } -func (s *MySQLStorage) AssociateCertHash(r *mdm.Request, hash string) error { +func (s *MySQLStorage) AssociateCertHash(r *mdm.Request, hash string, certNotValidAfter time.Time) error { _, err := s.db.ExecContext( r.Context, ` -INSERT INTO nano_cert_auth_associations (id, sha256) VALUES (?, ?) +INSERT INTO nano_cert_auth_associations (id, sha256, cert_not_valid_after) VALUES (?, ?, ?) ON DUPLICATE KEY -UPDATE sha256 = VALUES(sha256);`, +UPDATE + sha256 = VALUES(sha256), + cert_not_valid_after = VALUES(cert_not_valid_after), + renew_command_uuid = NULL;`, r.ID, strings.ToLower(hash), + certNotValidAfter, ) return err } diff --git a/server/mdm/nanomdm/storage/pgsql/certauth.go b/server/mdm/nanomdm/storage/pgsql/certauth.go index bafe0d60b..e474319b8 100644 --- a/server/mdm/nanomdm/storage/pgsql/certauth.go +++ b/server/mdm/nanomdm/storage/pgsql/certauth.go @@ -3,6 +3,7 @@ package pgsql import ( "context" "strings" + "time" "github.com/fleetdm/fleet/v4/server/mdm/nanomdm/mdm" ) @@ -39,7 +40,7 @@ func (s *PgSQLStorage) IsCertHashAssociated(r *mdm.Request, hash string) (bool, } // AssociateCertHash "DO NOTHING" on duplicated keys -func (s *PgSQLStorage) AssociateCertHash(r *mdm.Request, hash string) error { +func (s *PgSQLStorage) AssociateCertHash(r *mdm.Request, hash string, _ time.Time) error { _, err := s.db.ExecContext( r.Context, ` INSERT INTO cert_auth_associations (id, sha256) diff --git a/server/mdm/nanomdm/storage/storage.go b/server/mdm/nanomdm/storage/storage.go index 7c82c6876..19fa15357 100644 --- a/server/mdm/nanomdm/storage/storage.go +++ b/server/mdm/nanomdm/storage/storage.go @@ -5,6 +5,7 @@ package storage import ( "context" "crypto/tls" + "time" "github.com/fleetdm/fleet/v4/server/mdm/nanomdm/mdm" ) @@ -62,7 +63,7 @@ type CertAuthStore interface { 
HasCertHash(r *mdm.Request, hash string) (bool, error) EnrollmentHasCertHash(r *mdm.Request, hash string) (bool, error) IsCertHashAssociated(r *mdm.Request, hash string) (bool, error) - AssociateCertHash(r *mdm.Request, hash string) error + AssociateCertHash(r *mdm.Request, hash string, certNotValidAfter time.Time) error } // StoreMigrator retrieves MDM check-ins diff --git a/server/mdm/scep/.gitignore b/server/mdm/scep/.gitignore new file mode 100644 index 000000000..b0bbd25a4 --- /dev/null +++ b/server/mdm/scep/.gitignore @@ -0,0 +1,2 @@ +scepserver-* +scepclient-* diff --git a/server/mdm/scep/Dockerfile b/server/mdm/scep/Dockerfile new file mode 100644 index 000000000..c34882b76 --- /dev/null +++ b/server/mdm/scep/Dockerfile @@ -0,0 +1,10 @@ +FROM alpine:3 + +COPY ./scepclient-linux-amd64 /usr/bin/scepclient +COPY ./scepserver-linux-amd64 /usr/bin/scepserver + +EXPOSE 8080 + +VOLUME ["/depot"] + +ENTRYPOINT ["scepserver"] diff --git a/infrastructure/sandbox/Data/lambda/urllib3-1.26.12.dist-info/LICENSE.txt b/server/mdm/scep/LICENSE similarity index 93% rename from infrastructure/sandbox/Data/lambda/urllib3-1.26.12.dist-info/LICENSE.txt rename to server/mdm/scep/LICENSE index 429a1767e..9ad85793b 100644 --- a/infrastructure/sandbox/Data/lambda/urllib3-1.26.12.dist-info/LICENSE.txt +++ b/server/mdm/scep/LICENSE @@ -1,6 +1,6 @@ MIT License -Copyright (c) 2008-2020 Andrey Petrov and contributors (see CONTRIBUTORS.txt) +Copyright (c) 2016 Victor Vrantchan Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal diff --git a/server/mdm/scep/Makefile b/server/mdm/scep/Makefile new file mode 100644 index 000000000..e13e4ea8e --- /dev/null +++ b/server/mdm/scep/Makefile @@ -0,0 +1,51 @@ +VERSION=$(shell git describe --tags --always --dirty) +LDFLAGS=-ldflags "-X main.version=$(VERSION)" +OSARCH=$(shell go env GOHOSTOS)-$(shell go env GOHOSTARCH) + +SCEPCLIENT=\ + 
scepclient-linux-amd64 \ + scepclient-linux-arm \ + scepclient-darwin-amd64 \ + scepclient-darwin-arm64 \ + scepclient-freebsd-amd64 \ + scepclient-windows-amd64.exe + +SCEPSERVER=\ + scepserver-linux-amd64 \ + scepserver-linux-arm \ + scepserver-darwin-amd64 \ + scepserver-darwin-arm64 \ + scepserver-freebsd-amd64 \ + scepserver-windows-amd64.exe + +my: scepclient-$(OSARCH) scepserver-$(OSARCH) + +docker: scepclient-linux-amd64 scepserver-linux-amd64 + +$(SCEPCLIENT): + GOOS=$(word 2,$(subst -, ,$@)) GOARCH=$(word 3,$(subst -, ,$(subst .exe,,$@))) go build $(LDFLAGS) -o $@ ./cmd/scepclient + +$(SCEPSERVER): + GOOS=$(word 2,$(subst -, ,$@)) GOARCH=$(word 3,$(subst -, ,$(subst .exe,,$@))) go build $(LDFLAGS) -o $@ ./cmd/scepserver + +%-$(VERSION).zip: %.exe + rm -f $@ + zip $@ $< + +%-$(VERSION).zip: % + rm -f $@ + zip $@ $< + +release: $(foreach bin,$(SCEPCLIENT) $(SCEPSERVER),$(subst .exe,,$(bin))-$(VERSION).zip) + +clean: + rm -f scepclient-* scepserver-* + +test: + go test -cover ./... + +# don't run race tests by default. see https://github.com/etcd-io/bbolt/issues/187 +test-race: + go test -cover -race ./... + +.PHONY: my docker $(SCEPCLIENT) $(SCEPSERVER) release clean test test-race diff --git a/server/mdm/scep/README.md b/server/mdm/scep/README.md new file mode 100644 index 000000000..49a419c2f --- /dev/null +++ b/server/mdm/scep/README.md @@ -0,0 +1,236 @@ +# scep + +> The contents of this directory were copied (in February 2024) from https://github.com/fleetdm/scep (the `remove-path-setting-on-scep-handler` branch) which was forked from https://github.com/micromdm/scep. 
+ +[![CI](https://github.com/micromdm/scep/workflows/CI/badge.svg)](https://github.com/micromdm/scep/actions) +[![Go Reference](https://pkg.go.dev/badge/github.com/micromdm/scep/v2.svg)](https://pkg.go.dev/github.com/micromdm/scep/v2) + +`scep` is a Simple Certificate Enrollment Protocol server and client + +## Installation + +Binary releases are available on the [releases page](https://github.com/micromdm/scep/releases). + +### Compiling from source + +To compile the SCEP client and server you will need [a Go compiler](https://golang.org/dl/) as well as standard tools like git, make, etc. + +1. Clone the repository and get into the source directory: `git clone https://github.com/micromdm/scep.git && cd scep` +2. Compile the client and server binaries: `make` + +The binaries will be compiled in the current directory and named after the architecture. I.e. `scepclient-linux-amd64` and `scepserver-linux-amd64`. + +### Docker + +See Docker documentation below. + +## Example setup + +Minimal example for both server and client. + +``` +# SERVER: +# create a new CA +./scepserver-linux-amd64 ca -init +# start server +./scepserver-linux-amd64 -depot depot -port 2016 -challenge=secret + +# SCEP request: +# in a separate terminal window, run a client +# note, if the client.key doesn't exist, the client will create a new rsa private key. Must be in PEM format. +./scepclient-linux-amd64 -private-key client.key -server-url=http://127.0.0.1:2016/scep -challenge=secret + +# NDES request: +# note, this should point to an NDES server, scepserver does not provide NDES. +./scepclient-linux-amd64 -private-key client.key -server-url=https://scep.example.com:4321/certsrv/mscep/ -ca-fingerprint="e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855" +``` + +## Server Usage + +The default flags configure and run the scep server. + +`-depot` must be the path to a folder with `ca.pem` and `ca.key` files. 
If you don't already have a CA to use, you can create one using the `ca` subcommand. + +The scepserver provides one HTTP endpoint, `/scep`, that facilitates the normal PKIOperation/Message parameters. + +Server usage: +```sh +$ ./scepserver-linux-amd64 -help + -allowrenew string + do not allow renewal until n days before expiry, set to 0 to always allow (default "14") + -capass string + passwd for the ca.key + -challenge string + enforce a challenge password + -crtvalid string + validity for new client certificates in days (default "365") + -csrverifierexec string + will be passed the CSRs for verification + -debug + enable debug logging + -depot string + path to ca folder (default "depot") + -log-json + output JSON logs + -port string + port to listen on (default "8080") + -version + prints version information +usage: scep [] [] + ca create/manage a CA +type --help to see usage for each subcommand +``` + +Use the `ca -init` subcommand to create a new CA and private key. + +CA sub-command usage: +``` +$ ./scepserver-linux-amd64 ca -help +Usage of ca: + -country string + country for CA cert (default "US") + -depot string + path to ca folder (default "depot") + -init + create a new CA + -key-password string + password to store rsa key + -keySize int + rsa key size (default 4096) + -common_name string + common name (CN) for CA cert (default "MICROMDM SCEP CA") + -organization string + organization for CA cert (default "scep-ca") + -organizational_unit string + organizational unit (OU) for CA cert (default "SCEP CA") + -years int + default CA years (default 10) +``` + +### CSR verifier + +The `-csrverifierexec` switch to the SCEP server allows for executing a command before a certificate is issued to verify the submitted CSR. Scripts exiting without errors (zero exit status) will proceed to certificate issuance, otherwise a SCEP error is generated to the client. 
For example, if you wanted to simply save the submitted CSR, this shell script is a valid CSR verifier: + +```sh +#!/bin/sh + +cat - > /tmp/scep.csr +``` + +## Client Usage + +```sh +$ ./scepclient-linux-amd64 -help +Usage of ./scepclient-linux-amd64: + -ca-fingerprint string + SHA-256 digest of CA certificate for NDES server. Note: Changed from MD5. + -certificate string + certificate path, if there is no key, scepclient will create one + -challenge string + enforce a challenge password + -cn string + common name for certificate (default "scepclient") + -country string + country code in certificate (default "US") + -debug + enable debug logging + -keySize int + rsa key size (default 2048) + -locality string + locality for certificate + -log-json + use JSON for log output + -organization string + organization for cert (default "scep-client") + -ou string + organizational unit for certificate (default "MDM") + -private-key string + private key path, if there is no key, scepclient will create one + -province string + province for certificate + -server-url string + SCEP server url + -version + prints version information +``` + +Note: Make sure to specify the desired endpoint in your `-server-url` value (e.g. `'http://scep.groob.io:2016/scep'`). + +To obtain a certificate through Network Device Enrollment Service (NDES), set `-server-url` to a server that provides NDES. +This most likely uses the `/certsrv/mscep` path. You will need to add the `-ca-fingerprint` client argument during this request to specify which CA to use. + +If you're not sure which SHA-256 hash to use for a specific CA, run the client with the `-debug` flag to print the fingerprints of the CAs returned from the SCEP server. + +## Docker + +```sh +# first compile the Docker binaries +make docker + +# build the image +docker build -t micromdm/scep:latest .
+ +# create CA +docker run -it --rm -v /path/to/ca/folder:/depot micromdm/scep:latest ca -init + +# run +docker run -it --rm -v /path/to/ca/folder:/depot -p 8080:8080 micromdm/scep:latest +``` + +## SCEP library + +The core `scep` library can be used for both client and server operations. + +```sh +go get github.com/micromdm/scep/scep +``` + +For detailed usage, see the [Go Reference](https://pkg.go.dev/github.com/micromdm/scep/v2/scep). + +Example (server): + +```go +// read a request body containing SCEP message +body, err := ioutil.ReadAll(r.Body) +if err != nil { + // handle err +} + +// parse the SCEP message +msg, err := scep.ParsePKIMessage(body) +if err != nil { + // handle err +} + +// do something with msg +fmt.Println(msg.MessageType) + +// extract encrypted pkiEnvelope +err = msg.DecryptPKIEnvelope(CAcert, CAkey) +if err != nil { + // handle err +} + +// use the CSR from decrypted PKCS request and sign +// MyCSRSigner returns an *x509.Certificate here +crt, err := MyCSRSigner(msg.CSRReqMessage.CSR) +if err != nil { + // handle err +} + +// create a CertRep message from the original +certRep, err := msg.Success(CAcert, CAkey, crt) +if err != nil { + // handle err +} + +// send response back +// w is a http.ResponseWriter +w.Write(certRep.Raw) +``` + +## Server library + +You can import the scep endpoint into another Go project. For an example, take a look at [scepserver.go](cmd/scepserver/scepserver.go). + +The SCEP server includes a built-in CA/certificate store. This is facilitated by the `Depot` and `CSRSigner` Go interfaces. This allows certificate storage to happen however you want. It also allows for swapping out the CA signer altogether, or even using SCEP as a proxy for certificates.
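Both `-ca-fingerprint` and the fingerprints printed under `-debug` are the hex-encoded SHA-256 digest of a CA certificate's raw DER bytes. The following self-contained sketch computes a value in that format (the `certFingerprint` helper and the throwaway self-signed certificate are illustrative, not part of the scep codebase):

```go
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/sha256"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/hex"
	"fmt"
	"math/big"
	"time"
)

// certFingerprint returns the hex-encoded SHA-256 digest of a
// certificate's DER bytes, the format expected by -ca-fingerprint.
func certFingerprint(der []byte) string {
	sum := sha256.Sum256(der)
	return hex.EncodeToString(sum[:])
}

func main() {
	// generate a throwaway self-signed certificate to fingerprint
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{CommonName: "EXAMPLE SCEP CA"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(time.Hour),
	}
	der, err := x509.CreateCertificate(rand.Reader, &tmpl, &tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	// prints 64 hex characters suitable for -ca-fingerprint
	fmt.Println(certFingerprint(der))
}
```

Incidentally, the fingerprint used in the NDES example above (`e3b0c442…52b855`) is the SHA-256 of empty input, so treat it as a placeholder rather than a real CA fingerprint.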
diff --git a/server/mdm/scep/challenge/bolt/challenge.go b/server/mdm/scep/challenge/bolt/challenge.go new file mode 100644 index 000000000..77e2d091c --- /dev/null +++ b/server/mdm/scep/challenge/bolt/challenge.go @@ -0,0 +1,75 @@ +package challengestore + +import ( + "crypto/rand" + "encoding/base64" + "errors" + "fmt" + + "github.com/boltdb/bolt" +) + +type Depot struct { + *bolt.DB +} + +const challengeBucket = "scep_challenges" + +// NewBoltDepot creates a depot.Depot backed by BoltDB. +func NewBoltDepot(db *bolt.DB) (*Depot, error) { + err := db.Update(func(tx *bolt.Tx) error { + _, err := tx.CreateBucketIfNotExists([]byte(challengeBucket)) + if err != nil { + return fmt.Errorf("create bucket: %s", err) + } + return nil + }) + if err != nil { + return nil, err + } + return &Depot{db}, nil +} + +func (db *Depot) SCEPChallenge() (string, error) { + key := make([]byte, 24) + _, err := rand.Read(key) + if err != nil { + return "", err + } + + challenge := base64.StdEncoding.EncodeToString(key) + err = db.Update(func(tx *bolt.Tx) error { + bucket := tx.Bucket([]byte(challengeBucket)) + if bucket == nil { + return fmt.Errorf("bucket %q not found!", challengeBucket) + } + return bucket.Put([]byte(challenge), []byte(challenge)) + }) + if err != nil { + return "", err + } + return challenge, nil +} + +func (db *Depot) HasChallenge(pw string) (bool, error) { + tx, err := db.Begin(true) + if err != nil { + return false, errors.Join(err, errors.New("begin transaction")) + } + defer tx.Rollback() //nolint:errcheck // release the write lock on early returns; no-op after Commit + bkt := tx.Bucket([]byte(challengeBucket)) + if bkt == nil { + return false, fmt.Errorf("bucket %q not found!", challengeBucket) + } + + key := []byte(pw) + var matches bool + if chal := bkt.Get(key); chal != nil { + if err := bkt.Delete(key); err != nil { + return false, err + } + matches = true + } + + return matches, tx.Commit() +} diff --git a/server/mdm/scep/challenge/challenge.go b/server/mdm/scep/challenge/challenge.go new file mode 100644 index 000000000..dc26c9bd3 --- /dev/null +++
b/server/mdm/scep/challenge/challenge.go @@ -0,0 +1,31 @@ +// Package challenge defines an interface for a dynamic challenge password cache. +package challenge + +import ( + "crypto/x509" + "errors" + + "github.com/fleetdm/fleet/v4/server/mdm/scep/scep" + scepserver "github.com/fleetdm/fleet/v4/server/mdm/scep/server" +) + +// Store is a dynamic challenge password cache. +type Store interface { + SCEPChallenge() (string, error) + HasChallenge(pw string) (bool, error) +} + +// Middleware wraps next in a CSRSigner that verifies and invalidates the challenge +func Middleware(store Store, next scepserver.CSRSigner) scepserver.CSRSignerFunc { + return func(m *scep.CSRReqMessage) (*x509.Certificate, error) { + // TODO: compare challenge only for PKCSReq? + valid, err := store.HasChallenge(m.ChallengePassword) + if err != nil { + return nil, err + } + if !valid { + return nil, errors.New("invalid challenge") + } + return next.SignCSR(m) + } +} diff --git a/server/mdm/scep/challenge/challenge_bolt_test.go b/server/mdm/scep/challenge/challenge_bolt_test.go new file mode 100644 index 000000000..8a58719d6 --- /dev/null +++ b/server/mdm/scep/challenge/challenge_bolt_test.go @@ -0,0 +1,95 @@ +package challenge + +import ( + "io/ioutil" + "os" + "testing" + + challengestore "github.com/fleetdm/fleet/v4/server/mdm/scep/challenge/bolt" + "github.com/fleetdm/fleet/v4/server/mdm/scep/scep" + scepserver "github.com/fleetdm/fleet/v4/server/mdm/scep/server" + + "github.com/boltdb/bolt" +) + +func TestDynamicChallenge(t *testing.T) { + db, err := openTempBolt("scep-challenge") + if err != nil { + t.Fatal(err) + } + + depot, err := challengestore.NewBoltDepot(db) + if err != nil { + t.Fatal(err) + } + + // use the exported interface + store := Store(depot) + + // get first challenge + challengePassword, err := store.SCEPChallenge() + if err != nil { + t.Fatal(err) + } + + if challengePassword == "" { + t.Error("empty challenge returned") + } + + // test store API + valid, err := 
store.HasChallenge(challengePassword) + if err != nil { + t.Fatal(err) + } + if valid != true { + t.Error("challenge just acquired is not valid") + } + valid, err = store.HasChallenge(challengePassword) + if err != nil { + t.Fatal(err) + } + if valid != false { + t.Error("challenge should not be valid twice") + } + + // get another challenge + challengePassword, err = store.SCEPChallenge() + if err != nil { + t.Fatal(err) + } + + if challengePassword == "" { + t.Error("empty challenge returned") + } + + // test CSRSigner middleware + signer := Middleware(depot, scepserver.NopCSRSigner()) + + csrReq := &scep.CSRReqMessage{ + ChallengePassword: challengePassword, + } + + _, err = signer.SignCSR(csrReq) + if err != nil { + t.Error(err) + } + + _, err = signer.SignCSR(csrReq) + if err == nil { + t.Error("challenge should not be valid twice") + } +} + +func openTempBolt(prefix string) (*bolt.DB, error) { + f, err := ioutil.TempFile("", prefix+"-") + if err != nil { + return nil, err + } + f.Close() + err = os.Remove(f.Name()) + if err != nil { + return nil, err + } + + return bolt.Open(f.Name(), 0644, nil) +} diff --git a/server/mdm/scep/client/client.go b/server/mdm/scep/client/client.go new file mode 100644 index 000000000..1c0254d7b --- /dev/null +++ b/server/mdm/scep/client/client.go @@ -0,0 +1,29 @@ +package scepclient + +import ( + scepserver "github.com/fleetdm/fleet/v4/server/mdm/scep/server" + + "github.com/go-kit/kit/log" + "github.com/go-kit/kit/log/level" +) + +// Client is a SCEP Client +type Client interface { + scepserver.Service + Supports(cap string) bool +} + +// New creates a SCEP Client. 
+func New( + serverURL string, + logger log.Logger, +) (Client, error) { + endpoints, err := scepserver.MakeClientEndpoints(serverURL) + if err != nil { + return nil, err + } + logger = level.Info(logger) + endpoints.GetEndpoint = scepserver.EndpointLoggingMiddleware(logger)(endpoints.GetEndpoint) + endpoints.PostEndpoint = scepserver.EndpointLoggingMiddleware(logger)(endpoints.PostEndpoint) + return endpoints, nil +} diff --git a/server/mdm/scep/cmd/scepclient/cert.go b/server/mdm/scep/cmd/scepclient/cert.go new file mode 100644 index 000000000..f4ae94d85 --- /dev/null +++ b/server/mdm/scep/cmd/scepclient/cert.go @@ -0,0 +1,100 @@ +package main + +import ( + "crypto/rand" + "crypto/rsa" + "crypto/x509" + "crypto/x509/pkix" + "encoding/pem" + "errors" + "fmt" + "io/ioutil" + "math/big" + "os" + "time" +) + +const ( + certificatePEMBlockType = "CERTIFICATE" +) + +func pemCert(derBytes []byte) []byte { + pemBlock := &pem.Block{ + Type: certificatePEMBlockType, + Headers: nil, + Bytes: derBytes, + } + out := pem.EncodeToMemory(pemBlock) + return out +} + +func loadOrSign(path string, priv *rsa.PrivateKey, csr *x509.CertificateRequest) (*x509.Certificate, error) { + file, err := os.OpenFile(path, os.O_RDWR|os.O_CREATE|os.O_EXCL, 0666) + if err != nil { + if os.IsExist(err) { + return loadPEMCertFromFile(path) + } + return nil, err + } + defer file.Close() + self, err := selfSign(priv, csr) + if err != nil { + return nil, err + } + pemBlock := &pem.Block{ + Type: certificatePEMBlockType, + Headers: nil, + Bytes: self.Raw, + } + if err = pem.Encode(file, pemBlock); err != nil { + return nil, err + } + return self, nil +} + +func selfSign(priv *rsa.PrivateKey, csr *x509.CertificateRequest) (*x509.Certificate, error) { + serialNumberLimit := new(big.Int).Lsh(big.NewInt(1), 128) + serialNumber, err := rand.Int(rand.Reader, serialNumberLimit) + if err != nil { + return nil, fmt.Errorf("failed to generate serial number: %s", err) + } + + notBefore := time.Now() + notAfter := 
notBefore.Add(time.Hour * 1) + template := x509.Certificate{ + SerialNumber: serialNumber, + Subject: pkix.Name{ + CommonName: "SCEP SIGNER", + Organization: csr.Subject.Organization, + }, + NotBefore: notBefore, + NotAfter: notAfter, + + KeyUsage: x509.KeyUsageKeyEncipherment | x509.KeyUsageDigitalSignature, + ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth}, + BasicConstraintsValid: true, + } + + derBytes, err := x509.CreateCertificate(rand.Reader, &template, &template, &priv.PublicKey, priv) + if err != nil { + return nil, err + } + return x509.ParseCertificate(derBytes) +} + +func loadPEMCertFromFile(path string) (*x509.Certificate, error) { + data, err := ioutil.ReadFile(path) + if err != nil { + return nil, err + } + + pemBlock, _ := pem.Decode(data) + if pemBlock == nil { + return nil, errors.New("PEM decode failed") + } + if pemBlock.Type != certificatePEMBlockType { + return nil, errors.New("unmatched type or headers") + } + + return x509.ParseCertificate(pemBlock.Bytes) +} diff --git a/server/mdm/scep/cmd/scepclient/csr.go b/server/mdm/scep/cmd/scepclient/csr.go new file mode 100644 index 000000000..fde14934d --- /dev/null +++ b/server/mdm/scep/cmd/scepclient/csr.go @@ -0,0 +1,86 @@ +package main + +import ( + "crypto/rand" + "crypto/rsa" + "crypto/x509" + "crypto/x509/pkix" + "encoding/pem" + "errors" + "io/ioutil" + "os" + + "github.com/fleetdm/fleet/v4/server/mdm/scep/cryptoutil/x509util" +) + +const ( + csrPEMBlockType = "CERTIFICATE REQUEST" +) + +type csrOptions struct { + cn, org, country, ou, locality, province, challenge string + key *rsa.PrivateKey +} + +func loadOrMakeCSR(path string, opts *csrOptions) (*x509.CertificateRequest, error) { + file, err := os.OpenFile(path, os.O_RDWR|os.O_CREATE|os.O_EXCL, 0o666) + if err != nil { + if os.IsExist(err) { + return loadCSRfromFile(path) + } + return nil, err + } + defer file.Close() + + subject := pkix.Name{ + CommonName: opts.cn, + Organization: subjOrNil(opts.org), + OrganizationalUnit: 
subjOrNil(opts.ou), + Province: subjOrNil(opts.province), + Locality: subjOrNil(opts.locality), + Country: subjOrNil(opts.country), + } + template := x509util.CertificateRequest{ + CertificateRequest: x509.CertificateRequest{ + Subject: subject, + SignatureAlgorithm: x509.SHA256WithRSA, + }, + } + if opts.challenge != "" { + template.ChallengePassword = opts.challenge + } + + derBytes, err := x509util.CreateCertificateRequest(rand.Reader, &template, opts.key) + if err != nil { + return nil, err + } + pemBlock := &pem.Block{ + Type: csrPEMBlockType, + Bytes: derBytes, + } + if err := pem.Encode(file, pemBlock); err != nil { + return nil, err + } + return x509.ParseCertificateRequest(derBytes) +} + +// returns nil or []string{input} to populate pkix.Name.Subject +func subjOrNil(input string) []string { + if input == "" { + return nil + } + return []string{input} +} + +// load PEM encoded CSR from file +func loadCSRfromFile(path string) (*x509.CertificateRequest, error) { + data, err := ioutil.ReadFile(path) + if err != nil { + return nil, err + } + pemBlock, _ := pem.Decode(data) + if pemBlock == nil { + return nil, errors.New("cannot find the next PEM formatted block") + } + if pemBlock.Type != csrPEMBlockType || len(pemBlock.Headers) != 0 { + return nil, errors.New("unmatched type or headers") + } + return x509.ParseCertificateRequest(pemBlock.Bytes) +} diff --git a/server/mdm/scep/cmd/scepclient/key.go b/server/mdm/scep/cmd/scepclient/key.go new file mode 100644 index 000000000..b21c0b19e --- /dev/null +++ b/server/mdm/scep/cmd/scepclient/key.go @@ -0,0 +1,70 @@ +package main + +import ( + "crypto/rand" + "crypto/rsa" + "crypto/x509" + "encoding/pem" + "errors" + "io/ioutil" + "os" +) + +const ( + rsaPrivateKeyPEMBlockType = "RSA PRIVATE KEY" +) + +// create a new RSA private key +func newRSAKey(bits int) (*rsa.PrivateKey, error) { + private, err := rsa.GenerateKey(rand.Reader, bits) + if err != nil { + return nil, err + } + return private, nil +} + +// load key if it exists or create a new one +func
loadOrMakeKey(path string, rsaBits int) (*rsa.PrivateKey, error) { + file, err := os.OpenFile(path, os.O_RDWR|os.O_CREATE|os.O_EXCL, 0666) + if err != nil { + if os.IsExist(err) { + return loadKeyFromFile(path) + } + return nil, err + } + defer file.Close() + + // write key + priv, err := newRSAKey(rsaBits) + if err != nil { + return nil, err + } + privBytes := x509.MarshalPKCS1PrivateKey(priv) + pemBlock := &pem.Block{ + Type: rsaPrivateKeyPEMBlockType, + Headers: nil, + Bytes: privBytes, + } + if err = pem.Encode(file, pemBlock); err != nil { + return nil, err + } + return priv, nil +} + +// load a PEM private key from disk +func loadKeyFromFile(path string) (*rsa.PrivateKey, error) { + data, err := ioutil.ReadFile(path) + if err != nil { + return nil, err + } + + pemBlock, _ := pem.Decode(data) + if pemBlock == nil { + return nil, errors.New("PEM decode failed") + } + if pemBlock.Type != rsaPrivateKeyPEMBlockType { + return nil, errors.New("unmatched type or headers") + } + + return x509.ParsePKCS1PrivateKey(pemBlock.Bytes) +} diff --git a/server/mdm/scep/cmd/scepclient/scepclient.go b/server/mdm/scep/cmd/scepclient/scepclient.go new file mode 100644 index 000000000..c45d1decc --- /dev/null +++ b/server/mdm/scep/cmd/scepclient/scepclient.go @@ -0,0 +1,350 @@ +package main + +import ( + "context" + "crypto" + _ "crypto/sha256" + "crypto/x509" + "encoding/hex" + "errors" + "flag" + "fmt" + "io/ioutil" + stdlog "log" + "net/url" + "os" + "path/filepath" + "strings" + "time" + + scepclient "github.com/fleetdm/fleet/v4/server/mdm/scep/client" + "github.com/fleetdm/fleet/v4/server/mdm/scep/scep" + + "github.com/go-kit/kit/log" + "github.com/go-kit/kit/log/level" +) + +// version info +var ( + version = "unknown" +) + +const fingerprintHashType = crypto.SHA256 + +type runCfg struct { + dir string + csrPath string + keyPath string + keyBits int + selfSignPath string + certPath string + cn string + org string + ou string + locality string + province string + country 
string + challenge string + serverURL string + caCertsSelector scep.CertsSelector + debug bool + logfmt string + caCertMsg string +} + +func run(cfg runCfg) error { + ctx := context.Background() + var logger log.Logger + { + if strings.ToLower(cfg.logfmt) == "json" { + logger = log.NewJSONLogger(os.Stderr) + } else { + logger = log.NewLogfmtLogger(os.Stderr) + } + stdlog.SetOutput(log.NewStdlibAdapter(logger)) + logger = log.With(logger, "ts", log.DefaultTimestampUTC) + if !cfg.debug { + logger = level.NewFilter(logger, level.AllowInfo()) + } + } + lginfo := level.Info(logger) + + client, err := scepclient.New(cfg.serverURL, logger) + if err != nil { + return err + } + + key, err := loadOrMakeKey(cfg.keyPath, cfg.keyBits) + if err != nil { + return err + } + + opts := &csrOptions{ + cn: cfg.cn, + org: cfg.org, + country: strings.ToUpper(cfg.country), + ou: cfg.ou, + locality: cfg.locality, + province: cfg.province, + challenge: cfg.challenge, + key: key, + } + + csr, err := loadOrMakeCSR(cfg.csrPath, opts) + if err != nil { + fmt.Println(err) + os.Exit(1) + } + + var self *x509.Certificate + cert, err := loadPEMCertFromFile(cfg.certPath) + if err != nil { + if !os.IsNotExist(err) { + return err + } + s, err := loadOrSign(cfg.selfSignPath, key, csr) + if err != nil { + return err + } + self = s + } + + resp, certNum, err := client.GetCACert(ctx, cfg.caCertMsg) + if err != nil { + return err + } + var certs []*x509.Certificate + { + if certNum > 1 { + certs, err = scep.CACerts(resp) + if err != nil { + return err + } + } else { + certs, err = x509.ParseCertificates(resp) + if err != nil { + return err + } + } + } + + if cfg.debug { + logCerts(level.Debug(logger), certs) + } + + var signerCert *x509.Certificate + { + if cert != nil { + signerCert = cert + } else { + signerCert = self + } + } + + var msgType scep.MessageType + { + // TODO validate CA and set UpdateReq if needed + if cert != nil { + msgType = scep.RenewalReq + } else { + msgType = scep.PKCSReq + } + } + 
+ tmpl := &scep.PKIMessage{ + MessageType: msgType, + Recipients: certs, + SignerKey: key, + SignerCert: signerCert, + } + + if cfg.challenge != "" && msgType == scep.PKCSReq { + tmpl.CSRReqMessage = &scep.CSRReqMessage{ + ChallengePassword: cfg.challenge, + } + } + + msg, err := scep.NewCSRRequest(csr, tmpl, scep.WithLogger(logger), scep.WithCertsSelector(cfg.caCertsSelector)) + if err != nil { + return errors.Join(err, errors.New("creating csr pkiMessage")) + } + + var respMsg *scep.PKIMessage + + for { + // loop in case we get a PENDING response which requires + // a manual approval. + + respBytes, err := client.PKIOperation(ctx, msg.Raw) + if err != nil { + return errors.Join(err, fmt.Errorf("PKIOperation for %s", msgType)) + } + + respMsg, err = scep.ParsePKIMessage(respBytes, scep.WithLogger(logger), scep.WithCACerts(msg.Recipients)) + if err != nil { + return errors.Join(err, fmt.Errorf("parsing pkiMessage response %s", msgType)) + } + + switch respMsg.PKIStatus { + case scep.FAILURE: + return fmt.Errorf("%s request failed, failInfo: %s", msgType, respMsg.FailInfo) + case scep.PENDING: + lginfo.Log("pkiStatus", "PENDING", "msg", "sleeping for 30 seconds, then trying again.") + time.Sleep(30 * time.Second) + continue + } + lginfo.Log("pkiStatus", "SUCCESS", "msg", "server returned a certificate.") + break // on scep.SUCCESS + } + + if err := respMsg.DecryptPKIEnvelope(signerCert, key); err != nil { + return errors.Join(err, fmt.Errorf("decrypt pkiEnvelope, msgType: %s, status %s", msgType, respMsg.PKIStatus)) + } + + respCert := respMsg.CertRepMessage.Certificate + if err := ioutil.WriteFile(cfg.certPath, pemCert(respCert.Raw), 0o666); err != nil { // nolint:gosec + return err + } + + // remove self signer if used + if self != nil { + if err := os.Remove(cfg.selfSignPath); err != nil { + return err + } + } + + return nil +} + +// logCerts logs the count, number, RDN, and fingerprint of certs to logger +func logCerts(logger log.Logger, certs 
[]*x509.Certificate) { + logger.Log("msg", "cacertlist", "count", len(certs)) + for i, cert := range certs { + h := fingerprintHashType.New() + h.Write(cert.Raw) + logger.Log( + "msg", "cacertlist", + "number", i, + "rdn", cert.Subject.ToRDNSequence().String(), + "hash_type", fingerprintHashType.String(), + "hash", fmt.Sprintf("%x", h.Sum(nil)), + ) + } +} + +// validateFingerprint makes sure fingerprint looks like a hash. +// We remove spaces and colons from fingerprint as it may come in various forms: +// +// e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 +// E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855 +// e3b0c442 98fc1c14 9afbf4c8 996fb924 27ae41e4 649b934c a495991b 7852b855 +// e3:b0:c4:42:98:fc:1c:14:9a:fb:f4:c8:99:6f:b9:24:27:ae:41:e4:64:9b:93:4c:a4:95:99:1b:78:52:b8:55 +func validateFingerprint(fingerprint string) (hash []byte, err error) { + fingerprint = strings.NewReplacer(" ", "", ":", "").Replace(fingerprint) + hash, err = hex.DecodeString(fingerprint) + if err != nil { + return + } + if len(hash) != fingerprintHashType.Size() { + err = fmt.Errorf("invalid %s hash length", fingerprintHashType) + } + return +} + +func validateFlags(keyPath, serverURL string) error { + if keyPath == "" { + return errors.New("must specify private key path") + } + if serverURL == "" { + return errors.New("must specify server-url flag parameter") + } + _, err := url.Parse(serverURL) + if err != nil { + return fmt.Errorf("invalid server-url flag parameter %s", err) + } + return nil +} + +func main() { + var ( + flVersion = flag.Bool("version", false, "prints version information") + flServerURL = flag.String("server-url", "", "SCEP server url") + flChallengePassword = flag.String("challenge", "", "enforce a challenge password") + flPKeyPath = flag.String("private-key", "", "private key path, if there is no key, scepclient will create one") + flCertPath = flag.String("certificate", "", "certificate path, if there is no key, scepclient 
will create one") + flKeySize = flag.Int("keySize", 2048, "rsa key size") + flOrg = flag.String("organization", "scep-client", "organization for cert") + flCName = flag.String("cn", "scepclient", "common name for certificate") + flOU = flag.String("ou", "MDM", "organizational unit for certificate") + flLoc = flag.String("locality", "", "locality for certificate") + flProvince = flag.String("province", "", "province for certificate") + flCountry = flag.String("country", "US", "country code in certificate") + flCACertMessage = flag.String("cacert-message", "", "message sent with GetCACert operation") + + // in case of multiple certificate authorities, we need to figure out who the recipient of the encrypted + // data is. + flCAFingerprint = flag.String("ca-fingerprint", "", "SHA-256 digest of CA certificate for NDES server. Note: Changed from MD5.") + + flDebugLogging = flag.Bool("debug", false, "enable debug logging") + flLogJSON = flag.Bool("log-json", false, "use JSON for log output") + ) + flag.Parse() + + // print version information + if *flVersion { + fmt.Println(version) + os.Exit(0) + } + + if err := validateFlags(*flPKeyPath, *flServerURL); err != nil { + fmt.Println(err) + os.Exit(1) + } + + caCertsSelector := scep.NopCertsSelector() + if *flCAFingerprint != "" { + hash, err := validateFingerprint(*flCAFingerprint) + if err != nil { + fmt.Printf("invalid fingerprint: %s\n", err) + os.Exit(1) + } + caCertsSelector = scep.FingerprintCertsSelector(fingerprintHashType, hash) + } + + dir := filepath.Dir(*flPKeyPath) + csrPath := dir + "/csr.pem" + selfSignPath := dir + "/self.pem" + if *flCertPath == "" { + *flCertPath = dir + "/client.pem" + } + var logfmt string + if *flLogJSON { + logfmt = "json" + } + + cfg := runCfg{ + dir: dir, + csrPath: csrPath, + keyPath: *flPKeyPath, + keyBits: *flKeySize, + selfSignPath: selfSignPath, + certPath: *flCertPath, + cn: *flCName, + org: *flOrg, + country: *flCountry, + locality: *flLoc, + ou: *flOU, + province: 
*flProvince, + challenge: *flChallengePassword, + serverURL: *flServerURL, + caCertsSelector: caCertsSelector, + debug: *flDebugLogging, + logfmt: logfmt, + caCertMsg: *flCACertMessage, + } + + if err := run(cfg); err != nil { + fmt.Println(err) + os.Exit(1) + } +} diff --git a/server/mdm/scep/cmd/scepserver/scepserver.go b/server/mdm/scep/cmd/scepserver/scepserver.go new file mode 100644 index 000000000..85385bc10 --- /dev/null +++ b/server/mdm/scep/cmd/scepserver/scepserver.go @@ -0,0 +1,323 @@ +package main + +import ( + "crypto/rand" + "crypto/rsa" + "crypto/x509" + "encoding/pem" + "flag" + "fmt" + "net/http" + "os" + "os/signal" + "path/filepath" + "strconv" + "syscall" + + "github.com/fleetdm/fleet/v4/server/mdm/scep/csrverifier" + executablecsrverifier "github.com/fleetdm/fleet/v4/server/mdm/scep/csrverifier/executable" + scepdepot "github.com/fleetdm/fleet/v4/server/mdm/scep/depot" + "github.com/fleetdm/fleet/v4/server/mdm/scep/depot/file" + scepserver "github.com/fleetdm/fleet/v4/server/mdm/scep/server" + "github.com/gorilla/mux" + + "github.com/go-kit/kit/log" + "github.com/go-kit/kit/log/level" +) + +// version info +var ( + version = "unknown" +) + +func main() { + caCMD := flag.NewFlagSet("ca", flag.ExitOnError) + { + if len(os.Args) >= 2 { + if os.Args[1] == "ca" { + status := caMain(caCMD) + os.Exit(status) + } + } + } + + // main flags + var ( + flVersion = flag.Bool("version", false, "prints version information") + flHTTPAddr = flag.String("http-addr", envString("SCEP_HTTP_ADDR", ""), "http listen address. 
defaults to \":8080\"") + flPort = flag.String("port", envString("SCEP_HTTP_LISTEN_PORT", "8080"), "http port to listen on (if you want to specify an address, use -http-addr instead)") + flDepotPath = flag.String("depot", envString("SCEP_FILE_DEPOT", "depot"), "path to ca folder") + flCAPass = flag.String("capass", envString("SCEP_CA_PASS", ""), "passwd for the ca.key") + flClDuration = flag.String("crtvalid", envString("SCEP_CERT_VALID", "365"), "validity for new client certificates in days") + flClAllowRenewal = flag.String("allowrenew", envString("SCEP_CERT_RENEW", "14"), "do not allow renewal until n days before expiry, set to 0 to always allow") + flChallengePassword = flag.String("challenge", envString("SCEP_CHALLENGE_PASSWORD", ""), "enforce a challenge password") + flCSRVerifierExec = flag.String("csrverifierexec", envString("SCEP_CSR_VERIFIER_EXEC", ""), "will be passed the CSRs for verification") + flDebug = flag.Bool("debug", envBool("SCEP_LOG_DEBUG"), "enable debug logging") + flLogJSON = flag.Bool("log-json", envBool("SCEP_LOG_JSON"), "output JSON logs") + flSignServerAttrs = flag.Bool("sign-server-attrs", envBool("SCEP_SIGN_SERVER_ATTRS"), "sign cert attrs for server usage") + ) + flag.Usage = func() { + flag.PrintDefaults() + + fmt.Println("usage: scep [<command>] [<args>]") + fmt.Println(" ca create/manage a CA") + fmt.Println("type <command> --help to see usage for each subcommand") + } + flag.Parse() + + // print version information + if *flVersion { + fmt.Println(version) + os.Exit(0) + } + + // -http-addr and -port conflict. Don't allow the user to set both.
+ httpAddrSet := setByUser("http-addr", "SCEP_HTTP_ADDR") + portSet := setByUser("port", "SCEP_HTTP_LISTEN_PORT") + var httpAddr string + if httpAddrSet && portSet { + fmt.Fprintln(os.Stderr, "cannot set both -http-addr and -port") + os.Exit(1) + } else if httpAddrSet { + httpAddr = *flHTTPAddr + } else { + httpAddr = ":" + *flPort + } + + var logger log.Logger + { + + if *flLogJSON { + logger = log.NewJSONLogger(os.Stderr) + } else { + logger = log.NewLogfmtLogger(os.Stderr) + } + if !*flDebug { + logger = level.NewFilter(logger, level.AllowInfo()) + } + logger = log.With(logger, "ts", log.DefaultTimestampUTC) + logger = log.With(logger, "caller", log.DefaultCaller) + } + lginfo := level.Info(logger) + + var err error + var depot scepdepot.Depot // cert storage + { + depot, err = file.NewFileDepot(*flDepotPath) + if err != nil { + lginfo.Log("err", err) + os.Exit(1) + } + } + allowRenewal, err := strconv.Atoi(*flClAllowRenewal) + if err != nil { + lginfo.Log("err", err, "msg", "No valid number for allowed renewal time") + os.Exit(1) + } + clientValidity, err := strconv.Atoi(*flClDuration) + if err != nil { + lginfo.Log("err", err, "msg", "No valid number for client cert validity") + os.Exit(1) + } + var csrVerifier csrverifier.CSRVerifier + if *flCSRVerifierExec > "" { + executableCSRVerifier, err := executablecsrverifier.New(*flCSRVerifierExec, lginfo) + if err != nil { + lginfo.Log("err", err, "msg", "Could not instantiate CSR verifier") + os.Exit(1) + } + csrVerifier = executableCSRVerifier + } + + var svc scepserver.Service // scep service + { + crts, key, err := depot.CA([]byte(*flCAPass)) + if err != nil { + lginfo.Log("err", err) + os.Exit(1) + } + if len(crts) < 1 { + lginfo.Log("err", "missing CA certificate") + os.Exit(1) + } + signerOpts := []scepdepot.Option{ + scepdepot.WithAllowRenewalDays(allowRenewal), + scepdepot.WithValidityDays(clientValidity), + scepdepot.WithCAPass(*flCAPass), + } + if *flSignServerAttrs { + signerOpts = append(signerOpts, 
scepdepot.WithSeverAttrs()) + } + var signer scepserver.CSRSigner = scepdepot.NewSigner(depot, signerOpts...) + if *flChallengePassword != "" { + signer = scepserver.ChallengeMiddleware(*flChallengePassword, signer) + } + if csrVerifier != nil { + signer = csrverifier.Middleware(csrVerifier, signer) + } + svc, err = scepserver.NewService(crts[0], key, signer, scepserver.WithLogger(logger)) + if err != nil { + lginfo.Log("err", err) + os.Exit(1) + } + svc = scepserver.NewLoggingService(log.With(lginfo, "component", "scep_service"), svc) + } + + var h http.Handler // http handler + { + e := scepserver.MakeServerEndpoints(svc) + e.GetEndpoint = scepserver.EndpointLoggingMiddleware(lginfo)(e.GetEndpoint) + e.PostEndpoint = scepserver.EndpointLoggingMiddleware(lginfo)(e.PostEndpoint) + scepHandler := scepserver.MakeHTTPHandler(e, svc, log.With(lginfo, "component", "http")) + r := mux.NewRouter() + r.Handle("/scep", scepHandler) + h = r + } + + // start http server + errs := make(chan error, 2) + go func() { + lginfo.Log("transport", "http", "address", httpAddr, "msg", "listening") + errs <- http.ListenAndServe(httpAddr, h) //nolint:gosec + }() + go func() { + c := make(chan os.Signal, 1) + signal.Notify(c, syscall.SIGINT) + errs <- fmt.Errorf("%s", <-c) + }() + + lginfo.Log("terminated", <-errs) +} + +func caMain(cmd *flag.FlagSet) int { + var ( + flDepotPath = cmd.String("depot", "depot", "path to ca folder") + flInit = cmd.Bool("init", false, "create a new CA") + flYears = cmd.Int("years", 10, "default CA years") + flKeySize = cmd.Int("keySize", 4096, "rsa key size") + flCommonName = cmd.String("common_name", "MICROMDM SCEP CA", "common name (CN) for CA cert") + flOrg = cmd.String("organization", "scep-ca", "organization for CA cert") + flOrgUnit = cmd.String("organizational_unit", "SCEP CA", "organizational unit (OU) for CA cert") + flPassword = cmd.String("key-password", "", "password to store rsa key") + flCountry = cmd.String("country", "US", "country for CA 
cert") + ) + _ = cmd.Parse(os.Args[2:]) + if *flInit { + fmt.Println("Initializing new CA") + key, err := createKey(*flKeySize, []byte(*flPassword), *flDepotPath) + if err != nil { + fmt.Println(err) + return 1 + } + if err := createCertificateAuthority(key, *flYears, *flCommonName, *flOrg, *flOrgUnit, *flCountry, *flDepotPath); err != nil { + fmt.Println(err) + return 1 + } + } + + return 0 +} + +// create a key, save it to depot and return it for further usage. +func createKey(bits int, password []byte, depot string) (*rsa.PrivateKey, error) { + // create depot folder if missing + if err := os.MkdirAll(depot, 0o755); err != nil { + return nil, err + } + name := filepath.Join(depot, "ca.key") + file, err := os.OpenFile(name, os.O_WRONLY|os.O_CREATE|os.O_EXCL, 0o400) + if err != nil { + return nil, err + } + defer file.Close() + + // create RSA key and save as PEM file + key, err := rsa.GenerateKey(rand.Reader, bits) + if err != nil { + return nil, err + } + privPEMBlock, err := x509.EncryptPEMBlock( + rand.Reader, + rsaPrivateKeyPEMBlockType, + x509.MarshalPKCS1PrivateKey(key), + password, + x509.PEMCipher3DES, + ) + if err != nil { + return nil, err + } + if err := pem.Encode(file, privPEMBlock); err != nil { + os.Remove(name) + return nil, err + } + + return key, nil +} + +func createCertificateAuthority(key *rsa.PrivateKey, years int, commonName string, organization string, organizationalUnit string, country string, depot string) error { + cert := scepdepot.NewCACert( + scepdepot.WithYears(years), + scepdepot.WithCommonName(commonName), + scepdepot.WithOrganization(organization), + scepdepot.WithOrganizationalUnit(organizationalUnit), + scepdepot.WithCountry(country), + ) + crtBytes, err := cert.SelfSign(rand.Reader, &key.PublicKey, key) + if err != nil { + return err + } + + name := filepath.Join(depot, "ca.pem") + file, err := os.OpenFile(name, os.O_WRONLY|os.O_CREATE|os.O_EXCL, 0o400) + if err != nil { + return err + } + defer file.Close() + + if _, err := 
file.Write(pemCert(crtBytes)); err != nil { + file.Close() + os.Remove(name) + return err + } + + return nil +} + +const ( + rsaPrivateKeyPEMBlockType = "RSA PRIVATE KEY" + certificatePEMBlockType = "CERTIFICATE" +) + +func pemCert(derBytes []byte) []byte { + pemBlock := &pem.Block{ + Type: certificatePEMBlockType, + Headers: nil, + Bytes: derBytes, + } + out := pem.EncodeToMemory(pemBlock) + return out +} + +func envString(key, def string) string { + if env := os.Getenv(key); env != "" { + return env + } + return def +} + +func envBool(key string) bool { + if env := os.Getenv(key); env == "true" { + return true + } + return false +} + +func setByUser(flagName, envName string) bool { + userDefinedFlags := make(map[string]bool) + flag.Visit(func(f *flag.Flag) { + userDefinedFlags[f.Name] = true + }) + flagSet := userDefinedFlags[flagName] + _, envSet := os.LookupEnv(envName) + return flagSet || envSet +} diff --git a/server/mdm/scep/cryptoutil/cryptoutil.go b/server/mdm/scep/cryptoutil/cryptoutil.go new file mode 100644 index 000000000..6512c6154 --- /dev/null +++ b/server/mdm/scep/cryptoutil/cryptoutil.go @@ -0,0 +1,36 @@ +package cryptoutil + +import ( + "crypto" + "crypto/ecdsa" + "crypto/elliptic" + "crypto/rsa" + "crypto/sha256" + "encoding/asn1" + "errors" +) + +// GenerateSubjectKeyID generates Subject Key Identifier (SKI) using SHA-256 +// hash of the public key bytes according to RFC 7093 section 2. 
+func GenerateSubjectKeyID(pub crypto.PublicKey) ([]byte, error) { + var pubBytes []byte + var err error + switch pub := pub.(type) { + case *rsa.PublicKey: + pubBytes, err = asn1.Marshal(*pub) + if err != nil { + return nil, err + } + case *ecdsa.PublicKey: + pubBytes = elliptic.Marshal(pub.Curve, pub.X, pub.Y) + default: + return nil, errors.New("only ECDSA and RSA public keys are supported") + } + + hash := sha256.Sum256(pubBytes) + + // According to RFC 7093, The keyIdentifier is composed of the leftmost + // 160-bits of the SHA-256 hash of the value of the BIT STRING + // subjectPublicKey (excluding the tag, length, and number of unused bits). + return hash[:20], nil +} diff --git a/server/mdm/scep/cryptoutil/cryptoutil_test.go b/server/mdm/scep/cryptoutil/cryptoutil_test.go new file mode 100644 index 000000000..1adaaf41a --- /dev/null +++ b/server/mdm/scep/cryptoutil/cryptoutil_test.go @@ -0,0 +1,58 @@ +package cryptoutil + +import ( + "crypto" + "crypto/ecdsa" + "crypto/elliptic" + "crypto/rand" + "crypto/rsa" + "math/big" + "testing" +) + +func TestGenerateSubjectKeyID(t *testing.T) { + ecdsaKey, err := ecdsa.GenerateKey(elliptic.P224(), rand.Reader) + if err != nil { + t.Fatal(err) + } + for _, test := range []struct { + testName string + pub crypto.PublicKey + }{ + {"RSA", &rsa.PublicKey{N: big.NewInt(123), E: 65537}}, + {"ECDSA", &ecdsa.PublicKey{X: ecdsaKey.X, Y: ecdsaKey.Y, Curve: elliptic.P224()}}, + } { + test := test + t.Run(test.testName, func(t *testing.T) { + t.Parallel() + ski, err := GenerateSubjectKeyID(test.pub) + if err != nil { + t.Fatal(err) + } + if len(ski) != 20 { + t.Fatalf("unexpected subject public key identifier length: %d", len(ski)) + } + ski2, err := GenerateSubjectKeyID(test.pub) + if err != nil { + t.Fatal(err) + } + if !testSKIEq(ski, ski2) { + t.Fatal("subject key identifier generation is not deterministic") + } + }) + } +} + +func testSKIEq(a, b []byte) bool { + if len(a) != len(b) { + return false + } + + for i := range a { 
+ if a[i] != b[i] { + return false + } + } + + return true +} diff --git a/server/mdm/scep/cryptoutil/doc.go b/server/mdm/scep/cryptoutil/doc.go new file mode 100644 index 000000000..3c73ba02b --- /dev/null +++ b/server/mdm/scep/cryptoutil/doc.go @@ -0,0 +1,2 @@ +// Package cryptoutil provides utilities for working with crypto types. +package cryptoutil diff --git a/server/mdm/scep/cryptoutil/x509util/doc.go b/server/mdm/scep/cryptoutil/x509util/doc.go new file mode 100644 index 000000000..a5754bbbe --- /dev/null +++ b/server/mdm/scep/cryptoutil/x509util/doc.go @@ -0,0 +1,2 @@ +// Package x509util provides utilities for working with x509 types. +package x509util diff --git a/server/mdm/scep/cryptoutil/x509util/x509util.go b/server/mdm/scep/cryptoutil/x509util/x509util.go new file mode 100644 index 000000000..90ff22cce --- /dev/null +++ b/server/mdm/scep/cryptoutil/x509util/x509util.go @@ -0,0 +1,396 @@ +/* +Copyright (c) 2009 The Go Authors. All rights reserved. + +Redistribution and use in source and binary forms, with or without +modification, are permitted provided that the following conditions are +met: + + * Redistributions of source code must retain the above copyright +notice, this list of conditions and the following disclaimer. + * Redistributions in binary form must reproduce the above +copyright notice, this list of conditions and the following disclaimer +in the documentation and/or other materials provided with the +distribution. + * Neither the name of Google Inc. nor the names of its +contributors may be used to endorse or promote products derived from +this software without specific prior written permission. + +THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS +"AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT +LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR +A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT +OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, +SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT +LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, +DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY +THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT +(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE +OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. +*/ + +package x509util + +import ( + "crypto" + "crypto/ecdsa" + "crypto/elliptic" + "crypto/rsa" + _ "crypto/sha256" + _ "crypto/sha512" + "crypto/x509" + "crypto/x509/pkix" + "encoding/asn1" + "errors" + "io" +) + +type CertificateRequest struct { + x509.CertificateRequest + + ChallengePassword string +} + +// CreateCertificateRequest creates a new certificate request based on a template. +// The resulting CSR is similar to x509 but optionally supports the +// challengePassword attribute. +// +// See https://github.com/golang/go/issues/15995 +func CreateCertificateRequest(rand io.Reader, template *CertificateRequest, priv interface{}) (csr []byte, err error) { + if template.ChallengePassword == "" { + // if no challenge password, return a stdlib CSR. + return x509.CreateCertificateRequest(rand, &template.CertificateRequest, priv) + } + derBytes, err := x509.CreateCertificateRequest(rand, &template.CertificateRequest, priv) + if err != nil { + return nil, err + } + // add the challenge attribute to the CSR, then re-sign the raw csr. + // not checking the crypto.Signer assertion because x509.CreateCertificateRequest already did that. 
+ return addChallenge( + template.CertificateRequest.SignatureAlgorithm, + rand, + derBytes, + template.ChallengePassword, + priv.(crypto.Signer), + ) +} + +type passwordChallengeAttribute struct { + Type asn1.ObjectIdentifier + Value []string `asn1:"set"` +} + +// The structures below are copied from the Go standard library x509 package. + +type publicKeyInfo struct { + Raw asn1.RawContent + Algorithm pkix.AlgorithmIdentifier + PublicKey asn1.BitString +} + +type tbsCertificateRequest struct { + Raw asn1.RawContent + Version int + Subject asn1.RawValue + PublicKey publicKeyInfo + RawAttributes []asn1.RawValue `asn1:"tag:0"` +} + +type certificateRequest struct { + Raw asn1.RawContent + TBSCSR tbsCertificateRequest + SignatureAlgorithm pkix.AlgorithmIdentifier + SignatureValue asn1.BitString +} + +// ParseChallengePassword extracts the challengePassword attribute from a +// DER encoded Certificate Signing Request. +func ParseChallengePassword(asn1Data []byte) (string, error) { + type attribute struct { + ID asn1.ObjectIdentifier + Value asn1.RawValue `asn1:"set"` + } + var csr certificateRequest + rest, err := asn1.Unmarshal(asn1Data, &csr) + if err != nil { + return "", err + } else if len(rest) != 0 { + err = asn1.SyntaxError{Msg: "trailing data"} + return "", err + } + + var password string + for _, rawAttr := range csr.TBSCSR.RawAttributes { + var attr attribute + _, err := asn1.Unmarshal(rawAttr.FullBytes, &attr) + if err != nil { + return "", err + } + if attr.ID.Equal(oidChallengePassword) { + _, err := asn1.Unmarshal(attr.Value.Bytes, &password) + if err != nil { + return "", err + } + } + } + + return password, nil +} + +// addChallenge takes a raw CSR created by x509.CreateCertificateRequest, +// adds a passwordChallengeAttribute and re-signs the raw CSR bytes. 
+func addChallenge( + templateSigAlgo x509.SignatureAlgorithm, + reader io.Reader, + derBytes []byte, + challenge string, + key crypto.Signer, +) (csr []byte, err error) { + var hashFunc crypto.Hash + var sigAlgo pkix.AlgorithmIdentifier + hashFunc, sigAlgo, err = signingParamsForPublicKey(key.Public(), templateSigAlgo) + if err != nil { + return nil, err + } + + var req certificateRequest + rest, err := asn1.Unmarshal(derBytes, &req) + if err != nil { + return nil, err + } else if len(rest) != 0 { + err = asn1.SyntaxError{Msg: "trailing data"} + return nil, err + } + + passwordAttribute := passwordChallengeAttribute{ + Type: oidChallengePassword, + Value: []string{challenge}, + } + b, err := asn1.Marshal(passwordAttribute) + if err != nil { + return nil, err + } + + var rawAttribute asn1.RawValue + rest, err = asn1.Unmarshal(b, &rawAttribute) + if err != nil { + return nil, err + } else if len(rest) != 0 { + err = asn1.SyntaxError{Msg: "trailing data"} + return nil, err + } + + // append attribute + req.TBSCSR.RawAttributes = append(req.TBSCSR.RawAttributes, rawAttribute) + + // recreate request + tbsCSR := tbsCertificateRequest{ + Version: 0, + Subject: req.TBSCSR.Subject, + PublicKey: req.TBSCSR.PublicKey, + RawAttributes: req.TBSCSR.RawAttributes, + } + + tbsCSRContents, err := asn1.Marshal(tbsCSR) + if err != nil { + return nil, err + } + tbsCSR.Raw = tbsCSRContents + + h := hashFunc.New() + if _, err := h.Write(tbsCSRContents); err != nil { + return nil, err + } + + var signature []byte + signature, err = key.Sign(reader, h.Sum(nil), hashFunc) + if err != nil { + return nil, err + } + + return asn1.Marshal(certificateRequest{ + TBSCSR: tbsCSR, + SignatureAlgorithm: sigAlgo, + SignatureValue: asn1.BitString{ + Bytes: signature, + BitLength: len(signature) * 8, + }, + }) +} + +// signingParamsForPublicKey returns the parameters to use for signing with +// priv. If requestedSigAlgo is not zero then it overrides the default +// signature algorithm. 
+func signingParamsForPublicKey(pub interface{}, requestedSigAlgo x509.SignatureAlgorithm) (hashFunc crypto.Hash, sigAlgo pkix.AlgorithmIdentifier, err error) { + var pubType x509.PublicKeyAlgorithm + + switch pub := pub.(type) { + case *rsa.PublicKey: + pubType = x509.RSA + hashFunc = crypto.SHA256 + sigAlgo.Algorithm = oidSignatureSHA256WithRSA + sigAlgo.Parameters = asn1NullRawValue + + case *ecdsa.PublicKey: + pubType = x509.ECDSA + + switch pub.Curve { + case elliptic.P224(), elliptic.P256(): + hashFunc = crypto.SHA256 + sigAlgo.Algorithm = oidSignatureECDSAWithSHA256 + case elliptic.P384(): + hashFunc = crypto.SHA384 + sigAlgo.Algorithm = oidSignatureECDSAWithSHA384 + case elliptic.P521(): + hashFunc = crypto.SHA512 + sigAlgo.Algorithm = oidSignatureECDSAWithSHA512 + default: + err = errors.New("x509: unknown elliptic curve") + } + + default: + err = errors.New("x509: only RSA and ECDSA keys supported") + } + + if err != nil { + return + } + + if requestedSigAlgo == 0 { + return + } + + found := false + for _, details := range signatureAlgorithmDetails { + if details.algo == requestedSigAlgo { + if details.pubKeyAlgo != pubType { + err = errors.New("x509: requested SignatureAlgorithm does not match private key type") + return + } + sigAlgo.Algorithm, hashFunc = details.oid, details.hash + if hashFunc == 0 { + err = errors.New("x509: cannot sign with hash function requested") + return + } + // copy x509.SignatureAlgorithm.isRSAPSS method + isRSAPSS := func() bool { + switch requestedSigAlgo { + case x509.SHA256WithRSAPSS, x509.SHA384WithRSAPSS, x509.SHA512WithRSAPSS: + return true + default: + return false + } + } + if isRSAPSS() { + sigAlgo.Parameters = rsaPSSParameters(hashFunc) + } + found = true + break + } + } + + if !found { + err = errors.New("x509: unknown SignatureAlgorithm") + } + + return +} + +var signatureAlgorithmDetails = []struct { + algo x509.SignatureAlgorithm + oid asn1.ObjectIdentifier + pubKeyAlgo x509.PublicKeyAlgorithm + hash crypto.Hash 
+}{ + {x509.SHA256WithRSA, oidSignatureSHA256WithRSA, x509.RSA, crypto.SHA256}, + {x509.SHA384WithRSA, oidSignatureSHA384WithRSA, x509.RSA, crypto.SHA384}, + {x509.SHA512WithRSA, oidSignatureSHA512WithRSA, x509.RSA, crypto.SHA512}, + {x509.SHA256WithRSAPSS, oidSignatureRSAPSS, x509.RSA, crypto.SHA256}, + {x509.SHA384WithRSAPSS, oidSignatureRSAPSS, x509.RSA, crypto.SHA384}, + {x509.SHA512WithRSAPSS, oidSignatureRSAPSS, x509.RSA, crypto.SHA512}, + {x509.DSAWithSHA256, oidSignatureDSAWithSHA256, x509.DSA, crypto.SHA256}, + {x509.ECDSAWithSHA256, oidSignatureECDSAWithSHA256, x509.ECDSA, crypto.SHA256}, + {x509.ECDSAWithSHA384, oidSignatureECDSAWithSHA384, x509.ECDSA, crypto.SHA384}, + {x509.ECDSAWithSHA512, oidSignatureECDSAWithSHA512, x509.ECDSA, crypto.SHA512}, +} + +var ( + oidSignatureSHA256WithRSA = asn1.ObjectIdentifier{1, 2, 840, 113549, 1, 1, 11} + oidSignatureSHA384WithRSA = asn1.ObjectIdentifier{1, 2, 840, 113549, 1, 1, 12} + oidSignatureSHA512WithRSA = asn1.ObjectIdentifier{1, 2, 840, 113549, 1, 1, 13} + oidSignatureRSAPSS = asn1.ObjectIdentifier{1, 2, 840, 113549, 1, 1, 10} + oidSignatureDSAWithSHA256 = asn1.ObjectIdentifier{2, 16, 840, 1, 101, 3, 4, 3, 2} + oidSignatureECDSAWithSHA256 = asn1.ObjectIdentifier{1, 2, 840, 10045, 4, 3, 2} + oidSignatureECDSAWithSHA384 = asn1.ObjectIdentifier{1, 2, 840, 10045, 4, 3, 3} + oidSignatureECDSAWithSHA512 = asn1.ObjectIdentifier{1, 2, 840, 10045, 4, 3, 4} + + oidSHA256 = asn1.ObjectIdentifier{2, 16, 840, 1, 101, 3, 4, 2, 1} + oidSHA384 = asn1.ObjectIdentifier{2, 16, 840, 1, 101, 3, 4, 2, 2} + oidSHA512 = asn1.ObjectIdentifier{2, 16, 840, 1, 101, 3, 4, 2, 3} + + oidMGF1 = asn1.ObjectIdentifier{1, 2, 840, 113549, 1, 1, 8} + + oidChallengePassword = asn1.ObjectIdentifier{1, 2, 840, 113549, 1, 9, 7} +) + +// added to Go in 1.9 +var asn1NullRawValue = asn1.RawValue{ + Tag: 5, /* ASN.1 NULL */ +} + +// pssParameters reflects the parameters in an AlgorithmIdentifier that +// specifies RSA PSS. 
See https://tools.ietf.org/html/rfc3447#appendix-A.2.3 +type pssParameters struct { + // The following three fields are not marked as + // optional because the default values specify SHA-1, + // which is no longer suitable for use in signatures. + Hash pkix.AlgorithmIdentifier `asn1:"explicit,tag:0"` + MGF pkix.AlgorithmIdentifier `asn1:"explicit,tag:1"` + SaltLength int `asn1:"explicit,tag:2"` + TrailerField int `asn1:"optional,explicit,tag:3,default:1"` +} + +// rsaPSSParameters returns an asn1.RawValue suitable for use as the Parameters +// in an AlgorithmIdentifier that specifies RSA PSS. +func rsaPSSParameters(hashFunc crypto.Hash) asn1.RawValue { + var hashOID asn1.ObjectIdentifier + + switch hashFunc { + case crypto.SHA256: + hashOID = oidSHA256 + case crypto.SHA384: + hashOID = oidSHA384 + case crypto.SHA512: + hashOID = oidSHA512 + } + + params := pssParameters{ + Hash: pkix.AlgorithmIdentifier{ + Algorithm: hashOID, + Parameters: asn1NullRawValue, + }, + MGF: pkix.AlgorithmIdentifier{ + Algorithm: oidMGF1, + }, + SaltLength: hashFunc.Size(), + TrailerField: 1, + } + + mgf1Params := pkix.AlgorithmIdentifier{ + Algorithm: hashOID, + Parameters: asn1NullRawValue, + } + + var err error + params.MGF.Parameters.FullBytes, err = asn1.Marshal(mgf1Params) + if err != nil { + panic(err) + } + + serialized, err := asn1.Marshal(params) + if err != nil { + panic(err) + } + + return asn1.RawValue{FullBytes: serialized} +} diff --git a/server/mdm/scep/cryptoutil/x509util/x509util_test.go b/server/mdm/scep/cryptoutil/x509util/x509util_test.go new file mode 100644 index 000000000..c11bb7d2e --- /dev/null +++ b/server/mdm/scep/cryptoutil/x509util/x509util_test.go @@ -0,0 +1,50 @@ +package x509util + +import ( + "crypto/rand" + "crypto/rsa" + "crypto/x509" + "crypto/x509/pkix" + "testing" +) + +func TestCreateCertificateRequest(t *testing.T) { + r := rand.Reader + priv, err := rsa.GenerateKey(r, 1024) // nolint:gosec + if err != nil { + t.Fatal(err) + } + + template := 
CertificateRequest{ + CertificateRequest: x509.CertificateRequest{ + Subject: pkix.Name{ + CommonName: "test.acme.co", + Country: []string{"US"}, + }, + }, + ChallengePassword: "foobar", + } + + derBytes, err := CreateCertificateRequest(r, &template, priv) + if err != nil { + t.Fatal(err) + } + + out, err := x509.ParseCertificateRequest(derBytes) + if err != nil { + t.Fatalf("failed to parse certificate request: %s", err) + } + + if err := out.CheckSignature(); err != nil { + t.Errorf("failed to check certificate request signature: %s", err) + } + + challenge, err := ParseChallengePassword(derBytes) + if err != nil { + t.Fatalf("failed to parse challengePassword attribute: %s", err) + } + + if have, want := challenge, template.ChallengePassword; have != want { + t.Errorf("have %s, want %s", have, want) + } +} diff --git a/server/mdm/scep/csrverifier/csrverifier.go b/server/mdm/scep/csrverifier/csrverifier.go new file mode 100644 index 000000000..8a0dce89e --- /dev/null +++ b/server/mdm/scep/csrverifier/csrverifier.go @@ -0,0 +1,29 @@ +// Package csrverifier defines an interface for CSR verification. +package csrverifier + +import ( + "crypto/x509" + "errors" + + "github.com/fleetdm/fleet/v4/server/mdm/scep/scep" + scepserver "github.com/fleetdm/fleet/v4/server/mdm/scep/server" +) + +// CSRVerifier verifies the raw decrypted CSR. 
+type CSRVerifier interface { + Verify(data []byte) (bool, error) +} + +// Middleware wraps next in a CSRSigner that runs verifier +func Middleware(verifier CSRVerifier, next scepserver.CSRSigner) scepserver.CSRSignerFunc { + return func(m *scep.CSRReqMessage) (*x509.Certificate, error) { + ok, err := verifier.Verify(m.RawDecrypted) + if err != nil { + return nil, err + } + if !ok { + return nil, errors.New("CSR verify failed") + } + return next.SignCSR(m) + } +} diff --git a/server/mdm/scep/csrverifier/executable/csrverifier.go b/server/mdm/scep/csrverifier/executable/csrverifier.go new file mode 100644 index 000000000..8cda685d4 --- /dev/null +++ b/server/mdm/scep/csrverifier/executable/csrverifier.go @@ -0,0 +1,67 @@ +// Package executablecsrverifier defines the ExecutableCSRVerifier csrverifier.CSRVerifier. +package executablecsrverifier + +import ( + "errors" + "os" + "os/exec" + + "github.com/go-kit/kit/log" +) + +const ( + userExecute os.FileMode = 1 << (6 - 3*iota) + groupExecute + otherExecute +) + +// New creates a executablecsrverifier.ExecutableCSRVerifier. +func New(path string, logger log.Logger) (*ExecutableCSRVerifier, error) { + fileInfo, err := os.Stat(path) + if err != nil { + return nil, err + } + + fileMode := fileInfo.Mode() + if fileMode.IsDir() { + return nil, errors.New("CSR Verifier executable is a directory") + } + + filePerm := fileMode.Perm() + if filePerm&(userExecute|groupExecute|otherExecute) == 0 { + return nil, errors.New("CSR Verifier executable is not executable") + } + + return &ExecutableCSRVerifier{executable: path, logger: logger}, nil +} + +// ExecutableCSRVerifier implements a csrverifier.CSRVerifier. +// It executes a command, and passes it the raw decrypted CSR. +// If the command exit code is 0, the CSR is considered valid. +// In any other cases, the CSR is considered invalid. 
+type ExecutableCSRVerifier struct { + executable string + logger log.Logger +} + +func (v *ExecutableCSRVerifier) Verify(data []byte) (bool, error) { + cmd := exec.Command(v.executable) // nolint:gosec + + stdin, err := cmd.StdinPipe() + if err != nil { + return false, err + } + + go func() { + defer stdin.Close() + _, _ = stdin.Write(data) + }() + + err = cmd.Run() + if err != nil { + v.logger.Log("err", err) + // mask the executable error + return false, nil + } + return true, nil +} diff --git a/server/mdm/scep/depot/bolt/depot.go b/server/mdm/scep/depot/bolt/depot.go new file mode 100644 index 000000000..a078afadf --- /dev/null +++ b/server/mdm/scep/depot/bolt/depot.go @@ -0,0 +1,289 @@ +package bolt + +import ( + "bytes" + "crypto/rand" + "crypto/rsa" + "crypto/x509" + "errors" + "fmt" + "math/big" + + "github.com/fleetdm/fleet/v4/server/mdm/scep/depot" + + "github.com/boltdb/bolt" +) + +// Depot implements a SCEP certificate store using boltdb. +// https://github.com/boltdb/bolt +type Depot struct { + *bolt.DB +} + +const ( + certBucket = "scep_certificates" +) + +// NewBoltDepot creates a depot.Depot backed by BoltDB. +func NewBoltDepot(db *bolt.DB) (*Depot, error) { + err := db.Update(func(tx *bolt.Tx) error { + _, err := tx.CreateBucketIfNotExists([]byte(certBucket)) + if err != nil { + return fmt.Errorf("create bucket: %s", err) + } + return nil + }) + if err != nil { + return nil, err + } + return &Depot{db}, nil +} + +// For some read operations Bolt returns a direct memory reference to +// the underlying mmap. This means that persistent references to these +// memory locations are volatile. Make sure to copy data for places we +// know references to this memory will be kept. 
+func bucketGetCopy(b *bolt.Bucket, key []byte) (out []byte) { + in := b.Get(key) + if in == nil { + return + } + out = make([]byte, len(in)) + copy(out, in) + return +} + +func (db *Depot) CA(pass []byte) ([]*x509.Certificate, *rsa.PrivateKey, error) { + chain := []*x509.Certificate{} + var key *rsa.PrivateKey + err := db.View(func(tx *bolt.Tx) error { + bucket := tx.Bucket([]byte(certBucket)) + if bucket == nil { + return fmt.Errorf("bucket %q not found!", certBucket) + } + // get ca_certificate + caCert := bucketGetCopy(bucket, []byte("ca_certificate")) + if caCert == nil { + return errors.New("no ca_certificate in bucket") + } + cert, err := x509.ParseCertificate(caCert) + if err != nil { + return err + } + chain = append(chain, cert) + + // get ca_key + caKey := bucket.Get([]byte("ca_key")) + if caKey == nil { + return errors.New("no ca_key in bucket") + } + key, err = x509.ParsePKCS1PrivateKey(caKey) + if err != nil { + return err + } + return nil + }) + if err != nil { + return nil, nil, err + } + return chain, key, nil +} + +func (db *Depot) Put(cn string, crt *x509.Certificate) error { + if crt == nil || crt.Raw == nil { + return fmt.Errorf("%q does not specify a valid certificate for storage", cn) + } + serial, err := db.Serial() + if err != nil { + return err + } + + err = db.Update(func(tx *bolt.Tx) error { + bucket := tx.Bucket([]byte(certBucket)) + if bucket == nil { + return fmt.Errorf("bucket %q not found!", certBucket) + } + name := cn + "." 
+ serial.String() + return bucket.Put([]byte(name), crt.Raw) + }) + if err != nil { + return err + } + return db.incrementSerial(serial) +} + +func (db *Depot) Serial() (*big.Int, error) { + s := big.NewInt(2) + if !db.hasKey([]byte("serial")) { + if err := db.writeSerial(s); err != nil { + return nil, err + } + return s, nil + } + err := db.View(func(tx *bolt.Tx) error { + bucket := tx.Bucket([]byte(certBucket)) + if bucket == nil { + return fmt.Errorf("bucket %q not found!", certBucket) + } + k := bucket.Get([]byte("serial")) + if k == nil { + return fmt.Errorf("key %q not found", "serial") + } + s = s.SetBytes(k) + return nil + }) + if err != nil { + return nil, err + } + return s, nil +} + +func (db *Depot) writeSerial(s *big.Int) error { + err := db.Update(func(tx *bolt.Tx) error { + bucket := tx.Bucket([]byte(certBucket)) + if bucket == nil { + return fmt.Errorf("bucket %q not found!", certBucket) + } + return bucket.Put([]byte("serial"), s.Bytes()) + }) + return err +} + +func (db *Depot) hasKey(name []byte) bool { + var present bool + _ = db.View(func(tx *bolt.Tx) error { + bucket := tx.Bucket([]byte(certBucket)) + if bucket == nil { + return fmt.Errorf("bucket %q not found!", certBucket) + } + k := bucket.Get(name) + if k != nil { + present = true + } + return nil + }) + return present +} + +func (db *Depot) incrementSerial(s *big.Int) error { + serial := s.Add(s, big.NewInt(1)) + err := db.Update(func(tx *bolt.Tx) error { + bucket := tx.Bucket([]byte(certBucket)) + if bucket == nil { + return fmt.Errorf("bucket %q not found!", certBucket) + } + return bucket.Put([]byte("serial"), serial.Bytes()) + }) + return err +} + +func (db *Depot) HasCN(cn string, allowTime int, cert *x509.Certificate, revokeOldCertificate bool) (bool, error) { + // TODO: implement allowTime + // TODO: implement revocation + if cert == nil { + return false, errors.New("nil certificate provided") + } + var hasCN bool + err := db.View(func(tx *bolt.Tx) error { + // TODO: 
"scep_certificates" is internal const in micromdm/scep + curs := tx.Bucket([]byte("scep_certificates")).Cursor() + prefix := []byte(cert.Subject.CommonName) + for k, v := curs.Seek(prefix); k != nil && bytes.HasPrefix(k, prefix); k, v = curs.Next() { + if bytes.Equal(v, cert.Raw) { + hasCN = true + return nil + } + } + + return nil + }) + return hasCN, err +} + +func (db *Depot) CreateOrLoadKey(bits int) (*rsa.PrivateKey, error) { + var ( + key *rsa.PrivateKey + err error + ) + err = db.View(func(tx *bolt.Tx) error { + bucket := tx.Bucket([]byte(certBucket)) + if bucket == nil { + return fmt.Errorf("bucket %q not found!", certBucket) + } + priv := bucket.Get([]byte("ca_key")) + if priv == nil { + return nil + } + key, err = x509.ParsePKCS1PrivateKey(priv) + return err + }) + if err != nil { + return nil, err + } + if key != nil { + return key, nil + } + key, err = rsa.GenerateKey(rand.Reader, bits) + if err != nil { + return nil, err + } + err = db.Update(func(tx *bolt.Tx) error { + bucket := tx.Bucket([]byte(certBucket)) + if bucket == nil { + return fmt.Errorf("bucket %q not found!", certBucket) + } + return bucket.Put([]byte("ca_key"), x509.MarshalPKCS1PrivateKey(key)) + }) + if err != nil { + return nil, err + } + return key, nil +} + +func (db *Depot) CreateOrLoadCA(key *rsa.PrivateKey, years int, org, country string) (*x509.Certificate, error) { + var ( + cert *x509.Certificate + err error + ) + err = db.View(func(tx *bolt.Tx) error { + bucket := tx.Bucket([]byte(certBucket)) + if bucket == nil { + return fmt.Errorf("bucket %q not found!", certBucket) + } + caCert := bucketGetCopy(bucket, []byte("ca_certificate")) + if caCert == nil { + return nil + } + cert, err = x509.ParseCertificate(caCert) + return err + }) + if err != nil { + return nil, err + } + if cert != nil { + return cert, nil + } + + newCert := depot.NewCACert( + depot.WithYears(years), + depot.WithOrganization(org), + depot.WithOrganizationalUnit("MICROMDM SCEP CA"), + 
depot.WithCountry(country), + ) + crtBytes, err := newCert.SelfSign(rand.Reader, &key.PublicKey, key) + if err != nil { + return nil, err + } + + err = db.Update(func(tx *bolt.Tx) error { + bucket := tx.Bucket([]byte(certBucket)) + if bucket == nil { + return fmt.Errorf("bucket %q not found!", certBucket) + } + return bucket.Put([]byte("ca_certificate"), crtBytes) + }) + if err != nil { + return nil, err + } + return x509.ParseCertificate(crtBytes) +} diff --git a/server/mdm/scep/depot/bolt/depot_test.go b/server/mdm/scep/depot/bolt/depot_test.go new file mode 100644 index 000000000..c19f6290a --- /dev/null +++ b/server/mdm/scep/depot/bolt/depot_test.go @@ -0,0 +1,144 @@ +package bolt + +import ( + "io/ioutil" + "math/big" + "os" + "reflect" + "testing" + + "github.com/boltdb/bolt" +) + +// createDB creates a Bolt database in a temporary location. +func createDB(mode os.FileMode, options *bolt.Options) *Depot { + // Create temporary path. + f, _ := ioutil.TempFile("", "bolt-") + f.Close() + os.Remove(f.Name()) + + db, err := bolt.Open(f.Name(), mode, options) + if err != nil { + panic(err.Error()) + } + d, err := NewBoltDepot(db) + if err != nil { + panic(err.Error()) + } + return d +} + +func TestDepot_Serial(t *testing.T) { + db := createDB(0o666, nil) + tests := []struct { + name string + want *big.Int + wantErr bool + }{ + { + name: "two is the default value.", + want: big.NewInt(2), + }, + } + for _, tt := range tests { + got, err := db.Serial() + if (err != nil) != tt.wantErr { + t.Errorf("%q. Depot.Serial() error = %v, wantErr %v", tt.name, err, tt.wantErr) + continue + } + if !reflect.DeepEqual(got, tt.want) { + t.Errorf("%q. 
Depot.Serial() = %v, want %v", tt.name, got, tt.want) + } + } +} + +func TestDepot_writeSerial(t *testing.T) { + db := createDB(0o666, nil) + + tests := []struct { + name string + args *big.Int + wantErr bool + }{ + { + args: big.NewInt(5), + }, + { + args: big.NewInt(3), + }, + } + for _, tt := range tests { + if err := db.writeSerial(tt.args); (err != nil) != tt.wantErr { + t.Errorf("%q. Depot.writeSerial() error = %v, wantErr %v", tt.name, err, tt.wantErr) + } + } +} + +func TestDepot_incrementSerial(t *testing.T) { + db := createDB(0o666, nil) + + tests := []struct { + name string + args *big.Int + want *big.Int + wantErr bool + }{ + { + args: big.NewInt(2), + want: big.NewInt(3), + }, + { + args: big.NewInt(3), + want: big.NewInt(4), + }, + } + for _, tt := range tests { + if err := db.incrementSerial(tt.args); (err != nil) != tt.wantErr { + t.Errorf("%q. Depot.incrementSerial() error = %v, wantErr %v", tt.name, err, tt.wantErr) + } + got, _ := db.Serial() + if !reflect.DeepEqual(got, tt.want) { + t.Errorf("%q. Depot.Serial() = %v, want %v", tt.name, got, tt.want) + } + } +} + +func TestDepot_CreateOrLoadKey(t *testing.T) { + db := createDB(0o666, nil) + tests := []struct { + bits int + wantErr bool + }{ + { + bits: 1024, + }, + { + bits: 2048, + }, + } + for i, tt := range tests { + if _, err := db.CreateOrLoadKey(tt.bits); (err != nil) != tt.wantErr { + t.Errorf("%d. Depot.CreateOrLoadKey() error = %v, wantErr %v", i, err, tt.wantErr) + } + } +} + +func TestDepot_CreateOrLoadCA(t *testing.T) { + db := createDB(0o666, nil) + tests := []struct { + wantErr bool + }{ + {}, + {}, + } + for i, tt := range tests { + key, err := db.CreateOrLoadKey(1024) + if err != nil { + t.Fatalf("%d. Depot.CreateOrLoadKey() error = %v", i, err) + } + + if _, err := db.CreateOrLoadCA(key, 10, "MicroMDM", "US"); (err != nil) != tt.wantErr { + t.Errorf("%d. 
Depot.CreateOrLoadCA() error = %v, wantErr %v", i, err, tt.wantErr) + } + } +} diff --git a/server/mdm/scep/depot/cacert.go b/server/mdm/scep/depot/cacert.go new file mode 100644 index 000000000..65d44a0c8 --- /dev/null +++ b/server/mdm/scep/depot/cacert.go @@ -0,0 +1,123 @@ +package depot + +import ( + "crypto" + "crypto/x509" + "crypto/x509/pkix" + "io" + "math/big" + "time" + + "github.com/fleetdm/fleet/v4/server/mdm/scep/cryptoutil" +) + +// CACert represents a new self-signed CA certificate +type CACert struct { + commonName string + country string + organization string + organizationalUnit string + years int + keyUsage x509.KeyUsage +} + +// NewCACert creates a new CACert object with options +func NewCACert(opts ...CACertOption) *CACert { + c := &CACert{ + organization: "scep-ca", + organizationalUnit: "SCEP CA", + years: 10, + keyUsage: x509.KeyUsageCertSign | x509.KeyUsageCRLSign, + } + for _, opt := range opts { + opt(c) + } + return c +} + +type CACertOption func(*CACert) + +// WithOrganization specifies the Organization on the CA template. +func WithOrganization(o string) CACertOption { + return func(c *CACert) { + c.organization = o + } +} + +// WithOrganizationalUnit specifies the OrganizationalUnit on the CA template. +func WithOrganizationalUnit(ou string) CACertOption { + return func(c *CACert) { + c.organizationalUnit = ou + } +} + +// WithYears specifies the validity date of the CA. +func WithYears(y int) CACertOption { + return func(c *CACert) { + c.years = y + } +} + +// WithCountry specifies the Country on the CA template. +func WithCountry(country string) CACertOption { + return func(c *CACert) { + c.country = country + } +} + +// WithCommonName specifies the CommonName on the CA template. +func WithCommonName(name string) CACertOption { + return func(c *CACert) { + c.commonName = name + } +} + +// WithKeyUsage specifies the X.509 Key Usage on the CA template. 
+func WithKeyUsage(usage x509.KeyUsage) CACertOption { + return func(c *CACert) { + c.keyUsage = usage + } +} + +// newPkixName creates a new pkix.Name from c +func (c *CACert) newPkixName() *pkix.Name { + return &pkix.Name{ + Country: []string{c.country}, + Organization: []string{c.organization}, + OrganizationalUnit: []string{c.organizationalUnit}, + CommonName: c.commonName, + } +} + +// SelfSign creates an x509 template based off our settings and self-signs it using priv. +func (c *CACert) SelfSign(rand io.Reader, pub crypto.PublicKey, priv interface{}) ([]byte, error) { + subjKeyId, err := cryptoutil.GenerateSubjectKeyID(pub) + if err != nil { + return nil, err + } + // Build CA based on RFC5280 + tmpl := x509.Certificate{ + Subject: *c.newPkixName(), + SerialNumber: big.NewInt(1), + + // NotBefore is set to be 10min earlier to fix gap on time difference in cluster + NotBefore: time.Now().Add(-10 * time.Minute).UTC(), + NotAfter: time.Now().AddDate(c.years, 0, 0).UTC(), + + // Used for certificate signing only + KeyUsage: c.keyUsage, + + // activate CA + BasicConstraintsValid: true, + IsCA: true, + + // Do not allow any non-self-issued intermediate CA + MaxPathLen: 0, + + // 160-bit SHA-1 hash of the value of the BIT STRING subjectPublicKey + // (excluding the tag, length, and number of unused bits) + SubjectKeyId: subjKeyId, + } + + return x509.CreateCertificate(rand, &tmpl, &tmpl, pub, priv) +} diff --git a/server/mdm/scep/depot/depot.go b/server/mdm/scep/depot/depot.go new file mode 100644 index 000000000..e9e5d2152 --- /dev/null +++ b/server/mdm/scep/depot/depot.go @@ -0,0 +1,15 @@ +package depot + +import ( + "crypto/rsa" + "crypto/x509" + "math/big" +) + +// Depot is a repository for managing certificates +type Depot interface { + CA(pass []byte) ([]*x509.Certificate, *rsa.PrivateKey, error) + Put(name string, crt *x509.Certificate) error + Serial() (*big.Int, error) + HasCN(cn string, allowTime int, cert *x509.Certificate, revokeOldCertificate bool) (bool, error) +}
diff --git a/server/mdm/scep/depot/file/depot.go b/server/mdm/scep/depot/file/depot.go new file mode 100644 index 000000000..0c5d1e65c --- /dev/null +++ b/server/mdm/scep/depot/file/depot.go @@ -0,0 +1,408 @@ +package file + +import ( + "bufio" + "bytes" + "crypto/rsa" + "crypto/sha256" + "crypto/x509" + "encoding/pem" + "errors" + "fmt" + "io" + "io/ioutil" + "math/big" + "os" + "path/filepath" + "strconv" + "strings" + "time" +) + +// NewFileDepot returns a new cert depot. +func NewFileDepot(path string) (*fileDepot, error) { + f, err := os.OpenFile(fmt.Sprintf("%s/index.txt", path), + os.O_RDONLY|os.O_CREATE, 0o666) + if err != nil { + return nil, err + } + defer f.Close() + return &fileDepot{dirPath: path}, nil +} + +type fileDepot struct { + dirPath string +} + +func (d *fileDepot) CA(pass []byte) ([]*x509.Certificate, *rsa.PrivateKey, error) { + caPEM, err := d.getFile("ca.pem") + if err != nil { + return nil, nil, err + } + cert, err := loadCert(caPEM.Data) + if err != nil { + return nil, nil, err + } + keyPEM, err := d.getFile("ca.key") + if err != nil { + return nil, nil, err + } + key, err := loadKey(keyPEM.Data, pass) + if err != nil { + return nil, nil, err + } + return []*x509.Certificate{cert}, key, nil +} + +// file permissions +const ( + certPerm = 0o444 + serialPerm = 0o400 + dbPerm = 0o600 +) + +// Put adds a certificate to the depot +func (d *fileDepot) Put(cn string, crt *x509.Certificate) error { + if crt == nil { + return errors.New("crt is nil") + } + if crt.Raw == nil { + return errors.New("data is nil") + } + data := crt.Raw + + if err := os.MkdirAll(d.dirPath, 0o755); err != nil { + return err + } + + serial, err := d.Serial() + if err != nil { + return err + } + + if crt.Subject.CommonName == "" { + // this means our cn was replaced by the certificate Signature + // which is inappropriate for a filename + cn = fmt.Sprintf("%x", sha256.Sum256(crt.Raw)) + } + filename := fmt.Sprintf("%s.%s.pem", cn, serial.String()) + + filepath := 
d.path(filename) + file, err := os.OpenFile(filepath, os.O_WRONLY|os.O_CREATE|os.O_EXCL, certPerm) + if err != nil { + return err + } + defer file.Close() + + if _, err := file.Write(pemCert(data)); err != nil { + os.Remove(filepath) + return err + } + if err := d.writeDB(cn, serial, filename, crt); err != nil { + // TODO : remove certificate in case of writeDB problems + return err + } + + if err := d.incrementSerial(serial); err != nil { + return err + } + + return nil +} + +func (d *fileDepot) Serial() (*big.Int, error) { + name := d.path("serial") + s := big.NewInt(2) + if err := d.check("serial"); err != nil { + // assuming it doesnt exist, create + if err := d.writeSerial(s); err != nil { + return nil, err + } + return s, nil + } + file, err := os.Open(name) + if err != nil { + return nil, err + } + defer file.Close() + r := bufio.NewReader(file) + data, err := r.ReadString('\r') + if err != nil && err != io.EOF { + return nil, err + } + data = strings.TrimSuffix(data, "\r") + data = strings.TrimSuffix(data, "\n") + serial, ok := s.SetString(data, 16) + if !ok { + return nil, errors.New("could not convert " + data + " to serial number") + } + return serial, nil +} + +func makeOpenSSLTime(t time.Time) string { + y := (t.Year() % 100) + validDate := fmt.Sprintf("%02d%02d%02d%02d%02d%02dZ", y, t.Month(), t.Day(), t.Hour(), t.Minute(), t.Second()) + return validDate +} + +func makeDn(cert *x509.Certificate) string { + var dn bytes.Buffer + + if len(cert.Subject.Country) > 0 && len(cert.Subject.Country[0]) > 0 { + dn.WriteString("/C=" + cert.Subject.Country[0]) + } + if len(cert.Subject.Province) > 0 && len(cert.Subject.Province[0]) > 0 { + dn.WriteString("/ST=" + cert.Subject.Province[0]) + } + if len(cert.Subject.Locality) > 0 && len(cert.Subject.Locality[0]) > 0 { + dn.WriteString("/L=" + cert.Subject.Locality[0]) + } + if len(cert.Subject.Organization) > 0 && len(cert.Subject.Organization[0]) > 0 { + dn.WriteString("/O=" + cert.Subject.Organization[0]) + } + 
if len(cert.Subject.OrganizationalUnit) > 0 && len(cert.Subject.OrganizationalUnit[0]) > 0 { + dn.WriteString("/OU=" + cert.Subject.OrganizationalUnit[0]) + } + if len(cert.Subject.CommonName) > 0 { + dn.WriteString("/CN=" + cert.Subject.CommonName) + } + if len(cert.EmailAddresses) > 0 { + dn.WriteString("/emailAddress=" + cert.EmailAddresses[0]) + } + return dn.String() +} + +// Determine if the cadb already has a valid certificate with the same name +func (d *fileDepot) HasCN(_ string, allowTime int, cert *x509.Certificate, revokeOldCertificate bool) (bool, error) { + var addDB bytes.Buffer + candidates := make(map[string]string) + + dn := makeDn(cert) + + if err := os.MkdirAll(d.dirPath, 0o755); err != nil { + return false, err + } + + name := d.path("index.txt") + file, err := os.Open(name) + if err != nil { + return false, err + } + defer file.Close() + + // Loop over index.txt, determine if a certificate is valid and can be revoked + // revoke certificate in DB if requested + scanner := bufio.NewScanner(file) + for scanner.Scan() { + line := scanner.Text() + if strings.HasSuffix(line, dn) { + // Removing revoked certificate from candidates, if any + if strings.HasPrefix(line, "R\t") { + entries := strings.Split(line, "\t") + serial := strings.ToUpper(entries[3]) + candidates[serial] = line + delete(candidates, serial) + addDB.WriteString(line + "\n") + // Test & add certificate candidates, if any + } else if strings.HasPrefix(line, "V\t") { + issueDate, err := strconv.ParseInt(strings.Replace(strings.Split(line, "\t")[1], "Z", "", 1), 10, 64) + if err != nil { + return false, errors.New("Could not get expiry date from ca db") + } + minimalRenewDate, err := strconv.ParseInt(strings.Replace(makeOpenSSLTime(time.Now().AddDate(0, 0, allowTime).UTC()), "Z", "", 1), 10, 64) + if err != nil { + return false, errors.New("Could not calculate expiry date") + } + entries := strings.Split(line, "\t") + serial := strings.ToUpper(entries[3]) + + // all non renewable 
certificates + if minimalRenewDate < issueDate && allowTime > 0 { + candidates[serial] = "no" + } else { + candidates[serial] = line + } + } + } else { + addDB.WriteString(line + "\n") + } + } + file.Close() + for key, value := range candidates { + if value == "no" { + return false, errors.New("DN " + dn + " already exists") + } + if revokeOldCertificate { + fmt.Println("Revoking certificate with serial " + key + " from DB. Recreation of CRL needed.") + entries := strings.Split(value, "\t") + addDB.WriteString("R\t" + entries[1] + "\t" + makeOpenSSLTime(time.Now()) + "\t" + strings.ToUpper(entries[3]) + "\t" + entries[4] + "\t" + entries[5] + "\n") + } + } + if err := scanner.Err(); err != nil { + return false, err + } + if revokeOldCertificate { + file, err := os.OpenFile(name, os.O_CREATE|os.O_RDWR, dbPerm) + if err != nil { + return false, err + } + if _, err := file.Write(addDB.Bytes()); err != nil { + return false, err + } + } + return true, nil +} + +func (d *fileDepot) writeDB(cn string, serial *big.Int, filename string, cert *x509.Certificate) error { + var dbEntry bytes.Buffer + + // Revoke old certificate + if _, err := d.HasCN(cn, 0, cert, true); err != nil { + return err + } + if err := os.MkdirAll(d.dirPath, 0o755); err != nil { + return err + } + name := d.path("index.txt") + + file, err := os.OpenFile(name, os.O_CREATE|os.O_RDWR|os.O_APPEND, dbPerm) + if err != nil { + return fmt.Errorf("could not append to "+name+" : %q\n", err.Error()) + } + defer file.Close() + + // Format of the caDB, see http://pki-tutorial.readthedocs.io/en/latest/cadb.html + // STATUSFLAG EXPIRATIONDATE REVOCATIONDATE(or emtpy) SERIAL_IN_HEX CERTFILENAME_OR_'unknown' Certificate_DN + + serialHex := fmt.Sprintf("%X", cert.SerialNumber) + if len(serialHex)%2 == 1 { + serialHex = fmt.Sprintf("0%s", serialHex) + } + + validDate := makeOpenSSLTime(cert.NotAfter) + + dn := makeDn(cert) + + // Valid + dbEntry.WriteString("V\t") + // Valid till + dbEntry.WriteString(validDate + "\t") 
+ // Empty (not revoked) + dbEntry.WriteString("\t") + // Serial in Hex + dbEntry.WriteString(serialHex + "\t") + // Certificate file name + dbEntry.WriteString(filename + "\t") + // Certificate DN + dbEntry.WriteString(dn) + dbEntry.WriteString("\n") + + if _, err := file.Write(dbEntry.Bytes()); err != nil { + return err + } + return nil +} + +func (d *fileDepot) writeSerial(serial *big.Int) error { + if err := os.MkdirAll(d.dirPath, 0o755); err != nil { + return err + } + name := d.path("serial") + os.Remove(name) + + file, err := os.OpenFile(name, os.O_WRONLY|os.O_CREATE|os.O_EXCL, serialPerm) + if err != nil { + return err + } + defer file.Close() + + if _, err := file.WriteString(fmt.Sprintf("%x\n", serial.Bytes())); err != nil { + os.Remove(name) + return err + } + return nil +} + +// read serial and increment +func (d *fileDepot) incrementSerial(s *big.Int) error { + serial := s.Add(s, big.NewInt(1)) + if err := d.writeSerial(serial); err != nil { + return err + } + return nil +} + +type file struct { + Info os.FileInfo + Data []byte +} + +func (d *fileDepot) check(path string) error { + name := d.path(path) + _, err := os.Stat(name) + if err != nil { + return err + } + return nil +} + +func (d *fileDepot) getFile(path string) (*file, error) { + if err := d.check(path); err != nil { + return nil, err + } + fi, err := os.Stat(d.path(path)) + if err != nil { + return nil, err + } + b, err := ioutil.ReadFile(d.path(path)) + return &file{fi, b}, err +} + +func (d *fileDepot) path(name string) string { + return filepath.Join(d.dirPath, name) +} + +const ( + rsaPrivateKeyPEMBlockType = "RSA PRIVATE KEY" + certificatePEMBlockType = "CERTIFICATE" +) + +// load an encrypted private key from disk +func loadKey(data []byte, password []byte) (*rsa.PrivateKey, error) { + pemBlock, _ := pem.Decode(data) + if pemBlock == nil { + return nil, errors.New("PEM decode failed") + } + if pemBlock.Type != rsaPrivateKeyPEMBlockType { + return nil, errors.New("unmatched type or 
headers") + } + + b, err := x509.DecryptPEMBlock(pemBlock, password) + if err != nil { + return nil, err + } + return x509.ParsePKCS1PrivateKey(b) +} + +// load an encrypted private key from disk +func loadCert(data []byte) (*x509.Certificate, error) { + pemBlock, _ := pem.Decode(data) + if pemBlock == nil { + return nil, errors.New("PEM decode failed") + } + if pemBlock.Type != certificatePEMBlockType { + return nil, errors.New("unmatched type or headers") + } + + return x509.ParseCertificate(pemBlock.Bytes) +} + +func pemCert(derBytes []byte) []byte { + pemBlock := &pem.Block{ + Type: certificatePEMBlockType, + Headers: nil, + Bytes: derBytes, + } + out := pem.EncodeToMemory(pemBlock) + return out +} diff --git a/server/mdm/scep/depot/signer.go b/server/mdm/scep/depot/signer.go new file mode 100644 index 000000000..3f5e3562d --- /dev/null +++ b/server/mdm/scep/depot/signer.go @@ -0,0 +1,141 @@ +package depot + +import ( + "crypto/rand" + "crypto/x509" + "sync" + "time" + + "github.com/fleetdm/fleet/v4/server/mdm/scep/cryptoutil" + "github.com/fleetdm/fleet/v4/server/mdm/scep/scep" +) + +// Signer signs x509 certificates and stores them in a Depot +type Signer struct { + depot Depot + mu sync.Mutex + caPass string + allowRenewalDays int + validityDays int + serverAttrs bool +} + +// Option customizes Signer +type Option func(*Signer) + +// NewSigner creates a new Signer +func NewSigner(depot Depot, opts ...Option) *Signer { + s := &Signer{ + depot: depot, + allowRenewalDays: 14, + validityDays: 365, + } + for _, opt := range opts { + opt(s) + } + return s +} + +// WithCAPass specifies the password to use with an encrypted CA key +func WithCAPass(pass string) Option { + return func(s *Signer) { + s.caPass = pass + } +} + +// WithAllowRenewalDays sets the allowable renewal time for existing certs +func WithAllowRenewalDays(r int) Option { + return func(s *Signer) { + s.allowRenewalDays = r + } +} + +// WithValidityDays sets the validity period new certs will use 
+func WithValidityDays(v int) Option { + return func(s *Signer) { + s.validityDays = v + } +} + +func WithSeverAttrs() Option { + return func(s *Signer) { + s.serverAttrs = true + } +} + +// SignCSR signs a certificate using Signer's Depot CA +func (s *Signer) SignCSR(m *scep.CSRReqMessage) (*x509.Certificate, error) { + id, err := cryptoutil.GenerateSubjectKeyID(m.CSR.PublicKey) + if err != nil { + return nil, err + } + + s.mu.Lock() + defer s.mu.Unlock() + + serial, err := s.depot.Serial() + if err != nil { + return nil, err + } + + // create cert template + tmpl := &x509.Certificate{ + SerialNumber: serial, + Subject: m.CSR.Subject, + NotBefore: time.Now().Add(time.Second * -600).UTC(), + NotAfter: time.Now().AddDate(0, 0, s.validityDays).UTC(), + SubjectKeyId: id, + KeyUsage: x509.KeyUsageDigitalSignature, + ExtKeyUsage: []x509.ExtKeyUsage{ + x509.ExtKeyUsageClientAuth, + }, + SignatureAlgorithm: m.CSR.SignatureAlgorithm, + DNSNames: m.CSR.DNSNames, + EmailAddresses: m.CSR.EmailAddresses, + IPAddresses: m.CSR.IPAddresses, + URIs: m.CSR.URIs, + } + + if s.serverAttrs { + tmpl.KeyUsage |= x509.KeyUsageDataEncipherment | x509.KeyUsageKeyEncipherment + tmpl.ExtKeyUsage = append(tmpl.ExtKeyUsage, x509.ExtKeyUsageServerAuth) + } + + caCerts, caKey, err := s.depot.CA([]byte(s.caPass)) + if err != nil { + return nil, err + } + + crtBytes, err := x509.CreateCertificate(rand.Reader, tmpl, caCerts[0], m.CSR.PublicKey, caKey) + if err != nil { + return nil, err + } + + crt, err := x509.ParseCertificate(crtBytes) + if err != nil { + return nil, err + } + + name := certName(crt) + + // Test if this certificate is already in the CADB, revoke if needed + // revocation is done if the validity of the existing certificate is + // less than allowRenewalDays + _, err = s.depot.HasCN(name, s.allowRenewalDays, crt, false) + if err != nil { + return nil, err + } + + if err := s.depot.Put(name, crt); err != nil { + return nil, err + } + + return crt, nil +} + +func certName(crt 
*x509.Certificate) string { + if crt.Subject.CommonName != "" { + return crt.Subject.CommonName + } + return string(crt.Signature) +} diff --git a/server/mdm/scep/scep/certs_selector.go b/server/mdm/scep/scep/certs_selector.go new file mode 100644 index 000000000..62d726d95 --- /dev/null +++ b/server/mdm/scep/scep/certs_selector.go @@ -0,0 +1,57 @@ +package scep + +import ( + "bytes" + "crypto" + "crypto/x509" +) + +// A CertsSelector filters certificates. +type CertsSelector interface { + SelectCerts([]*x509.Certificate) []*x509.Certificate +} + +// CertsSelectorFunc is a type of function that filters certificates. +type CertsSelectorFunc func([]*x509.Certificate) []*x509.Certificate + +func (f CertsSelectorFunc) SelectCerts(certs []*x509.Certificate) []*x509.Certificate { + return f(certs) +} + +// NopCertsSelector returns a CertsSelectorFunc that does not do anything. +func NopCertsSelector() CertsSelectorFunc { + return func(certs []*x509.Certificate) []*x509.Certificate { + return certs + } +} + +// A EnciphermentCertsSelector returns a CertsSelectorFunc that selects +// certificates eligible for key encipherment. This certsSelector can be used +// to filter PKCSReq recipients. 
+func EnciphermentCertsSelector() CertsSelectorFunc { + return func(certs []*x509.Certificate) (selected []*x509.Certificate) { + enciphermentKeyUsages := x509.KeyUsageKeyEncipherment | x509.KeyUsageDataEncipherment + for _, cert := range certs { + if cert.KeyUsage&enciphermentKeyUsages != 0 { + selected = append(selected, cert) + } + } + return selected + } +} + +// FingerprintCertsSelector selects a certificate that matches hash using +// hashType against the digest of the raw certificate DER bytes +func FingerprintCertsSelector(hashType crypto.Hash, hash []byte) CertsSelectorFunc { + return func(certs []*x509.Certificate) (selected []*x509.Certificate) { + for _, cert := range certs { + h := hashType.New() + h.Write(cert.Raw) + if bytes.Compare(hash, h.Sum(nil)) == 0 { + selected = append(selected, cert) + return + } + } + return + } +} diff --git a/server/mdm/scep/scep/certs_selector_test.go b/server/mdm/scep/scep/certs_selector_test.go new file mode 100644 index 000000000..643249a04 --- /dev/null +++ b/server/mdm/scep/scep/certs_selector_test.go @@ -0,0 +1,156 @@ +package scep + +import ( + "crypto" + _ "crypto/sha256" + "crypto/x509" + "encoding/hex" + "testing" +) + +func TestFingerprintCertsSelector(t *testing.T) { + for _, test := range []struct { + testName string + hashType crypto.Hash + hash string + certRaw []byte + expectedCount int + }{ + { + "null SHA-256 hash", + crypto.SHA256, + "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855", + nil, + 1, + }, + { + "3 byte SHA-256 hash", + crypto.SHA256, + "039058c6f2c0cb492c533b0a4d14ef77cc0f78abccced5287d84a1a2011cfb81", + []byte{1, 2, 3}, + 1, + }, + { + "mismatched hash", + crypto.SHA256, + "8db07061ebb4cd0b0cd00825b363e5fb7f8131d8ff2c1fd70d03fa4fd6dc3785", + []byte{4, 5, 6}, + 0, + }, + } { + test := test + t.Run(test.testName, func(t *testing.T) { + t.Parallel() + + fakeCerts := []*x509.Certificate{{Raw: test.certRaw}} + + hash, err := hex.DecodeString(test.hash) + if err != nil { + 
t.Fatal(err) + } + if want, have := test.hashType.Size(), len(hash); want != have { + t.Errorf("invalid input hash length, want: %d have: %d", want, have) + } + + selected := FingerprintCertsSelector(test.hashType, hash).SelectCerts(fakeCerts) + + if want, have := test.expectedCount, len(selected); want != have { + t.Errorf("wrong selected certs count, want: %d have: %d", want, have) + } + }) + } +} + +func TestEnciphermentCertsSelector(t *testing.T) { + for _, test := range []struct { + testName string + certs []*x509.Certificate + expectedSelectedCerts []*x509.Certificate + }{ + { + "empty certificates list", + []*x509.Certificate{}, + []*x509.Certificate{}, + }, + { + "non-empty certificates list", + []*x509.Certificate{ + {KeyUsage: x509.KeyUsageKeyEncipherment}, + {KeyUsage: x509.KeyUsageDataEncipherment}, + {KeyUsage: x509.KeyUsageKeyEncipherment | x509.KeyUsageDataEncipherment}, + {KeyUsage: x509.KeyUsageDigitalSignature}, + {}, + }, + []*x509.Certificate{ + {KeyUsage: x509.KeyUsageKeyEncipherment}, + {KeyUsage: x509.KeyUsageDataEncipherment}, + {KeyUsage: x509.KeyUsageKeyEncipherment | x509.KeyUsageDataEncipherment}, + }, + }, + } { + test := test + t.Run(test.testName, func(t *testing.T) { + t.Parallel() + + selected := EnciphermentCertsSelector().SelectCerts(test.certs) + if !certsKeyUsagesEq(selected, test.expectedSelectedCerts) { + t.Fatal("selected and expected certificates did not match") + } + }) + } +} + +func TestNopCertsSelector(t *testing.T) { + for _, test := range []struct { + testName string + certs []*x509.Certificate + expectedSelectedCerts []*x509.Certificate + }{ + { + "empty certificates list", + []*x509.Certificate{}, + []*x509.Certificate{}, + }, + { + "non-empty certificates list", + []*x509.Certificate{ + {KeyUsage: x509.KeyUsageKeyEncipherment}, + {KeyUsage: x509.KeyUsageDataEncipherment}, + {KeyUsage: x509.KeyUsageKeyEncipherment | x509.KeyUsageDataEncipherment}, + {KeyUsage: x509.KeyUsageDigitalSignature}, + {}, + }, + 
[]*x509.Certificate{ + {KeyUsage: x509.KeyUsageKeyEncipherment}, + {KeyUsage: x509.KeyUsageDataEncipherment}, + {KeyUsage: x509.KeyUsageKeyEncipherment | x509.KeyUsageDataEncipherment}, + {KeyUsage: x509.KeyUsageDigitalSignature}, + {}, + }, + }, + } { + test := test + t.Run(test.testName, func(t *testing.T) { + t.Parallel() + + selected := NopCertsSelector().SelectCerts(test.certs) + if !certsKeyUsagesEq(selected, test.expectedSelectedCerts) { + t.Fatal("selected and expected certificates did not match") + } + }) + } +} + +// certsKeyUsagesEq returns true if certs in a have the same key usages +// of certs in b and in the same order. +func certsKeyUsagesEq(a []*x509.Certificate, b []*x509.Certificate) bool { + if len(a) != len(b) { + return false + } + for i, cert := range a { + if cert.KeyUsage != b[i].KeyUsage { + return false + } + } + return true +} diff --git a/server/mdm/scep/scep/scep.go b/server/mdm/scep/scep/scep.go new file mode 100644 index 000000000..e28927d3f --- /dev/null +++ b/server/mdm/scep/scep/scep.go @@ -0,0 +1,665 @@ +// Package scep provides common functionality for encoding and decoding +// Simple Certificate Enrolment Protocol pki messages as defined by +// https://tools.ietf.org/html/draft-gutmann-scep-02 +package scep + +import ( + "bytes" + "crypto" + "crypto/rand" + "crypto/rsa" + "crypto/x509" + "encoding/asn1" + "encoding/base64" + "errors" + "fmt" + + "github.com/fleetdm/fleet/v4/server/mdm/scep/cryptoutil" + "github.com/fleetdm/fleet/v4/server/mdm/scep/cryptoutil/x509util" + + "github.com/go-kit/kit/log" + "github.com/go-kit/kit/log/level" + "go.mozilla.org/pkcs7" +) + +// errors +var ( + errNotImplemented = errors.New("not implemented") + errUnknownMessageType = errors.New("unknown messageType") +) + +// The MessageType attribute specifies the type of operation performed +// by the transaction. This attribute MUST be included in all PKI +// messages. 
+// +// The following message types are defined: +type MessageType string + +// Undefined message types are treated as an error. +const ( + CertRep MessageType = "3" + RenewalReq = "17" + UpdateReq = "18" + PKCSReq = "19" + CertPoll = "20" + GetCert = "21" + GetCRL = "22" +) + +func (msg MessageType) String() string { + switch msg { + case CertRep: + return "CertRep (3)" + case RenewalReq: + return "RenewalReq (17)" + case UpdateReq: + return "UpdateReq (18)" + case PKCSReq: + return "PKCSReq (19)" + case CertPoll: + return "CertPoll (20) " + case GetCert: + return "GetCert (21)" + case GetCRL: + return "GetCRL (22)" + default: + panic("scep: unknown messageType" + msg) + } +} + +// PKIStatus is a SCEP pkiStatus attribute which holds transaction status information. +// All SCEP responses MUST include a pkiStatus. +// +// The following pkiStatuses are defined: +type PKIStatus string + +// Undefined pkiStatus attributes are treated as an error +const ( + SUCCESS PKIStatus = "0" + FAILURE = "2" + PENDING = "3" +) + +// FailInfo is a SCEP failInfo attribute +// +// The FailInfo attribute MUST contain one of the following failure +// reasons: +type FailInfo string + +const ( + BadAlg FailInfo = "0" + BadMessageCheck = "1" + BadRequest = "2" + BadTime = "3" + BadCertID = "4" +) + +func (info FailInfo) String() string { + switch info { + case BadAlg: + return "badAlg (0)" + case BadMessageCheck: + return "badMessageCheck (1)" + case BadRequest: + return "badRequest (2)" + case BadTime: + return "badTime (3)" + case BadCertID: + return "badCertID (4)" + default: + panic("scep: unknown failInfo type" + info) + } +} + +// SenderNonce is a random 16 byte number. +// A sender must include the senderNonce in each transaction to a recipient. +type SenderNonce []byte + +// The RecipientNonce MUST be copied from the SenderNonce +// and included in the reply. 
+type RecipientNonce []byte + +// The TransactionID is a text +// string generated by the client when starting a transaction. The +// client MUST generate a unique string as the transaction identifier, +// which MUST be used for all PKI messages exchanged for a given +// enrolment, encoded as a PrintableString. +type TransactionID string + +// SCEP OIDs +var ( + oidSCEPmessageType = asn1.ObjectIdentifier{2, 16, 840, 1, 113733, 1, 9, 2} + oidSCEPpkiStatus = asn1.ObjectIdentifier{2, 16, 840, 1, 113733, 1, 9, 3} + oidSCEPfailInfo = asn1.ObjectIdentifier{2, 16, 840, 1, 113733, 1, 9, 4} + oidSCEPsenderNonce = asn1.ObjectIdentifier{2, 16, 840, 1, 113733, 1, 9, 5} + oidSCEPrecipientNonce = asn1.ObjectIdentifier{2, 16, 840, 1, 113733, 1, 9, 6} + oidSCEPtransactionID = asn1.ObjectIdentifier{2, 16, 840, 1, 113733, 1, 9, 7} +) + +// WithLogger adds option logging to the SCEP operations. +func WithLogger(logger log.Logger) Option { + return func(c *config) { + c.logger = logger + } +} + +// WithCACerts adds option CA certificates to the SCEP operations. +// Note: This changes the verification behavior of PKCS #7 messages. If this +// option is specified, only caCerts will be used as expected signers. +func WithCACerts(caCerts []*x509.Certificate) Option { + return func(c *config) { + c.caCerts = caCerts + } +} + +// WithCertsSelector adds the certificates certsSelector option to the SCEP +// operations. +// This option is effective when used with NewCSRRequest function. In +// this case, only certificates selected with the certsSelector will be used +// as the PKCS #7 message recipients. +func WithCertsSelector(selector CertsSelector) Option { + return func(c *config) { + c.certsSelector = selector + } +} + +// Option specifies custom configuration for SCEP. 
+type Option func(*config) + +type config struct { + logger log.Logger + caCerts []*x509.Certificate // specified if CA certificates have already been retrieved + certsSelector CertsSelector +} + +// PKIMessage defines the possible SCEP message types +type PKIMessage struct { + TransactionID + MessageType + SenderNonce + *CertRepMessage + *CSRReqMessage + + // DER Encoded PKIMessage + Raw []byte + + // parsed + p7 *pkcs7.PKCS7 + + // decrypted enveloped content + pkiEnvelope []byte + + // Used to encrypt message + Recipients []*x509.Certificate + + // Signer info + SignerKey *rsa.PrivateKey + SignerCert *x509.Certificate + + logger log.Logger +} + +// CertRepMessage is a type of PKIMessage +type CertRepMessage struct { + PKIStatus + RecipientNonce + FailInfo + + Certificate *x509.Certificate + + degenerate []byte +} + +// CSRReqMessage can be of the type PKCSReq/RenewalReq/UpdateReq +// and includes a PKCS#10 CSR request. +// The content of this message is protected +// by the recipient public key(example CA) +type CSRReqMessage struct { + RawDecrypted []byte + + // PKCS#10 Certificate request inside the envelope + CSR *x509.CertificateRequest + + ChallengePassword string +} + +// ParsePKIMessage unmarshals a PKCS#7 signed data into a PKI message struct +func ParsePKIMessage(data []byte, opts ...Option) (*PKIMessage, error) { + conf := &config{logger: log.NewNopLogger()} + for _, opt := range opts { + opt(conf) + } + + // parse PKCS#7 signed data + p7, err := pkcs7.Parse(data) + if err != nil { + return nil, err + } + + if len(conf.caCerts) > 0 { + // According to RFC #2315 Section 9.1, it is valid that the server sends fewer + // certificates than necessary, if it is expected that those verifying the + // signatures have an alternate means of obtaining necessary certificates. + // In SCEP case, an alternate means is to use GetCaCert request. 
+ // Note: The https://github.com/jscep/jscep implementation logs a warning if + // no certificates were found for signers in the PKCS #7 received from the + // server, but the certificates obtained from GetCaCert request are still + // used for decoding the message. + p7.Certificates = conf.caCerts + } + + if err := p7.Verify(); err != nil { + return nil, err + } + + var tID TransactionID + if err := p7.UnmarshalSignedAttribute(oidSCEPtransactionID, &tID); err != nil { + return nil, err + } + + var msgType MessageType + if err := p7.UnmarshalSignedAttribute(oidSCEPmessageType, &msgType); err != nil { + return nil, err + } + + msg := &PKIMessage{ + TransactionID: tID, + MessageType: msgType, + Raw: data, + p7: p7, + logger: conf.logger, + } + + // log relevant key-values when parsing a pkiMessage. + logKeyVals := []interface{}{ + "msg", "parsed scep pkiMessage", + "scep_message_type", msgType, + "transaction_id", tID, + } + level.Debug(msg.logger).Log(logKeyVals...) + + if err := msg.parseMessageType(); err != nil { + return nil, err + } + + return msg, nil +} + +func (msg *PKIMessage) parseMessageType() error { + switch msg.MessageType { + case CertRep: + var status PKIStatus + if err := msg.p7.UnmarshalSignedAttribute(oidSCEPpkiStatus, &status); err != nil { + return err + } + var rn RecipientNonce + if err := msg.p7.UnmarshalSignedAttribute(oidSCEPrecipientNonce, &rn); err != nil { + return err + } + if len(rn) == 0 { + return errors.New("scep pkiMessage must include recipientNonce attribute") + } + cr := &CertRepMessage{ + PKIStatus: status, + RecipientNonce: rn, + } + switch status { + case SUCCESS: + break + case FAILURE: + var fi FailInfo + if err := msg.p7.UnmarshalSignedAttribute(oidSCEPfailInfo, &fi); err != nil { + return err + } + if fi == "" { + return errors.New("scep pkiStatus FAILURE must have a failInfo attribute") + } + cr.FailInfo = fi + case PENDING: + break + default: + return fmt.Errorf("unknown scep pkiStatus %s", status) + } + 
msg.CertRepMessage = cr + return nil + case PKCSReq, UpdateReq, RenewalReq: + var sn SenderNonce + if err := msg.p7.UnmarshalSignedAttribute(oidSCEPsenderNonce, &sn); err != nil { + return err + } + if len(sn) == 0 { + return errors.New("scep pkiMessage must include senderNonce attribute") + } + msg.SenderNonce = sn + return nil + case GetCRL, GetCert, CertPoll: + return errNotImplemented + default: + return errUnknownMessageType + } +} + +// DecryptPKIEnvelope decrypts the pkcs envelopedData inside the SCEP PKIMessage +func (msg *PKIMessage) DecryptPKIEnvelope(cert *x509.Certificate, key *rsa.PrivateKey) error { + p7, err := pkcs7.Parse(msg.p7.Content) + if err != nil { + return err + } + msg.pkiEnvelope, err = p7.Decrypt(cert, key) + if err != nil { + return err + } + + logKeyVals := []interface{}{ + "msg", "decrypt pkiEnvelope", + } + defer func() { level.Debug(msg.logger).Log(logKeyVals...) }() + + switch msg.MessageType { + case CertRep: + certs, err := CACerts(msg.pkiEnvelope) + if err != nil { + return err + } + msg.CertRepMessage.Certificate = certs[0] + logKeyVals = append(logKeyVals, "ca_certs", len(certs)) + return nil + case PKCSReq, UpdateReq, RenewalReq: + csr, err := x509.ParseCertificateRequest(msg.pkiEnvelope) + if err != nil { + return errors.Join(err, errors.New("parse CSR from pkiEnvelope")) + } + // check for challengePassword + cp, err := x509util.ParseChallengePassword(msg.pkiEnvelope) + if err != nil { + return errors.Join(err, errors.New("scep: parse challenge password in pkiEnvelope")) + } + msg.CSRReqMessage = &CSRReqMessage{ + RawDecrypted: msg.pkiEnvelope, + CSR: csr, + ChallengePassword: cp, + } + logKeyVals = append(logKeyVals, "has_challenge", cp != "") + return nil + case GetCRL, GetCert, CertPoll: + return errNotImplemented + default: + return errUnknownMessageType + } +} + +func (msg *PKIMessage) Fail(crtAuth *x509.Certificate, keyAuth *rsa.PrivateKey, info FailInfo) (*PKIMessage, error) { + config := pkcs7.SignerInfoConfig{ + 
ExtraSignedAttributes: []pkcs7.Attribute{ + { + Type: oidSCEPtransactionID, + Value: msg.TransactionID, + }, + { + Type: oidSCEPpkiStatus, + Value: FAILURE, + }, + { + Type: oidSCEPfailInfo, + Value: info, + }, + { + Type: oidSCEPmessageType, + Value: CertRep, + }, + { + Type: oidSCEPsenderNonce, + Value: msg.SenderNonce, + }, + { + Type: oidSCEPrecipientNonce, + Value: msg.SenderNonce, + }, + }, + } + + sd, err := pkcs7.NewSignedData(nil) + if err != nil { + return nil, err + } + + // sign the attributes + if err := sd.AddSigner(crtAuth, keyAuth, config); err != nil { + return nil, err + } + + certRepBytes, err := sd.Finish() + if err != nil { + return nil, err + } + + cr := &CertRepMessage{ + PKIStatus: FAILURE, + FailInfo: info, + RecipientNonce: RecipientNonce(msg.SenderNonce), + } + + // create a CertRep message from the original + crepMsg := &PKIMessage{ + Raw: certRepBytes, + TransactionID: msg.TransactionID, + MessageType: CertRep, + CertRepMessage: cr, + } + + return crepMsg, nil +} + +// Success returns a new PKIMessage with CertRep data using an already-issued certificate +func (msg *PKIMessage) Success(crtAuth *x509.Certificate, keyAuth *rsa.PrivateKey, crt *x509.Certificate) (*PKIMessage, error) { + // check if CSRReqMessage has already been decrypted + if msg.CSRReqMessage.CSR == nil { + if err := msg.DecryptPKIEnvelope(crtAuth, keyAuth); err != nil { + return nil, err + } + } + + // create a degenerate cert structure + deg, err := DegenerateCertificates([]*x509.Certificate{crt}) + if err != nil { + return nil, err + } + + // encrypt degenerate data using the original message's recipients + e7, err := pkcs7.Encrypt(deg, msg.p7.Certificates) + if err != nil { + return nil, err + } + + // PKIMessageAttributes to be signed + config := pkcs7.SignerInfoConfig{ + ExtraSignedAttributes: []pkcs7.Attribute{ + { + Type: oidSCEPtransactionID, + Value: msg.TransactionID, + }, + { + Type: oidSCEPpkiStatus, + Value: SUCCESS, + }, + { + Type: 
oidSCEPmessageType, + Value: CertRep, + }, + { + Type: oidSCEPsenderNonce, + Value: msg.SenderNonce, + }, + { + Type: oidSCEPrecipientNonce, + Value: msg.SenderNonce, + }, + }, + } + + signedData, err := pkcs7.NewSignedData(e7) + if err != nil { + return nil, err + } + // add the certificate into the signed data type + // this cert must be added before the signedData because the recipient will expect it + // as the first certificate in the array + signedData.AddCertificate(crt) + // sign the attributes + if err := signedData.AddSigner(crtAuth, keyAuth, config); err != nil { + return nil, err + } + + certRepBytes, err := signedData.Finish() + if err != nil { + return nil, err + } + + cr := &CertRepMessage{ + PKIStatus: SUCCESS, + RecipientNonce: RecipientNonce(msg.SenderNonce), + Certificate: crt, + degenerate: deg, + } + + // create a CertRep message from the original + crepMsg := &PKIMessage{ + Raw: certRepBytes, + TransactionID: msg.TransactionID, + MessageType: CertRep, + CertRepMessage: cr, + } + + return crepMsg, nil +} + +// DegenerateCertificates creates a degenerate certificates-only PKCS#7 SignedData type +func DegenerateCertificates(certs []*x509.Certificate) ([]byte, error) { + var buf bytes.Buffer + for _, cert := range certs { + buf.Write(cert.Raw) + } + degenerate, err := pkcs7.DegenerateCertificate(buf.Bytes()) + if err != nil { + return nil, err + } + return degenerate, nil +} + +// CACerts extracts the CA certificate or chain from PKCS#7 degenerate signed data +func CACerts(data []byte) ([]*x509.Certificate, error) { + p7, err := pkcs7.Parse(data) + if err != nil { + return nil, err + } + return p7.Certificates, nil +} + +// NewCSRRequest creates a SCEP PKI PKCSReq/UpdateReq message +func NewCSRRequest(csr *x509.CertificateRequest, tmpl *PKIMessage, opts ...Option) (*PKIMessage, error) { + conf := &config{logger: log.NewNopLogger(), certsSelector: NopCertsSelector()} + for _, opt := range opts { + opt(conf) + } + + derBytes := csr.Raw + recipients := 
conf.certsSelector.SelectCerts(tmpl.Recipients) + if len(recipients) < 1 { + if len(tmpl.Recipients) >= 1 { + // our certsSelector eliminated any CA/RA recipients + return nil, errors.New("no selected CA/RA recipients") + } + return nil, errors.New("no CA/RA recipients") + } + e7, err := pkcs7.Encrypt(derBytes, recipients) + if err != nil { + return nil, err + } + + signedData, err := pkcs7.NewSignedData(e7) + if err != nil { + return nil, err + } + + // create transaction ID from public key hash + tID, err := newTransactionID(csr.PublicKey) + if err != nil { + return nil, err + } + + sn, err := newNonce() + if err != nil { + return nil, err + } + + level.Debug(conf.logger).Log( + "msg", "creating SCEP CSR request", + "transaction_id", tID, + "signer_cn", tmpl.SignerCert.Subject.CommonName, + ) + + // PKIMessageAttributes to be signed + config := pkcs7.SignerInfoConfig{ + ExtraSignedAttributes: []pkcs7.Attribute{ + { + Type: oidSCEPtransactionID, + Value: tID, + }, + { + Type: oidSCEPmessageType, + Value: tmpl.MessageType, + }, + { + Type: oidSCEPsenderNonce, + Value: sn, + }, + }, + } + + // sign attributes + if err := signedData.AddSigner(tmpl.SignerCert, tmpl.SignerKey, config); err != nil { + return nil, err + } + + rawPKIMessage, err := signedData.Finish() + if err != nil { + return nil, err + } + + cr := &CSRReqMessage{ + CSR: csr, + } + + newMsg := &PKIMessage{ + Raw: rawPKIMessage, + MessageType: tmpl.MessageType, + TransactionID: tID, + SenderNonce: sn, + CSRReqMessage: cr, + Recipients: recipients, + logger: conf.logger, + } + + return newMsg, nil +} + +func newNonce() (SenderNonce, error) { + size := 16 + b := make([]byte, size) + _, err := rand.Read(b) + if err != nil { + return SenderNonce{}, err + } + return SenderNonce(b), nil +} + +// newTransactionID uses the public key to create a deterministic transactionID +func newTransactionID(key crypto.PublicKey) (TransactionID, error) { + id, err := cryptoutil.GenerateSubjectKeyID(key) + if err != nil { + return "", err + } + + 
encHash := base64.StdEncoding.EncodeToString(id) + return TransactionID(encHash), nil +} diff --git a/server/mdm/scep/scep/scep_test.go b/server/mdm/scep/scep/scep_test.go new file mode 100644 index 000000000..134bad225 --- /dev/null +++ b/server/mdm/scep/scep/scep_test.go @@ -0,0 +1,350 @@ +package scep_test + +import ( + "crypto/rand" + "crypto/rsa" + "crypto/x509" + "crypto/x509/pkix" + "encoding/pem" + "errors" + "io/ioutil" + "math/big" + "testing" + "time" + + "github.com/fleetdm/fleet/v4/server/mdm/scep/cryptoutil" + "github.com/fleetdm/fleet/v4/server/mdm/scep/depot" + "github.com/fleetdm/fleet/v4/server/mdm/scep/scep" +) + +func testParsePKIMessage(t *testing.T, data []byte) *scep.PKIMessage { + msg, err := scep.ParsePKIMessage(data) + if err != nil { + t.Fatal(err) + } + validateParsedPKIMessage(t, msg) + return msg +} + +func validateParsedPKIMessage(t *testing.T, msg *scep.PKIMessage) { + if msg.TransactionID == "" { + t.Errorf("expected TransactionID attribute") + } + if msg.MessageType == "" { + t.Errorf("expected MessageType attribute") + } + switch msg.MessageType { + case scep.CertRep: + if len(msg.RecipientNonce) == 0 { + t.Errorf("expected RecipientNonce attribute") + } + case scep.PKCSReq, scep.UpdateReq, scep.RenewalReq: + if len(msg.SenderNonce) == 0 { + t.Errorf("expected SenderNonce attribute") + } + } +} + +// Tests the case when servers reply with PKCS #7 signed-data that contains no +// certificates assuming that the client can request CA certificates using +// GetCaCert request. +func TestParsePKIEnvelopeCert_MissingCertificatesForSigners(t *testing.T) { + certRepMissingCertificates := loadTestFile(t, "testdata/testca2/CertRep_NoCertificatesForSigners.der") + caPEM := loadTestFile(t, "testdata/testca2/ca2.pem") + + // Try to parse the PKIMessage without providing certificates for signers. 
+ _, err := scep.ParsePKIMessage(certRepMissingCertificates) + if err == nil { + t.Fatal("parsed PKIMessage without providing signer certificates") + } + + signerCert := decodePEMCert(t, caPEM) + msg, err := scep.ParsePKIMessage(certRepMissingCertificates, scep.WithCACerts([]*x509.Certificate{signerCert})) + if err != nil { + t.Fatalf("failed to parse PKIMessage: %v", err) + } + validateParsedPKIMessage(t, msg) +} + +func TestDecryptPKIEnvelopeCSR(t *testing.T) { + pkcsReq := loadTestFile(t, "testdata/PKCSReq.der") + msg := testParsePKIMessage(t, pkcsReq) + cacert, cakey := loadCACredentials(t) + err := msg.DecryptPKIEnvelope(cacert, cakey) + if err != nil { + t.Fatal(err) + } + if msg.CSRReqMessage.CSR == nil { + t.Errorf("expected non-nil CSR field") + } +} + +func TestDecryptPKIEnvelopeCert(t *testing.T) { + certRep := loadTestFile(t, "testdata/CertRep.der") + testParsePKIMessage(t, certRep) + // clientcert, clientkey := loadClientCredentials(t) + // err = msg.DecryptPKIEnvelope(clientcert, clientkey) + // if err != nil { + // t.Fatal(err) + // } +} + +func TestSignCSR(t *testing.T) { + pkcsReq := loadTestFile(t, "testdata/PKCSReq.der") + msg := testParsePKIMessage(t, pkcsReq) + cacert, cakey := loadCACredentials(t) + err := msg.DecryptPKIEnvelope(cacert, cakey) + if err != nil { + t.Fatal(err) + } + csr := msg.CSRReqMessage.CSR + id, err := cryptoutil.GenerateSubjectKeyID(csr.PublicKey) + if err != nil { + t.Fatal(err) + } + tmpl := &x509.Certificate{ + SerialNumber: big.NewInt(4), + Subject: csr.Subject, + NotBefore: time.Now().Add(-600 * time.Second).UTC(), + NotAfter: time.Now().AddDate(1, 0, 0).UTC(), + SubjectKeyId: id, + ExtKeyUsage: []x509.ExtKeyUsage{ + x509.ExtKeyUsageAny, + x509.ExtKeyUsageClientAuth, + }, + } + // sign the CSR creating a DER encoded cert + crtBytes, err := x509.CreateCertificate(rand.Reader, tmpl, cacert, csr.PublicKey, cakey) + if err != nil { + t.Fatal(err) + } + crt, err := x509.ParseCertificate(crtBytes) + if err != nil { + t.Fatal(err) + } + 
certRep, err := msg.Success(cacert, cakey, crt) + if err != nil { + t.Fatal(err) + } + testParsePKIMessage(t, certRep.Raw) +} + +func TestNewCSRRequest(t *testing.T) { + for _, test := range []struct { + testName string + keyUsage x509.KeyUsage + certsSelectorFunc scep.CertsSelectorFunc + shouldCreateCSR bool + }{ + { + "KeyEncipherment not set with NOP certificates selector", + x509.KeyUsageCertSign, + scep.NopCertsSelector(), + true, + }, + { + "KeyEncipherment is set with NOP certificates selector", + x509.KeyUsageCertSign | x509.KeyUsageKeyEncipherment, + scep.NopCertsSelector(), + true, + }, + { + "KeyEncipherment not set with Encipherment certificates selector", + x509.KeyUsageCertSign, + scep.EnciphermentCertsSelector(), + false, + }, + { + "KeyEncipherment is set with Encipherment certificates selector", + x509.KeyUsageCertSign | x509.KeyUsageKeyEncipherment, + scep.EnciphermentCertsSelector(), + true, + }, + } { + test := test + t.Run(test.testName, func(t *testing.T) { + t.Parallel() + key, err := newRSAKey(2048) + if err != nil { + t.Fatal(err) + } + derBytes, err := newCSR(key, "john.doe@example.com", "US", "com.apple.scep.2379B935-294B-4AF1-A213-9BD44A2C6688") + if err != nil { + t.Fatal(err) + } + csr, err := x509.ParseCertificateRequest(derBytes) + if err != nil { + t.Fatal(err) + } + clientcert, clientkey := loadClientCredentials(t) + cacert, cakey := createCaCertWithKeyUsage(t, test.keyUsage) + tmpl := &scep.PKIMessage{ + MessageType: scep.PKCSReq, + Recipients: []*x509.Certificate{cacert}, + SignerCert: clientcert, + SignerKey: clientkey, + } + + pkcsreq, err := scep.NewCSRRequest(csr, tmpl, scep.WithCertsSelector(test.certsSelectorFunc)) + if test.shouldCreateCSR && err != nil { + t.Fatalf("keyUsage: %d, failed creating a CSR request: %v", test.keyUsage, err) + } + if !test.shouldCreateCSR && err == nil { + t.Fatalf("keyUsage: %d, shouldn't have created a CSR: %v", test.keyUsage, err) + } + if !test.shouldCreateCSR { + return + } + msg := 
testParsePKIMessage(t, pkcsreq.Raw) + err = msg.DecryptPKIEnvelope(cacert, cakey) + if err != nil { + t.Fatal(err) + } + }) + } +} + +// create a new RSA private key +func newRSAKey(bits int) (*rsa.PrivateKey, error) { + private, err := rsa.GenerateKey(rand.Reader, bits) + if err != nil { + return nil, err + } + return private, nil +} + +// create a CSR using the same parameters as Keychain Access would produce +func newCSR(priv *rsa.PrivateKey, email, country, cname string) ([]byte, error) { + subj := pkix.Name{ + Country: []string{country}, + CommonName: cname, + ExtraNames: []pkix.AttributeTypeAndValue{{ + Type: []int{1, 2, 840, 113549, 1, 9, 1}, + Value: email, + }}, + } + template := &x509.CertificateRequest{ + Subject: subj, + } + return x509.CreateCertificateRequest(rand.Reader, template, priv) +} + +func loadTestFile(t *testing.T, path string) []byte { + data, err := ioutil.ReadFile(path) + if err != nil { + t.Fatal(err) + } + return data +} + +// createCaCertWithKeyUsage generates a CA key and certificate with keyUsage. 
+func createCaCertWithKeyUsage(t *testing.T, keyUsage x509.KeyUsage) (*x509.Certificate, *rsa.PrivateKey) { + key, err := rsa.GenerateKey(rand.Reader, 2048) + if err != nil { + t.Fatal(err) + } + caCert := depot.NewCACert( + depot.WithCountry("US"), + depot.WithOrganization("MICROMDM"), + depot.WithCommonName("MICROMDM SCEP CA"), + depot.WithKeyUsage(keyUsage), + ) + crtBytes, err := caCert.SelfSign(rand.Reader, &key.PublicKey, key) + if err != nil { + t.Fatal(err) + } + cert, err := x509.ParseCertificate(crtBytes) + if err != nil { + t.Fatal(err) + } + return cert, key +} + +func loadCACredentials(t *testing.T) (*x509.Certificate, *rsa.PrivateKey) { + cert, err := loadCertFromFile("testdata/testca/ca.crt") + if err != nil { + t.Fatal(err) + } + key, err := loadKeyFromFile("testdata/testca/ca.key") + if err != nil { + t.Fatal(err) + } + return cert, key +} + +func loadClientCredentials(t *testing.T) (*x509.Certificate, *rsa.PrivateKey) { + cert, err := loadCertFromFile("testdata/testclient/client.pem") + if err != nil { + t.Fatal(err) + } + key, err := loadKeyFromFile("testdata/testclient/client.key") + if err != nil { + t.Fatal(err) + } + return cert, key +} + +const ( + rsaPrivateKeyPEMBlockType = "RSA PRIVATE KEY" + certificatePEMBlockType = "CERTIFICATE" +) + +func loadCertFromFile(path string) (*x509.Certificate, error) { + data, err := ioutil.ReadFile(path) + if err != nil { + return nil, err + } + + pemBlock, _ := pem.Decode(data) + if pemBlock == nil { + return nil, errors.New("PEM decode failed") + } + if pemBlock.Type != certificatePEMBlockType { + return nil, errors.New("unmatched type or headers") + } + return x509.ParseCertificate(pemBlock.Bytes) +} + +// load an encrypted private key from disk +func loadKeyFromFile(path string) (*rsa.PrivateKey, error) { + data, err := ioutil.ReadFile(path) + if err != nil { + return nil, err + } + + pemBlock, _ := pem.Decode(data) + if pemBlock == nil { + return nil, errors.New("PEM decode failed") + } + if 
pemBlock.Type != rsaPrivateKeyPEMBlockType { + return nil, errors.New("unmatched type or headers") + } + + // testca key has a password + if len(pemBlock.Headers) > 0 { + password := []byte("") + b, err := x509.DecryptPEMBlock(pemBlock, password) + if err != nil { + return nil, err + } + return x509.ParsePKCS1PrivateKey(b) + } + + return x509.ParsePKCS1PrivateKey(pemBlock.Bytes) +} + +func decodePEMCert(t *testing.T, data []byte) *x509.Certificate { + pemBlock, _ := pem.Decode(data) + if pemBlock == nil { + t.Fatal("PEM decode failed") + } + if pemBlock.Type != certificatePEMBlockType { + t.Fatal("unmatched type or headers") + } + + cert, err := x509.ParseCertificate(pemBlock.Bytes) + if err != nil { + t.Fatal(err) + } + return cert +} diff --git a/server/mdm/scep/scep/testdata/CertRep.der b/server/mdm/scep/scep/testdata/CertRep.der new file mode 100755 index 000000000..16ebc2beb Binary files /dev/null and b/server/mdm/scep/scep/testdata/CertRep.der differ diff --git a/server/mdm/scep/scep/testdata/PKCSReq.der b/server/mdm/scep/scep/testdata/PKCSReq.der new file mode 100755 index 000000000..71938c069 Binary files /dev/null and b/server/mdm/scep/scep/testdata/PKCSReq.der differ diff --git a/server/mdm/scep/scep/testdata/testca/ca.crt b/server/mdm/scep/scep/testdata/testca/ca.crt new file mode 100644 index 000000000..037b296cd --- /dev/null +++ b/server/mdm/scep/scep/testdata/testca/ca.crt @@ -0,0 +1,30 @@ +-----BEGIN CERTIFICATE----- +MIIFODCCAyCgAwIBAgIBATANBgkqhkiG9w0BAQsFADAtMQwwCgYDVQQGEwNVU0Ex +EDAOBgNVBAoTB2V0Y2QtY2ExCzAJBgNVBAsTAkNBMB4XDTE2MDUyOTEzNDcwNVoX +DTI2MDUyOTEzNDcwOFowLTEMMAoGA1UEBhMDVVNBMRAwDgYDVQQKEwdldGNkLWNh +MQswCQYDVQQLEwJDQTCCAiIwDQYJKoZIhvcNAQEBBQADggIPADCCAgoCggIBALEG +S866Uf79znmx8+BakJ17tox8VYem0NZzPc2jF4RVWXfT481Yz9jdsjZubMCFuJiI +JzpMBT7RzXvZvuzMzZEe77Tb0mM+83t5kVwWWuxkEz7HQn0tWxuLR7NGaAi5MH53 +pcSGRNH8RgC7WdhyQ/3HwNGWObe0wQT69tfz1pHDSvNR9v7DS9KIiGsMc+dcqayz +n3YQuwEV8nD1KGenxEFjFh0NsP5FKrzDrsvzdFOWLJ3jedfDCSQSe0y33syZIYAQ 
+wS2/b+io6GMWDQemcirN9QiI1NGkcN9zioPRuYPxkaxGNa0O+3cTgA8egTFMigvI +4ZFsmERfZkJM4sBMK1uUmxXKb87nA1zooPvPk1KGQChXBEnrkHPbkP1VO+yYOS4m +t9LDweGVS6GoC5vjqQgymOHecaNfKpBnU6t7fP/aEZUF+6mxRKofolR/hTknkVNc +q2nrXEJpz8J73Iq8rkL0rNAEu1h83npPAoUgdFhwHzlq9ShRbz+ZQTxdAv5MOVs+ +6F9qcmbv/6C4xc1N1xH2NAJ8aFZTxsw4ny43hi7DgyRh1LJxcb2Bp7JMaD56CMSA +0zJqxIiV5kGUwbmrBjXMyvjYzx/0qI3j3bZl3p8BjZgyjkvOP0nArP3bby5mEUYx +i7+YgPm8dfGIzPh19I4oFReszOJl+JrdLnbf45efAgMBAAGjYzBhMA4GA1UdDwEB +/wQEAwICBDAPBgNVHRMBAf8EBTADAQH/MB0GA1UdDgQWBBT6XD/PBaV7GbFEnxOm +3OJ3deamkzAfBgNVHSMEGDAWgBT6XD/PBaV7GbFEnxOm3OJ3deamkzANBgkqhkiG +9w0BAQsFAAOCAgEAC6yBHrRElZ7ovDrqjVBf8fLG+nINETPJ/kPTlTNtvqClLaeE +NKPH6JVp0/uusoKmqvE0LxyBEdP7waHQVq2XnfYggDCNjAUFxdv7OKAwlBjJ0JGs +5RsJ9DEehyLecnDDDhte92M2xUcfMet1BmuizLDDKaUU17sI1g/UNE+c7hViZA2J +e+wezVOUZqCY0pICsm4ar8JBY/pfUZ+1J00AZJtXuVWqK5GYGkrLZ7ZjNzzDF0cY +UmJxki5rj11XpCCQOZjVB+Pp3t7YpUOey1EC+1fKKrdS40zaRS3VVgh+Guavs5HV +egBzKDQUuRrZDbodJSv28RYlVbFTmkl3hGGNE0l2v0L2XHasZHoBkDZzz9nLuiI8 +ZdhWS+fn7dbswN9WzzB+dPzKS1WkTj5RXL/luI/7+fYNQyvIJYdnNCegyi2C2yTD +a/vmFJkBU+uLHWsW9a8R5Ca7A91ltJobTJE3uwxdXuZMTrmlWKsEbhqHCqO7d0j8 +IgYGxDo9ysfA4AOiNDxlp7lXxV/JFOsuGXNdFKcDFykLZ5u21X9ho9fptWJDP9JN +NNOXjC0Jv2UGZrHze6IqyL5JqxOGpK22PQIwpZwExwijUom+LH5VEXK1zpXzwC93 +WXWVtGOW4yEqv0VTn7vafIeM5GBTJ44ggpkp4RpFWoBMZcAFj8gE/9AUaHo= +-----END CERTIFICATE----- diff --git a/server/mdm/scep/scep/testdata/testca/ca.crt.info b/server/mdm/scep/scep/testdata/testca/ca.crt.info new file mode 100644 index 000000000..d8263ee98 --- /dev/null +++ b/server/mdm/scep/scep/testdata/testca/ca.crt.info @@ -0,0 +1 @@ +2 \ No newline at end of file diff --git a/server/mdm/scep/scep/testdata/testca/ca.key b/server/mdm/scep/scep/testdata/testca/ca.key new file mode 100644 index 000000000..1614d5541 --- /dev/null +++ b/server/mdm/scep/scep/testdata/testca/ca.key @@ -0,0 +1,54 @@ +-----BEGIN RSA PRIVATE KEY----- +Proc-Type: 4,ENCRYPTED +DEK-Info: DES-EDE3-CBC,85186376af1462c4 + 
+jY1gAV22U2GeDZW0cVFw41uABS7fe7zKQen4aQQvkFJ5DPRlZFMI2qZ2MI1Oo265 +ek1Of/pcV52ct139NuVg9JZwkPPog40xUDn72IanZ2ZvJl/dcoHFn99T816Hu0p7 +YGKpvyyy3VYuxaarZV2aUNFye/o4bnAh4P0db6qa7/sGAKAhJ9PusNcSAXWWMz1l +QSCkdD30KrZOtf39StVnNSf2vPWAjAR/w3fEOVKqEehANP7yptDVOthiqrN+p58Q +kOGf3RnA5BJ21EY09W8rbgKE18EPI0UH9LVZEvRabZd4e/VSRNL6leNO7AiMLqA0 +3P7DAvjQDeTpYu3tZNWKFlmXv28KjwNo/4mclQTM8k5nkpQcLhGEVJanMlz0NAvZ ++BOgHGt1Vd8fXp5vBEMIz0tj7jJJnyEyd6tLRc7Pm7GaFjRy70rr/ZCO27HnKeWi +BVwFmZqG6Bcc1WvOGH/w549q/Xq1B3EgjSCShd7WQqINXMFOLJRg+MVZ9/EgWTrT +AEMkWwozb7hDJ56IdjE6PUop/5nbH87YjXHreV4kaGxgz3xD+sj6MGl8uw2JeT7b +6mFrK2d+704NU0Z56w8i0ydYeNn1uFmZqL9alYTDmAjOORXAR/ApvY6ctPXSpPTW +bXgv7LNWbcD8cNWuf/24dpI+kxbrIsKdGmucjQQ066Ce9qa2Kr7HEpH/CxxKCuBx +9KTHpb2ZZI1j6Zd9DQarEQm2D9fPaqEIq9XH46tNq8twXXEaYSTwwodSkwlovB5n +e4HlbiSuHB78ej2lQyFKquqWVYMRQ3dk5CUem/4XPF0L8dPvnifoQgBMpvJvzCG5 +BsIDQXKf0qLhQPrXwemhgY8fnZqDpuRTD6mdEXoPqvJC5L+3hzpPXHtCU94oqIbq +z4lkG1ARi9yS+WfUbXXZO95+7EBBg4lXzEZvXjqY6epUVjWCnoa873H9zfMZBuLL +XkxMQyDOnXaqYeqNsCahbdH4zuobR1SCNL4nt3iSaADaN6Lezwz8LPHxoM1kG1i6 +fvPa/uRo9aVsfWsovO+od2VmqLh1sfPZoOenZSKQsAVPYmEuV8XXVJ/B8NVvNTrt +DrfAR+vFe41liMHdTUndo8uG9/IO7JNC8u98zWjyvcr6cCukqE9H60Y9QvDgaSK5 +yD/D7B4ZAp6UjHOtD+jY1mjV+aL/2XeHJyQaDczHUKm1Vd5Um2c5f6NkZycrbtGE +7z5lR2SccnbbG6XVngYiZxdMLZCrQUnSfhke+zhzYM2Ng/7fxyz3mTyG5EYvxreU +6i0Psceh+vD9IEGHYbpRfV4Uozmk7AhEOfQN3ZDXZTA4LB5Svp7j3DcOmAGtCWGx +PWA3su4KzwrW40b5ommDhndPZNoSJfsrw2GJHV62AdfIxmAi5zvALJ31YdYvZsz2 +e8cf1Cl5oxeF/jgewEy6RTSkOUjvb0iTfVgreu4Tk/sBW37jdKhfW32INasCgEYb +0fq9DLXVcDk2neH/Sb78cE26JNXS3EtW1V4dvdkhvOjqRFP8O8vFggLi1mFQltAH +pmV143MSNkC/ikyOBahpQjGu89HZ0sLnJr2kzKf5LJTcN7kYAfxRejS7ofUByME1 +O9mrHOZGGNVNIgNesBXv42UEd0/SzwF4UKxHY72sEoTNLXliroaJORYbbvWw4GDI +91/vHKJMqMimoC37soS16wrsP/SabzusUXBayHD/PLkkmHBPV9++cO79b+HbVB0Q +6OpxBY7u6QhZnfTJv/W/InG404pumq8oz6bt7bXurbfC2QzviNHuyZ/IenbQ/y41 +K5URD3fdFYLC3OS38SSBBq32yncjJam0FOj2joUZ4iAAXSju1NSDskT8WbVy3BOq +tdTxekrxM9w98p17Og+Uf8966H2mQUIrz53Umc9V1974TVWdu0Y862ghJGSeLEbH 
+617VGwNN9hINdQE+iYaAVvbogEKSdCfljyVdIx1MuS1jeae5wUgReqqE+bopYgJm +oIXlVNI7tWX2y3JdG1vqCqKpq/UDzciLxAUdyGgwZESt9T3mvqQcdvWxsfREBGwX +XzbiDiGoom735dOOaGxvmyZUtJi7r5AonzJpR+qaRWoHNr8cqeU9be1wxBZ57Kln +2eKpwPIwdBTwxCjnc/kstuTsR45M8G37zOgh9XK38jS6FB/FzFytHtt9oPQBZBeb +3A6p7kqbbb4ynAgDiGEz9ExNNIQf3hQo9RAiaL2WeS9FTFB02hq5QgsgrGVXrR2V +45CKzP874sMPYP8xFQvmrMAXDy//zBXaOrNJHyNOVrtDLPerBNIC6GSKtFp30ynz +Te6GHFhcwqWfrN8N1l2oM79xvc2aKlsvI+YN0xTQklxqSdyCJSdhRUxmCIMN1JM0 +13Ean0HtO+z9u/nH3T2GtAhNySJAPAOXIAAER/74WNXNJNi7SmptNtWJOKKeKK3m +Jon7XC3Bx5NTnTM6UjrrXvwXvsJyf8G+SlkoZXZx9izgQYAANAsSblieSvPVppwM +/EfU6HIby2cBLQ1wTJiEDjYu7E1JKpAPBhqL0cN7aJea9tV7bmjoqzKhbwxACHkI +ymOZ1BDIF67M5fCLFCnCZEJcl2sgx4bRBaP6+p0uRWhplrus+8x1LAtNyB+V17in +nXacPqGELgqv+F6embq03retfaCbIwLwQYmaMU+QHg9jHc9j1AIf6fHSxhIRUPUz +PWMhy7dJdUcmm2GX2EGBrr7jH+H2y33W7y+0I2a4s5WdpIWsYUMFiBU+M+qJdAwY +O/n1Q8ZPdKdY9+c2RMzeO6Zvyc7f1hwoOy0FuYi748qaELV6rx1Tr2MDWl5/uhUa +vYMF4RshsKJY9OCUKvL9waqELZf4zEPyu875ZLm9eoJV2MFcokUuPcpAN+ljj6mx +S+1O9/kRioHo7FMs9rU3bHbCMbphLc0NdI363L/sM2kSFjRWxYv87z5fEQAoZGQR +d7HePVRbp09GC9Jk9p28F6ysgqS7PwlreRRp3Dj5vFJ422QviUWTP/jLj1QfukQR +0KXZhKhs0iSmfW9vlFnADS32l67fmycHMlN9yktvzcytm6dZ/XiQMHVDhZPlIGVC +frJ2R1MhmAdFEgIPZZGuoHeXFdlYq9HMpM9lbykJ1L7M36XqaW6GgRTnhf2g4iKJ +-----END RSA PRIVATE KEY----- diff --git a/server/mdm/scep/scep/testdata/testca/ca.pem b/server/mdm/scep/scep/testdata/testca/ca.pem new file mode 100644 index 000000000..037b296cd --- /dev/null +++ b/server/mdm/scep/scep/testdata/testca/ca.pem @@ -0,0 +1,30 @@ +-----BEGIN CERTIFICATE----- +MIIFODCCAyCgAwIBAgIBATANBgkqhkiG9w0BAQsFADAtMQwwCgYDVQQGEwNVU0Ex +EDAOBgNVBAoTB2V0Y2QtY2ExCzAJBgNVBAsTAkNBMB4XDTE2MDUyOTEzNDcwNVoX +DTI2MDUyOTEzNDcwOFowLTEMMAoGA1UEBhMDVVNBMRAwDgYDVQQKEwdldGNkLWNh +MQswCQYDVQQLEwJDQTCCAiIwDQYJKoZIhvcNAQEBBQADggIPADCCAgoCggIBALEG +S866Uf79znmx8+BakJ17tox8VYem0NZzPc2jF4RVWXfT481Yz9jdsjZubMCFuJiI +JzpMBT7RzXvZvuzMzZEe77Tb0mM+83t5kVwWWuxkEz7HQn0tWxuLR7NGaAi5MH53 
+pcSGRNH8RgC7WdhyQ/3HwNGWObe0wQT69tfz1pHDSvNR9v7DS9KIiGsMc+dcqayz +n3YQuwEV8nD1KGenxEFjFh0NsP5FKrzDrsvzdFOWLJ3jedfDCSQSe0y33syZIYAQ +wS2/b+io6GMWDQemcirN9QiI1NGkcN9zioPRuYPxkaxGNa0O+3cTgA8egTFMigvI +4ZFsmERfZkJM4sBMK1uUmxXKb87nA1zooPvPk1KGQChXBEnrkHPbkP1VO+yYOS4m +t9LDweGVS6GoC5vjqQgymOHecaNfKpBnU6t7fP/aEZUF+6mxRKofolR/hTknkVNc +q2nrXEJpz8J73Iq8rkL0rNAEu1h83npPAoUgdFhwHzlq9ShRbz+ZQTxdAv5MOVs+ +6F9qcmbv/6C4xc1N1xH2NAJ8aFZTxsw4ny43hi7DgyRh1LJxcb2Bp7JMaD56CMSA +0zJqxIiV5kGUwbmrBjXMyvjYzx/0qI3j3bZl3p8BjZgyjkvOP0nArP3bby5mEUYx +i7+YgPm8dfGIzPh19I4oFReszOJl+JrdLnbf45efAgMBAAGjYzBhMA4GA1UdDwEB +/wQEAwICBDAPBgNVHRMBAf8EBTADAQH/MB0GA1UdDgQWBBT6XD/PBaV7GbFEnxOm +3OJ3deamkzAfBgNVHSMEGDAWgBT6XD/PBaV7GbFEnxOm3OJ3deamkzANBgkqhkiG +9w0BAQsFAAOCAgEAC6yBHrRElZ7ovDrqjVBf8fLG+nINETPJ/kPTlTNtvqClLaeE +NKPH6JVp0/uusoKmqvE0LxyBEdP7waHQVq2XnfYggDCNjAUFxdv7OKAwlBjJ0JGs +5RsJ9DEehyLecnDDDhte92M2xUcfMet1BmuizLDDKaUU17sI1g/UNE+c7hViZA2J +e+wezVOUZqCY0pICsm4ar8JBY/pfUZ+1J00AZJtXuVWqK5GYGkrLZ7ZjNzzDF0cY +UmJxki5rj11XpCCQOZjVB+Pp3t7YpUOey1EC+1fKKrdS40zaRS3VVgh+Guavs5HV +egBzKDQUuRrZDbodJSv28RYlVbFTmkl3hGGNE0l2v0L2XHasZHoBkDZzz9nLuiI8 +ZdhWS+fn7dbswN9WzzB+dPzKS1WkTj5RXL/luI/7+fYNQyvIJYdnNCegyi2C2yTD +a/vmFJkBU+uLHWsW9a8R5Ca7A91ltJobTJE3uwxdXuZMTrmlWKsEbhqHCqO7d0j8 +IgYGxDo9ysfA4AOiNDxlp7lXxV/JFOsuGXNdFKcDFykLZ5u21X9ho9fptWJDP9JN +NNOXjC0Jv2UGZrHze6IqyL5JqxOGpK22PQIwpZwExwijUom+LH5VEXK1zpXzwC93 +WXWVtGOW4yEqv0VTn7vafIeM5GBTJ44ggpkp4RpFWoBMZcAFj8gE/9AUaHo= +-----END CERTIFICATE----- diff --git a/server/mdm/scep/scep/testdata/testca/sceptest.mobileconfig b/server/mdm/scep/scep/testdata/testca/sceptest.mobileconfig new file mode 100644 index 000000000..adeabc9fb --- /dev/null +++ b/server/mdm/scep/scep/testdata/testca/sceptest.mobileconfig @@ -0,0 +1,115 @@ + + + + + PayloadContent + + + PayloadContent + + KeyType + RSA + Keysize + 1024 + Retries + 3 + RetryDelay + 10 + URL + http://localhost:9001/scep + + PayloadDescription + Configures SCEP settings + PayloadDisplayName + SCEP + PayloadIdentifier 
+ com.apple.security.scep.063D7953-1338-4BF0-8F99-913382996224 + PayloadType + com.apple.security.scep + PayloadUUID + 063D7953-1338-4BF0-8F99-913382996224 + PayloadVersion + 1 + + + PayloadCertificateFileName + ca.crt + PayloadContent + + LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUZPRENDQXlD + Z0F3SUJBZ0lCQVRBTkJna3Foa2lHOXcwQkFRc0ZBREF0TVF3d0Nn + WURWUVFHRXdOVlUwRXgKRURBT0JnTlZCQW9UQjJWMFkyUXRZMkV4 + Q3pBSkJnTlZCQXNUQWtOQk1CNFhEVEUyTURVeU9URXpORGN3TlZv + WApEVEkyTURVeU9URXpORGN3T0Zvd0xURU1NQW9HQTFVRUJoTURW + Vk5CTVJBd0RnWURWUVFLRXdkbGRHTmtMV05oCk1Rc3dDUVlEVlFR + TEV3SkRRVENDQWlJd0RRWUpLb1pJaHZjTkFRRUJCUUFEZ2dJUEFE + Q0NBZ29DZ2dJQkFMRUcKUzg2NlVmNzl6bm14OCtCYWtKMTd0b3g4 + VlllbTBOWnpQYzJqRjRSVldYZlQ0ODFZejlqZHNqWnViTUNGdUpp + SQpKenBNQlQ3UnpYdlp2dXpNelpFZTc3VGIwbU0rODN0NWtWd1dX + dXhrRXo3SFFuMHRXeHVMUjdOR2FBaTVNSDUzCnBjU0dSTkg4UmdD + N1dkaHlRLzNId05HV09iZTB3UVQ2OXRmejFwSERTdk5SOXY3RFM5 + S0lpR3NNYytkY3FheXoKbjNZUXV3RVY4bkQxS0dlbnhFRmpGaDBO + c1A1RktyekRyc3Z6ZEZPV0xKM2plZGZEQ1NRU2UweTMzc3laSVlB + UQp3UzIvYitpbzZHTVdEUWVtY2lyTjlRaUkxTkdrY045emlvUFJ1 + WVB4a2F4R05hME8rM2NUZ0E4ZWdURk1pZ3ZJCjRaRnNtRVJmWmtK + TTRzQk1LMXVVbXhYS2I4N25BMXpvb1B2UGsxS0dRQ2hYQkVucmtI + UGJrUDFWTyt5WU9TNG0KdDlMRHdlR1ZTNkdvQzV2anFRZ3ltT0hl + Y2FOZktwQm5VNnQ3ZlAvYUVaVUYrNm14UktvZm9sUi9oVGtua1ZO + YwpxMm5yWEVKcHo4SjczSXE4cmtMMHJOQUV1MWg4M25wUEFvVWdk + Rmh3SHpscTlTaFJieitaUVR4ZEF2NU1PVnMrCjZGOXFjbWJ2LzZD + NHhjMU4xeEgyTkFKOGFGWlR4c3c0bnk0M2hpN0RneVJoMUxKeGNi + MkJwN0pNYUQ1NkNNU0EKMHpKcXhJaVY1a0dVd2JtckJqWE15dmpZ + engvMHFJM2ozYlpsM3A4QmpaZ3lqa3ZPUDBuQXJQM2JieTVtRVVZ + eAppNytZZ1BtOGRmR0l6UGgxOUk0b0ZSZXN6T0psK0pyZExuYmY0 + NWVmQWdNQkFBR2pZekJoTUE0R0ExVWREd0VCCi93UUVBd0lDQkRB + UEJnTlZIUk1CQWY4RUJUQURBUUgvTUIwR0ExVWREZ1FXQkJUNlhE + L1BCYVY3R2JGRW54T20KM09KM2RlYW1rekFmQmdOVkhTTUVHREFX + Z0JUNlhEL1BCYVY3R2JGRW54T20zT0ozZGVhbWt6QU5CZ2txaGtp + Rwo5dzBCQVFzRkFBT0NBZ0VBQzZ5QkhyUkVsWjdvdkRycWpWQmY4 + ZkxHK25JTkVUUEova1BUbFROdHZxQ2xMYWVFCk5LUEg2SlZwMC91 + dXNvS21xdkUwTHh5QkVkUDd3YUhRVnEyWG5mWWdnRENOakFVRnhk + 
djdPS0F3bEJqSjBKR3MKNVJzSjlERWVoeUxlY25ERERodGU5Mk0y + eFVjZk1ldDFCbXVpekxEREthVVUxN3NJMWcvVU5FK2M3aFZpWkEy + SgplK3dlelZPVVpxQ1kwcElDc200YXI4SkJZL3BmVVorMUowMEFa + SnRYdVZXcUs1R1lHa3JMWjdaak56ekRGMGNZClVtSnhraTVyajEx + WHBDQ1FPWmpWQitQcDN0N1lwVU9leTFFQysxZktLcmRTNDB6YVJT + M1ZWZ2grR3VhdnM1SFYKZWdCektEUVV1UnJaRGJvZEpTdjI4Ulls + VmJGVG1rbDNoR0dORTBsMnYwTDJYSGFzWkhvQmtEWnp6OW5MdWlJ + OApaZGhXUytmbjdkYnN3TjlXenpCK2RQektTMVdrVGo1UlhML2x1 + SS83K2ZZTlF5dklKWWRuTkNlZ3lpMkMyeVRECmEvdm1GSmtCVSt1 + TEhXc1c5YThSNUNhN0E5MWx0Sm9iVEpFM3V3eGRYdVpNVHJtbFdL + c0ViaHFIQ3FPN2QwajgKSWdZR3hEbzl5c2ZBNEFPaU5EeGxwN2xY + eFYvSkZPc3VHWE5kRktjREZ5a0xaNXUyMVg5aG85ZnB0V0pEUDlK + TgpOTk9YakMwSnYyVUdackh6ZTZJcXlMNUpxeE9HcEsyMlBRSXdw + WndFeHdpalVvbStMSDVWRVhLMXpwWHp3QzkzCldYV1Z0R09XNHlF + cXYwVlRuN3ZhZkllTTVHQlRKNDRnZ3BrcDRScEZXb0JNWmNBRmo4 + Z0UvOUFVYUhvPQotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg== + + PayloadDescription + Configures certificate settings. + PayloadDisplayName + ca.crt + PayloadIdentifier + com.apple.security.root.2E5B325F-84CA-4914-844F-703F9C4B11CE + PayloadType + com.apple.security.root + PayloadUUID + 2E5B325F-84CA-4914-844F-703F9C4B11CE + PayloadVersion + 1 + + + PayloadDisplayName + scept + PayloadIdentifier + vvrantchmbp.local.A795EFE8-60CA-47DA-92F3-FE2E435D800F + PayloadRemovalDisallowed + + PayloadType + Configuration + PayloadUUID + 7F0CF6ED-09FF-490E-AD53-89ACB920CD37 + PayloadVersion + 1 + + diff --git a/server/mdm/scep/scep/testdata/testca2/CertRep_NoCertificatesForSigners.der b/server/mdm/scep/scep/testdata/testca2/CertRep_NoCertificatesForSigners.der new file mode 100644 index 000000000..6ed5f52f5 Binary files /dev/null and b/server/mdm/scep/scep/testdata/testca2/CertRep_NoCertificatesForSigners.der differ diff --git a/server/mdm/scep/scep/testdata/testca2/ca2.pem b/server/mdm/scep/scep/testdata/testca2/ca2.pem new file mode 100644 index 000000000..b45d4981d --- /dev/null +++ b/server/mdm/scep/scep/testdata/testca2/ca2.pem @@ -0,0 +1,101 @@ +Certificate: + Data: + 
Version: 3 (0x2) + Serial Number: + 47:00:00:00:02:f5:40:0d:75:85:dd:87:88:00:00:00:00:00:02 + Signature Algorithm: sha256WithRSAEncryption + Issuer: DC = org, DC = example, CN = example-CERT-PROV-CA + Validity + Not Before: Oct 30 19:07:21 2020 GMT + Not After : Oct 30 19:07:21 2022 GMT + Subject: C = US, CN = CERT-PROV-CA-MSCEP-RA + Subject Public Key Info: + Public Key Algorithm: rsaEncryption + RSA Public-Key: (2048 bit) + Modulus: + 00:a1:0b:13:03:0a:fc:2f:ed:92:22:3f:a5:01:2d: + f9:46:f5:2c:fb:38:f9:0e:f1:b9:48:65:30:61:95: + 84:15:a2:15:68:16:61:5d:cb:d5:83:d4:69:c7:da: + 4a:79:a3:c4:a2:e7:5b:33:b1:bc:e0:a1:3e:4e:1b: + 96:be:ff:34:a1:a5:da:ca:cd:ee:99:5e:91:e3:dc: + 96:ba:92:0a:77:01:82:2c:cb:c9:b5:30:69:de:39: + d3:59:34:86:35:e4:ce:7f:ca:b5:7c:2a:58:14:21: + 2a:4f:d8:0d:c6:90:17:a3:29:2d:b2:1f:89:e9:53: + 10:5b:e3:36:01:af:6c:01:08:2c:e8:43:cc:89:3d: + 99:13:39:85:76:d8:18:3f:df:db:1a:4d:fd:fa:39: + fa:f6:7c:86:d9:70:1b:0f:3a:e8:6b:fa:3d:e5:e4: + 38:c1:3e:3c:d1:c5:c7:74:ca:77:74:a1:0c:f1:dd: + f4:28:9b:d9:99:7d:1e:e9:36:9f:6a:da:64:6e:90: + 59:58:d6:db:e8:e3:5c:08:41:30:bd:14:35:de:0c: + 4a:9a:9c:1c:1e:ce:86:7d:cc:be:47:37:f6:5c:c4: + 91:86:7e:9a:9f:9f:d0:de:49:e4:bd:71:b0:d1:33: + b1:1f:ca:43:fe:e7:b0:f1:48:cf:40:79:1a:2e:f8: + 30:d5 + Exponent: 65537 (0x10001) + X509v3 extensions: + 1.3.6.1.4.1.311.20.2: + .,.E.n.r.o.l.l.m.e.n.t.A.g.e.n.t.O.f.f.l.i.n.e + X509v3 Extended Key Usage: + 1.3.6.1.4.1.311.20.2.1 + X509v3 Key Usage: critical + Digital Signature + X509v3 Subject Key Identifier: + D2:DB:A3:DF:13:2B:EE:D5:88:A1:3E:8C:28:0E:2A:00:7B:C7:18:19 + X509v3 Authority Key Identifier: + keyid:4B:C5:63:29:BB:CF:68:AF:18:1E:C5:99:E8:DF:55:F7:23:9A:08:EB + + X509v3 CRL Distribution Points: + + Full Name: + URI:ldap:///CN=example-CERT-PROV-CA,CN=cert-prov-ca,CN=CDP,CN=Public%20Key%20Services,CN=Services,CN=Configuration,DC=example,DC=org?certificateRevocationList?base?objectClass=cRLDistributionPoint + + Authority Information Access: + CA Issuers - 
URI:ldap:///CN=example-CERT-PROV-CA,CN=AIA,CN=Public%20Key%20Services,CN=Services,CN=Configuration,DC=example,DC=org?cACertificate?base?objectClass=certificationAuthority + + Signature Algorithm: sha256WithRSAEncryption + 06:c8:d1:6f:ba:9b:48:84:a3:63:8e:4b:0d:73:85:91:7d:e4: + ce:50:9b:de:09:99:91:a3:1e:e6:ce:6f:ec:bf:2b:ce:bd:8d: + 0a:6c:0e:98:3f:b2:1f:cc:ab:53:62:2a:99:61:2d:76:9d:16: + 5d:27:f4:db:b6:9d:08:91:5b:cb:0c:18:09:b0:ab:38:e8:66: + ad:7b:45:53:81:11:16:aa:b2:5f:f6:ca:58:c0:fc:3c:98:04: + a6:0b:cd:28:28:8f:74:96:c4:57:7b:d1:a6:df:c8:ac:2f:cf: + 79:69:2e:ae:7c:e4:af:ad:ef:74:6f:c9:42:f7:03:3d:fe:48: + 25:05:d5:23:96:4a:4b:ed:a2:15:cf:b6:fe:06:d9:53:72:8e: + d2:14:3f:ab:83:db:22:e1:9b:16:51:f5:b6:ec:05:13:ad:2b: + fb:a4:1c:4c:97:17:29:5e:15:b9:f9:49:fb:33:7c:6d:b5:89: + ad:3d:50:64:9d:38:59:87:4d:9c:4f:39:44:48:34:96:77:f2: + 4b:1c:ad:84:94:e9:b7:9f:f7:1b:35:8a:c7:ab:18:14:59:d1: + f2:14:93:c3:a8:8f:6b:47:53:c9:9f:e2:f5:59:00:34:e6:23: + 2f:ce:5e:84:f9:81:ad:6b:cc:b3:ef:2c:04:5c:de:16:54:ba: + eb:38:f4:3b +-----BEGIN CERTIFICATE----- +MIIFVDCCBDygAwIBAgITRwAAAAL1QA11hd2HiAAAAAAAAjANBgkqhkiG9w0BAQsF +ADBNMRMwEQYKCZImiZPyLGQBGRYDb3JnMRcwFQYKCZImiZPyLGQBGRYHZXhhbXBs +ZTEdMBsGA1UEAxMUZXhhbXBsZS1DRVJULVBST1YtQ0EwHhcNMjAxMDMwMTkwNzIx +WhcNMjIxMDMwMTkwNzIxWjAtMQswCQYDVQQGEwJVUzEeMBwGA1UEAxMVQ0VSVC1Q +Uk9WLUNBLU1TQ0VQLVJBMIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA +oQsTAwr8L+2SIj+lAS35RvUs+zj5DvG5SGUwYZWEFaIVaBZhXcvVg9Rpx9pKeaPE +oudbM7G84KE+ThuWvv80oaXays3umV6R49yWupIKdwGCLMvJtTBp3jnTWTSGNeTO +f8q1fCpYFCEqT9gNxpAXoyktsh+J6VMQW+M2Aa9sAQgs6EPMiT2ZEzmFdtgYP9/b +Gk39+jn69nyG2XAbDzroa/o95eQ4wT480cXHdMp3dKEM8d30KJvZmX0e6Tafatpk +bpBZWNbb6ONcCEEwvRQ13gxKmpwcHs6Gfcy+Rzf2XMSRhn6an5/Q3knkvXGw0TOx +H8pD/uew8UjPQHkaLvgw1QIDAQABo4ICSzCCAkcwOwYJKwYBBAGCNxQCBC4eLABF +AG4AcgBvAGwAbABtAGUAbgB0AEEAZwBlAG4AdABPAGYAZgBsAGkAbgBlMBUGA1Ud +JQQOMAwGCisGAQQBgjcUAgEwDgYDVR0PAQH/BAQDAgeAMB0GA1UdDgQWBBTS26Pf +Eyvu1YihPowoDioAe8cYGTAfBgNVHSMEGDAWgBRLxWMpu89orxgexZno31X3I5oI 
+6zCB1wYDVR0fBIHPMIHMMIHJoIHGoIHDhoHAbGRhcDovLy9DTj1leGFtcGxlLUNF +UlQtUFJPVi1DQSxDTj1jZXJ0LXByb3YtY2EsQ049Q0RQLENOPVB1YmxpYyUyMEtl +eSUyMFNlcnZpY2VzLENOPVNlcnZpY2VzLENOPUNvbmZpZ3VyYXRpb24sREM9ZXhh +bXBsZSxEQz1vcmc/Y2VydGlmaWNhdGVSZXZvY2F0aW9uTGlzdD9iYXNlP29iamVj +dENsYXNzPWNSTERpc3RyaWJ1dGlvblBvaW50MIHGBggrBgEFBQcBAQSBuTCBtjCB +swYIKwYBBQUHMAKGgaZsZGFwOi8vL0NOPWV4YW1wbGUtQ0VSVC1QUk9WLUNBLENO +PUFJQSxDTj1QdWJsaWMlMjBLZXklMjBTZXJ2aWNlcyxDTj1TZXJ2aWNlcyxDTj1D +b25maWd1cmF0aW9uLERDPWV4YW1wbGUsREM9b3JnP2NBQ2VydGlmaWNhdGU/YmFz +ZT9vYmplY3RDbGFzcz1jZXJ0aWZpY2F0aW9uQXV0aG9yaXR5MA0GCSqGSIb3DQEB +CwUAA4IBAQAGyNFvuptIhKNjjksNc4WRfeTOUJveCZmRox7mzm/svyvOvY0KbA6Y +P7IfzKtTYiqZYS12nRZdJ/Tbtp0IkVvLDBgJsKs46Gate0VTgREWqrJf9spYwPw8 +mASmC80oKI90lsRXe9Gm38isL895aS6ufOSvre90b8lC9wM9/kglBdUjlkpL7aIV +z7b+BtlTco7SFD+rg9si4ZsWUfW27AUTrSv7pBxMlxcpXhW5+Un7M3xttYmtPVBk +nThZh02cTzlESDSWd/JLHK2ElOm3n/cbNYrHqxgUWdHyFJPDqI9rR1PJn+L1WQA0 +5iMvzl6E+YGta8yz7ywEXN4WVLrrOPQ7 +-----END CERTIFICATE----- diff --git a/server/mdm/scep/scep/testdata/testclient/client.key b/server/mdm/scep/scep/testdata/testclient/client.key new file mode 100644 index 000000000..be2febb14 --- /dev/null +++ b/server/mdm/scep/scep/testdata/testclient/client.key @@ -0,0 +1,19 @@ +Bag Attributes + friendlyName: com.apple.security.scep.063D7953-1338-4BF0-8F99-913382996224 + localKeyID: A1 30 90 76 1A 30 F6 66 64 F8 5D 37 43 3D 20 65 E1 11 2B A1 +Key Attributes: +-----BEGIN RSA PRIVATE KEY----- +MIICXgIBAAKBgQDT9YGr0H8dpozAEi5l2XkWyKy2JD3yEybI9A1ZDXcK/78UPQ+C +4tBb6BTRJWDWoZFlFcHUGbZWXbySPw6ggBsLl4feF1A+hjtCjlZsRF4mnfctixkr +dP+UGl37UunsW63mn8uM6oM+7elhB2zRscZrZPBDKZx1V+Et+BFrX49xNwIDAQAB +AoGAP9bzDmfG0YxnWjZfqSd+NCGO+3EhAzdHeEEhgA/xKevrhlH5yQc9kGDvXCrw +5tRU8WhDL/nqlEq5UCcT5b2P5zp1L0PY+X6gD5C7KGEIio5SvimAnQMh2HCKftDc +KX9NA9EJLq9BUsqK9HjXdsfIGzyoqhZQpDGTrgyVlfm/zwkCQQDyuKJvxm/6tWry +GBN0eBLyA4F738MFP/3kb87zUsGTD8dh92vwjMhGv1woKp09POs2s2MnADw2wOEa +hh+v6R5zAkEA344KSwaKZZSf7E5qFayg3qG9M55uhbus9CazoAWiceYMR6ImIevA 
+EtnhwQczIRGI8Bp4jgUtO9gu4IgjCUHNLQJBAIJkS+cuPGP75+sMog70noDi/0GT +0MnWOcfphMzUzWb6mAr6B0Of7cuL668sTXJjcpzdO8vs5WworAU6vnUbEB8CQQCB ++Hy3fcf8otoPcs9uZnzosrPjTNsI2UIGeHG6OUxmV88P3o+47O0wiIgdx2fMc/tf +TKSGPTA9OMSYOc3U1fLJAkEArLyCHxxDzcEuhyHnmoct/dTUg5Q8CYKIQfo5oVKZ +jvtL/r0udFpxLUDxZ7590I32cTSrtfgNBJegHr54YKN9KA== +-----END RSA PRIVATE KEY----- diff --git a/server/mdm/scep/scep/testdata/testclient/client.pem b/server/mdm/scep/scep/testdata/testclient/client.pem new file mode 100644 index 000000000..3de20270b --- /dev/null +++ b/server/mdm/scep/scep/testdata/testclient/client.pem @@ -0,0 +1,58 @@ +Certificate: + Data: + Version: 3 (0x2) + Serial Number: + 6f:4f:31:6a:b2:da:d4:ce:d0:fc:09:fb:b9:26:90:03:d6:09:a4:8c + Signature Algorithm: sha256WithRSAEncryption + Issuer: C = US, ST = USA, O = etcd-ca, OU = CA, CN = etcd-ca + Validity + Not Before: Feb 16 12:11:45 2021 GMT + Not After : Dec 26 12:11:45 2030 GMT + Subject: C = US, ST = USA, O = etcd-ca, OU = CA, CN = etcd-ca + Subject Public Key Info: + Public Key Algorithm: rsaEncryption + RSA Public-Key: (1024 bit) + Modulus: + 00:d3:f5:81:ab:d0:7f:1d:a6:8c:c0:12:2e:65:d9: + 79:16:c8:ac:b6:24:3d:f2:13:26:c8:f4:0d:59:0d: + 77:0a:ff:bf:14:3d:0f:82:e2:d0:5b:e8:14:d1:25: + 60:d6:a1:91:65:15:c1:d4:19:b6:56:5d:bc:92:3f: + 0e:a0:80:1b:0b:97:87:de:17:50:3e:86:3b:42:8e: + 56:6c:44:5e:26:9d:f7:2d:8b:19:2b:74:ff:94:1a: + 5d:fb:52:e9:ec:5b:ad:e6:9f:cb:8c:ea:83:3e:ed: + e9:61:07:6c:d1:b1:c6:6b:64:f0:43:29:9c:75:57: + e1:2d:f8:11:6b:5f:8f:71:37 + Exponent: 65537 (0x10001) + X509v3 extensions: + X509v3 Subject Key Identifier: + A1:30:90:76:1A:30:F6:66:64:F8:5D:37:43:3D:20:65:E1:11:2B:A1 + X509v3 Authority Key Identifier: + keyid:A1:30:90:76:1A:30:F6:66:64:F8:5D:37:43:3D:20:65:E1:11:2B:A1 + + X509v3 Basic Constraints: critical + CA:TRUE + Signature Algorithm: sha256WithRSAEncryption + 96:f0:7b:28:3b:7a:06:2e:cd:37:23:19:f3:98:0c:a2:d3:16: + 9e:5a:b7:56:ca:9d:9d:ca:a4:59:78:b3:29:b1:3c:18:e8:dc: + 
4c:f6:64:62:84:a3:19:ca:ca:b0:34:ed:d2:6f:9b:a6:38:20: + 98:64:db:c5:cb:a4:ce:b2:9c:62:a2:0e:e2:76:cb:f4:a1:c5: + 40:ee:c5:b4:18:9d:9e:5a:bf:bd:72:29:96:f8:82:05:87:d3: + fb:84:12:91:ea:e0:86:02:b1:63:c2:59:6a:10:9a:b7:7d:e2: + be:f3:19:31:31:3e:bb:21:4d:a0:16:f9:c0:94:ba:0f:e6:3d: + 37:26 +-----BEGIN CERTIFICATE----- +MIICdDCCAd2gAwIBAgIUb08xarLa1M7Q/An7uSaQA9YJpIwwDQYJKoZIhvcNAQEL +BQAwTDELMAkGA1UEBhMCVVMxDDAKBgNVBAgMA1VTQTEQMA4GA1UECgwHZXRjZC1j +YTELMAkGA1UECwwCQ0ExEDAOBgNVBAMMB2V0Y2QtY2EwHhcNMjEwMjE2MTIxMTQ1 +WhcNMzAxMjI2MTIxMTQ1WjBMMQswCQYDVQQGEwJVUzEMMAoGA1UECAwDVVNBMRAw +DgYDVQQKDAdldGNkLWNhMQswCQYDVQQLDAJDQTEQMA4GA1UEAwwHZXRjZC1jYTCB +nzANBgkqhkiG9w0BAQEFAAOBjQAwgYkCgYEA0/WBq9B/HaaMwBIuZdl5FsistiQ9 +8hMmyPQNWQ13Cv+/FD0PguLQW+gU0SVg1qGRZRXB1Bm2Vl28kj8OoIAbC5eH3hdQ +PoY7Qo5WbEReJp33LYsZK3T/lBpd+1Lp7Fut5p/LjOqDPu3pYQds0bHGa2TwQymc +dVfhLfgRa1+PcTcCAwEAAaNTMFEwHQYDVR0OBBYEFKEwkHYaMPZmZPhdN0M9IGXh +ESuhMB8GA1UdIwQYMBaAFKEwkHYaMPZmZPhdN0M9IGXhESuhMA8GA1UdEwEB/wQF +MAMBAf8wDQYJKoZIhvcNAQELBQADgYEAlvB7KDt6Bi7NNyMZ85gMotMWnlq3Vsqd +ncqkWXizKbE8GOjcTPZkYoSjGcrKsDTt0m+bpjggmGTbxcukzrKcYqIO4nbL9KHF +QO7FtBidnlq/vXIplviCBYfT+4QSkerghgKxY8JZahCat33ivvMZMTE+uyFNoBb5 +wJS6D+Y9NyY= +-----END CERTIFICATE----- diff --git a/server/mdm/scep/server/csrsigner.go b/server/mdm/scep/server/csrsigner.go new file mode 100644 index 000000000..7e2fdd88f --- /dev/null +++ b/server/mdm/scep/server/csrsigner.go @@ -0,0 +1,44 @@ +package scepserver + +import ( + "crypto/subtle" + "crypto/x509" + "errors" + + "github.com/fleetdm/fleet/v4/server/mdm/scep/scep" +) + +// CSRSigner is a handler for CSR signing by the CA/RA +// +// SignCSR should take the CSR in the CSRReqMessage and return a +// Certificate signed by the CA. 
+type CSRSigner interface { + SignCSR(*scep.CSRReqMessage) (*x509.Certificate, error) +} + +// CSRSignerFunc is an adapter for CSR signing by the CA/RA +type CSRSignerFunc func(*scep.CSRReqMessage) (*x509.Certificate, error) + +// SignCSR calls f(m) +func (f CSRSignerFunc) SignCSR(m *scep.CSRReqMessage) (*x509.Certificate, error) { + return f(m) +} + +// NopCSRSigner does nothing +func NopCSRSigner() CSRSignerFunc { + return func(m *scep.CSRReqMessage) (*x509.Certificate, error) { + return nil, nil + } +} + +// ChallengeMiddleware wraps next in a CSRSigner that validates the challenge from the CSR +func ChallengeMiddleware(challenge string, next CSRSigner) CSRSignerFunc { + challengeBytes := []byte(challenge) + return func(m *scep.CSRReqMessage) (*x509.Certificate, error) { + // TODO: compare challenge only for PKCSReq? + if subtle.ConstantTimeCompare(challengeBytes, []byte(m.ChallengePassword)) != 1 { + return nil, errors.New("invalid challenge") + } + return next.SignCSR(m) + } +} diff --git a/server/mdm/scep/server/csrsigner_test.go b/server/mdm/scep/server/csrsigner_test.go new file mode 100644 index 000000000..54576d1c7 --- /dev/null +++ b/server/mdm/scep/server/csrsigner_test.go @@ -0,0 +1,26 @@ +package scepserver + +import ( + "testing" + + "github.com/fleetdm/fleet/v4/server/mdm/scep/scep" +) + +func TestChallengeMiddleware(t *testing.T) { + testPW := "RIGHT" + signer := ChallengeMiddleware(testPW, NopCSRSigner()) + + csrReq := &scep.CSRReqMessage{ChallengePassword: testPW} + + _, err := signer.SignCSR(csrReq) + if err != nil { + t.Error(err) + } + + csrReq.ChallengePassword = "WRONG" + + _, err = signer.SignCSR(csrReq) + if err == nil { + t.Error("invalid challenge should generate an error") + } +} diff --git a/server/mdm/scep/server/endpoint.go b/server/mdm/scep/server/endpoint.go new file mode 100644 index 000000000..205d834b0 --- /dev/null +++ b/server/mdm/scep/server/endpoint.go @@ -0,0 +1,190 @@ +package scepserver + +import ( + "bytes" + "context" + 
"net/url" + "strings" + "sync" + "time" + + "github.com/go-kit/kit/endpoint" + "github.com/go-kit/kit/log" + httptransport "github.com/go-kit/kit/transport/http" +) + +// possible SCEP operations +const ( + getCACaps = "GetCACaps" + getCACert = "GetCACert" + pkiOperation = "PKIOperation" + getNextCACert = "GetNextCACert" +) + +type Endpoints struct { + GetEndpoint endpoint.Endpoint + PostEndpoint endpoint.Endpoint + + mtx sync.RWMutex + capabilities []byte +} + +func (e *Endpoints) GetCACaps(ctx context.Context) ([]byte, error) { + request := SCEPRequest{Operation: getCACaps} + response, err := e.GetEndpoint(ctx, request) + if err != nil { + return nil, err + } + resp := response.(SCEPResponse) + + e.mtx.Lock() + e.capabilities = resp.Data + e.mtx.Unlock() + + return resp.Data, resp.Err +} + +func (e *Endpoints) Supports(cap string) bool { + e.mtx.RLock() + defer e.mtx.RUnlock() + + if len(e.capabilities) == 0 { + e.mtx.RUnlock() + _, _ = e.GetCACaps(context.Background()) + e.mtx.RLock() + } + return bytes.Contains(e.capabilities, []byte(cap)) +} + +func (e *Endpoints) GetCACert(ctx context.Context, message string) ([]byte, int, error) { + request := SCEPRequest{Operation: getCACert, Message: []byte(message)} + response, err := e.GetEndpoint(ctx, request) + if err != nil { + return nil, 0, err + } + resp := response.(SCEPResponse) + return resp.Data, resp.CACertNum, resp.Err +} + +func (e *Endpoints) PKIOperation(ctx context.Context, msg []byte) ([]byte, error) { + var ee endpoint.Endpoint + if e.Supports("POSTPKIOperation") || e.Supports("SCEPStandard") { + ee = e.PostEndpoint + } else { + ee = e.GetEndpoint + } + + request := SCEPRequest{Operation: pkiOperation, Message: msg} + response, err := ee(ctx, request) + if err != nil { + return nil, err + } + resp := response.(SCEPResponse) + return resp.Data, resp.Err +} + +func (e *Endpoints) GetNextCACert(ctx context.Context) ([]byte, error) { + var request SCEPRequest + response, err := e.GetEndpoint(ctx, request) + 
if err != nil { + return nil, err + } + resp := response.(SCEPResponse) + return resp.Data, resp.Err +} + +func MakeServerEndpoints(svc Service) *Endpoints { + e := MakeSCEPEndpoint(svc) + return &Endpoints{ + GetEndpoint: e, + PostEndpoint: e, + } +} + +// MakeClientEndpoints returns an Endpoints struct where each endpoint invokes +// the corresponding method on the remote instance, via a transport/http.Client. +// Useful in a SCEP client. +func MakeClientEndpoints(instance string) (*Endpoints, error) { + if !strings.HasPrefix(instance, "http") { + instance = "http://" + instance + } + tgt, err := url.Parse(instance) + if err != nil { + return nil, err + } + + options := []httptransport.ClientOption{} + + return &Endpoints{ + GetEndpoint: httptransport.NewClient( + "GET", + tgt, + EncodeSCEPRequest, + DecodeSCEPResponse, + options...).Endpoint(), + PostEndpoint: httptransport.NewClient( + "POST", + tgt, + EncodeSCEPRequest, + DecodeSCEPResponse, + options...).Endpoint(), + }, nil +} + +func MakeSCEPEndpoint(svc Service) endpoint.Endpoint { + return func(ctx context.Context, request interface{}) (interface{}, error) { + req := request.(SCEPRequest) + resp := SCEPResponse{operation: req.Operation} + switch req.Operation { + case "GetCACaps": + resp.Data, resp.Err = svc.GetCACaps(ctx) + case "GetCACert": + resp.Data, resp.CACertNum, resp.Err = svc.GetCACert(ctx, string(req.Message)) + case "PKIOperation": + resp.Data, resp.Err = svc.PKIOperation(ctx, req.Message) + default: + return nil, &BadRequestError{Message: "operation not implemented"} + } + return resp, nil + } +} + +// SCEPRequest is a SCEP server request. +type SCEPRequest struct { + Operation string + Message []byte +} + +func (r SCEPRequest) scepOperation() string { return r.Operation } + +// SCEPResponse is a SCEP server response. +// Business errors will be encoded as a CertRep message +// with pkiStatus FAILURE and a failInfo attribute. 
+type SCEPResponse struct {
+	operation string
+	CACertNum int
+	Data      []byte
+	Err       error
+}
+
+func (r SCEPResponse) scepOperation() string { return r.operation }
+
+// EndpointLoggingMiddleware returns an endpoint middleware that logs the
+// duration of each invocation, and the resulting error, if any.
+func EndpointLoggingMiddleware(logger log.Logger) endpoint.Middleware {
+	return func(next endpoint.Endpoint) endpoint.Endpoint {
+		return func(ctx context.Context, request interface{}) (response interface{}, err error) {
+			var keyvals []interface{}
+			// check if this is a scep endpoint, if it is, append the method to the log.
+			if oper, ok := request.(interface {
+				scepOperation() string
+			}); ok {
+				keyvals = append(keyvals, "op", oper.scepOperation())
+			}
+			defer func(begin time.Time) {
+				logger.Log(append(keyvals, "error", err, "took", time.Since(begin))...)
+			}(time.Now())
+			return next(ctx, request)
+		}
+	}
+}
diff --git a/server/mdm/scep/server/service.go b/server/mdm/scep/server/service.go
new file mode 100644
index 000000000..9b56e57a3
--- /dev/null
+++ b/server/mdm/scep/server/service.go
@@ -0,0 +1,138 @@
+package scepserver
+
+import (
+	"context"
+	"crypto/rsa"
+	"crypto/x509"
+	"errors"
+
+	"github.com/fleetdm/fleet/v4/server/mdm/scep/scep"
+
+	"github.com/go-kit/kit/log"
+)
+
+// Service is the interface for all supported SCEP server operations.
+type Service interface {
+	// GetCACaps returns a list of options
+	// which are supported by the server.
+	GetCACaps(ctx context.Context) ([]byte, error)
+
+	// GetCACert returns the CA certificate or
+	// a CA certificate chain with intermediates
+	// in a PKCS#7 Degenerate Certificates format.
+	// The message is an optional string for the CA.
+	GetCACert(ctx context.Context, message string) ([]byte, int, error)
+
+	// PKIOperation handles incoming SCEP messages such as PKCSReq and
+	// sends back a CertRep PKIMessage.
+	PKIOperation(ctx context.Context, msg []byte) ([]byte, error)
+
+	// GetNextCACert returns a replacement certificate or certificate chain
+	// when the old one expires. The response format is a PKCS#7 Degenerate
+	// Certificates type.
+	GetNextCACert(ctx context.Context) ([]byte, error)
+}
+
+type service struct {
+	// The service certificate and key for SCEP exchanges. These are
+	// quite likely the same as the CA keypair but may be its own SCEP
+	// specific keypair in the case of e.g. RA (proxy) operation.
+	crt *x509.Certificate
+	key *rsa.PrivateKey
+
+	// Optional additional CA certificates for e.g. RA (proxy) use.
+	// Only used in this service when responding to GetCACert.
+	addlCa []*x509.Certificate
+
+	// The (chainable) CSR signing function. Intended to handle all
+	// SCEP request functionality such as CSR & challenge checking, CA
+	// issuance, RA proxying, etc.
+	signer CSRSigner
+
+	// info logging is implemented in the service middleware layer.
+	debugLogger log.Logger
+}
+
+func (svc *service) GetCACaps(ctx context.Context) ([]byte, error) {
+	defaultCaps := []byte("Renewal\nSHA-1\nSHA-256\nAES\nDES3\nSCEPStandard\nPOSTPKIOperation")
+	return defaultCaps, nil
+}
+
+func (svc *service) GetCACert(ctx context.Context, _ string) ([]byte, int, error) {
+	if svc.crt == nil {
+		return nil, 0, errors.New("missing CA certificate")
+	}
+	if len(svc.addlCa) < 1 {
+		return svc.crt.Raw, 1, nil
+	}
+	certs := []*x509.Certificate{svc.crt}
+	certs = append(certs, svc.addlCa...)
+ data, err := scep.DegenerateCertificates(certs) + return data, len(svc.addlCa) + 1, err +} + +func (svc *service) PKIOperation(ctx context.Context, data []byte) ([]byte, error) { + if len(data) == 0 { + return nil, &BadRequestError{Message: "missing data for PKIOperation"} + } + msg, err := scep.ParsePKIMessage(data, scep.WithLogger(svc.debugLogger)) + if err != nil { + return nil, err + } + if err := msg.DecryptPKIEnvelope(svc.crt, svc.key); err != nil { + return nil, err + } + + crt, err := svc.signer.SignCSR(msg.CSRReqMessage) + if err == nil && crt == nil { + err = errors.New("no signed certificate") + } + if err != nil { + svc.debugLogger.Log("msg", "failed to sign CSR", "err", err) + certRep, err := msg.Fail(svc.crt, svc.key, scep.BadRequest) + return certRep.Raw, err + } + + certRep, err := msg.Success(svc.crt, svc.key, crt) + return certRep.Raw, err +} + +func (svc *service) GetNextCACert(ctx context.Context) ([]byte, error) { + panic("not implemented") +} + +// ServiceOption is a server configuration option +type ServiceOption func(*service) error + +// WithLogger configures a logger for the SCEP Service. +// By default, a no-op logger is used. 
+func WithLogger(logger log.Logger) ServiceOption { + return func(s *service) error { + s.debugLogger = logger + return nil + } +} + +// WithAddlCA appends an additional certificate to the slice of CA certs +func WithAddlCA(ca *x509.Certificate) ServiceOption { + return func(s *service) error { + s.addlCa = append(s.addlCa, ca) + return nil + } +} + +// NewService creates a new scep service +func NewService(crt *x509.Certificate, key *rsa.PrivateKey, signer CSRSigner, opts ...ServiceOption) (Service, error) { + s := &service{ + crt: crt, + key: key, + signer: signer, + debugLogger: log.NewNopLogger(), + } + for _, opt := range opts { + if err := opt(s); err != nil { + return nil, err + } + } + return s, nil +} diff --git a/server/mdm/scep/server/service_bolt_test.go b/server/mdm/scep/server/service_bolt_test.go new file mode 100644 index 000000000..f85ef1d65 --- /dev/null +++ b/server/mdm/scep/server/service_bolt_test.go @@ -0,0 +1,213 @@ +package scepserver_test + +import ( + "bytes" + "context" + "crypto/rand" + "crypto/rsa" + "crypto/x509" + "crypto/x509/pkix" + "fmt" + "io/ioutil" + "math/big" + "os" + "testing" + "time" + + scepdepot "github.com/fleetdm/fleet/v4/server/mdm/scep/depot" + boltdepot "github.com/fleetdm/fleet/v4/server/mdm/scep/depot/bolt" + "github.com/fleetdm/fleet/v4/server/mdm/scep/scep" + scepserver "github.com/fleetdm/fleet/v4/server/mdm/scep/server" + + "github.com/boltdb/bolt" +) + +func TestCaCert(t *testing.T) { + // init bolt depot CA + boltDepot := createDB(0o666, nil) + key, err := boltDepot.CreateOrLoadKey(2048) + if err != nil { + t.Fatal(err) + } + _, err = boltDepot.CreateOrLoadCA(key, 5, "MicroMDM", "US") + if err != nil { + t.Fatal(err) + } + + // use exported interface + depot := scepdepot.Depot(boltDepot) + + // load CA & key again + certs, key, err := depot.CA([]byte{}) + if err != nil { + t.Fatal(err) + } + caCert := certs[0] + + // SCEP service + svc, err := scepserver.NewService(caCert, key, scepdepot.NewSigner(depot)) + 
if err != nil { + t.Fatal(err) + } + + // generate scep "client" keys, csr, cert + selfKey, err := rsa.GenerateKey(rand.Reader, 2048) + if err != nil { + t.Fatal(err) + } + csrBytes, err := newCSR(selfKey, "ou", "loc", "province", "country", "cname", "org") + if err != nil { + t.Fatal(err) + } + csr, err := x509.ParseCertificateRequest(csrBytes) + if err != nil { + t.Fatal(err) + } + signerCert, err := selfSign(selfKey, csr) + if err != nil { + t.Fatal(err) + } + + roots := x509.NewCertPool() + roots.AddCert(caCert) + var serCollector []*big.Int + + ctx := context.Background() + for i := 0; i < 5; i++ { + // check CA + caBytes, num, err := svc.GetCACert(ctx, "") + if err != nil { + t.Fatal(err) + } + if have, want := num, 1; have != want { + t.Errorf("i=%d, have %d, want %d", i, have, want) + } + + if have, want := caBytes, caCert.Raw; !bytes.Equal(have, want) { + t.Errorf("i=%d, have %v, want %v", i, have, want) + } + + // create scep "client" request + tmpl := &scep.PKIMessage{ + MessageType: scep.PKCSReq, + Recipients: []*x509.Certificate{caCert}, + SignerKey: selfKey, + SignerCert: signerCert, + } + msg, err := scep.NewCSRRequest(csr, tmpl) + if err != nil { + t.Fatal(err) + } + + // submit to service + respMsgBytes, err := svc.PKIOperation(ctx, msg.Raw) + if err != nil { + t.Fatal(err) + } + + // read and decrypt reply + respMsg, err := scep.ParsePKIMessage(respMsgBytes) + if err != nil { + t.Fatal(err) + } + + err = respMsg.DecryptPKIEnvelope(signerCert, selfKey) + if err != nil { + t.Fatal(err) + } + + // verify issued certificate is from the CA + respCert := respMsg.CertRepMessage.Certificate + opts := x509.VerifyOptions{ + Roots: roots, + KeyUsages: []x509.ExtKeyUsage{x509.ExtKeyUsageClientAuth}, + } + chains, err := respCert.Verify(opts) + if err != nil { + t.Error(err) + } + if len(chains) < 1 { + t.Error("no established chain between issued cert and CA") + } + + // verify unique certificate serials + for _, ser := range serCollector { + if 
respCert.SerialNumber.Cmp(ser) == 0 { + t.Error("seen serial number before!") + } + } + serCollector = append(serCollector, respCert.SerialNumber) + } +} + +func createDB(mode os.FileMode, options *bolt.Options) *boltdepot.Depot { + // Create temporary path. + f, _ := ioutil.TempFile("", "bolt-") + f.Close() + os.Remove(f.Name()) + + db, err := bolt.Open(f.Name(), mode, options) + if err != nil { + panic(err.Error()) + } + d, err := boltdepot.NewBoltDepot(db) + if err != nil { + panic(err.Error()) + } + return d +} + +func newCSR(priv *rsa.PrivateKey, ou string, locality string, province string, country string, cname, org string) ([]byte, error) { + subj := pkix.Name{ + CommonName: cname, + } + if len(org) > 0 { + subj.Organization = []string{org} + } + if len(ou) > 0 { + subj.OrganizationalUnit = []string{ou} + } + if len(province) > 0 { + subj.Province = []string{province} + } + if len(locality) > 0 { + subj.Locality = []string{locality} + } + if len(country) > 0 { + subj.Country = []string{country} + } + template := &x509.CertificateRequest{ + Subject: subj, + } + return x509.CreateCertificateRequest(rand.Reader, template, priv) +} + +func selfSign(priv *rsa.PrivateKey, csr *x509.CertificateRequest) (*x509.Certificate, error) { + serialNumberLimit := new(big.Int).Lsh(big.NewInt(1), 128) + serialNumber, err := rand.Int(rand.Reader, serialNumberLimit) + if err != nil { + return nil, fmt.Errorf("failed to generate serial number: %s", err) + } + + notBefore := time.Now() + notAfter := notBefore.Add(time.Hour * 1) + template := x509.Certificate{ + SerialNumber: serialNumber, + Subject: pkix.Name{ + CommonName: "SCEP SIGNER", + Organization: csr.Subject.Organization, + }, + NotBefore: notBefore, + NotAfter: notAfter, + + KeyUsage: x509.KeyUsageKeyEncipherment | x509.KeyUsageDigitalSignature, + ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth}, + BasicConstraintsValid: true, + } + + derBytes, err := x509.CreateCertificate(rand.Reader, &template, &template, 
+		&priv.PublicKey, priv)
+	if err != nil {
+		return nil, err
+	}
+	return x509.ParseCertificate(derBytes)
+}
diff --git a/server/mdm/scep/server/service_logging.go b/server/mdm/scep/server/service_logging.go
new file mode 100644
index 000000000..c67cd24fa
--- /dev/null
+++ b/server/mdm/scep/server/service_logging.go
@@ -0,0 +1,55 @@
+package scepserver
+
+import (
+	"context"
+	"time"
+
+	"github.com/go-kit/kit/log"
+)
+
+type loggingService struct {
+	logger log.Logger
+	Service
+}
+
+// NewLoggingService adds logging to the SCEP service.
+func NewLoggingService(logger log.Logger, s Service) Service {
+	return &loggingService{logger, s}
+}
+
+func (mw *loggingService) GetCACaps(ctx context.Context) (caps []byte, err error) {
+	defer func(begin time.Time) {
+		_ = mw.logger.Log(
+			"method", "GetCACaps",
+			"err", err,
+			"took", time.Since(begin),
+		)
+	}(time.Now())
+	caps, err = mw.Service.GetCACaps(ctx)
+	return
+}
+
+func (mw *loggingService) GetCACert(ctx context.Context, message string) (cert []byte, certNum int, err error) {
+	defer func(begin time.Time) {
+		_ = mw.logger.Log(
+			"method", "GetCACert",
+			"message", message,
+			"err", err,
+			"took", time.Since(begin),
+		)
+	}(time.Now())
+	cert, certNum, err = mw.Service.GetCACert(ctx, message)
+	return
+}
+
+func (mw *loggingService) PKIOperation(ctx context.Context, data []byte) (certRep []byte, err error) {
+	defer func(begin time.Time) {
+		_ = mw.logger.Log(
+			"method", "PKIOperation",
+			"err", err,
+			"took", time.Since(begin),
+		)
+	}(time.Now())
+	certRep, err = mw.Service.PKIOperation(ctx, data)
+	return
+}
diff --git a/server/mdm/scep/server/transport.go b/server/mdm/scep/server/transport.go
new file mode 100644
index 000000000..b3cb92857
--- /dev/null
+++ b/server/mdm/scep/server/transport.go
@@ -0,0 +1,211 @@
+package scepserver
+
+import (
+	"bytes"
+	"context"
+	"encoding/base64"
+	"errors"
+	"fmt"
+	"io"
+	"io/ioutil"
+	"net/http"
+	"net/url"
+
+	kitlog "github.com/go-kit/kit/log"
+	kithttp 
"github.com/go-kit/kit/transport/http"
+	"github.com/gorilla/mux"
+	"github.com/groob/finalizer/logutil"
+)
+
+func MakeHTTPHandler(e *Endpoints, svc Service, logger kitlog.Logger) http.Handler {
+	opts := []kithttp.ServerOption{
+		kithttp.ServerErrorLogger(logger),
+		kithttp.ServerFinalizer(logutil.NewHTTPLogger(logger).LoggingFinalizer),
+	}
+
+	r := mux.NewRouter()
+	r.Methods("GET").Handler(kithttp.NewServer(
+		e.GetEndpoint,
+		decodeSCEPRequest,
+		encodeSCEPResponse,
+		opts...,
+	))
+	r.Methods("POST").Handler(kithttp.NewServer(
+		e.PostEndpoint,
+		decodeSCEPRequest,
+		encodeSCEPResponse,
+		opts...,
+	))
+
+	return r
+}
+
+// EncodeSCEPRequest encodes a SCEP HTTP Request. Used by the client.
+func EncodeSCEPRequest(ctx context.Context, r *http.Request, request interface{}) error {
+	req := request.(SCEPRequest)
+	params := r.URL.Query()
+	params.Set("operation", req.Operation)
+	switch r.Method {
+	case "GET":
+		if len(req.Message) > 0 {
+			var msg string
+			if req.Operation == "PKIOperation" {
+				msg = base64.URLEncoding.EncodeToString(req.Message)
+			} else {
+				msg = string(req.Message)
+			}
+			params.Set("message", msg)
+		}
+		r.URL.RawQuery = params.Encode()
+		return nil
+	case "POST":
+		body := bytes.NewReader(req.Message)
+		// recreate the request here because IIS does not support chunked encoding by default
+		// and Go doesn't appear to set Content-Length if we use an io.ReadCloser
+		u := r.URL
+		u.RawQuery = params.Encode()
+		rr, err := http.NewRequest("POST", u.String(), body)
+		// check the error before touching rr: rr is nil when NewRequest fails
+		if err != nil {
+			return errors.Join(err, fmt.Errorf("creating new POST request for %s", req.Operation))
+		}
+		rr.Header.Set("Content-Type", "application/octet-stream")
+		*r = *rr
+		return nil
+	default:
+		return fmt.Errorf("scep: %s method not supported", r.Method)
+	}
+}
+
+const maxPayloadSize = 2 << 20
+
+func decodeSCEPRequest(ctx context.Context, r *http.Request) (interface{}, error) {
+	msg, err := message(r)
+	if err != nil {
+		return nil, err
+	}
+	defer r.Body.Close()
+
+	
operation := r.URL.Query().Get("operation")
+	if len(operation) == 0 {
+		return nil, &BadRequestError{Message: "missing operation"}
+	}
+
+	request := SCEPRequest{
+		Message:   msg,
+		Operation: operation,
+	}
+
+	return request, nil
+}
+
+// extract message from request
+func message(r *http.Request) ([]byte, error) {
+	switch r.Method {
+	case "GET":
+		var msg string
+		q := r.URL.Query()
+		if _, ok := q["message"]; ok {
+			msg = q.Get("message")
+		}
+		op := q.Get("operation")
+		if op == "PKIOperation" {
+			if len(msg) == 0 {
+				return nil, &BadRequestError{Message: "missing PKIOperation message"}
+			}
+
+			msg2, err := url.PathUnescape(msg)
+			if err != nil {
+				return nil, &BadRequestError{Message: fmt.Sprintf("invalid PKIOperation message: %s", msg)}
+			}
+
+			decoded, err := base64.StdEncoding.DecodeString(msg2)
+			if err != nil {
+				return nil, &BadRequestError{Message: fmt.Sprintf("failed to base64 decode message: %s: %s", err.Error(), msg2)}
+			}
+
+			return decoded, nil
+		}
+		return []byte(msg), nil
+	case "POST":
+		return ioutil.ReadAll(io.LimitReader(r.Body, maxPayloadSize))
+	default:
+		return nil, errors.New("method not supported")
+	}
+}
+
+// BadRequestError is an error type that generates a 400 status code.
+type BadRequestError struct {
+	Message string
+}
+
+// Error returns the error message.
+func (e *BadRequestError) Error() string {
+	return e.Message
+}
+
+// StatusCode implements the kithttp StatusCoder interface.
+func (e *BadRequestError) StatusCode() int { return http.StatusBadRequest }
+
+// encodeSCEPResponse writes a SCEP response back to the SCEP client.
+func encodeSCEPResponse(ctx context.Context, w http.ResponseWriter, response interface{}) error { + resp := response.(SCEPResponse) + if resp.Err != nil { + status := http.StatusInternalServerError + var esc kithttp.StatusCoder + if errors.As(resp.Err, &esc) { + status = esc.StatusCode() + } + + http.Error(w, resp.Err.Error(), status) + return nil + } + w.Header().Set("Content-Type", contentHeader(resp.operation, resp.CACertNum)) + _, _ = w.Write(resp.Data) + return nil +} + +// DecodeSCEPResponse decodes a SCEP response +func DecodeSCEPResponse(ctx context.Context, r *http.Response) (interface{}, error) { + if r.StatusCode != http.StatusOK && r.StatusCode >= 400 { + body, _ := ioutil.ReadAll(io.LimitReader(r.Body, 4096)) + return nil, fmt.Errorf("http request failed with status %s, msg: %s", + r.Status, + string(body), + ) + } + data, err := ioutil.ReadAll(io.LimitReader(r.Body, maxPayloadSize)) + if err != nil { + return nil, err + } + defer r.Body.Close() + resp := SCEPResponse{ + Data: data, + } + header := r.Header.Get("Content-Type") + if header == certChainHeader { + // we only set it to two to indicate a cert chain. + // the actual number of certs will be in the payload. 
+ resp.CACertNum = 2 + } + return resp, nil +} + +const ( + certChainHeader = "application/x-x509-ca-ra-cert" + leafHeader = "application/x-x509-ca-cert" + pkiOpHeader = "application/x-pki-message" +) + +func contentHeader(op string, certNum int) string { + switch op { + case "GetCACert": + if certNum > 1 { + return certChainHeader + } + return leafHeader + case "PKIOperation": + return pkiOpHeader + default: + return "text/plain" + } +} diff --git a/server/mdm/scep/server/transport_test.go b/server/mdm/scep/server/transport_test.go new file mode 100644 index 000000000..ff8d51b78 --- /dev/null +++ b/server/mdm/scep/server/transport_test.go @@ -0,0 +1,250 @@ +package scepserver_test + +import ( + "bytes" + "context" + "crypto/x509" + "encoding/base64" + "io/ioutil" + "net/http" + "net/http/httptest" + "os" + "strings" + "testing" + + "github.com/fleetdm/fleet/v4/server/mdm/scep/depot" + filedepot "github.com/fleetdm/fleet/v4/server/mdm/scep/depot/file" + scepserver "github.com/fleetdm/fleet/v4/server/mdm/scep/server" + "github.com/gorilla/mux" + + kitlog "github.com/go-kit/kit/log" +) + +func TestCACaps(t *testing.T) { + server, _, teardown := newServer(t) + defer teardown() + url := server.URL + "/scep?operation=GetCACaps" + resp, err := http.Get(url) //nolint:gosec + if err != nil { + t.Fatal(err) + } + if resp.StatusCode != http.StatusOK { + t.Error("expected", http.StatusOK, "got", resp.StatusCode) + } +} + +func TestEncodePKCSReq_Request(t *testing.T) { + pkcsreq := loadTestFile(t, "../scep/testdata/PKCSReq.der") + msg := scepserver.SCEPRequest{ + Operation: "PKIOperation", + Message: pkcsreq, + } + methods := []string{"POST", "GET"} + for _, method := range methods { + t.Run(method, func(t *testing.T) { + r := httptest.NewRequest(method, "http://acme.co/scep", nil) + rr := *r + if err := scepserver.EncodeSCEPRequest(context.Background(), &rr, msg); err != nil { + t.Fatal(err) + } + + q := r.URL.Query() + if have, want := q.Get("operation"), msg.Operation; have 
!= want { + t.Errorf("have %s, want %s", have, want) + } + + if method == "POST" { + if have, want := rr.ContentLength, int64(len(msg.Message)); have != want { + t.Errorf("have %d, want %d", have, want) + } + } + + if method == "GET" { + if q.Get("message") == "" { + t.Errorf("expected GET PKIOperation to have a non-empty message field") + } + } + }) + } +} + +func TestGetCACertMessage(t *testing.T) { + testMsg := "testMsg" + sr := scepserver.SCEPRequest{Operation: "GetCACert", Message: []byte(testMsg)} + req, err := http.NewRequest("GET", "http://127.0.0.1:8080/scep", nil) + if err != nil { + t.Fatal(err) + } + err = scepserver.EncodeSCEPRequest(context.Background(), req, sr) + if err != nil { + t.Fatal(err) + } + if !strings.Contains(req.URL.RawQuery, "message="+testMsg) { + t.Fatal("RawQuery does not contain message") + } +} + +func TestPKIOperation(t *testing.T) { + server, _, teardown := newServer(t) + defer teardown() + pkcsreq := loadTestFile(t, "../scep/testdata/PKCSReq.der") + body := bytes.NewReader(pkcsreq) + url := server.URL + "/scep?operation=PKIOperation" + resp, err := http.Post(url, "", body) //nolint:gosec + if err != nil { + t.Fatal(err) + } + if resp.StatusCode != http.StatusOK { + t.Error("expected", http.StatusOK, "got", resp.StatusCode) + } +} + +func TestPKIOperationGET(t *testing.T) { + server, _, teardown := newServer(t) + defer teardown() + pkcsreq := loadTestFile(t, "../scep/testdata/PKCSReq.der") + message := base64.StdEncoding.EncodeToString(pkcsreq) + req, err := http.NewRequest("GET", server.URL+"/scep", nil) + if err != nil { + t.Fatal(err) + } + params := req.URL.Query() + params.Set("operation", "PKIOperation") + params.Set("message", message) + req.URL.RawQuery = params.Encode() + resp, err := http.DefaultClient.Do(req) + if err != nil { + t.Fatal(err) + } + if resp.StatusCode != http.StatusOK { + t.Error("expected", http.StatusOK, "got", resp.StatusCode) + } +} + +func TestInvalidReqs(t *testing.T) { + server, _, teardown := 
newServer(t)
+	defer teardown()
+	// Check that invalid requests return status 400.
+	req, err := http.NewRequest("GET", server.URL+"/scep", nil)
+	if err != nil {
+		t.Fatal(err)
+	}
+
+	resp, err := http.DefaultClient.Do(req)
+	if err != nil {
+		t.Fatal(err)
+	}
+	if resp.StatusCode != http.StatusBadRequest {
+		t.Error("expected", http.StatusBadRequest, "got", resp.StatusCode)
+	}
+
+	req, err = http.NewRequest("GET", server.URL+"/scep", nil)
+	if err != nil {
+		t.Fatal(err)
+	}
+
+	params := req.URL.Query()
+	params.Set("operation", "PKIOperation")
+	params.Set("message", "")
+	req.URL.RawQuery = params.Encode()
+
+	resp, err = http.DefaultClient.Do(req)
+	if err != nil {
+		t.Fatal(err)
+	}
+	if resp.StatusCode != http.StatusBadRequest {
+		t.Error("expected", http.StatusBadRequest, "got", resp.StatusCode)
+	}
+
+	params = req.URL.Query()
+	params.Set("operation", "InvalidOperation")
+	req.URL.RawQuery = params.Encode()
+
+	resp, err = http.DefaultClient.Do(req)
+	if err != nil {
+		t.Fatal(err)
+	}
+	if resp.StatusCode != http.StatusBadRequest {
+		t.Error("expected", http.StatusBadRequest, "got", resp.StatusCode)
+	}
+
+	postReq, err := http.NewRequest("POST", server.URL+"/scep", nil)
+	if err != nil {
+		t.Fatal(err)
+	}
+
+	// set the query parameters on the POST request that is actually sent
+	params = postReq.URL.Query()
+	params.Set("operation", "PKIOperation")
+	postReq.URL.RawQuery = params.Encode()
+
+	resp, err = http.DefaultClient.Do(postReq)
+	if err != nil {
+		t.Fatal(err)
+	}
+	if resp.StatusCode != http.StatusBadRequest {
+		t.Error("expected", http.StatusBadRequest, "got", resp.StatusCode)
+	}
+
+	params = postReq.URL.Query()
+	params.Set("operation", "InvalidOperation")
+	postReq.URL.RawQuery = params.Encode()
+
+	resp, err = http.DefaultClient.Do(postReq)
+	if err != nil {
+		t.Fatal(err)
+	}
+	if resp.StatusCode != http.StatusBadRequest {
+		t.Error("expected", http.StatusBadRequest, "got", resp.StatusCode)
+	}
+}
+
+func newServer(t *testing.T, opts ...scepserver.ServiceOption) (*httptest.Server, scepserver.Service, func()) {
+	var err error
+	var depot depot.Depot // cert storage
+	{
+		depot, err = 
filedepot.NewFileDepot("../scep/testdata/testca") + if err != nil { + t.Fatal(err) + } + depot = &noopDepot{depot} + } + crt, key, err := depot.CA([]byte{}) + if err != nil { + t.Fatal(err) + } + var svc scepserver.Service // scep service + { + svc, err = scepserver.NewService(crt[0], key, scepserver.NopCSRSigner()) + if err != nil { + t.Fatal(err) + } + } + logger := kitlog.NewNopLogger() + e := scepserver.MakeServerEndpoints(svc) + scepHandler := scepserver.MakeHTTPHandler(e, svc, logger) + r := mux.NewRouter() + r.Handle("/scep", scepHandler) + server := httptest.NewServer(r) + teardown := func() { + server.Close() + os.Remove("../scep/testdata/testca/serial") + os.Remove("../scep/testdata/testca/index.txt") + } + return server, svc, teardown +} + +type noopDepot struct{ depot.Depot } + +func (d *noopDepot) Put(name string, crt *x509.Certificate) error { + return nil +} + +/* helpers */ + +func loadTestFile(t *testing.T, path string) []byte { + data, err := ioutil.ReadFile(path) + if err != nil { + t.Fatal(err) + } + return data +} diff --git a/server/mock/datastore_mdm_mock.go b/server/mock/datastore_mdm_mock.go index a3cb7f66e..25d914921 100644 --- a/server/mock/datastore_mdm_mock.go +++ b/server/mock/datastore_mdm_mock.go @@ -6,6 +6,7 @@ import ( "context" "crypto/tls" "sync" + "time" "github.com/fleetdm/fleet/v4/server/fleet" "github.com/fleetdm/fleet/v4/server/mdm/nanomdm/mdm" @@ -47,7 +48,7 @@ type EnrollmentHasCertHashFunc func(r *mdm.Request, hash string) (bool, error) type IsCertHashAssociatedFunc func(r *mdm.Request, hash string) (bool, error) -type AssociateCertHashFunc func(r *mdm.Request, hash string) error +type AssociateCertHashFunc func(r *mdm.Request, hash string, certNotValidAfter time.Time) error type RetrieveMigrationCheckinsFunc func(p0 context.Context, p1 chan<- interface{}) error @@ -241,11 +242,11 @@ func (fs *MDMAppleStore) IsCertHashAssociated(r *mdm.Request, hash string) (bool return fs.IsCertHashAssociatedFunc(r, hash) } -func (fs 
*MDMAppleStore) AssociateCertHash(r *mdm.Request, hash string) error { +func (fs *MDMAppleStore) AssociateCertHash(r *mdm.Request, hash string, certNotValidAfter time.Time) error { fs.mu.Lock() fs.AssociateCertHashFuncInvoked = true fs.mu.Unlock() - return fs.AssociateCertHashFunc(r, hash) + return fs.AssociateCertHashFunc(r, hash, certNotValidAfter) } func (fs *MDMAppleStore) RetrieveMigrationCheckins(p0 context.Context, p1 chan<- interface{}) error { diff --git a/server/mock/datastore_mock.go b/server/mock/datastore_mock.go index 35038f6e5..a676dd791 100644 --- a/server/mock/datastore_mock.go +++ b/server/mock/datastore_mock.go @@ -550,6 +550,10 @@ type GetHostDiskEncryptionKeyFunc func(ctx context.Context, hostID uint) (*fleet type SetDiskEncryptionResetStatusFunc func(ctx context.Context, hostID uint, status bool) error +type GetHostCertAssociationsToExpireFunc func(ctx context.Context, expiryDays int, limit int) ([]fleet.SCEPIdentityAssociation, error) + +type SetCommandForPendingSCEPRenewalFunc func(ctx context.Context, assocs []fleet.SCEPIdentityAssociation, cmdUUID string) error + type UpdateHostMDMProfilesVerificationFunc func(ctx context.Context, host *fleet.Host, toVerify []string, toFail []string, toRetry []string) error type GetHostMDMProfilesExpectedForVerificationFunc func(ctx context.Context, host *fleet.Host) (map[string]*fleet.ExpectedMDMProfile, error) @@ -1633,6 +1637,12 @@ type DataStore struct { SetDiskEncryptionResetStatusFunc SetDiskEncryptionResetStatusFunc SetDiskEncryptionResetStatusFuncInvoked bool + GetHostCertAssociationsToExpireFunc GetHostCertAssociationsToExpireFunc + GetHostCertAssociationsToExpireFuncInvoked bool + + SetCommandForPendingSCEPRenewalFunc SetCommandForPendingSCEPRenewalFunc + SetCommandForPendingSCEPRenewalFuncInvoked bool + UpdateHostMDMProfilesVerificationFunc UpdateHostMDMProfilesVerificationFunc UpdateHostMDMProfilesVerificationFuncInvoked bool @@ -3924,6 +3934,20 @@ func (s *DataStore) 
SetDiskEncryptionResetStatus(ctx context.Context, hostID uin return s.SetDiskEncryptionResetStatusFunc(ctx, hostID, status) } +func (s *DataStore) GetHostCertAssociationsToExpire(ctx context.Context, expiryDays int, limit int) ([]fleet.SCEPIdentityAssociation, error) { + s.mu.Lock() + s.GetHostCertAssociationsToExpireFuncInvoked = true + s.mu.Unlock() + return s.GetHostCertAssociationsToExpireFunc(ctx, expiryDays, limit) +} + +func (s *DataStore) SetCommandForPendingSCEPRenewal(ctx context.Context, assocs []fleet.SCEPIdentityAssociation, cmdUUID string) error { + s.mu.Lock() + s.SetCommandForPendingSCEPRenewalFuncInvoked = true + s.mu.Unlock() + return s.SetCommandForPendingSCEPRenewalFunc(ctx, assocs, cmdUUID) +} + func (s *DataStore) UpdateHostMDMProfilesVerification(ctx context.Context, host *fleet.Host, toVerify []string, toFail []string, toRetry []string) error { s.mu.Lock() s.UpdateHostMDMProfilesVerificationFuncInvoked = true diff --git a/server/mock/scep/depot.go b/server/mock/scep/depot.go index 1565d75cc..202dd44d4 100644 --- a/server/mock/scep/depot.go +++ b/server/mock/scep/depot.go @@ -8,7 +8,7 @@ import ( "math/big" "sync" - "github.com/micromdm/scep/v2/depot" + "github.com/fleetdm/fleet/v4/server/mdm/scep/depot" ) var _ depot.Depot = (*Depot)(nil) diff --git a/server/service/apple_mdm.go b/server/service/apple_mdm.go index 0de6726aa..84404979e 100644 --- a/server/service/apple_mdm.go +++ b/server/service/apple_mdm.go @@ -9,7 +9,6 @@ import ( "io" "mime/multipart" "net/http" - "net/url" "strconv" "strings" "sync" @@ -20,6 +19,7 @@ import ( "github.com/fleetdm/fleet/v4/pkg/optjson" "github.com/fleetdm/fleet/v4/server" "github.com/fleetdm/fleet/v4/server/authz" + "github.com/fleetdm/fleet/v4/server/config" "github.com/fleetdm/fleet/v4/server/contexts/ctxerr" "github.com/fleetdm/fleet/v4/server/contexts/license" "github.com/fleetdm/fleet/v4/server/contexts/logging" @@ -32,8 +32,8 @@ import ( "github.com/fleetdm/fleet/v4/server/mdm/nanomdm/mdm" 
"github.com/fleetdm/fleet/v4/server/sso"
 	"github.com/fleetdm/fleet/v4/server/worker"
-	kitlog "github.com/go-kit/kit/log"
-	"github.com/go-kit/kit/log/level"
+	kitlog "github.com/go-kit/log"
+	"github.com/go-kit/log/level"
 	"github.com/google/uuid"
 	"github.com/groob/plist"
 	"github.com/micromdm/nanodep/godep"
@@ -1087,16 +1087,9 @@ func (svc *Service) GetMDMAppleEnrollmentProfileByToken(ctx context.Context, tok
 		return nil, ctxerr.Wrap(ctx, err)
 	}
 
-	enrollURL := appConfig.ServerSettings.ServerURL
-	if ref != "" {
-		u, err := url.Parse(enrollURL)
-		if err != nil {
-			return nil, ctxerr.Wrap(ctx, err, "parsing configured server URL")
-		}
-		q := u.Query()
-		q.Add(mobileconfig.FleetEnrollReferenceKey, ref)
-		u.RawQuery = q.Encode()
-		enrollURL = u.String()
+	enrollURL, err := apple_mdm.AddEnrollmentRefToFleetURL(appConfig.ServerSettings.ServerURL, ref)
+	if err != nil {
+		return nil, ctxerr.Wrap(ctx, err, "adding reference to fleet URL")
 	}
 
 	mobileconfig, err := apple_mdm.GenerateEnrollmentProfileMobileconfig(
@@ -2824,3 +2817,123 @@ func (svc *Service) getConfigAppleBMDefaultTeamID(ctx context.Context, appCfg *f
 
 	return tmID, nil
 }
+
+// scepCertRenewalThresholdDays defines the number of days before a SCEP
+// certificate must be renewed.
+const scepCertRenewalThresholdDays = 30
+
+// maxCertsRenewalPerRun specifies the maximum number of certificates to renew
+// in a single cron run.
+//
+// Assuming that the cron runs every hour, we'll enqueue 2,400 renewals per
+// day (100 per run * 24 runs), and we have room for
+// 2,400 * scepCertRenewalThresholdDays total renewals.
+//
+// For a default of 30 days as a threshold this gives us room for a fleet of
+// 72,000 devices expiring at the same time. 
+const maxCertsRenewalPerRun = 100
+
+func RenewSCEPCertificates(
+	ctx context.Context,
+	logger kitlog.Logger,
+	ds fleet.Datastore,
+	config *config.FleetConfig,
+	commander *apple_mdm.MDMAppleCommander,
+) error {
+	if !config.MDM.IsAppleSCEPSet() {
+		logger.Log("msg", "skipping renewal of macOS SCEP certificates as MDM is not fully configured")
+		return nil
+	}
+
+	if commander == nil {
+		logger.Log("msg", "skipping renewal of macOS SCEP certificates as apple_mdm.MDMAppleCommander was not provided")
+		return nil
+	}
+
+	// for each hash, grab the host that uses it as its identity certificate
+	certAssociations, err := ds.GetHostCertAssociationsToExpire(ctx, scepCertRenewalThresholdDays, maxCertsRenewalPerRun)
+	if err != nil {
+		return ctxerr.Wrap(ctx, err, "getting host cert associations")
+	}
+
+	appConfig, err := ds.AppConfig(ctx)
+	if err != nil {
+		return ctxerr.Wrap(ctx, err, "getting AppConfig")
+	}
+
+	mdmPushCertTopic, err := config.MDM.AppleAPNsTopic()
+	if err != nil {
+		return ctxerr.Wrap(ctx, err, "getting certificate topic")
+	}
+
+	// assocsWithRefs stores hosts that have enrollment references on their
+	// enrollment profiles. This is the case for ADE-enrolled hosts using
+	// SSO to authenticate.
+	assocsWithRefs := []fleet.SCEPIdentityAssociation{}
+	// assocsWithoutRefs stores hosts that don't have an enrollment
+	// reference in their enrollment profile.
+	assocsWithoutRefs := []fleet.SCEPIdentityAssociation{}
+	for _, assoc := range certAssociations {
+		if assoc.EnrollReference != "" {
+			assocsWithRefs = append(assocsWithRefs, assoc)
+			continue
+		}
+		assocsWithoutRefs = append(assocsWithoutRefs, assoc)
+	}
+
+	// send a single command for all the hosts without references. 
+	if len(assocsWithoutRefs) > 0 {
+		profile, err := apple_mdm.GenerateEnrollmentProfileMobileconfig(
+			appConfig.OrgInfo.OrgName,
+			appConfig.ServerSettings.ServerURL,
+			config.MDM.AppleSCEPChallenge,
+			mdmPushCertTopic,
+		)
+		if err != nil {
+			return ctxerr.Wrap(ctx, err, "generating enrollment profile for hosts without enroll reference")
+		}
+
+		cmdUUID := uuid.NewString()
+		var uuids []string
+		// assign by index so the command UUID is recorded on the slice
+		// elements themselves, not on a loop-variable copy
+		for i, assoc := range assocsWithoutRefs {
+			uuids = append(uuids, assoc.HostUUID)
+			assocsWithoutRefs[i].RenewCommandUUID = cmdUUID
+		}
+
+		if err := commander.InstallProfile(ctx, uuids, profile, cmdUUID); err != nil {
+			return ctxerr.Wrapf(ctx, err, "sending InstallProfile command for hosts %v", uuids)
+		}
+
+		if err := ds.SetCommandForPendingSCEPRenewal(ctx, assocsWithoutRefs, cmdUUID); err != nil {
+			return ctxerr.Wrap(ctx, err, "setting pending command associations")
+		}
+	}
+
+	// send individual commands for each host with a reference
+	for _, assoc := range assocsWithRefs {
+		enrollURL, err := apple_mdm.AddEnrollmentRefToFleetURL(appConfig.ServerSettings.ServerURL, assoc.EnrollReference)
+		if err != nil {
+			return ctxerr.Wrap(ctx, err, "adding reference to fleet URL")
+		}
+
+		profile, err := apple_mdm.GenerateEnrollmentProfileMobileconfig(
+			appConfig.OrgInfo.OrgName,
+			enrollURL,
+			config.MDM.AppleSCEPChallenge,
+			mdmPushCertTopic,
+		)
+		if err != nil {
+			return ctxerr.Wrap(ctx, err, "generating enrollment profile for hosts with enroll reference")
+		}
+		cmdUUID := uuid.NewString()
+		if err := commander.InstallProfile(ctx, []string{assoc.HostUUID}, profile, cmdUUID); err != nil {
+			return ctxerr.Wrapf(ctx, err, "sending InstallProfile command for host %s", assoc.HostUUID)
+		}
+
+		if err := ds.SetCommandForPendingSCEPRenewal(ctx, []fleet.SCEPIdentityAssociation{assoc}, cmdUUID); err != nil {
+			return ctxerr.Wrap(ctx, err, "setting pending command associations")
+		}
+	}
+
+	return nil
+}
diff --git a/server/service/apple_mdm_test.go b/server/service/apple_mdm_test.go 
index 660502e8a..c7662f2f6 100644 --- a/server/service/apple_mdm_test.go +++ b/server/service/apple_mdm_test.go @@ -3,17 +3,25 @@ package service import ( "bytes" "context" + "crypto/rand" + "crypto/rsa" "crypto/tls" + "crypto/x509" + "crypto/x509/pkix" + "encoding/asn1" "encoding/base64" "encoding/json" + "encoding/pem" "errors" "fmt" + "math/big" "net/http" "net/http/httptest" "os" "strings" "sync/atomic" "testing" + "time" "github.com/fleetdm/fleet/v4/server/authz" "github.com/fleetdm/fleet/v4/server/config" @@ -23,6 +31,7 @@ import ( fleetmdm "github.com/fleetdm/fleet/v4/server/mdm" apple_mdm "github.com/fleetdm/fleet/v4/server/mdm/apple" "github.com/fleetdm/fleet/v4/server/mdm/apple/mobileconfig" + "github.com/fleetdm/fleet/v4/server/mdm/nanomdm/log/stdlogfmt" "github.com/fleetdm/fleet/v4/server/mdm/nanomdm/mdm" nanomdm_pushsvc "github.com/fleetdm/fleet/v4/server/mdm/nanomdm/push/service" "github.com/fleetdm/fleet/v4/server/mock" @@ -2656,3 +2665,248 @@ func mobileconfigForTestWithContent(outerName, outerIdentifier, innerIdentifier, `, innerName, innerIdentifier, innerType, outerName, outerIdentifier, uuid.New().String())) } + +func generateCertWithAPNsTopic() ([]byte, []byte, error) { + // generate a new private key + priv, err := rsa.GenerateKey(rand.Reader, 2048) + if err != nil { + return nil, nil, err + } + + // set up the OID for UID + oidUID := asn1.ObjectIdentifier{0, 9, 2342, 19200300, 100, 1, 1} + + // set up a certificate template with the required UID in the Subject + notBefore := time.Now() + notAfter := notBefore.Add(365 * 24 * time.Hour) + serialNumber, err := rand.Int(rand.Reader, new(big.Int).Lsh(big.NewInt(1), 128)) + if err != nil { + return nil, nil, err + } + + template := x509.Certificate{ + SerialNumber: serialNumber, + Subject: pkix.Name{ + ExtraNames: []pkix.AttributeTypeAndValue{ + { + Type: oidUID, + Value: "com.apple.mgmt.Example", + }, + }, + }, + NotBefore: notBefore, + NotAfter: notAfter, + + KeyUsage: 
x509.KeyUsageKeyEncipherment | x509.KeyUsageDigitalSignature, + ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth}, + BasicConstraintsValid: true, + } + + // create a self-signed certificate + derBytes, err := x509.CreateCertificate(rand.Reader, &template, &template, &priv.PublicKey, priv) + if err != nil { + return nil, nil, err + } + + // encode to PEM + certPEM := pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: derBytes}) + keyPEM := pem.EncodeToMemory(&pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(priv)}) + + return certPEM, keyPEM, nil +} + +func setupTest(t *testing.T) (context.Context, kitlog.Logger, *mock.Store, *config.FleetConfig, *mock.MDMAppleStore, *apple_mdm.MDMAppleCommander) { + ctx := context.Background() + logger := kitlog.NewNopLogger() + cfg := config.TestConfig() + testCertPEM, testKeyPEM, err := generateCertWithAPNsTopic() + require.NoError(t, err) + config.SetTestMDMConfig(t, &cfg, testCertPEM, testKeyPEM, testBMToken, "../../server/service/testdata") + ds := new(mock.Store) + mdmStorage := &mock.MDMAppleStore{} + pushFactory, _ := newMockAPNSPushProviderFactory() + pusher := nanomdm_pushsvc.New( + mdmStorage, + mdmStorage, + pushFactory, + stdlogfmt.New(), + ) + commander := apple_mdm.NewMDMAppleCommander(mdmStorage, pusher) + + return ctx, logger, ds, &cfg, mdmStorage, commander +} + +func TestRenewSCEPCertificatesMDMConfigNotSet(t *testing.T) { + ctx, logger, ds, cfg, _, commander := setupTest(t) + cfg.MDM = config.MDMConfig{} // ensure MDM is not fully configured + err := RenewSCEPCertificates(ctx, logger, ds, cfg, commander) + require.NoError(t, err) +} + +func TestRenewSCEPCertificatesCommanderNil(t *testing.T) { + ctx, logger, ds, cfg, _, _ := setupTest(t) + err := RenewSCEPCertificates(ctx, logger, ds, cfg, nil) + require.NoError(t, err) +} + +func TestRenewSCEPCertificatesBranches(t *testing.T) { + tests := []struct { + name string + customExpectations func(*testing.T, *mock.Store, 
*config.FleetConfig, *mock.MDMAppleStore, *apple_mdm.MDMAppleCommander) + expectedError bool + }{ + { + name: "No Certs to Renew", + customExpectations: func(t *testing.T, ds *mock.Store, cfg *config.FleetConfig, appleStore *mock.MDMAppleStore, commander *apple_mdm.MDMAppleCommander) { + ds.GetHostCertAssociationsToExpireFunc = func(ctx context.Context, expiryDays int, limit int) ([]fleet.SCEPIdentityAssociation, error) { + return nil, nil + } + }, + expectedError: false, + }, + { + name: "GetHostCertAssociationsToExpire Errors", + customExpectations: func(t *testing.T, ds *mock.Store, cfg *config.FleetConfig, appleStore *mock.MDMAppleStore, commander *apple_mdm.MDMAppleCommander) { + ds.GetHostCertAssociationsToExpireFunc = func(ctx context.Context, expiryDays int, limit int) ([]fleet.SCEPIdentityAssociation, error) { + return nil, errors.New("database error") + } + }, + expectedError: true, + }, + { + name: "AppConfig Errors", + customExpectations: func(t *testing.T, ds *mock.Store, cfg *config.FleetConfig, appleStore *mock.MDMAppleStore, commander *apple_mdm.MDMAppleCommander) { + ds.AppConfigFunc = func(ctx context.Context) (*fleet.AppConfig, error) { + return nil, errors.New("app config error") + } + }, + expectedError: true, + }, + { + name: "InstallProfile for hostsWithoutRefs", + customExpectations: func(t *testing.T, ds *mock.Store, cfg *config.FleetConfig, appleStore *mock.MDMAppleStore, commander *apple_mdm.MDMAppleCommander) { + var wantCommandUUID string + ds.GetHostCertAssociationsToExpireFunc = func(ctx context.Context, expiryDays int, limit int) ([]fleet.SCEPIdentityAssociation, error) { + return []fleet.SCEPIdentityAssociation{{HostUUID: "hostUUID1", EnrollReference: ""}}, nil + } + + appleStore.EnqueueCommandFunc = func(ctx context.Context, id []string, cmd *mdm.Command) (map[string]error, error) { + require.Equal(t, "InstallProfile", cmd.Command.RequestType) + wantCommandUUID = cmd.CommandUUID + return map[string]error{}, nil + } + 
ds.SetCommandForPendingSCEPRenewalFunc = func(ctx context.Context, assocs []fleet.SCEPIdentityAssociation, cmdUUID string) error { + require.Len(t, assocs, 1) + require.Equal(t, "hostUUID1", assocs[0].HostUUID) + require.Equal(t, cmdUUID, wantCommandUUID) + return nil + } + + t.Cleanup(func() { + require.True(t, appleStore.EnqueueCommandFuncInvoked) + require.True(t, ds.SetCommandForPendingSCEPRenewalFuncInvoked) + }) + }, + expectedError: false, + }, + { + name: "InstallProfile for hostsWithoutRefs fails", + customExpectations: func(t *testing.T, ds *mock.Store, cfg *config.FleetConfig, appleStore *mock.MDMAppleStore, commander *apple_mdm.MDMAppleCommander) { + ds.GetHostCertAssociationsToExpireFunc = func(ctx context.Context, expiryDays int, limit int) ([]fleet.SCEPIdentityAssociation, error) { + return []fleet.SCEPIdentityAssociation{{HostUUID: "hostUUID1", EnrollReference: ""}}, nil + } + + appleStore.EnqueueCommandFunc = func(ctx context.Context, id []string, cmd *mdm.Command) (map[string]error, error) { + return map[string]error{}, errors.New("foo") + } + }, + expectedError: true, + }, + { + name: "InstallProfile for hostsWithRefs", + customExpectations: func(t *testing.T, ds *mock.Store, cfg *config.FleetConfig, appleStore *mock.MDMAppleStore, commander *apple_mdm.MDMAppleCommander) { + var wantCommandUUID string + ds.GetHostCertAssociationsToExpireFunc = func(ctx context.Context, expiryDays int, limit int) ([]fleet.SCEPIdentityAssociation, error) { + return []fleet.SCEPIdentityAssociation{{HostUUID: "hostUUID2", EnrollReference: "ref1"}}, nil + } + appleStore.EnqueueCommandFunc = func(ctx context.Context, id []string, cmd *mdm.Command) (map[string]error, error) { + require.Equal(t, "InstallProfile", cmd.Command.RequestType) + wantCommandUUID = cmd.CommandUUID + return map[string]error{}, nil + } + ds.SetCommandForPendingSCEPRenewalFunc = func(ctx context.Context, assocs []fleet.SCEPIdentityAssociation, cmdUUID string) error { + require.Len(t, assocs, 1) + 
require.Equal(t, "hostUUID2", assocs[0].HostUUID) + require.Equal(t, cmdUUID, wantCommandUUID) + return nil + } + t.Cleanup(func() { + require.True(t, appleStore.EnqueueCommandFuncInvoked) + require.True(t, ds.SetCommandForPendingSCEPRenewalFuncInvoked) + }) + }, + expectedError: false, + }, + { + name: "InstallProfile for hostsWithRefs fails", + customExpectations: func(t *testing.T, ds *mock.Store, cfg *config.FleetConfig, appleStore *mock.MDMAppleStore, commander *apple_mdm.MDMAppleCommander) { + ds.GetHostCertAssociationsToExpireFunc = func(ctx context.Context, expiryDays int, limit int) ([]fleet.SCEPIdentityAssociation, error) { + return []fleet.SCEPIdentityAssociation{{HostUUID: "hostUUID1", EnrollReference: "ref1"}}, nil + } + + appleStore.EnqueueCommandFunc = func(ctx context.Context, id []string, cmd *mdm.Command) (map[string]error, error) { + return map[string]error{}, errors.New("foo") + } + }, + expectedError: true, + }, + } + + for _, tc := range tests { + t.Run(tc.name, func(t *testing.T) { + ctx, logger, ds, cfg, appleStorage, commander := setupTest(t) + + ds.AppConfigFunc = func(ctx context.Context) (*fleet.AppConfig, error) { + appCfg := &fleet.AppConfig{} + appCfg.OrgInfo.OrgName = "fl33t" + appCfg.ServerSettings.ServerURL = "https://foo.example.com" + return appCfg, nil + } + + ds.GetHostCertAssociationsToExpireFunc = func(ctx context.Context, expiryDays int, limit int) ([]fleet.SCEPIdentityAssociation, error) { + return []fleet.SCEPIdentityAssociation{}, nil + } + + ds.SetCommandForPendingSCEPRenewalFunc = func(ctx context.Context, assocs []fleet.SCEPIdentityAssociation, cmdUUID string) error { + return nil + } + + appleStorage.RetrievePushInfoFunc = func(ctx context.Context, targets []string) (map[string]*mdm.Push, error) { + pushes := make(map[string]*mdm.Push, len(targets)) + for _, uuid := range targets { + pushes[uuid] = &mdm.Push{ + PushMagic: "magic" + uuid, + Token: []byte("token" + uuid), + Topic: "topic" + uuid, + } + } + + return 
pushes, nil + } + + appleStorage.RetrievePushCertFunc = func(ctx context.Context, topic string) (*tls.Certificate, string, error) { + cert, err := tls.LoadX509KeyPair("./testdata/server.pem", "./testdata/server.key") + return &cert, "", err + } + + tc.customExpectations(t, ds, cfg, appleStorage, commander) + + err := RenewSCEPCertificates(ctx, logger, ds, cfg, commander) + if tc.expectedError { + require.Error(t, err) + } else { + require.NoError(t, err) + } + }) + } +} diff --git a/server/service/async/async.go b/server/service/async/async.go index 8fe02692e..5f341b09d 100644 --- a/server/service/async/async.go +++ b/server/service/async/async.go @@ -10,7 +10,6 @@ import ( "github.com/fleetdm/fleet/v4/server/contexts/ctxerr" "github.com/fleetdm/fleet/v4/server/datastore/redis" "github.com/fleetdm/fleet/v4/server/fleet" - "github.com/getsentry/sentry-go" kitlog "github.com/go-kit/kit/log" "github.com/go-kit/kit/log/level" redigo "github.com/gomodule/redigo/redis" @@ -47,7 +46,6 @@ func NewTask(ds fleet.Datastore, pool fleet.RedisPool, clck clock.Clock, conf co func (t *Task) StartCollectors(ctx context.Context, logger kitlog.Logger) { collectorErrHandler := func(name string, err error) { level.Error(logger).Log("err", fmt.Sprintf("%s collector", name), "details", err) - sentry.CaptureException(err) ctxerr.Handle(ctx, err) } diff --git a/server/service/handler.go b/server/service/handler.go index 6c917312d..5df68c0b8 100644 --- a/server/service/handler.go +++ b/server/service/handler.go @@ -22,6 +22,8 @@ import ( "github.com/fleetdm/fleet/v4/server/mdm/nanomdm/service/multi" "github.com/fleetdm/fleet/v4/server/mdm/nanomdm/service/nanomdm" nanomdm_storage "github.com/fleetdm/fleet/v4/server/mdm/nanomdm/storage" + scep_depot "github.com/fleetdm/fleet/v4/server/mdm/scep/depot" + scepserver "github.com/fleetdm/fleet/v4/server/mdm/scep/server" "github.com/fleetdm/fleet/v4/server/service/middleware/authzcheck" 
"github.com/fleetdm/fleet/v4/server/service/middleware/mdmconfigured" "github.com/fleetdm/fleet/v4/server/service/middleware/ratelimit" @@ -30,8 +32,6 @@ import ( "github.com/go-kit/kit/log/level" kithttp "github.com/go-kit/kit/transport/http" "github.com/gorilla/mux" - scep_depot "github.com/micromdm/scep/v2/depot" - scepserver "github.com/micromdm/scep/v2/server" "github.com/prometheus/client_golang/prometheus" "github.com/prometheus/client_golang/prometheus/promhttp" "github.com/throttled/throttled/v2" @@ -131,7 +131,7 @@ func MakeHandler( setRequestsContexts(svc), ), kithttp.ServerErrorHandler(&errorHandler{logger}), - kithttp.ServerErrorEncoder(encodeErrorAndTrySentry(config.Sentry.Dsn != "")), + kithttp.ServerErrorEncoder(encodeError), kithttp.ServerAfter( kithttp.SetContentType("application/json; charset=utf-8"), logRequestEnd(logger), diff --git a/server/service/integration_mdm_test.go b/server/service/integration_mdm_test.go index 643007a84..368d567c3 100644 --- a/server/service/integration_mdm_test.go +++ b/server/service/integration_mdm_test.go @@ -85,6 +85,7 @@ type integrationMDMTestSuite struct { onDEPScheduleDone func() // function called when depSchedule.Trigger() job completed mdmStorage *mysql.NanoMDMStorage worker *worker.Worker + mdmCommander *apple_mdm.MDMAppleCommander } func (s *integrationMDMTestSuite) SetupSuite() { @@ -200,6 +201,7 @@ func (s *integrationMDMTestSuite) SetupSuite() { s.depSchedule = depSchedule s.profileSchedule = profileSchedule s.mdmStorage = mdmStorage + s.mdmCommander = mdmCommander macosJob := &worker.MacosSetupAssistant{ Datastore: s.ds, @@ -7092,8 +7094,13 @@ func (s *integrationMDMTestSuite) downloadAndVerifyEnrollmentProfile(path string require.NoError(t, err) require.Equal(t, len(body), headerLen) + return s.verifyEnrollmentProfile(body, "") +} + +func (s *integrationMDMTestSuite) verifyEnrollmentProfile(rawProfile []byte, enrollmentRef string) *enrollmentProfile { + t := s.T() var profile enrollmentProfile - 
require.NoError(t, plist.Unmarshal(body, &profile)) + require.NoError(t, plist.Unmarshal(rawProfile, &profile)) for _, p := range profile.PayloadContent { switch p.PayloadType { @@ -7101,8 +7108,10 @@ func (s *integrationMDMTestSuite) downloadAndVerifyEnrollmentProfile(path string require.Equal(t, s.getConfig().ServerSettings.ServerURL+apple_mdm.SCEPPath, p.PayloadContent.URL) require.Equal(t, s.fleetCfg.MDM.AppleSCEPChallenge, p.PayloadContent.Challenge) case "com.apple.mdm": - // Use Contains as the url may have query params require.Contains(t, p.ServerURL, s.getConfig().ServerSettings.ServerURL+apple_mdm.MDMPath) + if enrollmentRef != "" { + require.Contains(t, p.ServerURL, enrollmentRef) + } default: require.Failf(t, "unrecognized payload type in enrollment profile: %s", p.PayloadType) } @@ -11323,3 +11332,112 @@ func (s *integrationMDMTestSuite) TestZCustomConfigurationWebURL() { func (s *integrationMDMTestSuite) TestGetManualEnrollmentProfile() { s.downloadAndVerifyEnrollmentProfile("/api/latest/fleet/mdm/manual_enrollment_profile") } + +func (s *integrationMDMTestSuite) TestSCEPCertExpiration() { + t := s.T() + ctx := context.Background() + // ensure there's a token for automatic enrollments + s.mockDEPResponse(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) { + w.WriteHeader(http.StatusOK) + _, _ = w.Write([]byte(`{"auth_session_token": "xyz"}`)) + })) + s.runDEPSchedule() + + // add a device that's manually enrolled + desktopToken := uuid.New().String() + manualHost := createOrbitEnrolledHost(t, "darwin", "h1", s.ds) + err := s.ds.SetOrUpdateDeviceAuthToken(context.Background(), manualHost.ID, desktopToken) + require.NoError(t, err) + manualEnrolledDevice := mdmtest.NewTestMDMClientAppleDesktopManual(s.server.URL, desktopToken) + manualEnrolledDevice.UUID = manualHost.UUID + err = manualEnrolledDevice.Enroll() + require.NoError(t, err) + + // add a device that's automatically enrolled + automaticHost := createOrbitEnrolledHost(t, "darwin", 
"h2", s.ds) + depURLToken := loadEnrollmentProfileDEPToken(t, s.ds) + automaticEnrolledDevice := mdmtest.NewTestMDMClientAppleDEP(s.server.URL, depURLToken) + automaticEnrolledDevice.UUID = automaticHost.UUID + automaticEnrolledDevice.SerialNumber = automaticHost.HardwareSerial + err = automaticEnrolledDevice.Enroll() + require.NoError(t, err) + + // add a device that's automatically enrolled with a server ref + automaticHostWithRef := createOrbitEnrolledHost(t, "darwin", "h3", s.ds) + automaticEnrolledDeviceWithRef := mdmtest.NewTestMDMClientAppleDEP(s.server.URL, depURLToken) + automaticEnrolledDeviceWithRef.UUID = automaticHostWithRef.UUID + automaticEnrolledDeviceWithRef.SerialNumber = automaticHostWithRef.HardwareSerial + err = automaticEnrolledDeviceWithRef.Enroll() + require.NoError(t, s.ds.SetOrUpdateMDMData(ctx, automaticHostWithRef.ID, false, true, s.server.URL, true, fleet.WellKnownMDMFleet, "foo")) + require.NoError(t, err) + + cert, key, err := generateCertWithAPNsTopic() + require.NoError(t, err) + fleetCfg := config.TestConfig() + config.SetTestMDMConfig(s.T(), &fleetCfg, cert, key, testBMToken, "") + logger := kitlog.NewJSONLogger(os.Stdout) + + // run without expired certs, no command enqueued + err = RenewSCEPCertificates(ctx, logger, s.ds, &fleetCfg, s.mdmCommander) + require.NoError(t, err) + cmd, err := manualEnrolledDevice.Idle() + require.NoError(t, err) + require.Nil(t, cmd) + + cmd, err = automaticEnrolledDevice.Idle() + require.NoError(t, err) + require.Nil(t, cmd) + + cmd, err = automaticEnrolledDeviceWithRef.Idle() + require.NoError(t, err) + require.Nil(t, cmd) + + // expire all the certs we just created + mysql.ExecAdhocSQL(t, s.ds, func(q sqlx.ExtContext) error { + _, err := q.ExecContext(ctx, ` + UPDATE nano_cert_auth_associations + SET cert_not_valid_after = DATE_SUB(CURDATE(), INTERVAL 1 YEAR) + WHERE id IN (?, ?, ?) 
+	`, manualHost.UUID, automaticHost.UUID, automaticHostWithRef.UUID) + return err + }) + + // now that the certs are expired, the next renewal run should enqueue re-enrollment commands + err = RenewSCEPCertificates(ctx, logger, s.ds, &fleetCfg, s.mdmCommander) + require.NoError(t, err) + + checkRenewCertCommand := func(device *mdmtest.TestAppleMDMClient, enrollRef string) { + var renewCmd *micromdm.CommandPayload + cmd, err := device.Idle() + require.NoError(t, err) + for cmd != nil { + if cmd.Command.RequestType == "InstallProfile" { + renewCmd = cmd + } + cmd, err = device.Acknowledge(cmd.CommandUUID) + require.NoError(t, err) + } + require.NotNil(t, renewCmd) + s.verifyEnrollmentProfile(renewCmd.Command.InstallProfile.Payload, enrollRef) + } + + checkRenewCertCommand(manualEnrolledDevice, "") + checkRenewCertCommand(automaticEnrolledDevice, "") + checkRenewCertCommand(automaticEnrolledDeviceWithRef, "foo") + + // another cron run shouldn't enqueue more commands + err = RenewSCEPCertificates(ctx, logger, s.ds, &fleetCfg, s.mdmCommander) + require.NoError(t, err) + + cmd, err = manualEnrolledDevice.Idle() + require.NoError(t, err) + require.Nil(t, cmd) + + cmd, err = automaticEnrolledDevice.Idle() + require.NoError(t, err) + require.Nil(t, cmd) + + cmd, err = automaticEnrolledDeviceWithRef.Idle() + require.NoError(t, err) + require.Nil(t, cmd) +} diff --git a/server/service/mdm_test.go b/server/service/mdm_test.go index 06b8e1385..d8a6f783e 100644 --- a/server/service/mdm_test.go +++ b/server/service/mdm_test.go @@ -20,11 +20,11 @@ import ( "github.com/fleetdm/fleet/v4/server/contexts/license" "github.com/fleetdm/fleet/v4/server/contexts/viewer" "github.com/fleetdm/fleet/v4/server/fleet" + "github.com/fleetdm/fleet/v4/server/mdm/scep/cryptoutil/x509util" "github.com/fleetdm/fleet/v4/server/mock" "github.com/fleetdm/fleet/v4/server/ptr" "github.com/fleetdm/fleet/v4/server/test" "github.com/google/uuid" - "github.com/micromdm/scep/v2/cryptoutil/x509util" "github.com/stretchr/testify/require" ) @@ 
-1339,7 +1339,8 @@ func TestMDMBatchSetProfiles(t *testing.T) { nil, nil, []fleet.MDMProfileBatchPayload{ - {Name: "foo", Contents: []byte(` + { + Name: "foo", Contents: []byte(` @@ -1372,7 +1373,8 @@ func TestMDMBatchSetProfiles(t *testing.T) { 1 `), - }}, + }, + }, "unsupported PayloadType(s)", }, } diff --git a/server/service/orbit_client.go b/server/service/orbit_client.go index 5220eaf96..f171de55c 100644 --- a/server/service/orbit_client.go +++ b/server/service/orbit_client.go @@ -7,6 +7,7 @@ import ( "errors" "fmt" "io/fs" + "net" "net/http" "os" "path/filepath" @@ -35,7 +36,9 @@ type OrbitClient struct { lastRecordedErrMu sync.Mutex lastRecordedErr error - configCache configCache + configCache configCache + onGetConfigErrFns *OnGetConfigErrFuncs + lastNetErrOnGetConfigLogged time.Time // TestNodeKey is used for testing only. TestNodeKey string @@ -84,11 +87,26 @@ func (oc *OrbitClient) request(verb string, path string, params interface{}, res return nil } +// OnGetConfigErrFuncs defines functions to be executed on GetConfig errors. +type OnGetConfigErrFuncs struct { + // OnNetErrFunc receives network and 5XX errors on GetConfig requests. + // These errors are rate limited to once every 5 minutes. + OnNetErrFunc func(err error) + // DebugErrFunc receives all errors on GetConfig requests. + DebugErrFunc func(err error) +} + +var ( + netErrInterval = 5 * time.Minute + configRetryOnNetworkError = 30 * time.Second +) + // NewOrbitClient creates a new OrbitClient. // // - rootDir is the Orbit's root directory, where the Orbit node key is loaded-from/stored. // - addr is the address of the Fleet server. // - orbitHostInfo is the host system information used for enrolling to Fleet. +// - onGetConfigErrFns can be used to handle errors in the GetConfig request. 
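The `onGetConfigErrFns` hooks documented above pair an always-on debug callback with a network-error callback that is rate limited to once every 5 minutes. A minimal standalone sketch of that gating logic follows; the `OnGetConfigErrFuncs` struct name and the 5-minute interval match the diff, while the `rateLimitedNotifier` wrapper and `demo` driver are illustrative assumptions, not Fleet code:

```go
package main

import (
	"fmt"
	"time"
)

// OnGetConfigErrFuncs mirrors the struct introduced in orbit_client.go.
type OnGetConfigErrFuncs struct {
	OnNetErrFunc func(err error) // rate limited to once per interval
	DebugErrFunc func(err error) // receives every error
}

// rateLimitedNotifier is an illustrative wrapper (not in the diff) that
// reproduces the once-per-netErrInterval gating GetConfig applies.
type rateLimitedNotifier struct {
	fns        *OnGetConfigErrFuncs
	interval   time.Duration
	lastLogged time.Time
}

func (n *rateLimitedNotifier) notify(err error, now time.Time) {
	if n.fns == nil {
		return
	}
	if n.fns.DebugErrFunc != nil {
		n.fns.DebugErrFunc(err) // fires on every error
	}
	if n.fns.OnNetErrFunc != nil && now.After(n.lastLogged.Add(n.interval)) {
		n.fns.OnNetErrFunc(err) // fires at most once per interval
		n.lastLogged = now
	}
}

// demo feeds the notifier three errors: at t0, t0+1m, and t0+6m.
func demo() (netErrs, debugErrs int) {
	n := &rateLimitedNotifier{
		fns: &OnGetConfigErrFuncs{
			OnNetErrFunc: func(error) { netErrs++ },
			DebugErrFunc: func(error) { debugErrs++ },
		},
		interval: 5 * time.Minute, // matches netErrInterval in the diff
	}
	t0 := time.Now()
	err := fmt.Errorf("dial tcp: connection refused")
	n.notify(err, t0)                    // both callbacks fire
	n.notify(err, t0.Add(time.Minute))   // debug only: within the interval
	n.notify(err, t0.Add(6*time.Minute)) // both fire again
	return netErrs, debugErrs
}

func main() {
	netErrs, debugErrs := demo()
	fmt.Println(netErrs, debugErrs) // 2 3
}
```

The real `GetConfig` takes the current time implicitly via `time.Now()`; passing `now` explicitly here just makes the gating easy to exercise deterministically.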
func NewOrbitClient( rootDir string, addr string, @@ -97,6 +115,7 @@ func NewOrbitClient( enrollSecret string, fleetClientCert *tls.Certificate, orbitHostInfo fleet.OrbitHostInfo, + onGetConfigErrFns *OnGetConfigErrFuncs, ) (*OrbitClient, error) { orbitCapabilities := fleet.CapabilityMap{} bc, err := newBaseClient(addr, insecureSkipVerify, rootCA, "", fleetClientCert, orbitCapabilities) @@ -106,25 +125,51 @@ func NewOrbitClient( nodeKeyFilePath := filepath.Join(rootDir, constant.OrbitNodeKeyFileName) return &OrbitClient{ - nodeKeyFilePath: nodeKeyFilePath, - baseClient: bc, - enrollSecret: enrollSecret, - hostInfo: orbitHostInfo, - enrolled: false, + nodeKeyFilePath: nodeKeyFilePath, + baseClient: bc, + enrollSecret: enrollSecret, + hostInfo: orbitHostInfo, + enrolled: false, + onGetConfigErrFns: onGetConfigErrFns, }, nil } // GetConfig returns the Orbit config fetched from Fleet server for this instance of OrbitClient. -// Since this method is called in multiple places, we use a cache with configCacheTTL time-to-live to reduce traffic to the Fleet server. +// Since this method is called in multiple places, we use a cache with configCacheTTL time-to-live +// to reduce traffic to the Fleet server. +// Upon network errors, this method will retry the get config request (every 30 seconds). func (oc *OrbitClient) GetConfig() (*fleet.OrbitConfig, error) { oc.configCache.mu.Lock() defer oc.configCache.mu.Unlock() + // If time-to-live passed, we update the config cache now := time.Now() if now.After(oc.configCache.lastUpdated.Add(configCacheTTL)) { verb, path := "POST", "/api/fleet/orbit/config" - var resp fleet.OrbitConfig - err := oc.authenticatedRequest(verb, path, &orbitGetConfigRequest{}, &resp) + var ( + resp fleet.OrbitConfig + err error + ) + // Retry until we don't get a network error or a 5XX error. 
+ _ = retry.Do(func() error { + err = oc.authenticatedRequest(verb, path, &orbitGetConfigRequest{}, &resp) + var ( + netErr net.Error + statusCodeErr *statusCodeErr + ) + if err != nil && oc.onGetConfigErrFns != nil && oc.onGetConfigErrFns.DebugErrFunc != nil { + oc.onGetConfigErrFns.DebugErrFunc(err) + } + if errors.As(err, &netErr) || (errors.As(err, &statusCodeErr) && statusCodeErr.code >= 500) { + now := time.Now() + if oc.onGetConfigErrFns != nil && oc.onGetConfigErrFns.OnNetErrFunc != nil && now.After(oc.lastNetErrOnGetConfigLogged.Add(netErrInterval)) { + oc.onGetConfigErrFns.OnNetErrFunc(err) + oc.lastNetErrOnGetConfigLogged = now + } + return err // retry on network or server 5XX errors + } + return nil + }, retry.WithInterval(configRetryOnNetworkError)) oc.configCache.config = &resp oc.configCache.err = err oc.configCache.lastUpdated = now diff --git a/server/service/schedule/schedule.go b/server/service/schedule/schedule.go index 5bc139ee0..eabe66c39 100644 --- a/server/service/schedule/schedule.go +++ b/server/service/schedule/schedule.go @@ -12,7 +12,6 @@ import ( "github.com/fleetdm/fleet/v4/server/contexts/ctxerr" "github.com/fleetdm/fleet/v4/server/fleet" - "github.com/getsentry/sentry-go" "github.com/go-kit/kit/log" "github.com/go-kit/kit/log/level" ) @@ -166,7 +165,6 @@ func (s *Schedule) Start() { prevScheduledRun, _, err := s.getLatestStats() if err != nil { level.Error(s.logger).Log("err", "start schedule", "details", err) - sentry.CaptureException(err) ctxerr.Handle(s.ctx, err) } s.setIntervalStartedAt(prevScheduledRun.CreatedAt) @@ -208,7 +206,6 @@ func (s *Schedule) Start() { prevScheduledRun, _, err := s.getLatestStats() if err != nil { level.Error(s.logger).Log("err", "trigger get cron stats", "details", err) - sentry.CaptureException(err) ctxerr.Handle(s.ctx, err) } @@ -241,7 +238,6 @@ func (s *Schedule) Start() { prevScheduledRun, prevTriggeredRun, err := s.getLatestStats() if err != nil { level.Error(s.logger).Log("err", "get cron 
stats", "details", err) - sentry.CaptureException(err) ctxerr.Handle(s.ctx, err) // skip ahead to the next interval schedTicker.Reset(schedInterval) @@ -331,7 +327,7 @@ func (s *Schedule) Start() { newInterval, err := s.configReloadIntervalFn(s.ctx) if err != nil { level.Error(s.logger).Log("err", "schedule interval config reload failed", "details", err) - sentry.CaptureException(err) + ctxerr.Handle(s.ctx, err) continue } @@ -411,7 +407,6 @@ func (s *Schedule) runWithStats(statsType fleet.CronStatsType) { statsID, err := s.insertStats(statsType, fleet.CronStatsStatusPending) if err != nil { level.Error(s.logger).Log("err", fmt.Sprintf("insert cron stats %s", s.name), "details", err) - sentry.CaptureException(err) ctxerr.Handle(s.ctx, err) } level.Info(s.logger).Log("status", "pending") @@ -420,7 +415,6 @@ func (s *Schedule) runWithStats(statsType fleet.CronStatsType) { if err := s.updateStats(statsID, fleet.CronStatsStatusCompleted); err != nil { level.Error(s.logger).Log("err", fmt.Sprintf("update cron stats %s", s.name), "details", err) - sentry.CaptureException(err) ctxerr.Handle(s.ctx, err) } level.Info(s.logger).Log("status", "completed") @@ -432,7 +426,6 @@ func (s *Schedule) runAllJobs() { level.Debug(s.logger).Log("msg", "starting", "jobID", job.ID) if err := runJob(s.ctx, job.Fn); err != nil { level.Error(s.logger).Log("err", "running job", "details", err, "jobID", job.ID) - sentry.CaptureException(err) ctxerr.Handle(s.ctx, err) } } @@ -507,7 +500,7 @@ func (s *Schedule) acquireLock() bool { ok, err := s.locker.Lock(s.ctx, s.getLockName(), s.instanceID, s.getSchedInterval()) if err != nil { level.Error(s.logger).Log("msg", "lock failed", "err", err) - sentry.CaptureException(err) + ctxerr.Handle(s.ctx, err) return false } if !ok { @@ -521,7 +514,7 @@ func (s *Schedule) releaseLock() { err := s.locker.Unlock(s.ctx, s.getLockName(), s.instanceID) if err != nil { level.Error(s.logger).Log("msg", "unlock failed", "err", err) - sentry.CaptureException(err) + 
ctxerr.Handle(s.ctx, err) } } diff --git a/server/service/testing_utils.go b/server/service/testing_utils.go index 4f424deaa..4cfc23f2e 100644 --- a/server/service/testing_utils.go +++ b/server/service/testing_utils.go @@ -26,6 +26,7 @@ import ( "github.com/fleetdm/fleet/v4/server/mdm/nanomdm/mdm" "github.com/fleetdm/fleet/v4/server/mdm/nanomdm/push" nanomdm_push "github.com/fleetdm/fleet/v4/server/mdm/nanomdm/push" + scep_depot "github.com/fleetdm/fleet/v4/server/mdm/scep/depot" nanodep_mock "github.com/fleetdm/fleet/v4/server/mock/nanodep" "github.com/fleetdm/fleet/v4/server/ptr" "github.com/fleetdm/fleet/v4/server/service/async" @@ -35,7 +36,6 @@ import ( kitlog "github.com/go-kit/kit/log" "github.com/google/uuid" nanodep_storage "github.com/micromdm/nanodep/storage" - scep_depot "github.com/micromdm/scep/v2/depot" "github.com/stretchr/testify/assert" "github.com/stretchr/testify/require" "github.com/throttled/throttled/v2" diff --git a/server/service/transport_error.go b/server/service/transport_error.go index 4de130f6c..d7ebf7a86 100644 --- a/server/service/transport_error.go +++ b/server/service/transport_error.go @@ -4,16 +4,12 @@ import ( "context" "encoding/json" "errors" - "fmt" "net" "net/http" "strconv" "github.com/fleetdm/fleet/v4/server/contexts/ctxerr" - "github.com/fleetdm/fleet/v4/server/contexts/host" - "github.com/fleetdm/fleet/v4/server/contexts/viewer" "github.com/fleetdm/fleet/v4/server/fleet" - "github.com/getsentry/sentry-go" kithttp "github.com/go-kit/kit/transport/http" "github.com/go-sql-driver/mysql" ) @@ -72,16 +68,6 @@ type conflictErrorInterface interface { IsConflict() bool } -func encodeErrorAndTrySentry(sentryEnabled bool) func(ctx context.Context, err error, w http.ResponseWriter) { - if !sentryEnabled { - return encodeError - } - return func(ctx context.Context, err error, w http.ResponseWriter) { - encodeError(ctx, err, w) - sendToSentry(ctx, err) - } -} - // encode error and status header to the client func encodeError(ctx 
context.Context, err error, w http.ResponseWriter) { ctxerr.Handle(ctx, err) @@ -223,21 +209,3 @@ func encodeError(ctx context.Context, err error, w http.ResponseWriter) { enc.Encode(jsonErr) //nolint:errcheck } - -func sendToSentry(ctx context.Context, err error) { - v, haveUser := viewer.FromContext(ctx) - h, haveHost := host.FromContext(ctx) - localHub := sentry.CurrentHub().Clone() - if haveUser { - localHub.ConfigureScope(func(scope *sentry.Scope) { - scope.SetTag("email", v.User.Email) - scope.SetTag("user_id", fmt.Sprint(v.User.ID)) - }) - } else if haveHost { - localHub.ConfigureScope(func(scope *sentry.Scope) { - scope.SetTag("hostname", h.Hostname) - scope.SetTag("host_id", fmt.Sprint(h.ID)) - }) - } - localHub.CaptureException(err) -} diff --git a/terraform/byo-vpc/byo-db/byo-ecs/variables.tf b/terraform/byo-vpc/byo-db/byo-ecs/variables.tf index 164dd33ac..bf017d4a2 100644 --- a/terraform/byo-vpc/byo-db/byo-ecs/variables.tf +++ b/terraform/byo-vpc/byo-db/byo-ecs/variables.tf @@ -13,7 +13,7 @@ variable "fleet_config" { type = object({ mem = optional(number, 4096) cpu = optional(number, 512) - image = optional(string, "fleetdm/fleet:v4.44.1") + image = optional(string, "fleetdm/fleet:v4.45.0") family = optional(string, "fleet") sidecars = optional(list(any), []) depends_on = optional(list(any), []) diff --git a/terraform/byo-vpc/byo-db/variables.tf b/terraform/byo-vpc/byo-db/variables.tf index cf2e994f6..04ec0c443 100644 --- a/terraform/byo-vpc/byo-db/variables.tf +++ b/terraform/byo-vpc/byo-db/variables.tf @@ -74,7 +74,7 @@ variable "fleet_config" { type = object({ mem = optional(number, 4096) cpu = optional(number, 512) - image = optional(string, "fleetdm/fleet:v4.44.1") + image = optional(string, "fleetdm/fleet:v4.45.0") family = optional(string, "fleet") sidecars = optional(list(any), []) depends_on = optional(list(any), []) diff --git a/terraform/byo-vpc/example/main.tf b/terraform/byo-vpc/example/main.tf index 0ca4e4883..6b607c1fa 100644 --- 
a/terraform/byo-vpc/example/main.tf +++ b/terraform/byo-vpc/example/main.tf @@ -17,7 +17,7 @@ provider "aws" { } locals { - fleet_image = "fleetdm/fleet:v4.44.1" + fleet_image = "fleetdm/fleet:v4.45.0" domain_name = "example.com" } diff --git a/terraform/byo-vpc/variables.tf b/terraform/byo-vpc/variables.tf index 91182619b..1e1434e4c 100644 --- a/terraform/byo-vpc/variables.tf +++ b/terraform/byo-vpc/variables.tf @@ -165,7 +165,7 @@ variable "fleet_config" { type = object({ mem = optional(number, 4096) cpu = optional(number, 512) - image = optional(string, "fleetdm/fleet:v4.44.1") + image = optional(string, "fleetdm/fleet:v4.45.0") family = optional(string, "fleet") sidecars = optional(list(any), []) depends_on = optional(list(any), []) diff --git a/terraform/variables.tf b/terraform/variables.tf index 0e5f1d4ca..ebd0fb4c2 100644 --- a/terraform/variables.tf +++ b/terraform/variables.tf @@ -215,7 +215,7 @@ variable "fleet_config" { type = object({ mem = optional(number, 4096) cpu = optional(number, 512) - image = optional(string, "fleetdm/fleet:v4.44.1") + image = optional(string, "fleetdm/fleet:v4.45.0") family = optional(string, "fleet") sidecars = optional(list(any), []) depends_on = optional(list(any), []) diff --git a/tools/fleetctl-npm/package.json b/tools/fleetctl-npm/package.json index 80bccf314..3ad54a3e9 100644 --- a/tools/fleetctl-npm/package.json +++ b/tools/fleetctl-npm/package.json @@ -1,6 +1,6 @@ { "name": "fleetctl", - "version": "v4.44.1", + "version": "v4.45.0", "description": "Installer for the fleetctl CLI tool", "bin": { "fleetctl": "./run.js" diff --git a/tools/sentry-self-hosted/README.md b/tools/sentry-self-hosted/README.md new file mode 100644 index 000000000..d2b248ef8 --- /dev/null +++ b/tools/sentry-self-hosted/README.md @@ -0,0 +1,17 @@ +# Running self-hosted Sentry + +It may be useful to run a local, self-hosted version of Sentry for tests or to aid in monitoring a local development environment. 
+ +It is possible to do so by following the [steps documented on Sentry's website](https://develop.sentry.dev/self-hosted/). + +While Sentry's documentation is canonical, the high-level steps are documented here and annotated with Fleet-specific information: + +1. `git clone` the [Sentry self-hosted repository](https://github.com/getsentry/self-hosted) +2. `git checkout` a specific version (e.g. `git checkout 24.2.0`) +3. Run the `sudo ./install.sh` script (you may want to review the install scripts first; this takes a while to complete, maybe 30 minutes or so, and you'll be prompted to create a Sentry user and password towards the end) +4. Once done, you should be able to run `docker-compose up -d` to bring up the self-hosted Sentry stack (that's a lot of containers to start) +5. Once the stack is up, you should be able to log in at `http://localhost:9000` (on Google Chrome, after login I was met with a CSRF protection failure page, but it worked on Firefox) +6. In the "Issues" page, you should see a button labelled "Installation Instructions"; clicking on it will bring up a page with the DSN that you can copy to use with Fleet (e.g. `http://@localhost:9000/1`) +7. Start `fleet serve`, passing the `--sentry_dsn http://` flag to enable Sentry + +You may now log in to Fleet, and any errors should show up in this local self-hosted version of Sentry. diff --git a/tools/tuf/test/README.md index 7ca043c06..de54e4f0d 100644 --- a/tools/tuf/test/README.md +++ b/tools/tuf/test/README.md @@ -35,6 +35,7 @@ GENERATE_MSI=1 \ ENROLL_SECRET=6/EzU/+jPkxfTamWnRv1+IJsO4T9Etju \ FLEET_DESKTOP=1 \ USE_FLEET_SERVER_CERTIFICATE=1 \ +DEBUG=1 \ ./tools/tuf/test/main.sh ``` diff --git a/tools/tuf/test/gen_pkgs.sh b/tools/tuf/test/gen_pkgs.sh index 85dc91e1c..fc23f43c1 100755 --- a/tools/tuf/test/gen_pkgs.sh +++ b/tools/tuf/test/gen_pkgs.sh @@ -29,6 +29,7 @@ set -ex # USE_FLEET_SERVER_CERTIFICATE: Whether to use a custom certificate bundle.
# USE_UPDATE_SERVER_CERTIFICATE: Whether to use a custom certificate bundle. # FLEET_DESKTOP_ALTERNATIVE_BROWSER_HOST: Alternative host:port to use for the Fleet Desktop browser URLs. +# DEBUG: Whether or not to build the package with --debug. if [ -n "$GENERATE_PKG" ]; then echo "Generating pkg..." @@ -40,7 +41,7 @@ if [ -n "$GENERATE_PKG" ]; then ${USE_FLEET_SERVER_CERTIFICATE:+--fleet-certificate=./tools/osquery/fleet.crt} \ ${USE_UPDATE_SERVER_CERTIFICATE:+--update-tls-certificate=./tools/osquery/fleet.crt} \ ${INSECURE:+--insecure} \ - --debug \ + ${DEBUG:+--debug} \ --update-roots="$ROOT_KEYS" \ --update-interval=10s \ --disable-open-folder \ @@ -64,7 +65,7 @@ if [ -n "$GENERATE_DEB" ]; then ${USE_FLEET_SERVER_CERTIFICATE:+--fleet-certificate=./tools/osquery/fleet.crt} \ ${USE_UPDATE_SERVER_CERTIFICATE:+--update-tls-certificate=./tools/osquery/fleet.crt} \ ${INSECURE:+--insecure} \ - --debug \ + ${DEBUG:+--debug} \ --update-roots="$ROOT_KEYS" \ --update-interval=10s \ --disable-open-folder \ @@ -87,7 +88,7 @@ if [ -n "$GENERATE_RPM" ]; then ${USE_FLEET_SERVER_CERTIFICATE:+--fleet-certificate=./tools/osquery/fleet.crt} \ ${USE_UPDATE_SERVER_CERTIFICATE:+--update-tls-certificate=./tools/osquery/fleet.crt} \ ${INSECURE:+--insecure} \ - --debug \ + ${DEBUG:+--debug} \ --update-roots="$ROOT_KEYS" \ --update-interval=10s \ --disable-open-folder \ @@ -110,7 +111,7 @@ if [ -n "$GENERATE_MSI" ]; then ${USE_FLEET_SERVER_CERTIFICATE:+--fleet-certificate=./tools/osquery/fleet.crt} \ ${USE_UPDATE_SERVER_CERTIFICATE:+--update-tls-certificate=./tools/osquery/fleet.crt} \ ${INSECURE:+--insecure} \ - --debug \ + ${DEBUG:+--debug} \ --update-roots="$ROOT_KEYS" \ --update-interval=10s \ --disable-open-folder \ diff --git a/website/assets/images/articles/fleet-4.45.0-1600x900@2x.png b/website/assets/images/articles/fleet-4.45.0-1600x900@2x.png new file mode 100644 index 000000000..c64ceb4de Binary files /dev/null and b/website/assets/images/articles/fleet-4.45.0-1600x900@2x.png 
differ diff --git a/website/assets/images/device-management-transparency-380x320@2x.png b/website/assets/images/device-management-transparency-380x320@2x.png index 11fad6e54..a22c10994 100644 Binary files a/website/assets/images/device-management-transparency-380x320@2x.png and b/website/assets/images/device-management-transparency-380x320@2x.png differ diff --git a/website/assets/images/icon-external-link-13x13@2x.png b/website/assets/images/icon-external-link-13x13@2x.png new file mode 100644 index 000000000..70b7d4e82 Binary files /dev/null and b/website/assets/images/icon-external-link-13x13@2x.png differ diff --git a/website/assets/js/components/call-to-action.component.js b/website/assets/js/components/call-to-action.component.js index 549703a93..c6843132d 100644 --- a/website/assets/js/components/call-to-action.component.js +++ b/website/assets/js/components/call-to-action.component.js @@ -19,7 +19,6 @@ parasails.registerComponent('callToAction', { 'primaryButtonHref', // Required: the url that the call to action button leads 'secondaryButtonText', // Optional: if provided with a `secondaryButtonHref`, a second button will be added to the call to action with this value as the button text 'secondaryButtonHref', // Optional: if provided with a `secondaryButtonText`, a second button will be added to the call to action with this value as the href - 'preset',// Optional: if provided, all other values will be ignored, and this component will display a specified varient, can be set to 'premium-upgrade' or 'mdm-beta'. ], // ╦╔╗╔╦╔╦╗╦╔═╗╦ ╔═╗╔╦╗╔═╗╔╦╗╔═╗ @@ -50,7 +49,7 @@ parasails.registerComponent('callToAction', { // ╩ ╩ ╩ ╩ ╩╩═╝ template: `
-
+
{{callToActionTitle}}
{{callToActionText}}
@@ -62,32 +61,6 @@ parasails.registerComponent('callToAction', {
-
-
-
-

Get even more control
with Fleet Premium

- Learn more -
-
- A computer reporting it's disk encryption status -
-
-
-
-
-
- Fleet city (on a cloud) -
-
- Limited beta - A better MDM - Fleet’s cross-platform MDM gives IT teams more visibility out of the box. - Request access -
-
-
-
-
`, @@ -98,39 +71,31 @@ parasails.registerComponent('callToAction', { }, mounted: async function() { - if(this.preset){ - if(_.contains(['premium-upgrade', 'mdm-beta'], this.preset)){ - this.callToActionPreset = this.preset; - } else { - throw new Error('Incomplete usage of <call-to-action>: If providing a type, it must be either \'premium-upgrade\' or \'mdm-beta\''); - } + if (this.title) { + this.callToActionTitle = this.title; } else { - if (this.title) { - this.callToActionTitle = this.title; - } else { - throw new Error('Incomplete usage of <call-to-action>: Please provide a `title` example: title="Secure laptops & servers"'); - } - if (this.text) { - this.callToActionText = this.text; - } else { - throw new Error('Incomplete usage of <call-to-action>: Please provide a `text` example: text="Get up and running with a test environment of Fleet within minutes"'); - } - if (this.primaryButtonText) { - this.calltoActionPrimaryBtnText = this.primaryButtonText; - } else { - throw new Error('Incomplete usage of <call-to-action>: Please provide a `primaryButtonText`. 
example: primary-button-text="Get started"'); - } - if (this.primaryButtonHref) { - this.calltoActionPrimaryBtnHref = this.primaryButtonHref; - } else { - throw new Error('Incomplete usage of <call-to-action>: Please provide a `primaryButtonHref` example: primary-button-href="/get-started?try-it-now"'); - } - if (this.secondaryButtonText) { - this.calltoActionSecondaryBtnText = this.secondaryButtonText; - } - if (this.secondaryButtonHref) { - this.calltoActionSecondaryBtnHref = this.secondaryButtonHref; - } + throw new Error('Incomplete usage of <call-to-action>: Please provide a `title` example: title="Secure laptops & servers"'); + } + if (this.text) { + this.callToActionText = this.text; + } else { + throw new Error('Incomplete usage of <call-to-action>: Please provide a `text` example: text="Get up and running with a test environment of Fleet within minutes"'); + } + if (this.primaryButtonText) { + this.calltoActionPrimaryBtnText = this.primaryButtonText; + } else { + throw new Error('Incomplete usage of <call-to-action>: Please provide a `primaryButtonText`. 
example: primary-button-text="Get started"'); + } + if (this.primaryButtonHref) { + this.calltoActionPrimaryBtnHref = this.primaryButtonHref; + } else { + throw new Error('Incomplete usage of <call-to-action>: Please provide a `primaryButtonHref` example: primary-button-href="/get-started?try-it-now"'); + } + if (this.secondaryButtonText) { + this.calltoActionSecondaryBtnText = this.secondaryButtonText; + } + if (this.secondaryButtonHref) { + this.calltoActionSecondaryBtnHref = this.secondaryButtonHref; } }, watch: { diff --git a/website/assets/js/pages/fleetctl-preview.page.js b/website/assets/js/pages/fleetctl-preview.page.js index d8939a298..cf7274f7b 100644 --- a/website/assets/js/pages/fleetctl-preview.page.js +++ b/website/assets/js/pages/fleetctl-preview.page.js @@ -3,7 +3,7 @@ parasails.registerPage('fleetctl-preview', { // ║║║║║ ║ ║╠═╣║ ╚═╗ ║ ╠═╣ ║ ║╣ // ╩╝╚╝╩ ╩ ╩╩ ╩╩═╝ ╚═╝ ╩ ╩ ╩ ╩ ╚═╝ data: { - //… + selectedPlatform: 'macos' }, // ╦ ╦╔═╗╔═╗╔═╗╦ ╦╔═╗╦ ╔═╗ diff --git a/website/assets/js/pages/homepage.page.js b/website/assets/js/pages/homepage.page.js index b7a612731..1f92d07a3 100644 --- a/website/assets/js/pages/homepage.page.js +++ b/website/assets/js/pages/homepage.page.js @@ -4,7 +4,6 @@ parasails.registerPage('homepage', { // ╩╝╚╝╩ ╩ ╩╩ ╩╩═╝ ╚═╝ ╩ ╩ ╩ ╩ ╚═╝ data: { modal: undefined, - selectedCategory: 'device-management' }, // ╦ ╦╔═╗╔═╗╔═╗╦ ╦╔═╗╦ ╔═╗ diff --git a/website/assets/resources/install-fleet.sh b/website/assets/resources/install-fleet.sh new file mode 100644 index 000000000..0717e46dd --- /dev/null +++ b/website/assets/resources/install-fleet.sh @@ -0,0 +1,119 @@ +#!/bin/bash + +set -e + +FLEETCTL_INSTALL_DIR="${HOME}/.fleetctl/" +FLEETCTL_BINARY_NAME="fleetctl" + + +# Check for necessary commands +for cmd in curl tar grep sed awk cut; do + if ! command -v $cmd &> /dev/null; then + echo "Error: $cmd is not installed." >&2 + exit 1 + fi +done + +echo "Fetching the latest version of fleetctl..." 
+ + +# Fetch the latest version number from NPM +latest_strippedVersion=$(curl -s "https://registry.npmjs.org/fleetctl/latest" | grep -o '"version": *"[^"]*"' | cut -d'"' -f4) +echo "Latest version available on NPM: $latest_strippedVersion" + +version_gt() { + test "$(printf '%s\n' "$@" | sort -V | head -n 1)" != "$1"; +} + +# Determine operating system (Linux or macOS) +OS="$(uname -s)" + +case "${OS}" in + Linux*) OS='linux';; + Darwin*) OS='macos';; + *) echo "Unsupported operating system: ${OS}"; exit 1;; esac + +# Download the fleetctl binary and extract it into the install directory +download_and_extract() { + echo "Downloading fleetctl ${latest_strippedVersion} for ${OS}..." + curl -sSL "$DOWNLOAD_URL" | tar -xz -C "$FLEETCTL_INSTALL_DIR" --strip-components=1 fleetctl_v${latest_strippedVersion}_${OS}/ +} + +# Check to see if the fleetctl binary exists in the script's install directory. +check_installed_version() { + # If the fleetctl binary exists, we'll check the version of it using fleetctl -v. + if [ -x "${FLEETCTL_INSTALL_DIR}/fleetctl" ]; then + installed_version=$("${FLEETCTL_INSTALL_DIR}/fleetctl" -v | awk 'NR==1{print $NF}' | sed 's/^v//') + echo "Installed version: ${installed_version}" + else + return 1 + fi +} + +# Create the install directory if it does not exist. +mkdir -p "${FLEETCTL_INSTALL_DIR}" + +# Construct download URL +# ex: https://github.com/fleetdm/fleet/releases/download/fleet-v4.43.3/fleetctl_v4.43.3_macos.tar.gz +DOWNLOAD_URL="https://github.com/fleetdm/fleet/releases/download/fleet-v${latest_strippedVersion}/fleetctl_v${latest_strippedVersion}_${OS}.tar.gz" + + +if check_installed_version; then + if version_gt $latest_strippedVersion $installed_version; then + # Prompt the user for an upgrade + read -p "A newer version of fleetctl ($latest_strippedVersion) is available. Would you like to upgrade? 
(y/n): " upgrade_choice + + if [[ "$upgrade_choice" =~ ^[Yy](es)?$ ]]; then + # Remove the old binary + rm -f "${FLEETCTL_INSTALL_DIR}/fleetctl" + echo "Removed the older version of fleetctl." + + # Download and install the new version + download_and_extract + echo "fleetctl installed successfully in ${FLEETCTL_INSTALL_DIR}" + echo + echo "To start the local demo:" + echo + echo "1. Start Docker Desktop" + echo "2. Run ~/.fleetctl/fleetctl preview" + else + echo "Upgrade canceled." + fi + else + read -p "You are already using the latest version of fleetctl ($latest_strippedVersion). Would you like to reinstall it? (y/n): " reinstall_choice + + if [[ "$reinstall_choice" =~ ^[Yy](es)?$ ]]; then + # Remove the old binary + rm -f "${FLEETCTL_INSTALL_DIR}/fleetctl" + echo "Removed the existing fleetctl binary." + + # Download and install the new version + download_and_extract + echo "fleetctl reinstalled successfully in ${FLEETCTL_INSTALL_DIR}" + echo + echo "To start the local demo:" + echo + echo "1. Start Docker Desktop" + echo "2. Run ~/.fleetctl/fleetctl preview" + else + echo "Install canceled." + fi + fi +else + # If there is no existing fleetctl binary, download the latest version and extract it. + download_and_extract + echo "fleetctl installed successfully in ${FLEETCTL_INSTALL_DIR}" + echo + echo "To start the local demo:" + echo + echo "1. Start Docker Desktop" + echo "2. Run ~/.fleetctl/fleetctl preview" +fi + +# Verify that the binary is executable +if [[ ! -x "${FLEETCTL_INSTALL_DIR}/fleetctl" ]]; then + echo "Failed to install or upgrade fleetctl. Please check your permissions and try running this script again." 
+ exit 1 +fi + diff --git a/website/assets/styles/pages/fleetctl-preview.less b/website/assets/styles/pages/fleetctl-preview.less index 669000502..7d1973f9f 100644 --- a/website/assets/styles/pages/fleetctl-preview.less +++ b/website/assets/styles/pages/fleetctl-preview.less @@ -1,42 +1,290 @@ #fleetctl-preview { + @heading-lineheight: 120%; + @text-lineheight: 150%; - a:not(.btn) { - color: @core-vibrant-blue; + h1 { + margin-bottom: 40px; + font-size: 32px; + line-height: @heading-lineheight; + font-weight: 800; + } + h2 { + padding-top: 24px; + font-size: 20px; + font-weight: 800; + margin-bottom: 24px; + line-height: @heading-lineheight; } - code { background-color: @ui-off-white; border: 1px solid @border-lt-gray; color: @core-fleet-black-75; - font-size: 13px; - padding: 4px 8px; - line-height: 16px; + font-size: 14px; + padding: 4px; + line-height: @text-lineheight; font-family: @code-font; display: inline-block; - border-radius: 6px; + border-radius: 3px; + } + a { + color: @core-vibrant-blue; + line-height: @text-lineheight; + } + p { + font-size: 16px; + line-height: @text-lineheight; } - [purpose='get-started-buttons'] { - a { - font-size: 16px; - line-height: 25px; - padding: 16px 24px; + [purpose='page-title'] { + margin-bottom: 40px; + } + + [purpose='installation-steps'] { + margin-bottom: 40px; + } + + [purpose='prerequisites'] { + margin-bottom: 40px; + p { + margin-bottom: 24px; } } + [purpose='platform-selector'] { + margin-bottom: 60px; + border-bottom: 1px solid @core-vibrant-blue-15; + [purpose='selector-tab'] { + cursor: pointer; + padding: 12px; + margin-right: 24px; + p { + margin-bottom: 0px; + } + } + [purpose='selector-tab'].selected { + border-bottom: 2px solid @core-vibrant-blue; + } + } + + + [purpose='page-container'] { + padding: 80px 64px 64px 64px; + padding-bottom: 64px; + max-width: 928px; + } + + [purpose='numbered-steps'] { + counter-reset: custom-counter; + } + [purpose='step'] { + counter-increment: custom-counter; + 
margin-left: 36px; + margin-bottom: 40px; + margin-top: -26px; + &::before { + position: relative; + top: 26px; + left: -36px; + content: counter(custom-counter); + background-color: #E2E4EA; + width: 24px; + font-size: 13px; + display: inline-block; + border-radius: 50%; + margin-right: 10px; + padding: 2px 4px; + text-align: center; + line-height: 20px; + text-indent: 0px; + } + } [purpose='terminal-commands'] { - padding: 24px; + padding: 16px 24px; border: 1px solid @core-fleet-black-25; border-radius: 4px; margin: 16px 0px 0px; background: @ui-off-white; + width: 100%; + overflow-x: scroll; p { - color: @core-fleet-black; + white-space: nowrap; + color: @core-fleet-black-75; font-family: @code-font; font-weight: 400; - font-size: 18px; - line-height: 32px; + font-size: 14px; + line-height: @text-lineheight; margin-bottom: 0px; + padding-right: 24px; } } + + + [purpose='docs-button'] { + margin-right: 32px; + color: #FFF; + font-size: 16px; + font-weight: 700; + line-height: 21px; + display: flex; + height: 48px; + padding: 16px 32px; + justify-content: center; + align-items: center; + } + + [purpose='view-script-link'] { + display: flex; + align-items: center; + white-space: nowrap; + font-size: 12px; + font-weight: 400; + line-height: 18px; + color: @core-fleet-black-75; + margin-bottom: 16px; + img { + margin-left: 4px; + height: 13px; + display: inline; + } + } + [purpose='docker-button'] { + padding: 12px 20px; + width: 153px; + font-size: 14px; + font-style: normal; + font-weight: 700; + line-height: 21px; + img { + width: auto; + height: 20px; + margin-right: 8px; + } + } + [purpose='node-button'] { + padding: 12px 20px; + width: 240px; + font-size: 14px; + font-style: normal; + font-weight: 700; + line-height: 21px; + img { + width: auto; + height: 24px; + margin-right: 8px; + } + } + [purpose='animated-arrow-button-red'] { + display: flex; + padding-right: 26px; + cursor: pointer; + position: relative; + width: fit-content; + min-width: auto; + 
font-weight: bold; + user-select: none; + transition: 0.2s ease-in-out; + -o-transition: 0.2s ease-in-out; + -ms-transition: 0.2s ease-in-out; + -moz-transition: 0.2s ease-in-out; + -webkit-transition: 0.2s ease-in-out; + color: @core-fleet-black; + text-decoration: none; + &:after { + content: url('/images/arrow-right-red-16x16@2x.png'); + transform: scale(0.5); + position: absolute; + top: -5px; + right: -5px; // <--- here + transition: 0.2s ease-in-out; + -o-transition: 0.2s ease-in-out; + -ms-transition: 0.2s ease-in-out; + -moz-transition: 0.2s ease-in-out; + -webkit-transition: 0.2s ease-in-out; + /* opacity: 0; */ + } + &:hover:after { + right: -10px; // <--- here + transition: 0.2s ease-in-out; + -o-transition: 0.2s ease-in-out; + -ms-transition: 0.2s ease-in-out; + -moz-transition: 0.2s ease-in-out; + -webkit-transition: 0.2s ease-in-out; + /* opacity:1; */ + } + } + + [parasails-component='parallax-city'] { + background: linear-gradient(180deg, #FFF 0%, #E4F3F4 59.5%); + } + + + @media (max-width: 991px) { + [purpose='page-container'] { + padding: 80px 40px 64px 40px; + max-width: 928px; + } + + } + @media (max-width: 768px) { + [purpose='page-container'] { + padding: 80px 32px 64px 32px; + max-width: 928px; + } + + } + @media (max-width: 575px) { + [purpose='page-container'] { + padding: 64px 24px; + max-width: 928px; + } + [purpose='platform-selector'] { + margin-bottom: 60px; + border-bottom: 1px solid @core-vibrant-blue-15; + [purpose='selector-tab'] { + cursor: pointer; + padding: 12px; + margin-right: auto; + } + } + [purpose='docs-button'] { + width: 100%; + margin-bottom: 32px; + margin-right: auto; + } + [purpose='view-script-link'] { + margin-left: 4px; + } + + [purpose='docker-button'] { + width: 100%; + } + [purpose='node-button'] { + width: 100%; + } + [purpose='get-started-buttons'] { + a:first-of-type { + margin-bottom: 16px; + } + } + + } + @media (max-width: 375px) { + [purpose='page-container'] { + padding: 40px 16px 64px 16px; + 
max-width: 928px; + } + [purpose='platform-selector'] { + [purpose='selector-tab'] { + cursor: pointer; + padding: 6px; + margin-right: auto; + } + } + } + + + + + } + diff --git a/website/assets/styles/pages/homepage.less b/website/assets/styles/pages/homepage.less index c0bbb2796..3735660cb 100644 --- a/website/assets/styles/pages/homepage.less +++ b/website/assets/styles/pages/homepage.less @@ -1016,7 +1016,6 @@ max-width: 480px; [purpose='category-button'] { width: 100%; - margin-bottom: 60px; } } [purpose='endpoint-ops-image'] { diff --git a/website/config/custom.js b/website/config/custom.js index b5f5e4db7..2937ef259 100644 --- a/website/config/custom.js +++ b/website/config/custom.js @@ -182,7 +182,7 @@ module.exports.custom = { // Reference, config surface, built-in queries, API, and other documentation 'docs': ['rachaelshaw'],// (default for docs) 'docs/01-Using-Fleet/standard-query-library/standard-query-library.yml': ['rachaelshaw'],// (standard query library) - 'schema': ['rachaelshaw'],// (Osquery table schema) + 'schema': ['eashaw'],// (Osquery table schema) 'ee/cis': ['sharon-fdm', 'lucasmrod', 'rachelElysia', 'rachaelshaw'], // Articles and release notes diff --git a/website/config/routes.js b/website/config/routes.js index 4ae838ad4..30ef74d23 100644 --- a/website/config/routes.js +++ b/website/config/routes.js @@ -557,7 +557,6 @@ module.exports.routes = { 'POST /api/v1/webhooks/receive-usage-analytics': { action: 'webhooks/receive-usage-analytics', csrf: false }, '/api/v1/webhooks/github': { action: 'webhooks/receive-from-github', csrf: false }, 'POST /api/v1/webhooks/receive-from-stripe': { action: 'webhooks/receive-from-stripe', csrf: false }, - 'POST /api/v1/webhooks/receive-from-customer-fleet-instance': { action: 'webhooks/receive-from-customer-fleet-instance', csrf: false}, // ╔═╗╔═╗╦ ╔═╗╔╗╔╔╦╗╔═╗╔═╗╦╔╗╔╔╦╗╔═╗ // ╠═╣╠═╝║ ║╣ ║║║ ║║╠═╝║ ║║║║║ ║ ╚═╗ diff --git a/website/views/pages/fleetctl-preview.ejs 
b/website/views/pages/fleetctl-preview.ejs index 27b2730df..b389c76ea 100644 --- a/website/views/pages/fleetctl-preview.ejs +++ b/website/views/pages/fleetctl-preview.ejs @@ -1,64 +1,116 @@ -
-
-

Get started

-

Try out a preview of Fleet and osquery on your laptop before deploying at scale by following the guide below.

-
-

1. Install Node (optional) and Docker

-

The quickest way to install Fleet is with Node.js and Docker.

- -

A small circle with an 'I' inside of it We - use NPM to install fleetctl. It can also be installed via the release page or built from source.

+
+
+
+

Try Fleet

+

The quickest way to try Fleet is to run a local demo with Docker.

+

Follow the instructions below to test Fleet on your macOS, Windows, or Linux device.

-
-

2. Run Fleet

-

Install the fleetctl command line tool and start the Fleet preview experience:

-
-

# Install the Fleet command-line tool

-

npm install -g fleetctl

-

# Run a local demo of the Fleet server

-

fleetctl preview

+
+

Prerequisites

+

An installation of Docker is required to try Fleet locally.

+ + Docker logo + Get Docker + +
+
+

Install Fleet

+
+

macOS

+

Windows

+

Linux

+

NPM

+
+ <%/* macOS install steps */%> +
+
+
+

Install the fleetctl command line tool:

+

curl -SsLO https://fleetdm.com/resources/install-fleet.sh -o install_fleet.sh && shasum -a 256 install_fleet.sh

+
+
+

Run a local demo of the Fleet server:

+

~/.fleetctl/fleetctl preview

+
+
+

The Fleet UI is now available at http://localhost:1337. Use the credentials below to login:

+

Email: admin@example.com

+

Password: preview1337#

+
+
+
+ <%/* Linux install steps */%> +
+
+
+

Install the fleetctl command line tool:

+

curl -SsLO https://fleetdm.com/resources/install-fleet.sh -o install_fleet.sh && shasum -a 256 install_fleet.sh

+
+
+

Run a local demo of the Fleet server:

+

~/.fleetctl/fleetctl preview

+
+
+

The Fleet UI is now available at http://localhost:1337. Use the credentials below to login:

+

Email: admin@example.com

+

Password: preview1337#

+
+
+
+ <%/* Windows install steps */%> +
+
+
+

Install the fleetctl command line tool:

+

for /f "tokens=1,* delims=:" %a in ('curl -s https://api.github.com/repos/fleetdm/fleet/releases/latest ^| findstr "browser_download_url" ^| findstr "_windows.zip"') do (curl -kOL %b) && if not exist "%USERPROFILE%\.fleetctl" mkdir "%USERPROFILE%\.fleetctl" && for /f "delims=" %a in ('dir /b fleetctl_*_windows.zip') do tar -xf "%a" --strip-components=1 -C "%USERPROFILE%\.fleetctl" && del "%a"

+
+
+

Run a local demo of the Fleet server:

+

%USERPROFILE%\.fleetctl\fleetctl preview

+
+
+

The Fleet UI is now available at http://localhost:1337. Use the credentials below to login:

+

Email: admin@example.com

+

Password: preview1337#

+
+
+
+ <%/* NPM install steps */%> +
+
+
+

Install Node.js

+ + NodeJS logo + Find my Node installer + +
+
+

Install the fleetctl command line tool:

+

npm install fleetctl -g

+
+
+

Run a local demo of the Fleet server:

+

fleetctl preview

+
+
+

The Fleet UI is now available at http://localhost:1337. Use the credentials below to login:

+

Email: admin@example.com

+

Password: preview1337#

+
+
-
-

3. Log in to Fleet

-

The Fleet UI is now available at http://localhost:1337. Use the credentials below to login.

-

Email: admin@example.com

-

Password: preview1337#

+

Next steps

+ - -
- +
diff --git a/website/views/pages/homepage.ejs b/website/views/pages/homepage.ejs index 45debef0d..b37392b36 100644 --- a/website/views/pages/homepage.ejs +++ b/website/views/pages/homepage.ejs @@ -86,15 +86,8 @@
-
-
Device management
-
Endpoint ops
-
Vulnerability management
-
-
- <%/* Device management block */%> -
+

Device management

Manage everything in one place

@@ -109,7 +102,10 @@
<%/* Endpoint ops block */%> -
+
+
+ Endpoint ops +

Endpoint ops

Focus on data, not vendors

@@ -118,13 +114,10 @@ Start with endpoint ops
-
- Endpoint ops -
<%/* Vulnerability management block */%> -
+

Vulnerability management

Build the vulnerability program you actually want

diff --git a/website/views/pages/transparency.ejs b/website/views/pages/transparency.ejs index 4c8d5d00a..e4e4144c9 100644 --- a/website/views/pages/transparency.ejs +++ b/website/views/pages/transparency.ejs @@ -51,7 +51,7 @@

- Fleet can take action on your device remotely like trigger a restart for your device. This is useful for IT teams to help you troubleshoot remotely if you run into any issues with your device. + Fleet can take action on your device remotely, such as triggering a restart, lock, or wipe. This is useful for IT teams to help you troubleshoot remotely if you run into any issues with your device.

diff --git a/yarn.lock b/yarn.lock index f06a2de0c..537bff8b0 100644 --- a/yarn.lock +++ b/yarn.lock @@ -10664,9 +10664,9 @@ invariant@^2.0.0, invariant@^2.2.1, invariant@^2.2.4: loose-envify "^1.0.0" ip@^2.0.0: - version "2.0.0" - resolved "https://registry.yarnpkg.com/ip/-/ip-2.0.0.tgz#4cf4ab182fee2314c75ede1276f8c80b479936da" - integrity sha512-WKa+XuLG1A1R0UWhl2+1XQSi+fZWMsYKffMZTTYsiZaUD8k2yDAj5atimTUD2TZkyCkNEeYE5NhFZmupOGtjYQ== + version "2.0.1" + resolved "https://registry.yarnpkg.com/ip/-/ip-2.0.1.tgz#e8f3595d33a3ea66490204234b77636965307105" + integrity sha512-lJUL9imLTNi1ZfXT+DU6rBBdbiKGBuay9B6xGSPVjUeQwaH1RIGqef8RZkUtHioLmSNpPR5M4HVKJGm1j8FWVQ== ipaddr.js@1.9.1: version "1.9.1"