init repo

Dmitry Skokov 2021-03-15 12:17:15 +03:00
commit 412d77e6b7
120 changed files with 18253 additions and 0 deletions

LICENSE
@@ -0,0 +1,179 @@
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS

README.md
@@ -0,0 +1,186 @@
Helm Charts Repo
=========
This repository contains experimental charts for the core services of the
RBK.money platform. The directory layout is as follows:
- services - service charts, one directory per service
- config - chart settings, one directory per service
- libraries - charts of helper libraries, one directory per library
- docs - documentation
- tools - helper scripts for minikube
Requirements
----------
Working with the services requires Helm 3.2.1+, [Helmfile v0.116.0](https://github.com/roboll/helmfile), kubectl, minikube, and VirtualBox. You can do without VirtualBox by running minikube with a different driver, but that scenario is beyond the scope of this README.
For running the whole stack it is recommended to allocate **4 CPU, 10GB RAM, 40GB Disk** to minikube.
Launch
------
Cold start (~20 minutes):
```shell
$ ./tools/cold_reset.sh && helmfile sync --concurrency 2
```
Quick reset without re-downloading images (~7 minutes):
```shell
$ ./tools/quick_reset.sh && helmfile sync --concurrency 2
```
Example of starting the services:
```shell
$ helmfile sync
Building dependency release=zookeeper, chart=services/zookeeper
...
UPDATED RELEASES:
NAME CHART VERSION
machinegun ./services/machinegun 0.1.0
kafka ./services/kafka 0.21.2
riak ./services/riak 0.1.0
consul ./services/consul 3.9.5
zookeeper ./services/zookeeper 2.1.3
```
After that, you can verify that the started services are alive. For example, let's test machinegun:
```shell
$ helmfile --selector name=machinegun test
Testing machinegun
Pod machinegun-test-connection pending
Pod machinegun-test-connection pending
Pod machinegun-test-connection succeeded
NAME: machinegun
LAST DEPLOYED: Sun May 1 13:22:20 2020
NAMESPACE: default
STATUS: deployed
REVISION: 1
TEST SUITE: machinegun-test-connection
Last Started: Sun May 1 13:27:14 2020
Last Completed: Sun May 1 13:27:18 2020
Phase: Succeeded
NOTES:
You can use machinegun:8022 to connect to the machinegun woody interface.
```
Working with Vault
----------
Vault is started in dev mode, i.e. already initialized and unsealed.
A reference for working with secrets is in the [vault docs](https://www.hashicorp.com/blog/dynamic-database-credentials-with-vault-and-kubernetes/).
<details>
<summary>A few comments on what happens automatically when the vault pod starts</summary>
```
# kubectl exec -ti vault-0 -- sh
```
```
# Enable the engines:
vault auth enable kubernetes
vault secrets enable database
# Set the kube-api address Vault should use to verify application service account tokens:
vault write auth/kubernetes/config \
token_reviewer_jwt="$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)" \
kubernetes_host=https://${KUBERNETES_PORT_443_TCP_ADDR}:443 \
kubernetes_ca_cert=@/var/run/secrets/kubernetes.io/serviceaccount/ca.crt
# Create a role that allows the service accounts listed in `bound_service_account_names` to obtain DB credentials:
vault write auth/kubernetes/role/db-app \
bound_service_account_names="*" \
bound_service_account_namespaces=default \
policies=db-app \
ttl=1h
# Now configure the connection to Postgres:
vault write database/config/mydatabase \
plugin_name=postgresql-database-plugin \
allowed_roles="*" \
connection_url="postgresql://{{username}}:{{password}}@postgres-postgresql.default:5432/?sslmode=disable" \
username="postgres" \
password="H@ckM3"
vault write database/roles/db-app \
db_name=mydatabase \
creation_statements="CREATE ROLE \"{{name}}\" WITH LOGIN PASSWORD '{{password}}' VALID UNTIL '{{expiration}}'; \
GRANT ALL PRIVILEGES ON ALL TABLES IN SCHEMA public TO \"{{name}}\";" \
default_ttl="1h" \
max_ttl="24h"
```
</details>
To log in to the Vault web UI, create yourself a new token:
```
kubectl exec vault-0 -- vault token create
```
enable port-forwarding to localhost:
```
kubectl port-forward vault-0 8200:8200 &
```
and go to http://127.0.0.1:8200 in your browser with the token you obtained.
For an application to receive its secret DB logins and passwords, add annotations to the service description ([a complete deployment manifest can be found here](docs/service-with-vault-injected-creds-sample.yaml)):
```
annotations:
vault.hashicorp.com/agent-inject: "true"
vault.hashicorp.com/agent-inject-secret-db-creds: "database/creds/db-app"
vault.hashicorp.com/agent-inject-template-db-creds: |
{{- with secret "database/creds/db-app" -}}
"db_connection": "postgresql://{{ .Data.username }}:{{ .Data.password }}@postgres-postgresql:5432/?sslmode=disable"
{{- end }}
vault.hashicorp.com/role: "db-app"
```
After that, the service pod will contain the file `/vault/secrets/db-creds` with the DB connection string.
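With the template above, the injected file holds a single connection string. A rough example of its rendered contents (the generated username and password here are made up):
```
"db_connection": "postgresql://v-kube-db-app-x7Qz:A1b2C3d4s3cr3t@postgres-postgresql:5432/?sslmode=disable"
```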
How to enable metrics collection
----------
- Configure the service so that it serves metrics in Prometheus format:
  - at `/metrics` on the `api` port for an erlang application
  - at `/actuator/prometheus` on the `management` port for a java application
- Add a label to the corresponding service:
  - `prometheus.metrics.erlang.enabled: "true"` for an erlang application
  - `prometheus.metrics.java.enabled: "true"` for a java application
To access the Prometheus web UI at http://localhost:31337:
```
kubectl port-forward -n monitoring svc/prometheus-kube-prometheus-prometheus 31337:9090
```
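As a sketch of the labeling step above, a minimal Service manifest carrying the erlang metrics label might look like this (the service name and port values are illustrative):
```
apiVersion: v1
kind: Service
metadata:
  name: my-erlang-service
  labels:
    # picked up by the Prometheus service discovery described above
    prometheus.metrics.erlang.enabled: "true"
spec:
  ports:
    - name: api
      port: 8080
```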
Accessing logs in kibana
-----------
[docs reference](https://www.elastic.co/guide/en/cloud-on-k8s/current/k8s-deploy-kibana.html)
Our deployment is named "rbk", not "quickstart".
Use kubectl port-forward to access Kibana from your local workstation:
```
kubectl port-forward service/rbkmoney-kb-http 5601
```
Open https://localhost:5601 in your browser. Your browser will show a warning because the self-signed certificate configured by default is not verified by a known certificate authority and not trusted by your browser. You can temporarily acknowledge the warning for the purposes of this quick start but it is highly recommended that you configure valid certificates for any production deployments.
Login as the elastic user. The password can be obtained with the following command:
```
kubectl get secret rbk-es-elastic-user -o=jsonpath='{.data.elastic}' | base64 --decode; echo
```
Accessing grafana and syncing dashboards
-----------
Use kubectl port-forward:
```
kubectl -n monitoring port-forward <grafana-pod> 3000
```
Grafana is available in the browser at https://localhost:3000. To get the login password:
```
kubectl get secret --namespace monitoring prometheus-grafana -o jsonpath="{.data.admin-password}" | base64 --decode ; echo
```

config/anapi/sys.config
@@ -0,0 +1,106 @@
%% -*- mode: erlang -*-
[
{kernel, [
{logger_level, info},
{logger, [
{handler, default, logger_std_h, #{
level => info,
config => #{
type => standard_io,
sync_mode_qlen => 2000,
drop_mode_qlen => 2000,
flush_qlen => 3000
},
filters => [{access_log, {fun logger_filters:domain/2, {stop, equal, [cowboy_access_log]}}}],
formatter => {logger_logstash_formatter, #{}}
}},
{handler, access_logger, logger_std_h, #{
level => info,
config => #{
type => standard_io,
sync_mode_qlen => 2000,
drop_mode_qlen => 2000,
flush_qlen => 3000
},
filters => [{access_log, {fun logger_filters:domain/2, {stop, not_equal, [cowboy_access_log]}}}],
formatter => {logger_logstash_formatter, #{}}
}}
]}
]},
{scoper, [
{storage, scoper_storage_logger}
]},
{anapi, [
{ip, "::"},
{port, 8080},
{service_type, real},
{access_conf, #{
jwt => #{
signee => capi,
keyset => #{
keycloak => {pem_file, "/var/lib/anapi/keys/keycloak/keycloak.pubkey.pem"}
}
},
access => #{
service_name => <<"common-api">>,
resource_hierarchy => #{
invoices => #{},
payments => #{},
party => #{}
}
}
}},
{swagger_handler_opts, #{
validation_opts => #{
schema => #{
response => mild
}
}
}},
{oops_bodies, #{
500 => "/var/lib/anapi/oops-bodies/oopsBody1",
501 => "/var/lib/anapi/oops-bodies/oopsBody1",
502 => "/var/lib/anapi/oops-bodies/oopsBody1",
503 => "/var/lib/anapi/oops-bodies/oopsBody2",
504 => "/var/lib/anapi/oops-bodies/oopsBody2"
}},
{health_check, #{
disk => {erl_health, disk, ["/", 99]},
memory => {erl_health, cg_memory, [70]},
service => {erl_health, service, [<<"anapi">>]}
}},
{max_request_deadline, 60000} % milliseconds
]},
{anapi_woody_client, [
{service_urls, #{
merchant_stat => "http://magista-kafka:8022/stat",
reporting => "http://reporter:8022/reports/new-proto",
analytics => "http://analytics:8022/analytics/v1",
party_shop => "http://party-shop:8022/party-shop/v1"
}},
{service_deadlines, #{
merchant_stat => 30000, % milliseconds
reporting => 30000, % milliseconds
analytics => 30000, % milliseconds
party_shop => 10000 % milliseconds
}}
]},
{how_are_you, [
{metrics_publishers, []}
]},
{os_mon, [
{disksup_posix_only, true}
]},
{snowflake, [{machine_id, hostname_hash}]},
{prometheus, [
{collectors, [default]}
]}
].

@@ -0,0 +1,86 @@
# -*- mode: yaml -*-
replicaCount: 1
image:
  repository: docker.io/rbkmoney/anapi
  tag: 86990bcc3ee81b909240b64d03f2575d5677c6ae
  pullPolicy: IfNotPresent
configMap:
  data:
    sys.config: |
      {{- readFile "sys.config" | nindent 6 }}
    erl_inetrc: |
      {{- readFile "../vm/erl_inetrc" | nindent 6 }}
    fetchKeycloakPubkey: |
      {{- readFile "../api-common/fetch-keycloak-pubkey.sh" | nindent 6 }}
    oopsBody1: |
      {{- readFile "../api-common/oops-bodies/sad-kitty1" | nindent 6 }}
    oopsBody2: |
      {{- readFile "../api-common/oops-bodies/sad-kitty2" | nindent 6 }}
    vm.args: |
      {{- tpl (readFile "../vm/erl_vm_args.gotmpl") . | nindent 6 }}
apiInitContainers:
  enabled: true
volumeMounts:
  - name: config-volume
    mountPath: /opt/anapi/releases/0.1.0/sys.config
    subPath: sys.config
    readOnly: true
  - name: config-volume
    mountPath: /opt/anapi/releases/0.1.0/vm.args
    subPath: vm.args
    readOnly: true
  - name: config-volume
    mountPath: /opt/anapi/erl_inetrc
    subPath: erl_inetrc
    readOnly: true
  - name: config-volume
    mountPath: /var/lib/anapi/oops-bodies/oopsBody1
    subPath: oopsBody1
    readOnly: true
  - name: config-volume
    mountPath: /var/lib/anapi/oops-bodies/oopsBody2
    subPath: oopsBody2
    readOnly: true
  - name: keycloak-pubkey
    mountPath: /var/lib/anapi/keys/keycloak
    readOnly: true
volumes:
  - name: config-volume
    configMap:
      name: {{ .Release.Name }}
      defaultMode: 0755
  - name: keycloak-pubkey
    emptyDir: {}
service:
  type: ClusterIP
  ports:
    - name: api
      port: 8080
metrics:
  serviceMonitor:
    enabled: true
    namespace: {{ .Release.Namespace }}
    additionalLabels:
      release: prometheus
ingress:
  enabled: true
  hosts:
    - host: api.{{ .Values.services.ingress.rootDomain | default "rbk.dev" }}
      paths:
        - /lk/v1
  {{- if .Values.services.ingress.tls.enabled }}
  tls:
    - secretName: {{ .Values.services.ingress.tls.secretName }}
      hosts:
        - api.{{ .Values.services.ingress.rootDomain | default "rbk.dev" }}
  {{- end }}
  servicePort: 8080

@@ -0,0 +1,61 @@
#!/bin/sh
set -o pipefail
KK_HOST=${KK_HOST:-keycloak-headless}
KK_PORT=${KK_PORT:-8080}
KK_REALM=${KK_REALM:-external}
TARGET=${TARGET:-secret}
MAX_RETRY_TIMEOUT=${MAX_RETRY_TIMEOUT:-10}
TIMEOUT=0
LOG_FILE=${SCRIPT_LOGFILE:-/dev/null}
log() {
local severity=$1
local msg=$2
local log_msg="$(date -Iseconds) [ $severity ] $msg"
echo "$0: $log_msg"
echo "$log_msg" >> "$LOG_FILE"
}
while true; do
REALM_FAIL=false
log INFO "Attempting to fetch Keycloak key..."
REALM_DATA=$(wget --quiet --timeout=10 "http://${KK_HOST}:${KK_PORT}/auth/realms/${KK_REALM}" -O -)
EXIT_CODE=$?
if [ "${EXIT_CODE}" -ne "0" ]; then
REALM_FAIL=true
log ERROR "Keycloak realm data fetching failed with exit code: ${EXIT_CODE}"
fi
if [ -z "${REALM_DATA}" ]; then
REALM_FAIL=true
log ERROR "Keycloak realm data is empty"
fi
if [ "$REALM_FAIL" = false ]; then
break
else
TIMEOUT=$((TIMEOUT + 1))
TIMEOUT=$([ $TIMEOUT -le $MAX_RETRY_TIMEOUT ] && echo "$TIMEOUT" || echo "$MAX_RETRY_TIMEOUT")
fi
log INFO "Retrying in ${TIMEOUT} seconds"
sleep $TIMEOUT
done
log INFO "Keycloak realm data fetched successfully"
log DEBUG "${REALM_DATA}"
log INFO "Writing public key to: ${TARGET} ..."
echo "-----BEGIN PUBLIC KEY-----" > ${TARGET}
echo "${REALM_DATA}" | \
sed -E -e 's/^.*"public_key":"([^"]*)".*$/\1/' | \
fold -w80 \
>> ${TARGET}
echo "-----END PUBLIC KEY-----" >> ${TARGET}
log INFO "Everything is ok"
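The JSON-to-PEM conversion performed by the script can be exercised locally with a stub realm response; the key material below is fake:

```shell
# Stub of the realm JSON that Keycloak would return (fake key material)
REALM_DATA='{"realm":"external","public_key":"MIIBIjANBgkqhkiFAKEKEYMATERIAL"}'
TARGET=/tmp/keycloak.pubkey.pem
# Same pipeline as the script: pull out the "public_key" field and wrap it as PEM
echo "-----BEGIN PUBLIC KEY-----" > "${TARGET}"
echo "${REALM_DATA}" | \
    sed -E -e 's/^.*"public_key":"([^"]*)".*$/\1/' | \
    fold -w80 \
    >> "${TARGET}"
echo "-----END PUBLIC KEY-----" >> "${TARGET}"
cat "${TARGET}"
```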

@@ -0,0 +1,15 @@
-----BEGIN RSA PRIVATE KEY-----
MIICXQIBAAKBgQCsUSRFysHJhysA43FGrepj4m85MmVnh5Mt0pyWQD+BF/nUpcQr
2rpE3qzEoXD/q0DzPiDBms5h2Y3Rwlw1dviGl7krPUxwcnQksttSuO+jNf39qNdX
ufhro0WCkr6G1vLpzL22YsXRU4STCKQOpDAUwAOkjcYbozVOTjv04XBHqwIDAQAB
AoGBAIUsqNXvn9l6x7eGEFPJsa7En6Ua19gtpYfyj+ZnfSzuNL0t5/DkuLTlS60k
AEr4NdhIGdTHKd3h34NPrSf87JED+CfsxEVhZZ+wl7nNe8CTBKInVbPBRf8AC9sh
6qbxaCzPcRYn0XZTVmaph7iAStLZmy9pbfw31piKsS/KC7HxAkEA2UCYKkQ0i1jw
EeXohy11MWN08xJ7+ye4qrYT2M+taEJDp/t4f5st12nzrpCP0CeQIX+8TuLVAieu
zAlM/oirlQJBAMsM3jeIhXbyR9BSAesNGrTpWtj3wn07Yj5YfIP8C/wxy5PfdSV9
rhB+kOrJ7MoW/3TjTpJgr1CGKoPwG8kCVj8CQDvobA17sWGbrNfCplRgXKi53E4L
EtU3Jt0sSFzJJ/BQFYgE+D139TQpq2C/zGiCAGS8bJj0Q/jMKI9rISgvV+ECQQC/
vRECI7rUTYke4LHLAf7cIxeUlrFjjHYDJY+/Gn0+0s7IflSi6IE8NigmbjNZyknE
WPlTJFWolmkDWfMC52AFAkBsQa3mUuFDn50H4t9hLxkqICKFrK5IGY26bPDzQrcl
NOuuhK6pAH1C3kfpUx83Ky9xuIogRpacycAuaXQdfrpo
-----END RSA PRIVATE KEY-----

@@ -0,0 +1,10 @@
{
"use": "enc",
"kty": "EC",
"kid": "yx4bq8apE13Sv4R69YrAioQr2b7QhLovSiu2Pt7hdoA",
"crv": "P-256",
"alg": "ECDH-ES",
"x": "5SKZ7tAdtxIRMp6BTbUs535xklduymk1kM_rxzstok4",
"y": "5kjB7foc65yJ3sHshwH3pAwaXX5I6shs4t90Lv6mDkk",
"d": "0rJc5gYA7LcrnUlVZ40KdVclUINexqKx3GJM1cRox_4"
}

@@ -0,0 +1,22 @@
█▄░░░░░░░░░░░░░░░░░░░░░░░░▄▄███
███▄░░░░░░░░░░░░░░░░░░░░▄██████
█████▄░░░░░░░░░░░░░░░░░▄███████
███████▄░░░░▄▄▄▄▄░░░░▄█████████
█████████▄▀▀░░░░░▀▀▀▄██████████
▀█████▀░░░░░░░░░░░░░░▀████████░
░▀██▀░░░░░░░░░░░░░░░░░░░▀████▌░
░░██░░░░░░░░░░░░░░░░░░░░░░███░░
░░█▀░░░░░░░░░░░░░░░░░░░░░░░██░░
░░█░░▄████▄░░░░░▄████▄░░░░░░█░░
░░█░░█▐▄█▐█░░░░░█▐▄█▐█░░░░░░█▄░
░░█░░██▄▄██░░░░░██▄▄██░░░░░░░█░
░▐▌░░░░░░░░░░░░░░░░░░░░░░░░░░▐▌
░▐▌░░░░░░░▀▄▄▄▄▀░░░░░░░░░░░░░▐▌
░▐▌░░░░░░░░░▐▌░░░░░░░░░░░░░░░▐▌
░▐▌░░░░░░░▄▀▀▀▀▄░░░░░░░░░░░░░▐▌
░░█▄░░░░░▀░░░░░░▀░░░░░░░░░░░░█▌
░░▐█▀▄▄░░░░░░░░░░░░░░░░░░▄▄▀▀░█
░▐▌░░░░▀▀▄▄░░░░░░░░▄▄▄▄▀▀░░░░░█
░█░░░░░░░░░▀▀▄▄▄▀▀▀░░░░░░░░░░░█
▐▌░░░░░░░░░░░░░░░░░░░░░░░░░░░░█
▐▌░░░░░░░░░░░░░░░░░░░░░░░░░░░░█

@@ -0,0 +1,27 @@
───────────────────────▄▄
──▄──────────────────▄███▌
─▐██▄───────────────▄█░░██
─▐█░███────────────██░░░██
─▐█▌░███──────────██▌░░░▐█
──██▌░░██▄███████▄███▌░██
──███▌░███░░░░░░░░░░█▌░█▌
───██████░░░░░░░░░░░░███
───████░░░░░░░░░░░░░░░██
───▐██░░░░░▄█░░█▄░░░░░██▌
───██▌░▄▓▓▓██░░██▓▓▓▄░██▌
───██░▐██████░░██████▌░██
──▐██░█▄▐▓▌█▓░░▓█▐▓▌▄█░██
──███░▓█▄▄▄█▓░░▓█▄▄▄█▓░██▌
──██▌░▀█████▓░░▓████▓▀░░██
─▐██░░░▀███▀░░░░▀███▀░░░██
─███░░░░░░░░▀▄▄▀░░░░░░░░██
─██▌░░░░░░░░░▐▌░░░░░░░░░██▌
─██░░░░░░░░▄▀▀▀▀▄░░░░░░░░██
▐█▌░░░░░░░▀░░░░░░▀░░░░░░░██▌
██░░░░░░░░░░░░░░░░░░░░░░░░██
██░░░░░░░░░░░░░░░░░░░░░░░░██
██░░░░░░░░░░░░░░░░░░░░░░░░██
██░░░░░░░░░░░░░░░░░░░░░░░░██
██░░░░░░░░░░░░░░░░░░░░░░░░██
██░░░░░░░░░░░░░░░░░░░░░░░░██
██░░░░░░░░░░░░░░░░░░░░░░░░██

config/bender/sys.config
@@ -0,0 +1,119 @@
%% -*- mode: erlang -*-
[
{bender, [
{service, #{
path => <<"/v1/bender">>
}},
{generator, #{
path => <<"/v1/stateproc/bender_generator">>,
schema => machinery_mg_schema_generic,
url => <<"http://machinegun:8022/v1/automaton">>, % mandatory
transport_opts => #{
pool => generator,
timeout => 5000,
max_connections => 1000
}
}},
{sequence, #{
path => <<"/v1/stateproc/bender_sequence">>,
schema => machinery_mg_schema_generic,
url => <<"http://machinegun:8022/v1/automaton">>, % mandatory
transport_opts => #{
pool => generator,
timeout => 5000,
max_connections => 1000
}
}},
{route_opts, #{
% handler_limits => #{}
}},
{ip, "::"},
{port, 8022},
{protocol_opts, #{
request_timeout => 5000 % time in ms with no requests before Cowboy closes the connection
}},
{shutdown_timeout, 7000}, % time in ms before woody forces connections closing
{transport_opts, #{
handshake_timeout => 5000, % timeout() | infinity, default is 5000
max_connections => 10000, % maximum number of incoming connections, default is 1024
num_acceptors => 100 % size of acceptors pool, default is 10
}},
{woody_event_handlers, [
hay_woody_event_handler,
{scoper_woody_event_handler, #{
event_handler_opts => #{
formatter_opts => #{
max_length => 1000,
max_printable_string_length => 80
}
}
}}
]},
{health_check, #{
disk => {erl_health, disk , ["/", 99]},
memory => {erl_health, cg_memory, [99]},
service => {erl_health, service , [<<"bender">>]}
}}
]},
{kernel, [
{logger_sasl_compatible, false},
{logger_level, debug},
{logger, [
{handler, default, logger_std_h, #{
level => debug,
config => #{
type => standard_io,
sync_mode_qlen => 2000,
drop_mode_qlen => 2000,
flush_qlen => 3000
},
formatter => {logger_logstash_formatter, #{}}
}}
]}
]},
{hackney, [
{mod_metrics, woody_client_metrics}
]},
{how_are_you, [
{metrics_handlers, [
hay_vm_handler,
hay_cgroup_handler
]},
{metrics_publishers, [
% {hay_statsd_publisher, #{
% key_prefix => <<"bender.">>,
% host => "localhost",
% port => 8125
% }}
]}
]},
{os_mon, [
% for better compatibility with busybox coreutils
{disksup_posix_only, true}
]},
{scoper, [
{storage, scoper_storage_logger}
]},
{snowflake, [
{max_backward_clock_moving, 1000}, % 1 second
{machine_id, {env_match, "HOSTNAME", "(?!-)([0-9]+)$"}}
]},
{prometheus, [
{collectors, [default]}
]}
].

@@ -0,0 +1,51 @@
# -*- mode: yaml -*-
replicaCount: 1
image:
  repository: docker.io/rbkmoney/bender
  tag: b0eea3098f05606fa244cc8ffc1fa20d101d42b7
  pullPolicy: IfNotPresent
configMap:
  data:
    sys.config: |
      {{- readFile "sys.config" | nindent 6 }}
    erl_inetrc: |
      {{- readFile "../vm/erl_inetrc" | nindent 6 }}
    vm.args: |
      {{- tpl (readFile "../vm/erl_vm_args.gotmpl") . | nindent 6 }}
metrics:
  serviceMonitor:
    enabled: true
    namespace: {{ .Release.Namespace }}
    additionalLabels:
      release: prometheus
volumes:
  - name: config-volume
    configMap:
      name: {{ .Release.Name }}
volumeMounts:
  - name: config-volume
    mountPath: /opt/bender/releases/1.0.0/sys.config
    subPath: sys.config
    readOnly: true
  - name: config-volume
    mountPath: /opt/bender/releases/1.0.0/vm.args
    subPath: vm.args
    readOnly: true
  - name: config-volume
    mountPath: /opt/bender/erl_inetrc
    subPath: erl_inetrc
    readOnly: true
ciliumPolicies:
  - filters:
      - port: 8022
        type: TCP
    name: machinegun
    namespace: {{ .Release.Namespace }}

config/binapi/sys.config
@@ -0,0 +1,96 @@
%% -*- mode: erlang -*-
[
{kernel, [
{logger_level, info},
{logger, [
{handler, default, logger_std_h, #{
level => info,
config => #{
type => standard_io,
sync_mode_qlen => 2000,
drop_mode_qlen => 2000,
flush_qlen => 3000
},
filters => [{access_log, {fun logger_filters:domain/2, {stop, equal, [cowboy_access_log]}}}],
formatter => {logger_logstash_formatter, #{}}
}},
{handler, access_logger, logger_std_h, #{
level => info,
config => #{
type => standard_io,
sync_mode_qlen => 2000,
drop_mode_qlen => 2000,
flush_qlen => 3000
},
filters => [{access_log, {fun logger_filters:domain/2, {stop, not_equal, [cowboy_access_log]}}}],
formatter => {logger_logstash_formatter, #{
message_redaction_regex_list => [
%% PAN
"(?<=\\W[2-6][0-9]{5})[0-9]{1,11}(?=[0-9]{2}\\W)",
%% Expiration date
"(?<=\\W)[0-9]{1,2}[\\s.,-/]([0-9]{2}|2[0-9]{3})(?=\\W)",
%% CVV / CVV2 / CSC
"(?<=\\W)[0-9]{3,4}(?=\\W)"
]
}}
}}
]}
]},
{scoper, [
{storage, scoper_storage_logger}
]},
{binapi, [
{ip, "::"},
{port, 8080},
{service_type, real},
{access_conf, #{
jwt => #{
signee => binapi,
keyset => #{
keycloak => {pem_file, "/var/lib/binapi/keys/keycloak/keycloak.pubkey.pem"}
}
}
}},
{oops_bodies, #{
500 => "/var/lib/binapi/oops-bodies/oopsBody1",
501 => "/var/lib/binapi/oops-bodies/oopsBody1",
502 => "/var/lib/binapi/oops-bodies/oopsBody1",
503 => "/var/lib/binapi/oops-bodies/oopsBody2",
504 => "/var/lib/binapi/oops-bodies/oopsBody2"
}},
{health_check, #{
disk => {erl_health, disk, ["/", 99]},
memory => {erl_health, cg_memory, [70]},
service => {erl_health, service, [<<"binapi">>]}
}},
{max_request_deadline, 60000} % milliseconds
]},
{binapi_woody_client, [
{service_urls, #{
binbase => "http://binbase:8022/v1/binbase"
}},
{service_deadlines, #{
merchant_stat => 30000, % milliseconds
reporting => 30000, % milliseconds
analytics => 30000, % milliseconds
party_shop => 10000 % milliseconds
}}
]},
{how_are_you, [
{metrics_publishers, []}
]},
{os_mon, [
{disksup_posix_only, true}
]},
{snowflake, [{machine_id, hostname_hash}]},
{prometheus, [
{collectors, [default]}
]}
].

@@ -0,0 +1,86 @@
# -*- mode: yaml -*-
replicaCount: 1
image:
  repository: docker.io/rbkmoney/binapi
  tag: bc5d6fd206c740a3075fd33228561928763d0995
  pullPolicy: IfNotPresent
configMap:
  data:
    sys.config: |
      {{- readFile "sys.config" | nindent 6 }}
    erl_inetrc: |
      {{- readFile "../vm/erl_inetrc" | nindent 6 }}
    fetchKeycloakPubkey: |
      {{- readFile "../api-common/fetch-keycloak-pubkey.sh" | nindent 6 }}
    oopsBody1: |
      {{- readFile "../api-common/oops-bodies/sad-kitty1" | nindent 6 }}
    oopsBody2: |
      {{- readFile "../api-common/oops-bodies/sad-kitty2" | nindent 6 }}
    vm.args: |
      {{- tpl (readFile "../vm/erl_vm_args.gotmpl") . | nindent 6 }}
apiInitContainers:
  enabled: true
volumeMounts:
  - name: config-volume
    mountPath: /opt/binapi/releases/0.1.0/sys.config
    subPath: sys.config
    readOnly: true
  - name: config-volume
    mountPath: /opt/binapi/releases/0.1.0/vm.args
    subPath: vm.args
    readOnly: true
  - name: config-volume
    mountPath: /opt/binapi/erl_inetrc
    subPath: erl_inetrc
    readOnly: true
  - name: config-volume
    mountPath: /var/lib/binapi/oops-bodies/oopsBody1
    subPath: oopsBody1
    readOnly: true
  - name: config-volume
    mountPath: /var/lib/binapi/oops-bodies/oopsBody2
    subPath: oopsBody2
    readOnly: true
  - name: keycloak-pubkey
    mountPath: /var/lib/binapi/keys/keycloak
    readOnly: true
volumes:
  - name: config-volume
    configMap:
      name: {{ .Release.Name }}
      defaultMode: 0755
  - name: keycloak-pubkey
    emptyDir: {}
service:
  type: ClusterIP
  ports:
    - name: api
      port: 8080
metrics:
  serviceMonitor:
    enabled: true
    namespace: {{ .Release.Namespace }}
    additionalLabels:
      release: prometheus
ingress:
  enabled: true
  hosts:
    - host: api.{{ .Values.services.ingress.rootDomain | default "rbk.dev" }}
      paths:
        - /binbase/v1
  {{- if .Values.services.ingress.tls.enabled }}
  tls:
    - secretName: {{ .Values.services.ingress.tls.secretName }}
      hosts:
        - api.{{ .Values.services.ingress.rootDomain | default "rbk.dev" }}
  {{- end }}
  servicePort: 8080

@@ -0,0 +1,19 @@
#!/bin/sh
set -ue
onExit() {
pg_ctl -D /var/lib/postgresql/9.6/data stop -w
}
trap onExit EXIT
pg_ctl -D /var/lib/postgresql/9.6/data start -w
java \
"-XX:OnOutOfMemoryError=kill %p" -XX:+HeapDumpOnOutOfMemoryError \
-jar \
/opt/binbase/binbase.jar \
--management.security.enabled=false \
--spring.batch.job.enabled=false \
--client.cds.url=http://cds:8022/v2/storage \
--spring.flyway.enabled=false \
--spring.batch.initialize-schema=never \
"$@"

@@ -0,0 +1,46 @@
# -*- mode: yaml -*-
replicaCount: 1
image:
repository: docker.io/rbkmoney/binbase-test-data
tag: 53e611d5881405f796f59abef843bcc8178a1343
pullPolicy: IfNotPresent
runopts:
command : ["/opt/binbase/entrypoint.sh"]
configMap:
data:
entrypoint.sh: |
{{- readFile "entrypoint.sh" | nindent 6 }}
volumeMounts:
- name: config-volume
mountPath: /opt/binbase/entrypoint.sh
subPath: entrypoint.sh
readOnly: true
volumes:
- name: config-volume
configMap:
name: {{ .Release.Name }}
defaultMode: 0755
livenessProbe:
httpGet:
path: /actuator/health
port: api
initialDelaySeconds: 30
timeoutSeconds: 3
readinessProbe:
httpGet:
path: /actuator/health
port: api
resources:
requests:
cpu: 100m
memory: 512Mi

@@ -0,0 +1,109 @@
[
{kernel, [
{logger_level, info},
{logger, [
{handler, default, logger_std_h, #{
level => debug,
config => #{
type => standard_io,
sync_mode_qlen => 2000,
drop_mode_qlen => 2000,
flush_qlen => 3000
},
filters => [{access_log, {fun logger_filters:domain/2, {stop, equal, [cowboy_access_log]}}}],
formatter => {logger_logstash_formatter, #{
message_redaction_regex_list => [
%% PAN
"(?<=\\W[2-6][0-9]{5})[0-9]{1,11}(?=[0-9]{2}\\W)",
%% Expiration date
"(?<=\\W)[0-9]{1,2}[\\s.,-/]([0-9]{2}|2[0-9]{3})(?=\\W)",
%% CVV / CVV2 / CSC
"(?<=\\W)[0-9]{3,4}(?=\\W)"
]
}}
}},
{handler, access_logger, logger_std_h, #{
level => info,
config => #{
type => standard_io,
sync_mode_qlen => 2000,
drop_mode_qlen => 2000,
flush_qlen => 3000
},
filters => [{access_log, {fun logger_filters:domain/2, {stop, not_equal, [cowboy_access_log]}}}],
formatter => {logger_logstash_formatter, #{
message_redaction_regex_list => [
%% PAN
"(?<=\\W[2-6][0-9]{5})[0-9]{1,11}(?=[0-9]{2}\\W)",
%% Expiration date
"(?<=\\W)[0-9]{1,2}[\\s.,-/]([0-9]{2}|2[0-9]{3})(?=\\W)",
%% CVV / CVV2 / CSC
"(?<=\\W)[0-9]{3,4}(?=\\W)"
]
}}
}}
]}
]},
{scoper, [
{storage, scoper_storage_logger}
]},
{capi_pcidss, [
{ip , "::" },
{port , 8080 },
{service_type , real },
{access_conf, #{
jwt => #{
keyset => #{
keycloak => {pem_file, "/var/lib/capi/keys/keycloak/keycloak.pubkey.pem"},
capi => {pem_file, "/var/lib/capi/keys/capi.privkey.pem"}
}
},
access => #{
service_name => <<"common-api">>,
resource_hierarchy => #{
payment_resources => #{}
}
}
}},
{oops_bodies, #{
500 => "/var/lib/capi/oops-bodies/oopsBody1",
501 => "/var/lib/capi/oops-bodies/oopsBody1",
502 => "/var/lib/capi/oops-bodies/oopsBody1",
503 => "/var/lib/capi/oops-bodies/oopsBody2",
504 => "/var/lib/capi/oops-bodies/oopsBody2"
}},
{health_checkers, [
{erl_health, disk , ["/", 99]},
{erl_health, cg_memory, [70]},
{erl_health, service , [<<"capi-pcidss-v1">>]}
]},
{lechiffre_opts, #{
encryption_source => {json, {file, <<"/var/lib/capi/keys/token_encryption_key1.jwk">>}}
}},
{validation, #{
%% By default now = current datetime.
now => { {2020, 2, 1}, {0, 0, 0} }
}}
]},
{capi_woody_client, [
{service_urls, #{
cds_storage => "http://cds:8022/v2/storage",
binbase => "http://binbase:8022/v1/binbase",
bender => "http://bender:8022/v1/bender"
}}
]},
{how_are_you, [{metrics_publishers, []}]},
{os_mon, [
{disksup_posix_only, true}
]},
{prometheus, [
{collectors, [default]}
]}
].
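The `message_redaction_regex_list` patterns above use PCRE lookbehind/lookahead so that only the middle digits of a PAN are masked while the BIN and last two digits stay visible. One way to sanity-check the pattern outside the Erlang release is to drop the Erlang string escaping and run it through any compatible regex engine; a minimal Python sketch (the sample log line is made up):

```python
import re

# PAN pattern from the sys.config above, with Erlang string escapes
# removed: match the middle digits of a card number that starts with
# a [2-6] digit, keeping the first six and last two digits intact.
PAN = re.compile(r"(?<=\W[2-6][0-9]{5})[0-9]{1,11}(?=[0-9]{2}\W)")

line = 'card "4242424242424242" rejected'
print(PAN.sub("***", line))  # card "424242***42" rejected
```

The lookbehind is fixed-width (seven characters), which is why it also works in engines that reject variable-width lookbehind.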

@@ -0,0 +1,121 @@
# -*- mode: yaml -*-
replicaCount: 1
image:
repository: docker.io/rbkmoney/capi_pcidss-v1
tag: 3007bbf74504d9f9c709d5ace37cbcfce85c0f4e
pullPolicy: IfNotPresent
configMap:
data:
sys.config: |
{{- readFile "sys.config" | nindent 6 }}
erl_inetrc: |
{{- readFile "../vm/erl_inetrc" | nindent 6 }}
fetchKeycloakPubkey: |
{{- readFile "../api-common/fetch-keycloak-pubkey.sh" | nindent 6 }}
oopsBody1: |
{{- readFile "../api-common/oops-bodies/sad-kitty1" | nindent 6 }}
oopsBody2: |
{{- readFile "../api-common/oops-bodies/sad-kitty2" | nindent 6 }}
vm.args: |
{{- tpl (readFile "../vm/erl_vm_args.gotmpl") . | nindent 6 }}
secret:
data:
token_encryption_key1.jwk: |
{{- readFile "../api-common/keys/token-encryption-keys/1.jwk" | nindent 6 }}
capi.privkey.pem: |
{{- readFile "../api-common/keys/capi.privkey.pem" | nindent 6 }}
apiInitContainers:
enabled: true
volumeMounts:
- name: config-volume
mountPath: /opt/capi_pcidss/releases/0.1.0/sys.config
subPath: sys.config
readOnly: true
- name: config-volume
mountPath: /opt/capi_pcidss/releases/0.1.0/vm.args
subPath: vm.args
readOnly: true
- name: config-volume
mountPath: /opt/capi_pcidss/erl_inetrc
subPath: erl_inetrc
readOnly: true
- name: config-volume
mountPath: /var/lib/capi/oops-bodies/oopsBody1
subPath: oopsBody1
readOnly: true
- name: config-volume
mountPath: /var/lib/capi/oops-bodies/oopsBody2
subPath: oopsBody2
readOnly: true
- name: secret
mountPath: /var/lib/capi/keys
readOnly: true
- name: keycloak-pubkey
mountPath: /var/lib/capi/keys/keycloak
readOnly: true
volumes:
- name: config-volume
configMap:
name: {{ .Release.Name }}
defaultMode: 0755
- name: secret
secret:
secretName: {{ .Release.Name }}
- name: keycloak-pubkey
emptyDir: {}
service:
type: ClusterIP
ports:
- name: api
port: 8080
metrics:
serviceMonitor:
enabled: true
namespace: {{ .Release.Namespace }}
additionalLabels:
release: prometheus
ingress:
enabled: true
hosts:
- host: api.{{ .Values.services.ingress.rootDomain | default "rbk.dev" }}
paths:
- /v1/processing/payment-resources
{{- if .Values.services.ingress.tls.enabled }}
tls:
- secretName: {{ .Values.services.ingress.tls.secretName }}
hosts:
- api.{{ .Values.services.ingress.rootDomain | default "rbk.dev" }}
{{- end }}
servicePort: 8080
ciliumPolicies:
- filters:
- port: 8080
type: TCP
name: keycloak
namespace: {{ .Release.Namespace }}
- filters:
- port: 8022
type: TCP
name: binbase
namespace: {{ .Release.Namespace }}
- filters:
- port: 8022
type: TCP
name: bender
namespace: {{ .Release.Namespace }}
- filters:
- port: 8022
type: TCP
name: cds
namespace: {{ .Release.Namespace }}

@@ -0,0 +1,143 @@
[
{kernel, [
{logger_level, info},
{logger, [
{handler, default, logger_std_h, #{
level => debug,
config => #{
type => standard_io,
sync_mode_qlen => 2000,
drop_mode_qlen => 2000,
flush_qlen => 3000
},
filters => [{access_log, {fun logger_filters:domain/2, {stop, equal, [cowboy_access_log]}}}],
formatter => {logger_logstash_formatter, #{
message_redaction_regex_list => [
%% PAN
"(?<=\\W[2-6][0-9]{5})[0-9]{1,11}(?=[0-9]{2}\\W)",
%% Expiration date
"(?<=\\W)[0-9]{1,2}[\\s.,-/]([0-9]{2}|2[0-9]{3})(?=\\W)",
%% CVV / CVV2 / CSC
"(?<=\\W)[0-9]{3,4}(?=\\W)"
]
}}
}},
{handler, access_logger, logger_std_h, #{
level => info,
config => #{
type => standard_io,
sync_mode_qlen => 2000,
drop_mode_qlen => 2000,
flush_qlen => 3000
},
filters => [{access_log, {fun logger_filters:domain/2, {stop, not_equal, [cowboy_access_log]}}}],
formatter => {logger_logstash_formatter, #{
message_redaction_regex_list => [
%% PAN
"(?<=\\W[2-6][0-9]{5})[0-9]{1,11}(?=[0-9]{2}\\W)",
%% Expiration date
"(?<=\\W)[0-9]{1,2}[\\s.,-/]([0-9]{2}|2[0-9]{3})(?=\\W)",
%% CVV / CVV2 / CSC
"(?<=\\W)[0-9]{3,4}(?=\\W)"
]
}}
}}
]}
]},
{scoper, [
{storage, scoper_storage_logger}
]},
{capi_pcidss, [
{ip , "::" },
{port , 8080 },
{service_type , real },
{access_conf, #{
jwt => #{
keyset => #{
keycloak => {pem_file, "/var/lib/capi/keys/keycloak/keycloak.pubkey.pem"},
capi => {pem_file, "/var/lib/capi/keys/capi.privkey.pem"}
}
},
access => #{
service_name => <<"common-api">>,
resource_hierarchy => #{
payment_resources => #{}
}
}
}},
{oops_bodies, #{
500 => "/var/lib/capi/oops-bodies/oopsBody1",
501 => "/var/lib/capi/oops-bodies/oopsBody1",
502 => "/var/lib/capi/oops-bodies/oopsBody1",
503 => "/var/lib/capi/oops-bodies/oopsBody2",
504 => "/var/lib/capi/oops-bodies/oopsBody2"
}},
{health_checkers, [
{erl_health, disk , ["/", 99]},
{erl_health, cg_memory, [70]},
{erl_health, service , [<<"capi-pcidss-v2">>]}
]},
{max_request_deadline, 60000}, % milliseconds
{lechiffre_opts, #{
encryption_source => {json, {file, <<"/var/lib/capi/keys/token_encryption_key1.jwk">>}}
}},
{validation, #{
%% By default now = current datetime.
now => { {2020, 2, 1}, {0, 0, 0} }
}}
]},
{capi_woody_client, [
{services, #{
cds_storage => #{
url => "http://cds:8022/v2/storage",
transport_opts => #{
pool => cds_storage,
timeout => 1000,
max_connections => 1
}
},
tds_storage => #{
url => "http://cds:8022/v1/token_storage",
transport_opts => #{
pool => tds_storage,
timeout => 1000
}
},
binbase => #{
url => "http://binbase:8022/v1/binbase",
transport_opts => #{
pool => binbase,
timeout => 1000,
max_connections => 1
}
},
bender => #{
url => "http://bender:8022/v1/bender",
transport_opts => #{
pool => bender,
timeout => 1000,
max_connections => 1
}
}
}}
]},
{hackney, [
{mod_metrics, woody_client_metrics}
]},
{how_are_you, [
{metrics_publishers, []}
]},
{os_mon, [
{disksup_posix_only, true}
]},
{prometheus, [
{collectors, [default]}
]}
].

@@ -0,0 +1,120 @@
# -*- mode: yaml -*-
replicaCount: 1
image:
repository: docker.io/rbkmoney/capi_pcidss-v2
tag: 54dde2dd6a7ce75437be334ee3adfcfb9b590d19
pullPolicy: IfNotPresent
configMap:
data:
sys.config: |
{{- readFile "sys.config" | nindent 6 }}
erl_inetrc: |
{{- readFile "../vm/erl_inetrc" | nindent 6 }}
fetchKeycloakPubkey: |
{{- readFile "../api-common/fetch-keycloak-pubkey.sh" | nindent 6 }}
oopsBody1: |
{{- readFile "../api-common/oops-bodies/sad-kitty1" | nindent 6 }}
oopsBody2: |
{{- readFile "../api-common/oops-bodies/sad-kitty2" | nindent 6 }}
vm.args: |
{{- tpl (readFile "../vm/erl_vm_args.gotmpl") . | nindent 6 }}
secret:
data:
token_encryption_key1.jwk: |
{{- readFile "../api-common/keys/token-encryption-keys/1.jwk" | nindent 6 }}
capi.privkey.pem: |
{{- readFile "../api-common/keys/capi.privkey.pem" | nindent 6 }}
apiInitContainers:
enabled: true
volumeMounts:
- name: config-volume
mountPath: /opt/capi_pcidss/releases/0.1.0/sys.config
subPath: sys.config
readOnly: true
- name: config-volume
mountPath: /opt/capi_pcidss/releases/0.1.0/vm.args
subPath: vm.args
readOnly: true
- name: config-volume
mountPath: /opt/capi_pcidss/erl_inetrc
subPath: erl_inetrc
readOnly: true
- name: config-volume
mountPath: /var/lib/capi/oops-bodies/oopsBody1
subPath: oopsBody1
readOnly: true
- name: config-volume
mountPath: /var/lib/capi/oops-bodies/oopsBody2
subPath: oopsBody2
readOnly: true
- name: secret
mountPath: /var/lib/capi/keys
readOnly: true
- name: keycloak-pubkey
mountPath: /var/lib/capi/keys/keycloak
readOnly: true
volumes:
- name: config-volume
configMap:
name: {{ .Release.Name }}
defaultMode: 0755
- name: secret
secret:
secretName: {{ .Release.Name }}
- name: keycloak-pubkey
emptyDir: {}
service:
type: ClusterIP
ports:
- name: api
port: 8080
metrics:
serviceMonitor:
enabled: true
namespace: {{ .Release.Namespace }}
additionalLabels:
release: prometheus
ingress:
enabled: true
hosts:
- host: api.{{ .Values.services.ingress.rootDomain | default "rbk.dev" }}
paths:
- /v2/processing/payment-resources
{{- if .Values.services.ingress.tls.enabled }}
tls:
- secretName: {{ .Values.services.ingress.tls.secretName }}
hosts:
- api.{{ .Values.services.ingress.rootDomain | default "rbk.dev" }}
{{- end }}
servicePort: 8080
ciliumPolicies:
- filters:
- port: 8080
type: TCP
  name: keycloak
  namespace: {{ .Release.Namespace }}
- filters:
- port: 8022
type: TCP
name: binbase
namespace: {{ .Release.Namespace }}
- filters:
- port: 8022
type: TCP
name: bender
namespace: {{ .Release.Namespace }}
- filters:
- port: 8022
type: TCP
name: cds
namespace: {{ .Release.Namespace }}

config/capi-v1/sys.config Normal file
@@ -0,0 +1,133 @@
%% -*- mode: erlang -*-
[
{kernel, [
{logger_level, info},
{logger, [
{handler, default, logger_std_h, #{
level => debug,
config => #{
type => standard_io,
sync_mode_qlen => 2000,
drop_mode_qlen => 2000,
flush_qlen => 3000
},
filters => [{access_log, {fun logger_filters:domain/2, {stop, equal, [cowboy_access_log]}}}],
formatter => {logger_logstash_formatter, #{}}
}},
{handler, access_logger, logger_std_h, #{
level => info,
config => #{
type => standard_io,
sync_mode_qlen => 2000,
drop_mode_qlen => 2000,
flush_qlen => 3000
},
filters => [{access_log, {fun logger_filters:domain/2, {stop, not_equal, [cowboy_access_log]}}}],
formatter => {logger_logstash_formatter, #{}}
}}
]}
]},
{scoper, [
{storage, scoper_storage_logger}
]},
{capi, [
{ip , "::" },
{port , 8080 },
{service_type , real },
{authorizers, #{
jwt => #{
signee => capi,
keyset => #{
keycloak => {pem_file, "/var/lib/capi/keys/keycloak/keycloak.pubkey.pem"},
capi => {pem_file, "/var/lib/capi/keys/capi.privkey.pem"}
}
}
}},
{api_key_blacklist, #{
update_interval => 50000, % milliseconds
blacklisted_keys_dir => "/opt/capi"
}},
{oops_bodies, #{
500 => "/var/lib/capi/oops-bodies/oopsBody1",
501 => "/var/lib/capi/oops-bodies/oopsBody1",
502 => "/var/lib/capi/oops-bodies/oopsBody1",
503 => "/var/lib/capi/oops-bodies/oopsBody2",
504 => "/var/lib/capi/oops-bodies/oopsBody2"
}},
{swagger_handler_opts, #{
validation_opts => #{
schema => #{
response => mild
}
}
}},
{health_check, #{
disk => {erl_health, disk , ["/", 99]},
memory => {erl_health, cg_memory, [70]},
service => {erl_health, service , [<<"capi-v1">>]}
}},
{reporter_url_lifetime, 300}, % seconds
{lechiffre_opts, #{
decryption_sources => [
{json, {file, <<"/var/lib/capi/keys/token_encryption_key1.jwk">>}}
]
}}
]},
{capi_woody_client, [
{service_urls, #{
bender => "http://bender:8022/v1/bender",
invoicing => "http://hellgate:8022/v1/processing/invoicing",
invoice_templating => "http://hellgate:8022/v1/processing/invoice_templating",
merchant_stat => "http://magista:8022/stat",
party_management => "http://hellgate:8022/v1/processing/partymgmt",
geo_ip_service => "http://columbus:8022/repo",
accounter => "http://shumway:8022/accounter",
file_storage => "http://file_storage:8022/file_storage",
reporting => "http://reporter:8022/reports/new-proto",
webhook_manager => "http://hooker:8022/hook",
customer_management => "http://hellgate:8022/v1/processing/customer_management"
}},
{service_deadlines, #{
bender => 30000,
invoicing => 30000, % milliseconds
party_management => 30000,
customer_management => 30000
}}
]},
{dmt_client, [
{cache_update_interval, 30000}, % milliseconds
{cache_server_call_timeout, 30000}, % milliseconds
{max_cache_size, #{
elements => 5,
memory => 52428800 % 50Mb
}},
{service_urls, #{
'Repository' => <<"http://dominant:8022/v1/domain/repository">>,
'RepositoryClient' => <<"http://dominant:8022/v1/domain/repository_client">>
}}
]},
{how_are_you, [
{metrics_publishers, [
% {hay_statsd_publisher, #{
% key_prefix => <<"capi-v1.">>,
% host => "localhost",
% port => 8125
% }}
]}
]},
{os_mon, [
{disksup_posix_only, true}
]},
{snowflake, [{machine_id, hostname_hash}]},
{prometheus, [
{collectors, [default]}
]}
].

@@ -0,0 +1,126 @@
# -*- mode: yaml -*-
replicaCount: 1
image:
repository: docker.io/rbkmoney/capi-v1
tag: b2b15a5b620cd7061f9e81fa44955e824ffdf806
pullPolicy: IfNotPresent
configMap:
data:
sys.config: |
{{- readFile "sys.config" | nindent 6 }}
erl_inetrc: |
{{- readFile "../vm/erl_inetrc" | nindent 6 }}
fetchKeycloakPubkey: |
{{- readFile "../api-common/fetch-keycloak-pubkey.sh" | nindent 6 }}
oopsBody1: |
{{- readFile "../api-common/oops-bodies/sad-kitty1" | nindent 6 }}
oopsBody2: |
{{- readFile "../api-common/oops-bodies/sad-kitty2" | nindent 6 }}
vm.args: |
{{- tpl (readFile "../vm/erl_vm_args.gotmpl") . | nindent 6 }}
secret:
data:
token_encryption_key1.jwk: |
{{- readFile "../api-common/keys/token-encryption-keys/1.jwk" | nindent 6 }}
capi.privkey.pem: |
{{- readFile "../api-common/keys/capi.privkey.pem" | nindent 6 }}
apiInitContainers:
enabled: true
volumeMounts:
- name: config-volume
mountPath: /opt/capi/releases/0.1.0/sys.config
subPath: sys.config
readOnly: true
- name: config-volume
mountPath: /opt/capi/releases/0.1.0/vm.args
subPath: vm.args
readOnly: true
- name: config-volume
mountPath: /opt/capi/erl_inetrc
subPath: erl_inetrc
readOnly: true
- name: config-volume
mountPath: /var/lib/capi/oops-bodies/oopsBody1
subPath: oopsBody1
readOnly: true
- name: config-volume
mountPath: /var/lib/capi/oops-bodies/oopsBody2
subPath: oopsBody2
readOnly: true
- name: secret
mountPath: /var/lib/capi/keys
readOnly: true
- name: keycloak-pubkey
mountPath: /var/lib/capi/keys/keycloak
readOnly: true
volumes:
- name: config-volume
configMap:
name: {{ .Release.Name }}
defaultMode: 0755
- name: secret
secret:
secretName: {{ .Release.Name }}
- name: keycloak-pubkey
emptyDir: {}
service:
type: ClusterIP
ports:
- name: api
port: 8080
metrics:
serviceMonitor:
enabled: true
namespace: {{ .Release.Namespace }}
additionalLabels:
release: prometheus
ingress:
enabled: true
hosts:
- host: api.{{ .Values.services.ingress.rootDomain | default "rbk.dev" }}
paths:
- /v1
{{- if .Values.services.ingress.tls.enabled }}
tls:
- secretName: {{ .Values.services.ingress.tls.secretName }}
hosts:
- api.{{ .Values.services.ingress.rootDomain | default "rbk.dev" }}
{{- end }}
servicePort: 8080
ciliumPolicies:
- filters:
- port: 8080
type: TCP
name: keycloak
namespace: {{ .Release.Namespace }}
- filters:
- port: 8022
type: TCP
name: bender
namespace: {{ .Release.Namespace }}
- filters:
- port: 8022
type: TCP
name: shumway
namespace: {{ .Release.Namespace }}
- filters:
- port: 8022
type: TCP
name: dominant
namespace: {{ .Release.Namespace }}
- filters:
- port: 8022
type: TCP
name: hellgate
namespace: {{ .Release.Namespace }}

config/capi-v2/sys.config Normal file
@@ -0,0 +1,219 @@
[
{kernel, [
{logger_level, info},
{logger, [
{handler, default, logger_std_h, #{
level => debug,
config => #{
type => standard_io,
sync_mode_qlen => 2000,
drop_mode_qlen => 2000,
flush_qlen => 3000
},
filters => [{access_log, {fun logger_filters:domain/2, {stop, equal, [cowboy_access_log]}}}],
formatter => {logger_logstash_formatter, #{}}
}},
{handler, access_logger, logger_std_h, #{
level => info,
config => #{
type => standard_io,
sync_mode_qlen => 2000,
drop_mode_qlen => 2000,
flush_qlen => 3000
},
filters => [{access_log, {fun logger_filters:domain/2, {stop, not_equal, [cowboy_access_log]}}}],
formatter => {logger_logstash_formatter, #{}}
}}
]}
]},
{scoper, [
{storage, scoper_storage_logger}
]},
{capi, [
{ip , "::" },
{port , 8080 },
{service_type , real },
{access_conf, #{
jwt => #{
signee => capi,
keyset => #{
keycloak => {pem_file, "/var/lib/capi/keys/keycloak/keycloak.pubkey.pem"},
capi => {pem_file, "/var/lib/capi/keys/capi.privkey.pem"}
}
}
}},
{oops_bodies, #{
500 => "/var/lib/capi/oops-bodies/oopsBody1",
501 => "/var/lib/capi/oops-bodies/oopsBody1",
502 => "/var/lib/capi/oops-bodies/oopsBody1",
503 => "/var/lib/capi/oops-bodies/oopsBody2",
504 => "/var/lib/capi/oops-bodies/oopsBody2"
}},
{api_key_blacklist, #{
update_interval => 50000, % milliseconds
blacklisted_keys_dir => "/opt/capi"
}},
{swagger_handler_opts, #{
validation_opts => #{
schema => #{
response => mild
}
}
}},
{health_check, #{
disk => {erl_health, disk , ["/", 99]},
memory => {erl_health, cg_memory, [70]},
service => {erl_health, service , [<<"capi-v2">>]}
}},
{max_request_deadline, 60000}, % milliseconds
{reporter_url_lifetime, 300}, % seconds
{default_processing_deadline, <<"30m">>},
{lechiffre_opts, #{
decryption_sources => [
{json, {file, <<"/var/lib/capi/keys/token_encryption_key1.jwk">>}}
]
}}
]},
{capi_woody_client, [
{services, #{
invoicing => #{
url => "http://hellgate:8022/v1/processing/invoicing",
transport_opts => #{
pool => invoicing
%timeout => {{ woody_client_keep_alive }},
%max_connections => {{ salt['pillar.get']('wetkitty:macroservice:limits:concurrent-payments') }}
}
},
invoice_templating => #{
url => "http://hellgate:8022/v1/processing/invoice_templating",
transport_opts => #{
pool => invoice_templating
%timeout => {{ woody_client_keep_alive }}
}
},
merchant_stat => #{
url => "http://magista:8022/stat",
transport_opts => #{
pool => merchant_stat
%timeout => {{ woody_client_keep_alive }}
}
},
party_management => #{
url => "http://hellgate:8022/v1/processing/partymgmt",
transport_opts => #{
pool => party_management
%timeout => {{ woody_client_keep_alive }}
}
},
geo_ip_service => #{
url => "http://columbus:8022/repo",
transport_opts => #{
pool => geo_ip_service
%timeout => {{ woody_client_keep_alive }}
}
},
accounter => #{
url => "http://shumway:8022/accounter",
transport_opts => #{
pool => accounter
%timeout => {{ woody_client_keep_alive }},
%max_connections => {{ salt['pillar.get']('wetkitty:macroservice:limits:concurrent-payments') }}
}
},
file_storage => #{
url => "http://file_storage:8022/file_storage",
transport_opts => #{
pool => file_storage
%timeout => {{ woody_client_keep_alive }}
}
},
reporting => #{
url => "http://reporter:8022/reports/new-proto",
transport_opts => #{
pool => reporting
%timeout => {{ woody_client_keep_alive }}
}
},
payouts => #{
url => "http://payouter:8022/payout/management",
transport_opts => #{
pool => payouts
%timeout => {{ woody_client_keep_alive }}
}
},
webhook_manager => #{
url => "http://hooker:8022/hook",
transport_opts => #{
pool => webhook_manager
%timeout => {{ woody_client_keep_alive }}
}
},
customer_management => #{
url => "http://hellgate:8022/v1/processing/customer_management",
transport_opts => #{
pool => customer_management
%timeout => {{ woody_client_keep_alive }}
}
}
}},
{service_deadlines, #{
bender => 30000,
invoicing => 30000, % milliseconds
party_management => 30000,
customer_management => 30000
}}
]},
{bender_client, [
{services, #{
'Bender' => <<"http://bender:8022/v1/bender">>,
'Generator' => <<"http://bender:8022/v1/generator">>
}},
{deadline, 60000}
]},
{dmt_client, [
{cache_update_interval, 30000}, % milliseconds
{cache_server_call_timeout, 30000}, % milliseconds
{max_cache_size, #{
elements => 5,
memory => 52428800 % 50Mb
}},
{service_urls, #{
'Repository' => <<"http://dominant:8022/v1/domain/repository">>,
'RepositoryClient' => <<"http://dominant:8022/v1/domain/repository_client">>
}}
]},
{how_are_you, [
{metrics_handlers, [
hay_vm_handler,
hay_cgroup_handler,
woody_api_hay
]},
{metrics_publishers, [
%{hay_statsd_publisher, #{
% key_prefix => <<"{{ service_name }}.">>,
% host => "{{ salt['pillar.get']('wetkitty:statsd:host') }}",
% port => {{ salt['pillar.get']('wetkitty:statsd:port') }}
%}}
]}
]},
{hackney, [
{mod_metrics, woody_client_metrics}
]},
{os_mon, [
{disksup_posix_only, true}
]},
{snowflake, [{machine_id, hostname_hash}]},
{prometheus, [
{collectors, [default]}
]}
].

@@ -0,0 +1,126 @@
# -*- mode: yaml -*-
replicaCount: 1
image:
repository: docker.io/rbkmoney/capi-v2
tag: 10510c2148fb3aaf1bf8893f8ddd2b4de900e557
pullPolicy: IfNotPresent
configMap:
data:
sys.config: |
{{- readFile "sys.config" | nindent 6 }}
erl_inetrc: |
{{- readFile "../vm/erl_inetrc" | nindent 6 }}
fetchKeycloakPubkey: |
{{- readFile "../api-common/fetch-keycloak-pubkey.sh" | nindent 6 }}
oopsBody1: |
{{- readFile "../api-common/oops-bodies/sad-kitty1" | nindent 6 }}
oopsBody2: |
{{- readFile "../api-common/oops-bodies/sad-kitty2" | nindent 6 }}
vm.args: |
{{- tpl (readFile "../vm/erl_vm_args.gotmpl") . | nindent 6 }}
secret:
data:
token_encryption_key1.jwk: |
{{- readFile "../api-common/keys/token-encryption-keys/1.jwk" | nindent 6 }}
capi.privkey.pem: |
{{- readFile "../api-common/keys/capi.privkey.pem" | nindent 6 }}
apiInitContainers:
enabled: true
volumeMounts:
- name: config-volume
mountPath: /opt/capi/releases/0.1.0/sys.config
subPath: sys.config
readOnly: true
- name: config-volume
mountPath: /opt/capi/releases/0.1.0/vm.args
subPath: vm.args
readOnly: true
- name: config-volume
mountPath: /opt/capi/erl_inetrc
subPath: erl_inetrc
readOnly: true
- name: config-volume
mountPath: /var/lib/capi/oops-bodies/oopsBody1
subPath: oopsBody1
readOnly: true
- name: config-volume
mountPath: /var/lib/capi/oops-bodies/oopsBody2
subPath: oopsBody2
readOnly: true
- name: secret
mountPath: /var/lib/capi/keys
readOnly: true
- name: keycloak-pubkey
mountPath: /var/lib/capi/keys/keycloak
readOnly: true
volumes:
- name: config-volume
configMap:
name: {{ .Release.Name }}
defaultMode: 0755
- name: secret
secret:
secretName: {{ .Release.Name }}
- name: keycloak-pubkey
emptyDir: {}
service:
type: ClusterIP
ports:
- name: api
port: 8080
metrics:
serviceMonitor:
enabled: true
namespace: {{ .Release.Namespace }}
additionalLabels:
release: prometheus
ingress:
enabled: true
hosts:
- host: api.{{ .Values.services.ingress.rootDomain | default "rbk.dev" }}
paths:
- /v2
{{- if .Values.services.ingress.tls.enabled }}
tls:
- secretName: {{ .Values.services.ingress.tls.secretName }}
hosts:
- api.{{ .Values.services.ingress.rootDomain | default "rbk.dev" }}
{{- end }}
servicePort: 8080
ciliumPolicies:
- filters:
- port: 8080
type: TCP
name: keycloak
namespace: {{ .Release.Namespace }}
- filters:
- port: 8022
type: TCP
name: bender
namespace: {{ .Release.Namespace }}
- filters:
- port: 8022
type: TCP
name: shumway
namespace: {{ .Release.Namespace }}
- filters:
- port: 8022
type: TCP
name: dominant
namespace: {{ .Release.Namespace }}
- filters:
- port: 8022
type: TCP
name: hellgate
namespace: {{ .Release.Namespace }}

config/cds/ca.crt Normal file
@@ -0,0 +1,44 @@
-----BEGIN CERTIFICATE-----
MIIDnzCCAwGgAwIBAgIBAjAKBggqhkjOPQQDAjCB3zEVMBMGCgmSJomT8ixkARkW
BW1vbmV5MRMwEQYKCZImiZPyLGQBGRYDcmJrMQswCQYDVQQGEwJSVTEbMBkGA1UE
CAwSUnVzc2lhbiBGZWRlcmF0aW9uMQ8wDQYDVQQHDAZNb3Njb3cxEjAQBgNVBAoM
CVJCSyBNb25leTEVMBMGA1UECgwMUGF5bWVudCBMYWJzMSowKAYDVQQDDCFSQksg
TW9uZXkgQ0RFIERldmVsb3BtZW50IFJvb3QgQ0ExHzAdBgkqhkiG9w0BCQEWEGRl
dm9wc0ByYmsubW9uZXkwHhcNMTkwOTA1MDg0MjM0WhcNMjQwOTAzMDg0MjM0WjCB
4jEVMBMGCgmSJomT8ixkARkWBW1vbmV5MRMwEQYKCZImiZPyLGQBGRYDcmJrMQsw
CQYDVQQGEwJSVTEbMBkGA1UECAwSUnVzc2lhbiBGZWRlcmF0aW9uMQ8wDQYDVQQH
DAZNb3Njb3cxEjAQBgNVBAoMCVJCSyBNb25leTEVMBMGA1UECgwMUGF5bWVudCBM
YWJzMS0wKwYDVQQDDCRSQksgTW9uZXkgQ0RFIERldmVsb3BtZW50IFNpZ25pbmcg
Q0ExHzAdBgkqhkiG9w0BCQEWEGRldm9wc0ByYmsubW9uZXkwgZswEAYHKoZIzj0C
AQYFK4EEACMDgYYABAAsqZHI7O964jB0afIpxzkKWCeeaaOSIS6DqH0Hw2H9lOB8
fdlPcBrEM8t+Ubs1FjiwKBXcoL3vtD6IWMmG4Oyt7QHcjItexzRHm0BIIgjSuQJi
Qza1DEJLFElPB4rGtg4SsXf0+inEB8U2miZe2jXToxAtgdKwBWfCNry3L9JkTuns
LKNmMGQwDgYDVR0PAQH/BAQDAgEGMBIGA1UdEwEB/wQIMAYBAf8CAQAwHQYDVR0O
BBYEFNeHpaGZ/0ehRBPixvl+bcaGMPpAMB8GA1UdIwQYMBaAFDg8k/pUJ3gSQDL6
BXcgDto3r4rkMAoGCCqGSM49BAMCA4GLADCBhwJBOLXNDe3nO3EtzTnV5JPLU+jO
KWcgOp6YL+MNP21iSFugNAnPqs0orV8cnP4hCLL/wABD9WjqIzr2xKtmpkFAip0C
QgFhdxbzNBFMw3VhBojg7XB7DpoH7KUBHz/dzgXeCor20ovPycyOxemr25ySk1iy
Pwe0dORE23A8IWoDe6IsGIuyag==
-----END CERTIFICATE-----
-----BEGIN CERTIFICATE-----
MIIDmTCCAvugAwIBAgIBATAKBggqhkjOPQQDAjCB3zEVMBMGCgmSJomT8ixkARkW
BW1vbmV5MRMwEQYKCZImiZPyLGQBGRYDcmJrMQswCQYDVQQGEwJSVTEbMBkGA1UE
CAwSUnVzc2lhbiBGZWRlcmF0aW9uMQ8wDQYDVQQHDAZNb3Njb3cxEjAQBgNVBAoM
CVJCSyBNb25leTEVMBMGA1UECgwMUGF5bWVudCBMYWJzMSowKAYDVQQDDCFSQksg
TW9uZXkgQ0RFIERldmVsb3BtZW50IFJvb3QgQ0ExHzAdBgkqhkiG9w0BCQEWEGRl
dm9wc0ByYmsubW9uZXkwHhcNMTkwOTA1MDg0MjM0WhcNMjkwOTAyMDg0MjM0WjCB
3zEVMBMGCgmSJomT8ixkARkWBW1vbmV5MRMwEQYKCZImiZPyLGQBGRYDcmJrMQsw
CQYDVQQGEwJSVTEbMBkGA1UECAwSUnVzc2lhbiBGZWRlcmF0aW9uMQ8wDQYDVQQH
DAZNb3Njb3cxEjAQBgNVBAoMCVJCSyBNb25leTEVMBMGA1UECgwMUGF5bWVudCBM
YWJzMSowKAYDVQQDDCFSQksgTW9uZXkgQ0RFIERldmVsb3BtZW50IFJvb3QgQ0Ex
HzAdBgkqhkiG9w0BCQEWEGRldm9wc0ByYmsubW9uZXkwgZswEAYHKoZIzj0CAQYF
K4EEACMDgYYABACfG9NGzV34Q3DSF0PfLhEe2od5YgfSxniVpba+O+bRHVOFnp1G
ZOBuJ7WJiK2q9mWG2qSQnEfuSvqoLq4pBYfHbACBjYcLoQRRfaIyvBACHMCdWH5h
TjJ4/Rav0mBgQNsaZ41oFfSyv27vfl92ue8S42l9RnZCkoH6LYM2LP6PeT9JjKNj
MGEwDgYDVR0PAQH/BAQDAgEGMA8GA1UdEwEB/wQFMAMBAf8wHQYDVR0OBBYEFDg8
k/pUJ3gSQDL6BXcgDto3r4rkMB8GA1UdIwQYMBaAFDg8k/pUJ3gSQDL6BXcgDto3
r4rkMAoGCCqGSM49BAMCA4GLADCBhwJBdiaVap0dLI/12coM9Xqa16alUBVzr8QV
DFpzcQ3nm/n1SvoV1lDeyEUocaWgIcAL3db1abbOwJITWyB0NxO7FToCQgGCVDej
jrQ901BCO1b9r3aWo4UlSFR2ZCdPmV7oDFYku+kQ7/6q+kiwNHPolcnI/kk57P30
nXQN6GRWxoK7Pv7i7A==
-----END CERTIFICATE-----

config/cds/client.pem Normal file
@@ -0,0 +1,30 @@
-----BEGIN CERTIFICATE-----
MIIEJjCCA4egAwIBAgIBAzAKBggqhkjOPQQDAjCB4jEVMBMGCgmSJomT8ixkARkW
BW1vbmV5MRMwEQYKCZImiZPyLGQBGRYDcmJrMQswCQYDVQQGEwJSVTEbMBkGA1UE
CAwSUnVzc2lhbiBGZWRlcmF0aW9uMQ8wDQYDVQQHDAZNb3Njb3cxEjAQBgNVBAoM
CVJCSyBNb25leTEVMBMGA1UECgwMUGF5bWVudCBMYWJzMS0wKwYDVQQDDCRSQksg
TW9uZXkgQ0RFIERldmVsb3BtZW50IFNpZ25pbmcgQ0ExHzAdBgkqhkiG9w0BCQEW
EGRldm9wc0ByYmsubW9uZXkwHhcNMjAwOTE1MTY1OTI4WhcNMzAwOTEzMTY1OTI4
WjCBoDEVMBMGCgmSJomT8ixkARkWBW1vbmV5MRMwEQYKCZImiZPyLGQBGRYDcmJr
MRIwEAYDVQQKDAlSQksgTW9uZXkxFTATBgNVBAoMDFBheW1lbnQgTGFiczEPMA0G
A1UEBwwGTW9zY293MQswCQYDVQQGEwJSVTEbMBkGA1UECAwSUnVzc2lhbiBGZWRl
cmF0aW9uMQwwCgYDVQQDDANjZHMwWTATBgcqhkjOPQIBBggqhkjOPQMBBwNCAAS+
p6N340oSXgtKLqZpF49Br/bFco06WIUdZQ2qHS97+/xw9X8f5x6KGOEpunM6H6Ao
ayn/7bYHXEieypImiaUmo4IBbDCCAWgwCQYDVR0TBAIwADALBgNVHQ8EBAMCBeAw
HQYDVR0lBBYwFAYIKwYBBQUHAwIGCCsGAQUFBwMIMB0GA1UdDgQWBBSD5X+5Qx12
XrkxFFcMRsy4h0fx6TCCAQ4GA1UdIwSCAQUwggEBgBTXh6Whmf9HoUQT4sb5fm3G
hjD6QKGB5aSB4jCB3zEVMBMGCgmSJomT8ixkARkWBW1vbmV5MRMwEQYKCZImiZPy
LGQBGRYDcmJrMQswCQYDVQQGEwJSVTEbMBkGA1UECAwSUnVzc2lhbiBGZWRlcmF0
aW9uMQ8wDQYDVQQHDAZNb3Njb3cxEjAQBgNVBAoMCVJCSyBNb25leTEVMBMGA1UE
CgwMUGF5bWVudCBMYWJzMSowKAYDVQQDDCFSQksgTW9uZXkgQ0RFIERldmVsb3Bt
ZW50IFJvb3QgQ0ExHzAdBgkqhkiG9w0BCQEWEGRldm9wc0ByYmsubW9uZXmCAQIw
CgYIKoZIzj0EAwIDgYwAMIGIAkIBnrsa/Wbkue5r+D2nwBHJbDoqjSQK6JVQFLJM
S0QYlnn9ePGqTHurqepvNNoEfyMBN+s5rI08og6O3LOa6l/+DgYCQgCuNCZdFz/U
OROv/gGEQ58oQSOTcKdcCIM0yol3/GDpWSMt2dUAX+MXJ8IJERtOU+dQBQKf5fut
HK2JoC64YeanVw==
-----END CERTIFICATE-----
-----BEGIN EC PRIVATE KEY-----
MHcCAQEEIEGEaBOM7/9Cq1cSfn9kqvB0QWlM0U5lcf0/rde5l0R7oAoGCCqGSM49
AwEHoUQDQgAEvqejd+NKEl4LSi6maRePQa/2xXKNOliFHWUNqh0ve/v8cPV/H+ce
ihjhKbpzOh+gKGsp/+22B1xInsqSJomlJg==
-----END EC PRIVATE KEY-----

config/cds/sys.config Normal file
@@ -0,0 +1,93 @@
[
{cds, [
{ip, "::"},
{port, 8022},
{transport_opts, #{}},
{protocol_opts, #{
request_timeout => 60000
}},
{shutdown_timeout, 0},
{scrypt_opts, {16384, 8, 1}},
{keyring, #{
url => <<"https://kds:8023">>,
ssl_options => [
{server_name_indication, "kds"},
{verify, verify_peer},
{cacertfile, "/var/lib/cds/ca.crt"},
{certfile, "/var/lib/cds/client.pem"}
],
transport_opts => #{
recv_timeout => 10000,
connect_timeout => 1000
},
timeout => 10000
}},
{storage, cds_storage_riak},
{cds_storage_riak, #{
conn_params => #{
host => "riak",
port => 8087,
options => #{
connect_timeout => 1000,
keepalive => true
}
},
pool_params => #{
max_count => 10,
init_count => 10,
cull_interval => {0, min},
pool_timeout => {1, sec}
}
}},
{session_cleaning, #{
interval => 3000,
batch_size => 5000,
session_lifetime => 3600
}},
{recrypting, #{
interval => 3000,
batch_size => 5000
}},
{health_checkers, [
{erl_health, disk , ["/", 99] },
{erl_health, cg_memory, [99] },
{erl_health, service , [<<"cds">>]}
]}
]},
{scoper, [
{storage, scoper_storage_logger}
]},
{kernel, [
{logger_sasl_compatible, false},
{logger_level, debug},
{logger, [
{handler, default, logger_std_h, #{
config => #{
type => standard_io
},
formatter => {logger_logstash_formatter, #{
message_redaction_regex_list => [
"[0-9]{12,19}", %% pan
"[0-9]{2}.[0-9]{2,4}", %% expiration date
"[0-9]{3,4}" %% cvv
]
}}
}}
]}
]},
{os_mon, [
{disksup_posix_only, true}
]},
{how_are_you, [
{metrics_publishers, []}
]},
{prometheus, [
{collectors, [default]}
]}
].
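Unlike the capi configs, cds redacts with much blunter patterns: any 12-to-19-digit run is treated as a PAN, and the expiry pattern's `.` is unescaped, so it matches any separator character rather than only a literal dot. A minimal Python sketch illustrating both (the sample strings are made up):

```python
import re

# Redaction patterns from the cds sys.config above. The unescaped "."
# in the expiry pattern matches any character, e.g. "/" or "-".
PAN_RE = re.compile(r"[0-9]{12,19}")
EXPIRY_RE = re.compile(r"[0-9]{2}.[0-9]{2,4}")

print(PAN_RE.sub("[pan]", "pan=4242424242424242"))     # pan=[pan]
print(EXPIRY_RE.sub("[exp]", "expires 12/2024 soon"))  # expires [exp] soon
```

Being substring matches with no anchors or lookarounds, these patterns over-redact (any long digit run is masked), which is a reasonable trade-off for a card data store.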

@@ -0,0 +1,70 @@
# -*- mode: yaml -*-
replicaCount: 1
image:
repository: docker.io/rbkmoney/cds
tag: c0661c4d5abb85f7728bd0e816760670aa248251
pullPolicy: IfNotPresent
configMap:
data:
sys.config: |
{{- readFile "sys.config" | nindent 6 }}
erl_inetrc: |
{{- readFile "../vm/erl_inetrc" | nindent 6 }}
vm.args: |
{{- tpl (readFile "../vm/erl_vm_args.gotmpl") . | nindent 6 }}
secret:
data:
ca.crt: |
{{- readFile "ca.crt" | nindent 6 }}
client.pem: |
{{- readFile "client.pem" | nindent 6 }}
volumeMounts:
- name: config-volume
mountPath: /opt/cds/releases/0.1.0/sys.config
subPath: sys.config
readOnly: true
- name: config-volume
mountPath: /opt/cds/releases/0.1.0/vm.args
subPath: vm.args
readOnly: true
- name: config-volume
mountPath: /opt/cds/erl_inetrc
subPath: erl_inetrc
readOnly: true
- name: secret
mountPath: /var/lib/cds/
readOnly: true
volumes:
- name: config-volume
configMap:
name: {{ .Release.Name }}
- name: secret
secret:
secretName: {{ .Release.Name }}
metrics:
serviceMonitor:
enabled: true
namespace: {{ .Release.Namespace }}
additionalLabels:
release: prometheus
ciliumPolicies:
- filters:
- port: 8087
type: TCP
name: riak
namespace: {{ .Release.Namespace }}
- filters:
- port: 8022
type: TCP
- port: 8023
type: TCP
name: kds
namespace: {{ .Release.Namespace }}

config/cilium/values.yaml Normal file

File diff suppressed because it is too large


@@ -0,0 +1,15 @@
# -*- mode: yaml -*-
global:
name: "consul"
client:
enabled: false
server:
replicas: 1
extraLabels:
selector.cilium.rbkmoney/release: {{ .Release.Name }}
app.kubernetes.io/instance: {{ .Release.Name }}
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/name: {{ .Release.Name }}


@@ -0,0 +1,7 @@
{
{{- if .Values.services.ingress.tls.enabled }}
"papiEndpoint": "https://iddqd.{{ .Values.services.ingress.rootDomain | default "rbk.dev" }}/papi/v1"
{{- else }}
"papiEndpoint": "http://iddqd.{{ .Values.services.ingress.rootDomain | default "rbk.dev" }}/papi/v1"
{{- end }}
}


@@ -0,0 +1,7 @@
{
"realm": "internal",
"auth-server-url": "https://auth.{{ .Values.services.ingress.rootDomain | default "rbk.dev" }}/auth/",
"ssl-required": "external",
"resource": "control-center",
"public-client": true
}


@@ -0,0 +1,68 @@
# -*- mode: yaml -*-
replicaCount: 1
image:
repository: docker.io/rbkmoney/control-center
tag: 520bccc8dee1dfb23c6ca0fd96f960e3e00750a2
pullPolicy: IfNotPresent
service:
type: ClusterIP
ports:
- name: http
port: 8080
configMap:
data:
appConfig.json: |
{{- tpl (readFile "appConfig.json.gotmpl") . | nindent 6 }}
authConfig.json: |
{{- tpl (readFile "authConfig.json.gotmpl") . | nindent 6 }}
control-center.conf: |
{{- readFile "vhost.conf" | nindent 6 }}
volumeMounts:
- name: config-volume
mountPath: /usr/share/nginx/html/assets/appConfig.json
subPath: appConfig.json
readOnly: true
- name: config-volume
mountPath: /usr/share/nginx/html/assets/authConfig.json
    subPath: authConfig.json
readOnly: true
- name: config-volume
mountPath: /etc/nginx/vhosts.d/control-center.conf
subPath: control-center.conf
readOnly: true
volumes:
- name: config-volume
configMap:
name: {{ .Release.Name }}
livenessProbe:
httpGet:
path: /assets/appConfig.json
port: http
initialDelaySeconds: 30
timeoutSeconds: 3
readinessProbe:
httpGet:
path: /assets/appConfig.json
port: http
initialDelaySeconds: 30
timeoutSeconds: 3
ingress:
enabled: true
hosts:
- host: iddqd.{{ .Values.services.ingress.rootDomain | default "rbk.dev" }}
paths:
- /
{{- if .Values.services.ingress.tls.enabled }}
tls:
- secretName: {{ .Values.services.ingress.tls.secretName }}
hosts:
- iddqd.{{ .Values.services.ingress.rootDomain | default "rbk.dev" }}
{{- end }}
servicePort: 8080


@@ -0,0 +1,20 @@
server {
listen 8080;
listen [::]:8080;
server_name localhost;
location / {
root /usr/share/nginx/html;
index index.html index.htm;
try_files $uri $uri/ /index.html =404;
}
location /v1 {
proxy_pass http://dominant:8022;
}
error_page 500 502 503 504 /50x.html;
location = /50x.html {
root /usr/share/nginx/html;
}
}


@@ -0,0 +1,17 @@
{
"apiEndpoint": "https://api.{{ .Values.services.ingress.rootDomain | default "rbk.dev" }}",
"urlShortenerEndpoint": "https://rbk.mn",
"checkoutEndpoint": "https://checkout.{{ .Values.services.ingress.rootDomain | default "rbk.dev" }}",
"ext": {
"docsEndpoint": "https://rbkmoney.github.io/docs",
"supportEmail": "support@rbkmoney.com",
"paymentsApiSpecEndpoint": "https://developer.rbk.money/api/"
},
"yandexMetrika": {
"id": null,
"clickmap": true,
"trackLinks": true,
"accurateTrackBounce": true,
"webvisor": true
}
}


@@ -0,0 +1,7 @@
{
"realm": "internal",
"auth-server-url": "https://auth.{{ .Values.services.ingress.rootDomain | default "rbk.dev" }}/auth/",
"ssl-required": "external",
"resource": "koffing",
"public-client": true
}


@@ -0,0 +1,68 @@
# -*- mode: yaml -*-
replicaCount: 1
image:
repository: docker.io/rbkmoney/dashboard
tag: 380a2e2464ccec1e624d8972381622fcb3b5789a
pullPolicy: IfNotPresent
service:
type: ClusterIP
ports:
- name: http
port: 8080
configMap:
data:
appConfig.json: |
{{- tpl (readFile "appConfig.json.gotmpl") . | nindent 6 }}
authConfig.json: |
{{- tpl (readFile "authConfig.json.gotmpl") . | nindent 6 }}
dashboard.conf: |
{{- readFile "vhost.conf" | nindent 6 }}
volumeMounts:
- name: config-volume
mountPath: /usr/share/nginx/html/appConfig.json
subPath: appConfig.json
readOnly: true
- name: config-volume
mountPath: /usr/share/nginx/html/authConfig.json
subPath: authConfig.json
readOnly: true
- name: config-volume
mountPath: /etc/nginx/vhosts.d/dashboard.conf
subPath: dashboard.conf
readOnly: true
volumes:
- name: config-volume
configMap:
name: {{ .Release.Name }}
livenessProbe:
httpGet:
path: /appConfig.json
port: http
initialDelaySeconds: 30
timeoutSeconds: 3
readinessProbe:
httpGet:
path: /appConfig.json
port: http
initialDelaySeconds: 30
timeoutSeconds: 3
ingress:
enabled: true
hosts:
- host: dashboard.{{ .Values.services.ingress.rootDomain | default "rbk.dev" }}
paths:
- /
{{- if .Values.services.ingress.tls.enabled }}
tls:
- secretName: {{ .Values.services.ingress.tls.secretName }}
hosts:
- dashboard.{{ .Values.services.ingress.rootDomain | default "rbk.dev" }}
{{- end }}
servicePort: 8080


@@ -0,0 +1,15 @@
server {
listen 8080;
listen [::]:8080;
server_name localhost;
location / {
root /usr/share/nginx/html;
index index.html index.htm;
}
error_page 500 502 503 504 /50x.html;
location = /50x.html {
root /usr/share/nginx/html;
}
}


@@ -0,0 +1,294 @@
#!/usr/bin/env bash
set -o errexit
set -o pipefail
set -o errtrace
FIXTURE=$(cat <<END
{"ops": [
{"insert": {"object": {"globals": {
"ref": {},
"data": {
"system_account_set": {"value": {"id": 1}},
"external_account_set": {"value": {"id": 1}},
"inspector": {"value": {"id": 1}}
}
}}}},
{"insert": {"object": {"system_account_set": {
"ref": {"id": 1},
"data": {
"name": "Primary",
"description": "Primary",
"accounts": [
{"key": {"symbolic_code": "RUB"}, "value": {
"settlement": $(scripts/dominant/create-account.sh RUB)
}}
]
}
}}}},
{"insert": {"object": {"external_account_set": {
"ref": {"id": 1},
"data": {
"name": "Primary",
"description": "Primary",
"accounts": [
{"key": {"symbolic_code": "RUB"}, "value": {
"income": $(scripts/dominant/create-account.sh RUB),
"outcome": $(scripts/dominant/create-account.sh RUB)
}}
]
}
}}}},
{"insert": {"object": {"inspector": {
"ref": {"id": 1},
"data": {
"name": "Kovalsky",
"description": "World famous inspector Kovalsky at your service!",
"proxy": {
"ref": {"id": 100},
"additional": {
"risk_score": "high"
}
}
}
}}}},
{"insert": {"object": {"term_set_hierarchy": {
"ref": {"id": 1},
"data": {
"term_sets": [
{
"action_time": {},
"terms": {
"payments": {
"currencies": {"value": [
{"symbolic_code": "RUB"}
]},
"categories": {"value": [
{"id": 1}
]},
"payment_methods": {"value": [
{"id": {"bank_card": {"payment_system": "visa"}}},
{"id": {"bank_card": {"payment_system": "mastercard"}}}
]},
"cash_limit": {"decisions": [
{
"if_": {"condition": {"currency_is": {"symbolic_code": "RUB"}}},
"then_": {"value": {
"lower": {"inclusive": {"amount": 1000, "currency": {"symbolic_code": "RUB"}}},
"upper": {"exclusive": {"amount": 4200000, "currency": {"symbolic_code": "RUB"}}}
}}
}
]},
"fees": {"decisions": [
{
"if_": {"condition": {"currency_is": {"symbolic_code": "RUB"}}},
"then_": {"value": [
{
"source": {"merchant": "settlement"},
"destination": {"system": "settlement"},
"volume": {"share": {"parts": {"p": 45, "q": 1000}, "of": "operation_amount"}}
}
]}
}
]},
"holds": {
"payment_methods": {"value": [
{"id": {"bank_card": {"payment_system": "visa"}}},
{"id": {"bank_card": {"payment_system": "mastercard"}}}
]},
"lifetime": {"value": {"seconds": 10}}
},
"refunds": {
"payment_methods": {"value": [
{"id": {"bank_card": {"payment_system": "visa"}}},
{"id": {"bank_card": {"payment_system": "mastercard"}}}
]},
"fees": {"value": [
]}
}
}
}
}
]
}
}}}},
{"insert": {"object": {"contract_template": {
"ref": {"id": 1},
"data": {
"terms": {"id": 1}
}
}}}},
{"insert": {"object": {"currency": {
"ref": {"symbolic_code": "RUB"},
"data": {
"name": "Russian rubles",
"numeric_code": 643,
"symbolic_code": "RUB",
"exponent": 2
}
}}}},
{"insert": {"object": {"category": {
"ref": {"id": 1},
"data": {
"name": "Basic test category",
"description": "Basic test category for mocketbank provider",
"type": "test"
}
}}}},
{"insert": {"object": {"payment_method": {
"ref": {"id": {"bank_card": {"payment_system": "visa"}}},
"data": {
"name": "VISA",
"description": "VISA bank cards"
}
}}}},
{"insert": {"object": {"payment_method": {
"ref": {"id": {"bank_card": {"payment_system": "mastercard"}}},
"data": {
"name": "Mastercard",
"description": "Mastercard bank cards"
}
}}}},
{"insert": {"object": {"provider": {
"ref": {"id": 1},
"data": {
"name": "Mocketbank",
"description": "Mocketbank",
"terminal": {"value": [
{"id": 1}
]},
"proxy": {
"ref": {"id": 1},
"additional": {}
},
"abs_account": "0000000001",
"terms": {
"payments": {
"currencies": {"value": [
{"symbolic_code": "RUB"}
]},
"categories": {"value": [
{"id": 1}
]},
"payment_methods": {"value": [
{"id": {"bank_card": {"payment_system": "visa"}}},
{"id": {"bank_card": {"payment_system": "mastercard"}}}
]},
"cash_limit": {"value": {
"lower": {"inclusive": {"amount": 1000, "currency": {"symbolic_code": "RUB"}}},
"upper": {"exclusive": {"amount": 10000000, "currency": {"symbolic_code": "RUB"}}}
}},
"cash_flow": {"decisions": [
{
"if_": {"condition":
{"payment_tool": {"bank_card": {"definition": {"payment_system_is": "visa"}}}}
},
"then_": {"value": [
{
"source": {"provider": "settlement"},
"destination": {"merchant": "settlement"},
"volume": {"share": {"parts": {"p": 1, "q": 1}, "of": "operation_amount"}}
},
{
"source": {"system": "settlement"},
"destination": {"provider": "settlement"},
"volume": {"share": {"parts": {"p": 15, "q": 1000}, "of": "operation_amount"}}
}
]}
},
{
"if_": {"condition":
{"payment_tool": {"bank_card": {"definition": {"payment_system_is": "mastercard"}}}}
},
"then_": {"value": [
{
"source": {"provider": "settlement"},
"destination": {"merchant": "settlement"},
"volume": {"share": {"parts": {"p": 1, "q": 1}, "of": "operation_amount"}}
},
{
"source": {"system": "settlement"},
"destination": {"provider": "settlement"},
"volume": {"share": {"parts": {"p": 18, "q": 1000}, "of": "operation_amount"}}
}
]}
}
]},
"holds": {
"lifetime": {"value": {"seconds": 3600}}
},
"refunds": {
"cash_flow": {"value": [
{
"source": {"merchant": "settlement"},
"destination": {"provider": "settlement"},
"volume": {"share": {"parts": {"p": 1, "q": 1}, "of": "operation_amount"}}
}
]}
}
}
},
"accounts": [
{"key": {"symbolic_code": "RUB"}, "value": {
"settlement": $(scripts/dominant/create-account.sh RUB)
}}
]
}
}}}},
{"insert": {"object": {"terminal": {
"ref": {"id": 1},
"data": {
"name": "Mocketbank Test Acquiring",
"description": "Mocketbank Test Acquiring"
}
}}}},
{"insert": {"object": {"proxy": {
"ref": {"id": 1},
"data": {
"name": "Mocketbank Proxy",
"description": "Mocked bank proxy for integration test purposes",
"url": "http://proxy-mocketbank:8022/proxy/mocketbank",
"options": {}
}
}}}},
{"insert": {"object": {"proxy": {
"ref": {"id": 100},
"data": {
"name": "Mocket Inspector Proxy",
"description": "Mocked inspector proxy for integration test purposes",
"url": "http://proxy-mocket-inspector:8022/proxy/mocket/inspector",
"options": {"risk_score": "high"}
}
}}}},
{"insert": {"object": {"payment_institution": {
"ref": {"id": 1},
"data": {
"name": "Test Payment Institution",
"system_account_set": {"value": {"id": 1}},
"default_contract_template": {"value": {"id": 1}},
"providers": {"value": [{"id": 1}]},
"inspector": {"value": {"id": 1}},
"realm": "test",
"residences": ["rus", "aus", "jpn"]
}
}}}}
]}
END
)
woorl -s "damsel/proto/domain_config.thrift" "http://dominant:8022/v1/domain/repository" Repository Commit 0 "${FIXTURE}"


@@ -0,0 +1,86 @@
[
{kernel, [
{logger_sasl_compatible, false},
{logger_level, info},
{logger, [
{handler, default, logger_std_h, #{
config => #{
type => standard_io
},
formatter => {logger_logstash_formatter, #{}}
}}
]}
]},
{dmt_api, [
{repository, dmt_api_repository_v5},
{migration, #{
timeout => 360,
limit => 20,
read_only_gap => 1000
}},
{ip, "::"},
{port, 8022},
{default_woody_handling_timeout, 30000},
{scoper_event_handler_options, #{
event_handler_opts => #{
formatter_opts => #{
max_length => 1000
}
}
}},
{woody_event_handlers, [
{scoper_woody_event_handler, #{
event_handler_opts => #{
formatter_opts => #{
max_length => 1000,
max_printable_string_length => 80
}
}
}}
]},
{transport_opts, #{
max_connections => 1024
}},
{protocol_opts, #{
% http keep alive timeout in ms
request_timeout => 60000,
% Should be greater than any other timeouts
idle_timeout => infinity
}},
{max_cache_size, 52428800}, % 50Mb
{health_checkers, [
{erl_health, disk , ["/", 99] },
{erl_health, cg_memory, [99] },
{erl_health, service , [<<"dominant">>]}
]},
{services, #{
automaton => #{
url => "http://machinegun:8022/v1/automaton",
transport_opts => #{
pool => woody_automaton,
timeout => 1000,
max_connections => 1024
}
}
}}
]},
{os_mon, [
% for better compatibility with busybox coreutils
{disksup_posix_only, true}
]},
{scoper, [
{storage, scoper_storage_logger}
]},
{snowflake, [
{max_backward_clock_moving, 1000}, % 1 second
{machine_id, hostname_hash}
]},
{prometheus, [
{collectors, [default]}
]}
].


@@ -0,0 +1,94 @@
# -*- mode: yaml -*-
replicaCount: 1
image:
repository: docker.io/rbkmoney/dominant
tag: de2a937b3b92eb4fa6888be5aef3bde7d3c8b409
pullPolicy: IfNotPresent
configMap:
data:
sys.config: |
{{- readFile "sys.config" | nindent 6 }}
erl_inetrc: |
{{- readFile "../vm/erl_inetrc" | nindent 6 }}
vm.args: |
{{- tpl (readFile "../vm/erl_vm_args.gotmpl") . | nindent 6 }}
init-script.sh: |
{{- readFile "init-script.sh" | nindent 6 }}
hook:
enabled: true
image:
repository: docker.io/rbkmoney/holmes
tag: 07f58e297c03bcd50dc4695ddbcfa4eb30c9928e
pullPolicy: IfNotPresent
kind: post-install
command: "/opt/initdominant/init-script.sh"
volumes:
- name: dom-init
configMap:
name: {{ .Release.Name }}
defaultMode: 0755
volumeMounts:
- name: dom-init
mountPath: /opt/initdominant/init-script.sh
subPath: init-script.sh
readOnly: true
metrics:
serviceMonitor:
enabled: true
namespace: {{ .Release.Namespace }}
additionalLabels:
release: prometheus
volumes:
- name: config-volume
configMap:
name: {{ .Release.Name }}
defaultMode: 0755
volumeMounts:
- name: config-volume
mountPath: /opt/dominant/releases/0.1/sys.config
subPath: sys.config
readOnly: true
- name: config-volume
mountPath: /opt/dominant/releases/0.1/vm.args
subPath: vm.args
readOnly: true
- name: config-volume
mountPath: /opt/dominant/erl_inetrc
subPath: erl_inetrc
readOnly: true
ciliumPolicies:
- filters:
- port: 8022
type: TCP
name: shumway
namespace: {{ .Release.Namespace }}
- filters:
- port: 8022
type: TCP
name: machinegun
namespace: {{ .Release.Namespace }}
- filters:
- port: 8022
type: TCP
name: dominant
namespace: {{ .Release.Namespace }}
- filters:
- port: 8022
type: TCP
name: proxy-mocket-inspector
namespace: {{ .Release.Namespace }}
- filters:
- port: 8022
type: TCP
name: proxy-mocketbank
namespace: {{ .Release.Namespace }}


@@ -0,0 +1,25 @@
#!/bin/sh
set -ue
java \
"-XX:OnOutOfMemoryError=kill %p" -XX:+HeapDumpOnOutOfMemoryError \
-jar \
/opt/questionary/questionary.jar \
--logging.config=/opt/questionary/logback.xml \
--management.security.flag=false \
--management.metrics.export.statsd.flavor=etsy \
--management.metrics.export.statsd.enabled=true \
--management.metrics.export.prometheus.enabled=true \
--management.endpoint.health.show-details=always \
--management.endpoint.metrics.enabled=true \
--management.endpoint.prometheus.enabled=true \
--management.endpoints.web.exposure.include=health,info,prometheus \
--spring.datasource.hikari.data-source-properties.prepareThreshold=0 \
--spring.datasource.hikari.leak-detection-threshold=5300 \
--spring.datasource.hikari.max-lifetime=300000 \
--spring.datasource.hikari.idle-timeout=30000 \
--spring.datasource.hikari.minimum-idle=2 \
--spring.datasource.hikari.maximum-pool-size=20 \
  "$@" \
  --spring.config.additional-location=/vault/secrets/application.properties


@@ -0,0 +1,4 @@
<included>
<logger name="com.rbkmoney" level="INFO"/>
<logger name="com.rbkmoney.woody" level="INFO"/>
</included>


@@ -0,0 +1,94 @@
# -*- mode: yaml -*-
replicaCount: 1
image:
repository: docker.io/rbkmoney/questionary
tag: 954dbc039eb011f32d6edf661d874eca9cea9c77
pullPolicy: IfNotPresent
runopts:
command: ["/opt/questionary/entrypoint.sh"]
configMap:
data:
entrypoint.sh: |
{{- readFile "entrypoint.sh" | nindent 6 }}
loggers.xml: |
{{- readFile "loggers.xml" | nindent 6 }}
logback.xml: |
{{- readFile "../logs/logback.xml" | nindent 6 }}
volumes:
- name: config-volume
configMap:
name: {{ .Release.Name }}
defaultMode: 0755
volumeMounts:
- name: config-volume
mountPath: /opt/questionary/entrypoint.sh
subPath: entrypoint.sh
readOnly: true
- name: config-volume
mountPath: /opt/questionary/logback.xml
subPath: logback.xml
readOnly: true
- name: config-volume
mountPath: /opt/questionary/loggers.xml
subPath: loggers.xml
readOnly: true
service:
ports:
- name: api
port: 8022
- name: management
port: 8023
livenessProbe:
httpGet:
path: /actuator/health
port: management
readinessProbe:
httpGet:
path: /actuator/health
port: management
podAnnotations:
vault.hashicorp.com/role: "db-app"
vault.hashicorp.com/agent-inject: "true"
vault.hashicorp.com/agent-inject-secret-application.properties: "database/creds/db-app-questionary"
vault.hashicorp.com/agent-inject-template-application.properties: |
{{`{{- with secret "database/creds/db-app-questionary" -}}
spring.datasource.url=jdbc:postgresql://postgres-postgresql:5432/questionary?sslmode=disable
spring.datasource.username={{ .Data.username }}
spring.datasource.password={{ .Data.password }}
spring.flyway.url=jdbc:postgresql://postgres-postgresql:5432/questionary?sslmode=disable
spring.flyway.user={{ .Data.username }}
spring.flyway.password={{ .Data.password }}
{{- end }}`}}
metrics:
serviceMonitor:
enabled: true
namespace: {{ .Release.Namespace }}
additionalLabels:
release: prometheus
endpoints:
- port: "management"
path: /actuator/prometheus
scheme: http
ciliumPolicies:
- filters:
- port: 5432
type: TCP
name: postgres
namespace: {{ .Release.Namespace }}
- filters:
- port: 8200
type: TCP
name: vault
namespace: {{ .Release.Namespace }}


@@ -0,0 +1,14 @@
#!/bin/sh
set -ue
java \
"-XX:OnOutOfMemoryError=kill %p" -XX:+HeapDumpOnOutOfMemoryError \
-jar \
/opt/fraudbusters-management/fraudbusters-management.jar \
logging.config=/opt/fraudbusters-management/logback-spring.xml \
management.security.enabled=false \
kafka.ssl.enable=false \
kafka.bootstrap.servers=kafka:9092 \
service.payment.url=http://fraudbusters:8022/fraud_payment/v1/ \
  "$@" \
--spring.config.additional-location=/vault/secrets/application.properties


@@ -0,0 +1,3 @@
<included>
<logger name="com.rbkmoney" level="INFO"/>
</included>


@@ -0,0 +1,128 @@
# -*- mode: yaml -*-
replicaCount: 1
image:
repository: rbkmoney/fraudbusters-management
tag: "91fe3772f946c7a76a982adfd4d23411607dee5f"
pullPolicy: IfNotPresent
runopts:
command: ["/opt/fraudbusters-management/entrypoint.sh"]
configMap:
data:
entrypoint.sh: |
{{- readFile "entrypoint.sh" | nindent 6 }}
loggers.xml: |
{{- readFile "loggers.xml" | nindent 6 }}
logback.xml: |
{{- readFile "../logs/logback.xml" | nindent 6 }}
env:
- name: LOGBACK_SERVICE_NAME
value: "fraudbusters-management"
volumes:
- name: config-volume
configMap:
name: {{ .Release.Name }}
defaultMode: 0755
volumeMounts:
- name: config-volume
mountPath: /opt/fraudbusters-management/entrypoint.sh
subPath: entrypoint.sh
readOnly: true
- name: config-volume
mountPath: /opt/fraudbusters-management/logback.xml
subPath: logback.xml
readOnly: true
- name: config-volume
mountPath: /opt/fraudbusters-management/loggers.xml
subPath: loggers.xml
readOnly: true
service:
ports:
- name: api
port: 8080
livenessProbe:
httpGet:
path: /actuator/health
    port: api
readinessProbe:
httpGet:
path: /actuator/health
    port: api
podAnnotations:
vault.hashicorp.com/role: "db-app"
vault.hashicorp.com/agent-inject: "true"
vault.hashicorp.com/agent-inject-secret-application.properties: "database/creds/db-app-fbmgmt"
vault.hashicorp.com/agent-inject-template-application.properties: |
{{`{{- with secret "database/creds/db-app-fbmgmt" -}}
spring.datasource.url=jdbc:postgresql://postgres-postgresql:5432/fraudbusters?sslmode=disable
spring.datasource.username={{ .Data.username }}
spring.datasource.password={{ .Data.password }}
spring.flyway.url=jdbc:postgresql://postgres-postgresql:5432/fraudbusters?sslmode=disable
spring.flyway.user={{ .Data.username }}
spring.flyway.password={{ .Data.password }}
spring.flyway.schemas=af
{{- end }}`}}
metrics:
serviceMonitor:
enabled: true
namespace: {{ .Release.Namespace }}
additionalLabels:
release: prometheus
endpoints:
  - port: "api"
path: /actuator/prometheus
scheme: http
ciliumPolicies:
- filters:
- port: 5432
type: TCP
name: postgres
namespace: {{ .Release.Namespace }}
- filters:
- port: 9092
rules:
kafka:
- role: consume
topics:
- mg-events-customer
- mg-events-invoice
type: TCP
name: kafka
namespace: {{ .Release.Namespace }}
- filters:
- port: 8200
type: TCP
name: vault
namespace: {{ .Release.Namespace }}
- filters:
- port: 8022
type: TCP
name: hellgate
namespace: {{ .Release.Namespace }}
- filters:
- port: 8022
type: TCP
name: fault-detector
namespace: {{ .Release.Namespace }}
##In case of kafka mTLS auth move it to vault template
# {{- /*
# kafka.ssl.enabled={{ kafka.ssl.enable }}
# kafka.ssl.key-store-location=/opt/{{ service_name }}/kafka-keystore.p12
# kafka.ssl.key-store-password="{{ service.keystore.pass }}"
# kafka.ssl.key-password="{{ service.keystore.pass }}"
# kafka.ssl.trust-store-location=/opt/{{ service_name }}/kafka-truststore.p12
# kafka.ssl.trust-store-password="{{ kafka.truststore.java.pass }}"
# */ -}}


@@ -0,0 +1,23 @@
#!/bin/sh
set -ue
java \
"-XX:OnOutOfMemoryError=kill %p" -XX:+HeapDumpOnOutOfMemoryError \
-jar \
/opt/fraudbusters/fraudbusters.jar \
logging.config=/opt/fraudbusters/logback.xml \
management.security.enabled=false \
geo.ip.service.url=http://columbus:8022/repo \
kafka.ssl.enable=false \
kafka.bootstrap.servers=kafka:9092 \
wb.list.service.url=http://wb-list-manager:8022/v1/wb_list \
clickhouse.db.url=jdbc:clickhouse://clickhouse:8123/default \
  clickhouse.db.user={{ .Values.services.clickhouse.user }} \
  clickhouse.db.password={{ .Values.services.clickhouse.password }} \
fraud.management.url=http://fraudbusters-management:8080 \
spring.profiles.active=full-prod \
kafka.topic.event.sink.payment=payment_event \
kafka.topic.event.sink.refund=refund_event \
kafka.topic.event.sink.chargeback=chargeback_event \
  "$@"


@@ -0,0 +1,4 @@
<included>
<logger name="com.rbkmoney" level="INFO"/>
<logger name="com.rbkmoney.woody" level="INFO"/>
</included>


@@ -0,0 +1,82 @@
# -*- mode: yaml -*-
replicaCount: 1
image:
repository: docker.io/rbkmoney/fraudbusters
tag: fbe14fec347e5e6312a5e726e17e8b8c2b749b89
pullPolicy: IfNotPresent
runopts:
command: ["/opt/fraudbusters/entrypoint.sh"]
env:
- name: LOGBACK_SERVICE_NAME
value: "fraudbusters"
configMap:
data:
entrypoint.sh: |
{{- readFile "entrypoint.sh" | nindent 6 }}
loggers.xml: |
{{- readFile "loggers.xml" | nindent 6 }}
logback.xml: |
{{- readFile "../logs/logback.xml" | nindent 6 }}
volumes:
- name: config-volume
configMap:
name: {{ .Release.Name }}
defaultMode: 0755
volumeMounts:
- name: config-volume
mountPath: /opt/fraudbusters/entrypoint.sh
subPath: entrypoint.sh
readOnly: true
- name: config-volume
mountPath: /opt/fraudbusters/logback.xml
subPath: logback.xml
readOnly: true
- name: config-volume
mountPath: /opt/fraudbusters/loggers.xml
subPath: loggers.xml
readOnly: true
service:
ports:
- name: api
port: 8022
livenessProbe:
httpGet:
path: /actuator/health
port: api
readinessProbe:
httpGet:
path: /actuator/health
port: api
metrics:
serviceMonitor:
enabled: true
namespace: {{ .Release.Namespace }}
additionalLabels:
release: prometheus
endpoints:
- port: "api"
path: /actuator/prometheus
scheme: http
ciliumPolicies:
- filters:
- port: 5432
type: TCP
name: postgres
namespace: {{ .Release.Namespace }}
- filters:
- port: 8200
type: TCP
name: vault
namespace: {{ .Release.Namespace }}


@@ -0,0 +1,25 @@
#!/bin/sh
set -ue
java \
"-XX:OnOutOfMemoryError=kill %p" -XX:+HeapDumpOnOutOfMemoryError \
-jar \
/opt/questionary/questionary.jar \
--logging.config=/opt/questionary/logback.xml \
--management.security.flag=false \
--management.metrics.export.statsd.flavor=etsy \
--management.metrics.export.statsd.enabled=true \
--management.metrics.export.prometheus.enabled=true \
--management.endpoint.health.show-details=always \
--management.endpoint.metrics.enabled=true \
--management.endpoint.prometheus.enabled=true \
--management.endpoints.web.exposure.include=health,info,prometheus \
--spring.datasource.hikari.data-source-properties.prepareThreshold=0 \
--spring.datasource.hikari.leak-detection-threshold=5300 \
--spring.datasource.hikari.max-lifetime=300000 \
--spring.datasource.hikari.idle-timeout=30000 \
--spring.datasource.hikari.minimum-idle=2 \
--spring.datasource.hikari.maximum-pool-size=20 \
  "$@" \
  --spring.config.additional-location=/vault/secrets/application.properties


@@ -0,0 +1,4 @@
<included>
<logger name="com.rbkmoney" level="INFO"/>
<logger name="com.rbkmoney.woody" level="INFO"/>
</included>


@@ -0,0 +1,94 @@
# -*- mode: yaml -*-
replicaCount: 1
image:
repository: docker.io/rbkmoney/questionary
tag: 954dbc039eb011f32d6edf661d874eca9cea9c77
pullPolicy: IfNotPresent
runopts:
command: ["/opt/questionary/entrypoint.sh"]
configMap:
data:
entrypoint.sh: |
{{- readFile "entrypoint.sh" | nindent 6 }}
loggers.xml: |
{{- readFile "loggers.xml" | nindent 6 }}
logback.xml: |
{{- readFile "../logs/logback.xml" | nindent 6 }}
volumes:
- name: config-volume
configMap:
name: {{ .Release.Name }}
defaultMode: 0755
volumeMounts:
- name: config-volume
mountPath: /opt/questionary/entrypoint.sh
subPath: entrypoint.sh
readOnly: true
- name: config-volume
mountPath: /opt/questionary/logback.xml
subPath: logback.xml
readOnly: true
- name: config-volume
mountPath: /opt/questionary/loggers.xml
subPath: loggers.xml
readOnly: true
service:
ports:
- name: api
port: 8022
- name: management
port: 8023
livenessProbe:
httpGet:
path: /actuator/health
port: management
readinessProbe:
httpGet:
path: /actuator/health
port: management
podAnnotations:
vault.hashicorp.com/role: "db-app"
vault.hashicorp.com/agent-inject: "true"
vault.hashicorp.com/agent-inject-secret-application.properties: "database/creds/db-app-questionary"
vault.hashicorp.com/agent-inject-template-application.properties: |
{{`{{- with secret "database/creds/db-app-questionary" -}}
spring.datasource.url=jdbc:postgresql://postgres-postgresql:5432/questionary?sslmode=disable
spring.datasource.username={{ .Data.username }}
spring.datasource.password={{ .Data.password }}
spring.flyway.url=jdbc:postgresql://postgres-postgresql:5432/questionary?sslmode=disable
spring.flyway.user={{ .Data.username }}
spring.flyway.password={{ .Data.password }}
{{- end }}`}}
metrics:
serviceMonitor:
enabled: true
namespace: {{ .Release.Namespace }}
additionalLabels:
release: prometheus
endpoints:
- port: "management"
path: /actuator/prometheus
scheme: http
ciliumPolicies:
- filters:
- port: 5432
type: TCP
name: postgres
namespace: {{ .Release.Namespace }}
- filters:
- port: 8200
type: TCP
name: vault
namespace: {{ .Release.Namespace }}

config/hellgate/sys.config Normal file

@@ -0,0 +1,251 @@
%% -*- mode: erlang -*-
[
{dmt_client, [
{cache_update_interval, 5000}, % milliseconds
{cache_server_call_timeout, 30000}, % milliseconds
{max_cache_size, #{
elements => 80,
memory => 209715200 % 200Mb
}},
{woody_event_handlers, [
{scoper_woody_event_handler, #{
event_handler_opts => #{
formatter_opts => #{
max_length => 1000,
max_printable_string_length => 80
}
}
}}
]},
{service_urls, #{
'Repository' => <<"http://dominant:8022/v1/domain/repository" >>,
'RepositoryClient' => <<"http://dominant:8022/v1/domain/repository_client">>
}}
]},
{kernel, [
{logger_sasl_compatible, false},
{logger_level, info},
{logger, [
{handler, default, logger_std_h, #{
level => debug,
config => #{
type => standard_io,
sync_mode_qlen => 5000,
drop_mode_qlen => 5000,
flush_qlen => 10000,
%% We want almost all logs
burst_limit_enable => false
},
formatter => {logger_logstash_formatter, #{}}
}}
]}
]},
{scoper, [
{storage, scoper_storage_logger}
]},
{hellgate, [
{ip , "::"},
{port, 8022},
{default_woody_handling_timeout, 30000},
    %% Should stay above cowboy's request_timeout
{shutdown_timeout, 7000},
{protocol_opts, #{
request_timeout => 4000,
% Should be greater than any other timeouts
idle_timeout => infinity
}},
{transport_opts, #{
% Keeping the default value
max_connections => 8096
}},
{scoper_event_handler_options, #{
event_handler_opts => #{
formatter_opts => #{
max_length => 1000,
max_printable_string_length => 80
}
}
}},
{services, #{
automaton => #{
url => <<"http://machinegun:8022/v1/automaton">>,
transport_opts => #{
pool => woody_automaton,
timeout => 3000,
max_connections => 2000
}
},
eventsink => #{
url => <<"http://machinegun:8022/v1/event_sink">>,
transport_opts => #{
pool => woody_eventsink,
timeout => 3000,
max_connections => 300
}
},
accounter => #{
url => <<"http://shumway:8022/shumpune">>,
transport_opts => #{
pool => woody_accounter,
timeout => 3000,
max_connections => 2000
}
},
party_management => #{
url => <<"http://hellgate:8022/v1/processing/partymgmt">>,
transport_opts => #{
pool => woody_party_management,
timeout => 3000,
max_connections => 300
}
},
customer_management => #{
url => <<"http://hellgate:8022/v1/processing/customer_management">>,
transport_opts => #{
pool => woody_customer_management,
timeout => 3000,
max_connections => 300
}
},
recurrent_paytool => #{
url => <<"http://hellgate:8022/v1/processing/recpaytool">>,
transport_opts => #{
pool => woody_recurrent_paytool,
timeout => 3000,
max_connections => 300
}
},
fault_detector => #{
url => <<"http://fault_detector:8022/v1/fault-detector">>,
transport_opts => #{
pool => woody_fault_detector,
timeout => 3000,
max_connections => 2000
}
}
}},
{fault_detector, #{
enabled => false,
timeout => 4000,
availability => #{
critical_fail_rate => 0.3,
sliding_window => 60000,
operation_time_limit => 10000,
pre_aggregation_size => 2
},
conversion => #{
benign_failures => [
insufficient_funds,
rejected_by_issuer,
processing_deadline_reached
],
critical_fail_rate => 0.7,
sliding_window => 60000,
operation_time_limit => 1200000,
pre_aggregation_size => 2
}
}},
{proxy_opts, #{
transport_opts => #{
pool => proxy_connections,
timeout => 3000,
max_connections => 2000
}
}},
{health_check, #{
disk => {erl_health, disk , ["/", 99]},
memory => {erl_health, cg_memory, [70]},
dmt_client => {dmt_client, health_check, []},
service => {erl_health, service , [<<"hellgate">>]}
}},
{payment_retry_policy, #{
% {exponential, Retries, Factor, Timeout, MaxTimeout}
% try every min(2 ** n, 20) seconds until 60 seconds from first error pass
processed => {exponential, {max_total_timeout, 60}, 2, 1, 20},
% try every min(2 ** n seconds, 5 minutes) until 5 hours from first error pass
captured => {exponential, {max_total_timeout, 18000}, 2, 1, 300},
refunded => no_retry
}},
{inspect_timeout, 7000}
]},
{party_management, [
{scoper_event_handler_options, #{
event_handler_opts => #{
formatter_opts => #{
max_length => 1000,
max_printable_string_length => 80
}
}
}},
{services, #{
automaton => #{
url => <<"http://machinegun:8022/v1/automaton">>,
transport_opts => #{
pool => woody_automaton,
timeout => 3000,
max_connections => 2000
}
},
accounter => #{
url => <<"http://shumway:8022/shumpune">>,
transport_opts => #{
pool => woody_accounter,
timeout => 3000,
max_connections => 2000
}
}
}}
]},
{party_client, [
{services, #{
party_management => <<"http://hellgate:8022/v1/processing/partymgmt">>
}},
{woody, #{
cache_mode => safe, % disabled | safe | aggressive
options => #{
woody_client => #{
event_handler => {
scoper_woody_event_handler,
{scoper_event_handler_options, #{
event_handler_opts => #{
formatter_opts => #{
max_length => 1000,
max_printable_string_length => 80
}
}
}
}
},
transport_opts => #{
pool => party_client,
timeout => 3000,
max_connections => 2000
}
}
},
cache => #{
memory => 209715200, % 200 MiB, cache memory quota in bytes
n => 10 % number of cache segments
}
}}
]},
{os_mon, [
{disksup_posix_only, true}
]},
{hackney, [
{mod_metrics, woody_client_metrics}
]},
{how_are_you, [
{metrics_handlers, [
hay_vm_handler,
hay_cgroup_handler,
woody_api_hay
]},
{metrics_publishers, [
]}
]},
{prometheus, [
{collectors, [default]}
]}
].
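The `payment_retry_policy` comments above describe tuples of the form `{exponential, {max_total_timeout, T}, Factor, Timeout, MaxTimeout}`. A minimal sketch of the delay schedule those comments imply (our reading of the comments only, not hellgate's actual implementation, which may also count operation time against the budget):

```python
def retry_schedule(max_total_timeout, factor, timeout, max_timeout):
    """Per-attempt delays (seconds): min(timeout * factor**n, max_timeout),
    stopping once the total budget since the first error would be exceeded."""
    delays, elapsed, n = [], 0, 0
    while True:
        delay = min(timeout * factor ** n, max_timeout)
        if elapsed + delay > max_total_timeout:
            return delays
        delays.append(delay)
        elapsed += delay
        n += 1

# The `processed` policy: retries at 1, 2, 4, 8, 16, 20 seconds (51s total);
# another 20s wait would exceed the 60s budget, so retrying stops there.
print(retry_schedule(60, 2, 1, 20))  # [1, 2, 4, 8, 16, 20]
```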


@@ -0,0 +1,70 @@
# -*- mode: yaml -*-
replicaCount: 1
image:
repository: docker.io/rbkmoney/hellgate
tag: efe0b67a7a048bfa17cac871ff2e7b797ea13796
pullPolicy: IfNotPresent
configMap:
data:
sys.config: |
{{- readFile "sys.config" | nindent 6 }}
erl_inetrc: |
{{- readFile "../vm/erl_inetrc" | nindent 6 }}
vm.args: |
{{- tpl (readFile "../vm/erl_vm_args.gotmpl") . | nindent 6 }}
metrics:
serviceMonitor:
enabled: true
namespace: {{ .Release.Namespace }}
additionalLabels:
release: prometheus
volumeMounts:
- name: config-volume
mountPath: /opt/hellgate/releases/0.1/sys.config
subPath: sys.config
readOnly: true
- name: config-volume
mountPath: /opt/hellgate/releases/0.1/vm.args
subPath: vm.args
readOnly: true
- name: config-volume
mountPath: /opt/hellgate/erl_inetrc
subPath: erl_inetrc
readOnly: true
volumes:
- name: config-volume
configMap:
name: {{ .Release.Name }}
ciliumPolicies:
- filters:
- port: 8022
type: TCP
name: shumway
namespace: {{ .Release.Namespace }}
- filters:
- port: 8022
type: TCP
name: machinegun
namespace: {{ .Release.Namespace }}
- filters:
- port: 8022
type: TCP
name: dominant
namespace: {{ .Release.Namespace }}
- filters:
- port: 8022
type: TCP
name: proxy-mocket-inspector
namespace: {{ .Release.Namespace }}
- filters:
- port: 8022
type: TCP
name: proxy-mocketbank
namespace: {{ .Release.Namespace }}


@@ -0,0 +1,11 @@
# -*- mode: yaml -*-
replicaCount: 1
image:
repository: docker.io/rbkmoney/holmes
tag: 07f58e297c03bcd50dc4695ddbcfa4eb30c9928e
pullPolicy: IfNotPresent
livenessProbe: null
readinessProbe: null


@@ -0,0 +1,37 @@
#!/bin/sh
set -ue
java \
"-XX:OnOutOfMemoryError=kill %p" -XX:+HeapDumpOnOutOfMemoryError \
-jar \
/opt/hooker/hooker.jar \
--logging.config=/opt/hooker/logback.xml \
--spring.datasource.hikari.data-source-properties.prepareThreshold=0 \
--spring.datasource.hikari.leak-detection-threshold=5300 \
--spring.datasource.hikari.max-lifetime=300000 \
--spring.datasource.hikari.idle-timeout=30000 \
--spring.datasource.hikari.minimum-idle=2 \
--spring.datasource.hikari.maximum-pool-size=20 \
--service.invoicing.url=http://hellgate:8022/v1/processing/invoicing \
--service.customer.url=http://hellgate:8022/v1/processing/customer_management \
--service.fault-detector.url=http://fault-detector:8022/v1/fault-detector \
--kafka.bootstrap-servers=kafka:9092 \
--kafka.topics.invoice.id=mg-events-invoice \
--kafka.topics.invoice.enabled=true \
--kafka.topics.invoice.concurrency=7 \
--kafka.topics.customer.id=mg-events-customer \
--kafka.topics.customer.enabled=true \
--kafka.topics.customer.concurrency=2 \
--kafka.client-id=hooker \
--kafka.consumer.group-id=Hooker-Invoicing \
--kafka.consumer.max-poll-records=500 \
--kafka.ssl.enabled=false \
--kafka.ssl.key-store-location=/opt/hooker/kafka-keystore.p12 \
--kafka.ssl.key-store-password=test \
--kafka.ssl.trust-store-location=/opt/hooker/kafka-truststore.p12 \
--kafka.ssl.trust-store-password=test \
--kafka.ssl.key-password=test \
--spring.application.name=hooker \
--logging.level.com.rbkmoney.hooker.scheduler.MessageScheduler=DEBUG \
"${@}" \
--spring.config.additional-location=/vault/secrets/application.properties \


@@ -0,0 +1,4 @@
<included>
<logger name="com.rbkmoney" level="INFO"/>
<logger name="com.rbkmoney.woody" level="INFO"/>
</included>


@@ -0,0 +1,117 @@
# -*- mode: yaml -*-
replicaCount: 1
image:
repository: docker.io/rbkmoney/hooker
tag: aeffeedb13a8f5125962bfe3e0e734ba4104d876
pullPolicy: IfNotPresent
runopts:
command: ["/opt/hooker/entrypoint.sh"]
configMap:
data:
entrypoint.sh: |
{{- readFile "entrypoint.sh" | nindent 6 }}
loggers.xml: |
{{- readFile "loggers.xml" | nindent 6 }}
logback.xml: |
{{- readFile "../logs/logback.xml" | nindent 6 }}
volumes:
- name: config-volume
configMap:
name: {{ .Release.Name }}
defaultMode: 0755
volumeMounts:
- name: config-volume
mountPath: /opt/hooker/entrypoint.sh
subPath: entrypoint.sh
readOnly: true
- name: config-volume
mountPath: /opt/hooker/logback.xml
subPath: logback.xml
readOnly: true
- name: config-volume
mountPath: /opt/hooker/loggers.xml
subPath: loggers.xml
readOnly: true
service:
ports:
- name: api
port: 8022
- name: management
port: 8023
livenessProbe:
httpGet:
path: /actuator/health
port: management
readinessProbe:
httpGet:
path: /actuator/health
port: management
podAnnotations:
vault.hashicorp.com/role: "db-app"
vault.hashicorp.com/agent-inject: "true"
vault.hashicorp.com/agent-inject-secret-application.properties: "database/creds/db-app-hooker"
vault.hashicorp.com/agent-inject-template-application.properties: |
{{`{{- with secret "database/creds/db-app-hooker" }}
spring.datasource.url=jdbc:postgresql://postgres-postgresql:5432/hooker?sslmode=disable
spring.datasource.username={{ .Data.username }}
spring.datasource.password={{ .Data.password }}
spring.flyway.url=jdbc:postgresql://postgres-postgresql:5432/hooker?sslmode=disable
spring.flyway.user={{ .Data.username }}
spring.flyway.password={{ .Data.password }}
{{- end }}`}}
metrics:
serviceMonitor:
enabled: true
namespace: {{ .Release.Namespace }}
additionalLabels:
release: prometheus
endpoints:
- port: "management"
path: /actuator/prometheus
scheme: http
ciliumPolicies:
- filters:
- port: 5432
type: TCP
name: postgres
namespace: {{ .Release.Namespace }}
- filters:
- port: 9092
rules:
kafka:
- role: consume
topics:
- mg-events-customer
- mg-events-invoice
type: TCP
name: kafka
namespace: {{ .Release.Namespace }}
- filters:
- port: 8200
type: TCP
name: vault
namespace: {{ .Release.Namespace }}
- filters:
- port: 8022
type: TCP
name: hellgate
namespace: {{ .Release.Namespace }}
{{- /*
- filters:
- port: 8022
type: TCP
name: fault-detector
namespace: {{ .Release.Namespace }}
*/ -}}


@@ -0,0 +1,28 @@
replicas: 1
podLabels:
selector.cilium.rbkmoney/release: {{ .Release.Name }}
zookeeper:
## If true, install the Zookeeper chart alongside Kafka
## ref: https://github.com/kubernetes/charts/tree/master/incubator/zookeeper
enabled: true
## If the Zookeeper chart is disabled, a URL and port are required to connect
# url: "zookeeper"
# port: 2181
replicaCount: 1
persistence:
enabled: false
ciliumPolicies:
- filters:
- port: 2181
type: TCP
name: zookeeper
namespace: {{ .Release.Namespace }}
- filters:
- port: 9092
type: TCP
name: kafka
namespace: {{ .Release.Namespace }}

44
config/kds/ca.crt Normal file

@@ -0,0 +1,44 @@
-----BEGIN CERTIFICATE-----
MIIDnzCCAwGgAwIBAgIBAjAKBggqhkjOPQQDAjCB3zEVMBMGCgmSJomT8ixkARkW
BW1vbmV5MRMwEQYKCZImiZPyLGQBGRYDcmJrMQswCQYDVQQGEwJSVTEbMBkGA1UE
CAwSUnVzc2lhbiBGZWRlcmF0aW9uMQ8wDQYDVQQHDAZNb3Njb3cxEjAQBgNVBAoM
CVJCSyBNb25leTEVMBMGA1UECgwMUGF5bWVudCBMYWJzMSowKAYDVQQDDCFSQksg
TW9uZXkgQ0RFIERldmVsb3BtZW50IFJvb3QgQ0ExHzAdBgkqhkiG9w0BCQEWEGRl
dm9wc0ByYmsubW9uZXkwHhcNMTkwOTA1MDg0MjM0WhcNMjQwOTAzMDg0MjM0WjCB
4jEVMBMGCgmSJomT8ixkARkWBW1vbmV5MRMwEQYKCZImiZPyLGQBGRYDcmJrMQsw
CQYDVQQGEwJSVTEbMBkGA1UECAwSUnVzc2lhbiBGZWRlcmF0aW9uMQ8wDQYDVQQH
DAZNb3Njb3cxEjAQBgNVBAoMCVJCSyBNb25leTEVMBMGA1UECgwMUGF5bWVudCBM
YWJzMS0wKwYDVQQDDCRSQksgTW9uZXkgQ0RFIERldmVsb3BtZW50IFNpZ25pbmcg
Q0ExHzAdBgkqhkiG9w0BCQEWEGRldm9wc0ByYmsubW9uZXkwgZswEAYHKoZIzj0C
AQYFK4EEACMDgYYABAAsqZHI7O964jB0afIpxzkKWCeeaaOSIS6DqH0Hw2H9lOB8
fdlPcBrEM8t+Ubs1FjiwKBXcoL3vtD6IWMmG4Oyt7QHcjItexzRHm0BIIgjSuQJi
Qza1DEJLFElPB4rGtg4SsXf0+inEB8U2miZe2jXToxAtgdKwBWfCNry3L9JkTuns
LKNmMGQwDgYDVR0PAQH/BAQDAgEGMBIGA1UdEwEB/wQIMAYBAf8CAQAwHQYDVR0O
BBYEFNeHpaGZ/0ehRBPixvl+bcaGMPpAMB8GA1UdIwQYMBaAFDg8k/pUJ3gSQDL6
BXcgDto3r4rkMAoGCCqGSM49BAMCA4GLADCBhwJBOLXNDe3nO3EtzTnV5JPLU+jO
KWcgOp6YL+MNP21iSFugNAnPqs0orV8cnP4hCLL/wABD9WjqIzr2xKtmpkFAip0C
QgFhdxbzNBFMw3VhBojg7XB7DpoH7KUBHz/dzgXeCor20ovPycyOxemr25ySk1iy
Pwe0dORE23A8IWoDe6IsGIuyag==
-----END CERTIFICATE-----
-----BEGIN CERTIFICATE-----
MIIDmTCCAvugAwIBAgIBATAKBggqhkjOPQQDAjCB3zEVMBMGCgmSJomT8ixkARkW
BW1vbmV5MRMwEQYKCZImiZPyLGQBGRYDcmJrMQswCQYDVQQGEwJSVTEbMBkGA1UE
CAwSUnVzc2lhbiBGZWRlcmF0aW9uMQ8wDQYDVQQHDAZNb3Njb3cxEjAQBgNVBAoM
CVJCSyBNb25leTEVMBMGA1UECgwMUGF5bWVudCBMYWJzMSowKAYDVQQDDCFSQksg
TW9uZXkgQ0RFIERldmVsb3BtZW50IFJvb3QgQ0ExHzAdBgkqhkiG9w0BCQEWEGRl
dm9wc0ByYmsubW9uZXkwHhcNMTkwOTA1MDg0MjM0WhcNMjkwOTAyMDg0MjM0WjCB
3zEVMBMGCgmSJomT8ixkARkWBW1vbmV5MRMwEQYKCZImiZPyLGQBGRYDcmJrMQsw
CQYDVQQGEwJSVTEbMBkGA1UECAwSUnVzc2lhbiBGZWRlcmF0aW9uMQ8wDQYDVQQH
DAZNb3Njb3cxEjAQBgNVBAoMCVJCSyBNb25leTEVMBMGA1UECgwMUGF5bWVudCBM
YWJzMSowKAYDVQQDDCFSQksgTW9uZXkgQ0RFIERldmVsb3BtZW50IFJvb3QgQ0Ex
HzAdBgkqhkiG9w0BCQEWEGRldm9wc0ByYmsubW9uZXkwgZswEAYHKoZIzj0CAQYF
K4EEACMDgYYABACfG9NGzV34Q3DSF0PfLhEe2od5YgfSxniVpba+O+bRHVOFnp1G
ZOBuJ7WJiK2q9mWG2qSQnEfuSvqoLq4pBYfHbACBjYcLoQRRfaIyvBACHMCdWH5h
TjJ4/Rav0mBgQNsaZ41oFfSyv27vfl92ue8S42l9RnZCkoH6LYM2LP6PeT9JjKNj
MGEwDgYDVR0PAQH/BAQDAgEGMA8GA1UdEwEB/wQFMAMBAf8wHQYDVR0OBBYEFDg8
k/pUJ3gSQDL6BXcgDto3r4rkMB8GA1UdIwQYMBaAFDg8k/pUJ3gSQDL6BXcgDto3
r4rkMAoGCCqGSM49BAMCA4GLADCBhwJBdiaVap0dLI/12coM9Xqa16alUBVzr8QV
DFpzcQ3nm/n1SvoV1lDeyEUocaWgIcAL3db1abbOwJITWyB0NxO7FToCQgGCVDej
jrQ901BCO1b9r3aWo4UlSFR2ZCdPmV7oDFYku+kQ7/6q+kiwNHPolcnI/kk57P30
nXQN6GRWxoK7Pv7i7A==
-----END CERTIFICATE-----

29
config/kds/server.pem Normal file

@@ -0,0 +1,29 @@
-----BEGIN CERTIFICATE-----
MIIEHDCCA32gAwIBAgIBBDAKBggqhkjOPQQDAjCB4jEVMBMGCgmSJomT8ixkARkW
BW1vbmV5MRMwEQYKCZImiZPyLGQBGRYDcmJrMQswCQYDVQQGEwJSVTEbMBkGA1UE
CAwSUnVzc2lhbiBGZWRlcmF0aW9uMQ8wDQYDVQQHDAZNb3Njb3cxEjAQBgNVBAoM
CVJCSyBNb25leTEVMBMGA1UECgwMUGF5bWVudCBMYWJzMS0wKwYDVQQDDCRSQksg
TW9uZXkgQ0RFIERldmVsb3BtZW50IFNpZ25pbmcgQ0ExHzAdBgkqhkiG9w0BCQEW
EGRldm9wc0ByYmsubW9uZXkwHhcNMjAwOTE1MTcwMDA2WhcNMzAwOTEzMTcwMDA2
WjCBoDEVMBMGCgmSJomT8ixkARkWBW1vbmV5MRMwEQYKCZImiZPyLGQBGRYDcmJr
MRIwEAYDVQQKDAlSQksgTW9uZXkxFTATBgNVBAoMDFBheW1lbnQgTGFiczEPMA0G
A1UEBwwGTW9zY293MQswCQYDVQQGEwJSVTEbMBkGA1UECAwSUnVzc2lhbiBGZWRl
cmF0aW9uMQwwCgYDVQQDDANrZHMwWTATBgcqhkjOPQIBBggqhkjOPQMBBwNCAAR4
8BVs7cai8R779I8sNv/lDoLEaB9+l5t3XclyRt6aa6Rr1EVJBV8cGUPd2YIcTw0g
9n1A1vwzR6Cn/UfngdLdo4IBYjCCAV4wCQYDVR0TBAIwADALBgNVHQ8EBAMCA+gw
EwYDVR0lBAwwCgYIKwYBBQUHAwEwHQYDVR0OBBYEFNO654PMciK16i6OADZgywv2
YhI6MIIBDgYDVR0jBIIBBTCCAQGAFNeHpaGZ/0ehRBPixvl+bcaGMPpAoYHlpIHi
MIHfMRUwEwYKCZImiZPyLGQBGRYFbW9uZXkxEzARBgoJkiaJk/IsZAEZFgNyYmsx
CzAJBgNVBAYTAlJVMRswGQYDVQQIDBJSdXNzaWFuIEZlZGVyYXRpb24xDzANBgNV
BAcMBk1vc2NvdzESMBAGA1UECgwJUkJLIE1vbmV5MRUwEwYDVQQKDAxQYXltZW50
IExhYnMxKjAoBgNVBAMMIVJCSyBNb25leSBDREUgRGV2ZWxvcG1lbnQgUm9vdCBD
QTEfMB0GCSqGSIb3DQEJARYQZGV2b3BzQHJiay5tb25leYIBAjAKBggqhkjOPQQD
AgOBjAAwgYgCQgE+b3U7wbGPqS4Yvqi3GnqlpKLV7dGWCTQMOGgzmaE1B1eCPzW8
eFN+ZPJkUlS1954hMPgC4fjCVJjDnNLobpF5JQJCANROFCa/iwyaM80r/+UxYTAA
LrZ8yXjxX0DoCeF7twf58YDpr3bRtFbrhPkRE1cHokwqVk6x0Z3MUtTcmuEabA0K
-----END CERTIFICATE-----
-----BEGIN EC PRIVATE KEY-----
MHcCAQEEINZTRvXyrSvagqqTHVvjUznSRJOwHTIJW7CHbhMvnKEZoAoGCCqGSM49
AwEHoUQDQgAEePAVbO3GovEe+/SPLDb/5Q6CxGgffpebd13Jckbemmuka9RFSQVf
HBlD3dmCHE8NIPZ9QNb8M0egp/1H54HS3Q==
-----END EC PRIVATE KEY-----

148
config/kds/sys.config Normal file

@@ -0,0 +1,148 @@
[
{kds, [
{ip, "::"},
{management_port, 8022},
{storage_port, 8023},
{management_transport_opts, #{}},
{storage_transport_opts, #{
transport => ranch_ssl,
socket_opts => [
{cacertfile, "/var/lib/kds/ca.crt"},
{certfile, "/var/lib/kds/server.pem"},
{verify, verify_peer},
{fail_if_no_peer_cert, true}
]
}},
{protocol_opts, #{
request_timeout => 60000
}},
{new_key_security_parameters, #{
deduplication_hash_opts => #{
n => 16384,
r => 8,
p => 1
}
}},
{shutdown_timeout, 0},
{keyring_storage, kds_keyring_storage_file},
{keyring_storage_opts, #{
keyring_path => "/opt/kds/state/keyring"
}},
{health_check, #{
disk => {erl_health, disk , ["/", 99] },
memory => {erl_health, cg_memory, [99] },
service => {erl_health, service , [<<"kds">>]}
}},
{keyring_rotation_lifetime, 60000},
{keyring_initialize_lifetime, 180000},
{keyring_rekeying_lifetime, 180000},
{keyring_unlock_lifetime, 60000},
{shareholders, #{
<<"first">> => #{
owner => <<"ndiezel">>,
public_keys => #{
enc => <<"{
\"use\": \"enc\",
\"kty\": \"RSA\",
\"kid\": \"TGDG-PGQEfeczZkg7SYpTJXbkk-433uvGqg6T5wKYLY\",
\"alg\": \"RSA-OAEP-256\",
\"n\": \"q0PNoHvIZZn_sNel1cLqNNc-22bKIKo49-GgJQPcxgMiGH0BGEZYj2FAGZFh0tSq6kpc0CYOq6jiALLZPb-oxKSz1OpkRLe9MEK1Ku6VCZB3rqvvz8o95ELZN-KERnr7VPFnQQ7kf9e3ZfyZw2UoQO2HEbJuZDz6hDQPC2xBXF8brT1dPXl26hvtAPRPUUUdfjg7MVHrojbZqCfCY0WHFCel7wMAKM78fn0RN7Zc8htdFEOLkAbA57-6ubA6krv0pIVuIlIemvLaJ9fIIif8FRrO_eC4SYJg0w5lSwjDKDG-lkV1yDJuKvIOcjkfJJgfAavCk-ARQzH5b42e3QWXRDWLCOgJrbCfGPDWsfSVa26Vnr_j6-WfUzD2zctdfq9YKeJhm_wZxmfjyJg-Pz_mPJ8zZc-9rHNaHoiUXXOs2mXQXiOEr5hOCMQZ4pOo_TK0fzNa3OxI4Wj9fVnvbU-lmZfaPnRel9m6temzyBZeutjBUngXISiWSa5clB4zpEXrj_ncauJB3eTIIA66TID4TqNPMTuhuDREtIkOjNQUJK1Ejm6TGAHQ9-pkV_ACwjK08csqG-r1BelllnMJU5RvwDyNAOfTNeNTJzMhYwPHa9z8Zv4GTePWTvynPbDM5W7fRmhnXb1Qpg90tNaHrL3oIt5U9Rsfq2ldv3zWv8NuskE\",
\"e\": \"AQAB\"
}">>,
sig => <<"{
\"use\": \"sig\",
\"kty\": \"OKP\",
\"kid\": \"JCQN3nCVJ1oYQBLT2buyJ5m5poaslWK6jeqL9wgHeZI\",
\"crv\": \"Ed25519\",
\"alg\": \"EdDSA\",
\"x\": \"duKbDzqwQlZUUUpMTgjMYZhN6AIbS4OLbj6eI3uNYBc\"
}">>
}
},
<<"second">> => #{
owner => <<"ndiezel">>,
public_keys => #{
enc => <<"{
\"use\": \"enc\",
\"kty\": \"RSA\",
\"kid\": \"PFzgoRIaIxPTiorv0FNVLPAwFxbqkfdcjp8oTHhsiXQ\",
\"alg\": \"RSA-OAEP-256\",
\"n\": \"yVfp8flKbPUTHDCCIac-0nZ2S0hr_98d0qg-k40pQVGF9J5iDaNFkJtFzwnXVIAkzv9FFmTsyIFvy107-lOLOY55mCg1SagEeNFXqedLLCw5B_CA05Fn5XpPcwkhM5nr7ojoch9jOENjAEZ0WpqmArE6hAKo174QqaSfij3z2izBVvS-zsUirXzlIH8hH21uGvxborwrE8vfHBP1BjAgmVK7fWZDtt4PndpIkqEDFPWWEo17lBi0Riqxb-joO7zAQr16Uyfg2o5CIla04wYk0lB3yrg4fq9LG1KJXMCCK-3eFmM5HwzKsTorWiuZI0ViozRtdzBEfM5T_c3-1BiFQuILeiWVuVomAm4nPOzF3tLkQPDa1z1Z-CZyw89gaXK7FFkt_7rN6OC7iVDHx11JLZWxi03URUVuhZS3VlFjiaEZyc8eWoEcXcHqmVwu3WLBzBL2JeCN5vuPle9qvxdtARWS_JyEc7fHVc_Z-ScRbpWVUDu6pDcxPzt9HXAsMQ32PoakxSANrNTRBDLBdcGNOOGnyz5pXhq80SbLuT1ZaCKX_Rrvn27pmum7yzdsnXacvwYhps2TFls5oCqMidwpLj7XaOQb65H3Q8NtY_uDxzX-Aa4XvKL8JtSX1Q6vdPrOC4dnvghyxAfJlkxAuGevEkKDxHAV2L_SQYBe39cA080\",
\"e\": \"AQAB\"
}">>,
sig => <<"{
\"use\": \"sig\",
\"kty\": \"OKP\",
\"kid\": \"-kPdMxSFTO1FNT1Umrhbdy1nD8zYTfMbw1GA0j_fU7I\",
\"crv\": \"Ed25519\",
\"alg\": \"EdDSA\",
\"x\": \"ylYLtRGJq9k9mz9fEn5c7Y2VER76b_q5G_58C50XlU0\"
}">>
}
},
<<"third">> => #{
owner => <<"ndiezel">>,
public_keys => #{
enc => <<"{
\"use\": \"enc\",
\"kty\": \"RSA\",
\"kid\": \"eUWOkQW_IcrZtuM9SyNBRz7mguRiM_rgvM0soq5Euz8\",
\"alg\": \"RSA-OAEP-256\",
\"n\": \"tEbg-0rER3u8r7BYFtR28-oeQjQ0TrxeZEHcUhbyshahizUISoocwzbiY64Kf2GIQd1Y8HQ3GxU5a8KuiS_DvScfIklk0A_k7_y0yCD8ZJAbLSUg9o5D9XXhYhsSCQDP9MbGBfRGJpmR2ZE-OMbvv2QCsAIyq0dLJhLDU8UBe1rGLGLhIDqUMq9yB6HJuDR47hYCt0WM5bAXvK9m-392bdE6uAhwWMWctFf4bspXOo76TD4ZODRhnjKz8QTqKyyztUqECGVbzmBIkknq9xq722_vLYwsUgRItENaP4FM57psjHLhHPJ3v-gsYh_i8b_pHKP02MLOX1GSCu2YBkKxmkwbFn6k4P5SmCWcP64rfyD_grRDcKhkZE2eprQofQs4mqwTipC7p9m5crnfu5la1phkX6OYwYeGio9s2by6AjaNo_Hh9Xrerz86ZKC9Q7gohsXxQKv2oUCaqhyYtxwKsZeN-vobOObectT_A3gGcMzFz30RoVrJl4d0K_t33v-XJ4-h6Gaq4fb1KX0BDiQ8xZB6o84EI6hZoqiUiXZGhqtExoU8qBRY7WmmKojEVSRl64Lr_AV6bZMjcDPake7pXOxTUQu_BIsLpWbVpl4puiDIYIsSNxt-vbbSyiZQICPoWJfpxPpRaREDi0l9vFlnKRFZY0hyRAwqHl044E6lM1E\",
\"e\": \"AQAB\"
}">>,
sig => <<"{
\"use\": \"sig\",
\"kty\": \"OKP\",
\"kid\": \"-7dH2IVg1Tt_GpW3vFaS6VoBz9P5lpqvDJDiXxe6pBA\",
\"crv\": \"Ed25519\",
\"alg\": \"EdDSA\",
\"x\": \"jZ_9k4EiUc2L7SWrimH2trUbeiWETxX5l04Zrd3-fbg\"
}">>
}
}
}}
]},
{scoper, [
{storage, scoper_storage_logger}
]},
{kernel, [
{logger_sasl_compatible, false},
{logger_level, debug},
{logger, [
{handler, default, logger_std_h, #{
config => #{
type => standard_io
},
formatter => {logger_logstash_formatter, #{
message_redaction_regex_list => [
"[0-9]{12,19}", %% pan
"[0-9]{2}.[0-9]{2,4}", %% expiration date
"[0-9]{3,4}", %% cvv
"^ey[JI]([a-zA-Z0-9_-]*.?){1,6}" %% JWS and JWE compact representation
]
}}
}}
]}
]},
{os_mon, [
{disksup_posix_only, true}
]},
{how_are_you, [
{metrics_publishers, [
% {hay_statsd_publisher, #{
% key_prefix => <<"kds.">>,
% host => "localhost",
% port => 8125
% }}
]}
]},
{prometheus, [
{collectors, [default]}
]}
].
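The `message_redaction_regex_list` above scrubs card PANs, expiration dates, CVV codes, and compact JWS/JWE tokens from kds log output. A quick illustration of what those patterns catch, using Python's `re` (the actual redaction is applied by `logger_logstash_formatter` on the Erlang side; the patterns are copied verbatim and happen to behave the same under both engines):

```python
import re

# Patterns copied from message_redaction_regex_list above.
PATTERNS = [
    r"[0-9]{12,19}",                    # pan
    r"[0-9]{2}.[0-9]{2,4}",             # expiration date ('.' matches any separator)
    r"[0-9]{3,4}",                      # cvv
    r"^ey[JI]([a-zA-Z0-9_-]*.?){1,6}",  # JWS/JWE compact representation
]

def redact(text):
    """Replace every match of a redaction pattern with '***'."""
    for rx in PATTERNS:
        text = re.sub(rx, "***", text)
    return text

print(redact("pan=4242424242424242 exp=12/2024"))  # pan=*** exp=***
```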


@@ -0,0 +1,105 @@
# -*- mode: yaml -*-
replicaCount: 1
image:
repository: docker.io/rbkmoney/kds
tag: df8a550af175177486ec49cf3bdab64cf5db2d33
pullPolicy: IfNotPresent
hook:
enabled: true
image:
repository: docker.io/rbkmoney/holmes
tag: 07f58e297c03bcd50dc4695ddbcfa4eb30c9928e
pullPolicy: IfNotPresent
kind: post-install
command: "/opt/holmes/scripts/cds/keyring.py -a kds init"
configMap:
data:
sys.config: |
{{- readFile "sys.config" | nindent 6 }}
erl_inetrc: |
{{- readFile "../vm/erl_inetrc" | nindent 6 }}
vm.args: |
{{- tpl (readFile "../vm/erl_vm_args.gotmpl") . | nindent 6 }}
secret:
data:
ca.crt: |
{{- readFile "ca.crt" | nindent 6 }}
server.pem: |
{{- readFile "server.pem" | nindent 6 }}
service:
type: ClusterIP
ports:
- name: management
port: 8022
- name: storage
port: 8023
livenessProbe:
httpGet:
path: /health
port: management
readinessProbe:
httpGet:
path: /health
port: management
volumeMounts:
- name: config-volume
mountPath: /opt/kds/releases/0.1.0/sys.config
subPath: sys.config
readOnly: true
- name: config-volume
mountPath: /opt/kds/releases/0.1.0/vm.args
subPath: vm.args
readOnly: true
- name: config-volume
mountPath: /opt/kds/erl_inetrc
subPath: erl_inetrc
readOnly: true
- name: secret
mountPath: /var/lib/kds/
readOnly: true
- name: keyring
mountPath: /opt/kds/state/
volumes:
- name: config-volume
configMap:
name: {{ .Release.Name }}
- name: secret
secret:
secretName: {{ .Release.Name }}
- name: keyring
persistentVolumeClaim:
claimName: "{{ .Release.Name }}-keyring"
pvc:
enabled: true
name: "{{ .Release.Name }}-keyring"
storage: 3Mi
metrics:
serviceMonitor:
enabled: true
namespace: {{ .Release.Namespace }}
additionalLabels:
release: prometheus
endpoints:
- port: "management"
path: /metrics
scheme: http
ciliumPolicies:
- filters:
- port: 8022
type: TCP
- port: 8023
type: TCP
name: kds
namespace: {{ .Release.Namespace }}

File diff suppressed because it is too large


@@ -0,0 +1,6 @@
# -*- mode: yaml -*-
configMap:
data:
realms.json: |
{{- tpl (readFile "realms.json.gotmpl") . | nindent 6 }}


@@ -0,0 +1,71 @@
postgresql:
enabled: false
podLabels:
selector.cilium.rbkmoney/release: {{ .Release.Name }}
extraEnv: |
- name: PROXY_ADDRESS_FORWARDING
value: "true"
- name: DB_VENDOR
value: postgres
- name: DB_ADDR
value: postgres-postgresql
- name: DB_PORT
value: "5432"
- name: DB_DATABASE
value: keycloak
- name: DB_USER
value: postgres
- name: DB_PASSWORD
value: "H@ckM3"
- name: JAVA_OPTS
value: >-
-XX:+UseContainerSupport
-XX:MaxRAMPercentage=50.0
{{- if .Values.services.global.ipv6only }}
-Djava.net.preferIPv4Stack=false
-Djava.net.preferIPv6Addresses=true
{{- else }}
-Djava.net.preferIPv4Stack=true
{{- end }}
-Djboss.modules.system.pkgs=$JBOSS_MODULES_SYSTEM_PKGS
-Djava.awt.headless=true
- name: KEYCLOAK_IMPORT
value: /realm/realms.json
extraVolumes: |
- name: keycloak-realms-volume
configMap:
name: keycloak-realms
extraVolumeMounts: |
- name: keycloak-realms-volume
mountPath: "/realm/"
readOnly: true
ingress:
enabled: true
servicePort: http
annotations: {}
## To resolve HTTP 502 errors when using ingress-nginx:
## See https://www.ibm.com/support/pages/502-error-ingress-keycloak-response
# nginx.ingress.kubernetes.io/proxy-buffer-size: 128k
rules:
- host: 'auth.{{ .Values.services.ingress.rootDomain | default "rbk.dev" }}'
paths:
- /
{{- if .Values.services.ingress.tls.enabled }}
tls:
- hosts:
- 'auth.{{ .Values.services.ingress.rootDomain | default "rbk.dev" }}'
secretName: {{ .Values.services.ingress.tls.secretName }}
{{- end }}
ciliumPolicies:
- filters:
- port: 5432
type: TCP
name: postgres
namespace: {{ .Release.Namespace }}

39
config/logs/logback.xml Normal file

@@ -0,0 +1,39 @@
<?xml version="1.0" encoding="UTF-8"?>
<configuration scan="true" scanPeriod="60 seconds">
<include resource="org/springframework/boot/logging/logback/defaults.xml"/>
<include resource="org/springframework/boot/logging/logback/console-appender.xml"/>
<appender name="STDOUT" class="ch.qos.logback.core.ConsoleAppender">
<encoder class="net.logstash.logback.encoder.LoggingEventCompositeJsonEncoder">
<providers>
<mdc/>
<threadName/>
<message/>
<version/>
<loggerName/>
<context/>
<pattern>
<pattern>
{
"@timestamp": "%date{yyy-MM-dd'T'HH:mm:ss.SSSXXX, UTC}",
"@severity": "%level",
"application": "${LOGBACK_SERVICE_NAME}"
}
</pattern>
</pattern>
<stackTrace>
<throwableConverter class="net.logstash.logback.stacktrace.ShortenedThrowableConverter">
<shortenedClassNameLength>20</shortenedClassNameLength>
<rootCauseFirst>true</rootCauseFirst>
</throwableConverter>
</stackTrace>
</providers>
</encoder>
</appender>
<root level="INFO">
<appender-ref ref="STDOUT"/>
</root>
<include file="loggers.xml"/>
</configuration>

2
config/logs/values.yaml Normal file

@@ -0,0 +1,2 @@
filebeat:
enabled: true


@@ -0,0 +1,204 @@
service_name: machinegun
erlang:
{{- if .Values.services.global.ipv6only }}
ipv6: true
{{- else }}
ipv6: false
{{- end }}
disable_dns_cache: true
secret_cookie_file: /opt/machinegun/etc/cookie
woody_server:
ip: "::"
port: 8022
max_concurrent_connections: 8000
http_keep_alive_timeout: 3000ms
storage:
type: riak
host: riak
port: 8087
pool:
size: 10
queue_max: 100
batch_concurrency_limit: 10
connect_timeout: 500ms
request_timeout: 10s
index_query_timeout: 60s
consuela:
presence:
check_interval: 5s
registry:
nodename: consul-server-0
session_ttl: 30s
session_renewal_interval: 10s
discovery:
tags: []
logging:
out_type: stdout
# Consul client settings.
# Required when distributed machine registry is enabled.
consul:
url: http://consul-server:8500
connect_timeout: 200ms
recv_timeout: 1s
namespaces:
domain-config:
overseer:
&default_overseer_config
scan_interval: 60m
min_scan_delay: 5s
timers:
&default_timers_config
scan_interval: 60s
scan_limit: 5000
capacity: 2000
min_scan_delay: 5s
processor:
url: http://dominant:8022/v1/stateproc
http_keep_alive_timeout: 3000ms
bender_generator:
timers: disabled
overseer: *default_overseer_config
processor:
url: http://bender:8022/v1/stateproc/bender_generator
pool_size: 300
http_keep_alive_timeout: 3000ms
bender_sequence:
timers: disabled
overseer: *default_overseer_config
processor:
url: http://bender:8022/v1/stateproc/bender_sequence
pool_size: 300
http_keep_alive_timeout: 3000ms
invoice:
timers: *default_timers_config
overseer: *default_overseer_config
event_sinks:
kafka:
type: kafka
topic: mg-events-invoice
client: default_kafka_client
processor:
url: http://hellgate:8022/v1/stateproc/invoice
pool_size: 2000
http_keep_alive_timeout: 3000ms
invoice_template:
timers: disabled
overseer: *default_overseer_config
event_sinks:
kafka:
type: kafka
topic: mg-events-invoice-template
client: default_kafka_client
processor:
url: http://hellgate:8022/v1/stateproc/invoice_template
pool_size: 2000
http_keep_alive_timeout: 3000ms
customer:
timers: *default_timers_config
overseer: *default_overseer_config
event_sinks:
kafka:
type: kafka
topic: mg-events-customer
client: default_kafka_client
processor:
url: http://hellgate:8022/v1/stateproc/customer
pool_size: 300
http_keep_alive_timeout: 3000ms
recurrent_paytools:
timers: *default_timers_config
overseer: *default_overseer_config
event_sinks:
kafka:
type: kafka
topic: mg-events-recurrent-paytools
client: default_kafka_client
processor:
url: http://hellgate:8022/v1/stateproc/recurrent_paytools
pool_size: 300
http_keep_alive_timeout: 3000ms
party:
timers: disabled
overseer: *default_overseer_config
event_sinks:
kafka:
type: kafka
topic: mg-events-party
client: default_kafka_client
processor:
url: http://hellgate:8022/v1/stateproc/party
http_keep_alive_timeout: 3000ms
url-shortener:
timers: *default_timers_config
overseer: *default_overseer_config
processor:
url: http://url-shortener:8022/v1/stateproc
http_keep_alive_timeout: 3000ms
kafka:
default_kafka_client:
endpoints:
- host: "kafka-headless"
port: 9092
producer:
compression: no_compression # 'gzip' or 'snappy' to enable compression
# How many message sets (per-partition) can be sent to kafka broker
# asynchronously before receiving ACKs from broker.
partition_onwire_limit: 1
# Maximum time the broker can await the receipt of the
# number of acknowledgements in RequiredAcks. The timeout is not an exact
# limit on the request time for a few reasons: (1) it does not include
# network latency, (2) the timer begins at the beginning of the processing
# of this request so if many requests are queued due to broker overload
# that wait time will not be included, (3) kafka leader will not terminate
# a local write so if the local write time exceeds this timeout it will
# not be respected.
ack_timeout: 10s
# How many acknowledgements the kafka broker should receive from the
# clustered replicas before acking producer.
# none: the broker will not send any response
# (this is the only case where the broker will not reply to a request)
# leader_only: The leader will wait the data is written to the local log before
# sending a response.
# all_isr: If it is 'all_isr' the broker will block until the message is committed by
# all in sync replicas before acking.
required_acks: all_isr
# How many requests (per-partition) can be buffered without blocking the
# caller. The callers are released (by receiving the
# 'brod_produce_req_buffered' reply) once the request is taken into buffer
# and after the request has been put on wire, then the caller may expect
# a reply 'brod_produce_req_acked' when the request is accepted by kafka.
partition_buffer_limit: 256
# Messages are allowed to 'linger' in buffer for this amount of
# time before being sent.
# Definition of 'linger': A message is in 'linger' state when it is allowed
# to be sent on-wire, but chosen not to (for better batching).
max_linger: 0ms
# At most this amount (count not size) of messages are allowed to 'linger'
# in buffer. Messages will be sent regardless of 'linger' age when this
# threshold is hit.
# NOTE: It does not make sense to have this value set larger than
# `partition_buffer_limit'
max_linger_count: 0
# In case callers are producing faster than brokers can handle (or
# congestion on wire), try to accumulate small requests into batches
# as much as possible but not exceeding max_batch_size.
# OBS: If compression is enabled, care should be taken when picking
# the max batch size, because a compressed batch will be produced
# as one message and this message might be larger than
# 'max.message.bytes' in kafka config (or topic config)
max_batch_size: 1M
# If {max_retries, N} is given, the producer retry produce request for
# N times before crashing in case of failures like connection being
# shutdown by remote or exceptions received in produce response from kafka.
# The special value N = -1 means 'retry indefinitely'
max_retries: 3
# Time in milli-seconds to sleep before retry the failed produce request.
retry_backoff: 500ms
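The producer comments above describe brod's batching knobs (`max_linger`, `max_linger_count`, `max_batch_size`). A toy model of the flush rule those comments describe — purely illustrative, not brod's implementation:

```python
def should_flush(buffered, oldest_age_ms, max_linger_ms, max_linger_count, max_batch_size):
    """True when the lingering buffer should go on the wire: the oldest
    message has lingered past max_linger, the lingering-count cap is hit,
    or the accumulated size reaches max_batch_size."""
    size = sum(len(m) for m in buffered)
    return (
        oldest_age_ms >= max_linger_ms
        or len(buffered) >= max_linger_count
        or size >= max_batch_size
    )

# With machinegun's settings (max_linger: 0ms, max_linger_count: 0) nothing
# lingers: every message is flushed immediately.
print(should_flush(["m1"], 0, 0, 0, 1_048_576))  # True
```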


@@ -0,0 +1,107 @@
# -*- mode: yaml -*-
replicaCount: 1
image:
repository: docker.io/rbkmoney/machinegun
tag: 54eff8de6e39b1102f1eafb44b6a5ce3eab6e9a2
pullPolicy: IfNotPresent
configMap:
data:
config.yaml: |
{{- tpl (readFile "config.yaml.gotmpl") . | nindent 6 }}
secret:
data:
cookie: "SomeV3ryRand0mStringForCoock1e"
volumes:
- name: config-volume
configMap:
name: {{ .Release.Name }}
- name: cookie-secret
secret:
secretName: {{ .Release.Name }}
volumeMounts:
- name: config-volume
mountPath: /opt/machinegun/etc/config.yaml
subPath: config.yaml
readOnly: true
- name: cookie-secret
mountPath: /opt/machinegun/etc/cookie
subPath: cookie
readOnly: true
metrics:
serviceMonitor:
enabled: true
namespace: {{ .Release.Namespace }}
additionalLabels:
release: prometheus
ciliumPolicies:
- filters:
- port: 8500
type: TCP
name: consul
namespace: {{ .Release.Namespace }}
- filters:
- port: 9092
rules:
kafka:
- role: produce
topics:
- mg-events-cashreg
- mg-events-customer
- mg-events-ff-deposit
- mg-events-ff-destination
- mg-events-ff-identity
- mg-events-ff-p2p-template
- mg-events-ff-p2p-transfer
- mg-events-ff-p2p-transfer-session
- mg-events-ff-source
- mg-events-ff-w2w-transfer
- mg-events-ff-wallet
- mg-events-ff-withdrawal
- mg-events-ff-withdrawal-session
- mg-events-invoice
- mg-events-invoice-template
- mg-events-party
- mg-events-rates
- mg-events-recurrent-paytools
- mg-events-schedulers
type: TCP
name: kafka
namespace: {{ .Release.Namespace }}
- filters:
- port: 8087
type: TCP
name: riak
namespace: {{ .Release.Namespace }}
- filters:
- port: 8022
type: TCP
name: bender
namespace: {{ .Release.Namespace }}
- filters:
- port: 8022
type: TCP
name: url-shortener
namespace: {{ .Release.Namespace }}
- filters:
- port: 8022
type: TCP
name: machinegun
namespace: {{ .Release.Namespace }}
- filters:
- port: 8022
type: TCP
name: dominant
namespace: {{ .Release.Namespace }}
- filters:
- port: 8022
type: TCP
name: hellgate
namespace: {{ .Release.Namespace }}


@@ -0,0 +1,11 @@
{
"applePayMerchantID": "merchant.{{ .Values.services.ingress.rootDomain | default "rbk.dev" }}",
"brandless": false,
"capiEndpoint": "https://api.{{ .Values.services.ingress.rootDomain | default "rbk.dev" }}/",
"fixedTheme": "",
"googlePayGatewayMerchantID": "rbkmoneydevcheckout",
"googlePayMerchantID": "15442243338125315447",
"samsungPayMerchantName": "RBK.money",
"samsungPayServiceID": "c9d337a160e242ba8322aa",
"wrapperEndpoint": "https://wrapper.{{ .Values.services.ingress.rootDomain | default "rbk.dev" }}/"
}


@@ -0,0 +1,62 @@
# -*- mode: yaml -*-
replicaCount: 1
image:
repository: docker.io/rbkmoney/payform
tag: 5e8f3648568635398ea56075f19180eff28dad19
pullPolicy: IfNotPresent
service:
type: ClusterIP
ports:
- name: http
port: 8080
configMap:
data:
appConfig.json: |
{{- tpl (readFile "appConfig.json.gotmpl") . | nindent 6 }}
payform.conf: |
{{- readFile "vhost.conf" | nindent 6 }}
volumeMounts:
- name: config-volume
mountPath: /usr/share/nginx/html/appConfig.json
subPath: appConfig.json
readOnly: true
- name: config-volume
mountPath: /etc/nginx/vhosts.d/payform.conf
subPath: payform.conf
readOnly: true
volumes:
- name: config-volume
configMap:
name: {{ .Release.Name }}
livenessProbe:
httpGet:
path: /appConfig.json
port: http
initialDelaySeconds: 30
timeoutSeconds: 3
readinessProbe:
httpGet:
path: /appConfig.json
port: http
initialDelaySeconds: 30
timeoutSeconds: 3
ingress:
enabled: true
hosts:
- host: checkout.{{ .Values.services.ingress.rootDomain | default "rbk.dev" }}
paths:
- /
{{- if .Values.services.ingress.tls.enabled }}
tls:
- secretName: {{ .Values.services.ingress.tls.secretName }}
hosts:
- checkout.{{ .Values.services.ingress.rootDomain | default "rbk.dev" }}
{{- end }}
servicePort: 8080

15
config/payform/vhost.conf Normal file

@@ -0,0 +1,15 @@
server {
listen 8080;
listen [::]:8080;
server_name localhost;
location / {
root /usr/share/nginx/html;
index index.html index.htm;
}
error_page 500 502 503 504 /50x.html;
location = /50x.html {
root /usr/share/nginx/html;
}
}


@@ -0,0 +1,15 @@
replicas: 1
# This is only the initial password for vault connections.
postgresqlPassword: "H@ckM3"
initdbScripts:
rbk-dbs.sql: |
CREATE DATABASE keycloak;
CREATE DATABASE shumway;
CREATE DATABASE hooker;
# TODO: when bumping the chart version, rename master to primary
master:
podLabels:
selector.cilium.rbkmoney/release: {{ .Release.Name }}


@@ -0,0 +1,17 @@
# Jsonnet dashboard sources and shared libraries
SRC_FILES := $(wildcard src/*.jsonnet)
LIB_FILES := $(wildcard src/*.libsonnet)
# Each src/%.jsonnet compiles to result/%.json
DASHBOARDS := $(patsubst src/%.jsonnet,result/%.json,$(SRC_FILES))

.PHONY: generate format

# Fetch the grafonnet-lib submodule before the first build
src/grafonnet-lib/.git:
	git submodule update --init src/grafonnet-lib

result/%.json: src/%.jsonnet src/grafonnet-lib/.git $(LIB_FILES)
	jsonnet -o $@ $<

generate: $(DASHBOARDS)

format:
	jsonnetfmt --in-place -- $(SRC_FILES) $(LIB_FILES)

@@ -0,0 +1,30 @@
Grafana Dashboards
=========
Dashboards preloaded into Grafana during its initialization.
The ready-made dashboards live in the `result` directory; their sources are in `src`.
Developing a new dashboard
------
You will need `jsonnet` and `jsonnetfmt` installed.
To create a new dashboard, add a file with the `jsonnet` extension to the `src` directory and describe the desired panels in it, by analogy with the other files in that directory.
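By way of illustration, a minimal source might look like this (a hypothetical sketch reusing the grafonnet-lib helpers the existing dashboards import; the file name, panel title, and metric are placeholders, not part of this repo):

```jsonnet
// src/example.jsonnet -- hypothetical minimal dashboard
local grafana = import 'grafonnet-lib/grafonnet/grafana.libsonnet';
local dashboard = grafana.dashboard;
local graphPanel = grafana.graphPanel;
local prometheus = grafana.prometheus;

dashboard.new(
  title='Example Overview',
  time_from='now-3h',
  refresh='30s',
)
.addPanels([
  // one graph panel in the top-left cell of the grid
  graphPanel.new(
    title='Process Count',
    datasource='Prometheus',
  )
  .addTarget(prometheus.target(
    expr='erlang_vm_process_count',
    legendFormat='{{pod}}',
  )) { gridPos: { h: 5, w: 12, x: 0, y: 0 } },
])
```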
To format the sources, run `make format`:
```shell
$ make format
jsonnetfmt --in-place -- src/erlang-instance.jsonnet
...
```
Then, to generate the dashboards from the sources, run `make generate`:
```shell
$ make generate
jsonnet -o result/erlang-instance.json src/erlang-instance.jsonnet
...
```
Add the resulting file to `config/prometheus/values.yaml.gotmpl`.
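For instance, with the upstream Grafana chart's `dashboards` values layout (an assumption — match the keys actually used in `config/prometheus/values.yaml.gotmpl`; the relative path to `result` is also a guess), the entry might look like:

```yaml
grafana:
  dashboards:
    default:
      erlang-instance:
        json: |
          {{- readFile "../grafana-dashboards/result/erlang-instance.json" | nindent 10 }}
```

`readFile` is the same helmfile template function already used for `vhost.conf` in `config/payform`.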

@@ -0,0 +1,946 @@
{
"__inputs": [ ],
"__requires": [ ],
"annotations": {
"list": [ ]
},
"editable": false,
"gnetId": null,
"graphTooltip": 1,
"hideControls": false,
"id": null,
"links": [ ],
"panels": [
{
"aliasColors": { },
"bars": true,
"dashLength": 10,
"dashes": false,
"datasource": "Prometheus",
"fill": 1,
"fillGradient": 0,
"gridPos": {
"h": 5,
"w": 12,
"x": 0,
"y": 0
},
"id": 2,
"legend": {
"alignAsTable": false,
"avg": false,
"current": false,
"max": false,
"min": false,
"rightSide": false,
"show": true,
"sideWidth": null,
"total": false,
"values": false
},
"lines": false,
"linewidth": 1,
"links": [ ],
"nullPointMode": "null",
"percentage": false,
"pointradius": 5,
"points": false,
"renderer": "flot",
"repeat": null,
"seriesOverrides": [ ],
"spaceLength": 10,
"stack": true,
"steppedLine": false,
"targets": [
{
"expr": "erlang_vm_memory_bytes_total{namespace=\"$namespace\", pod=\"$pod\", kind=\"processes\"}",
"format": "time_series",
"intervalFactor": 2,
"legendFormat": "Processes Memory",
"refId": "A"
},
{
"expr": "erlang_vm_memory_system_bytes_total{namespace=\"$namespace\", pod=\"$pod\", usage=\"atom\"}",
"format": "time_series",
"intervalFactor": 2,
"legendFormat": "Atoms",
"refId": "B"
},
{
"expr": "erlang_vm_memory_system_bytes_total{namespace=\"$namespace\", pod=\"$pod\", usage=\"binary\"}",
"format": "time_series",
"intervalFactor": 2,
"legendFormat": "Binary",
"refId": "C"
},
{
"expr": "erlang_vm_memory_system_bytes_total{namespace=\"$namespace\", pod=\"$pod\", usage=\"code\"}",
"format": "time_series",
"intervalFactor": 2,
"legendFormat": "Code",
"refId": "D"
},
{
"expr": "erlang_vm_memory_system_bytes_total{namespace=\"$namespace\", pod=\"$pod\", usage=\"ets\"}",
"format": "time_series",
"intervalFactor": 2,
"legendFormat": "ETS",
"refId": "E"
}
],
"thresholds": [ ],
"timeFrom": null,
"timeShift": null,
"title": "BEAM Memory",
"tooltip": {
"shared": true,
"sort": 0,
"value_type": "individual"
},
"type": "graph",
"xaxis": {
"buckets": null,
"mode": "time",
"name": null,
"show": true,
"values": [ ]
},
"yaxes": [
{
"format": "bytes",
"label": null,
"logBase": 1,
"max": null,
"min": 0,
"show": true
},
{
"format": "bytes",
"label": null,
"logBase": 1,
"max": null,
"min": 0,
"show": true
}
]
},
{
"aliasColors": { },
"bars": true,
"dashLength": 10,
"dashes": false,
"datasource": "Prometheus",
"fill": 1,
"fillGradient": 0,
"gridPos": {
"h": 5,
"w": 12,
"x": 0,
"y": 0
},
"id": 3,
"legend": {
"alignAsTable": false,
"avg": false,
"current": false,
"max": false,
"min": false,
"rightSide": false,
"show": true,
"sideWidth": null,
"total": false,
"values": false
},
"lines": false,
"linewidth": 1,
"links": [ ],
"nullPointMode": "null",
"percentage": false,
"pointradius": 5,
"points": false,
"renderer": "flot",
"repeat": null,
"seriesOverrides": [
{
"alias": "CPU Limit",
"bars": false,
"color": "#890f02",
"fill": 0,
"lines": true,
"zindex": 3
},
{
"alias": "CPU Requests",
"bars": false,
"color": "#f2495c",
"fill": 0,
"lines": true,
"zindex": 3
},
{
"alias": "BEAM CPU Time",
"color": "#3f6833",
"zindex": 2
},
{
"alias": "Pod CPU Time",
"color": "#ef843c",
"zindex": 1
},
{
"alias": "CPU Throttling",
"bars": false,
"color": "#b877d9",
"fill": 0,
"lines": true,
"yaxis": 2,
"zindex": 3
}
],
"spaceLength": 10,
"stack": false,
"steppedLine": false,
"targets": [
{
"expr": "kube_pod_container_resource_limits_cpu_cores{namespace=\"$namespace\", pod=\"$pod\"}",
"format": "time_series",
"intervalFactor": 2,
"legendFormat": "CPU Limit",
"refId": "A"
},
{
"expr": "kube_pod_container_resource_requests_cpu_cores{namespace=\"$namespace\", pod=\"$pod\"}",
"format": "time_series",
"intervalFactor": 2,
"legendFormat": "CPU Requests",
"refId": "B"
},
{
"expr": "irate(erlang_vm_statistics_runtime_milliseconds{namespace=\"$namespace\", pod=\"$pod\"}[$interval]) / on (namespace, pod) irate(erlang_vm_statistics_wallclock_time_milliseconds[$interval])",
"format": "time_series",
"intervalFactor": 2,
"legendFormat": "BEAM CPU Time",
"refId": "C"
},
{
"expr": "sum(node_namespace_pod_container:container_cpu_usage_seconds_total:sum_rate{namespace=\"$namespace\", pod=\"$pod\"})",
"format": "time_series",
"intervalFactor": 2,
"legendFormat": "Pod CPU Time",
"refId": "D"
},
{
"expr": "max(irate(container_cpu_cfs_throttled_periods_total{namespace=\"$namespace\", pod=\"$pod\", container=\"\"}[$interval]) / on (namespace, pod, container, node, service) irate(container_cpu_cfs_periods_total[$interval]))",
"format": "time_series",
"intervalFactor": 2,
"legendFormat": "CPU Throttling",
"refId": "E"
}
],
"thresholds": [ ],
"timeFrom": null,
"timeShift": null,
"title": "CPU",
"tooltip": {
"shared": true,
"sort": 0,
"value_type": "individual"
},
"type": "graph",
"xaxis": {
"buckets": null,
"mode": "time",
"name": null,
"show": true,
"values": [ ]
},
"yaxes": [
{
"format": "percentunit",
"label": null,
"logBase": 1,
"max": null,
"min": 0,
"show": true
},
{
"format": "percentunit",
"label": null,
"logBase": 1,
"max": null,
"min": 0,
"show": true
}
]
},
{
"aliasColors": { },
"bars": false,
"dashLength": 10,
"dashes": false,
"datasource": "Prometheus",
"fill": 1,
"fillGradient": 0,
"gridPos": {
"h": 5,
"w": 12,
"x": 0,
"y": 0
},
"id": 4,
"legend": {
"alignAsTable": false,
"avg": false,
"current": false,
"max": false,
"min": false,
"rightSide": false,
"show": true,
"sideWidth": null,
"total": false,
"values": false
},
"lines": true,
"linewidth": 1,
"links": [ ],
"nullPointMode": "null",
"percentage": false,
"pointradius": 5,
"points": false,
"renderer": "flot",
"repeat": null,
"seriesOverrides": [
{
"alias": "Context Switches",
"yaxis": 2
}
],
"spaceLength": 10,
"stack": false,
"steppedLine": false,
"targets": [
{
"expr": "irate(erlang_vm_statistics_context_switches{namespace=\"$namespace\", pod=\"$pod\"}[$interval])",
"format": "time_series",
"intervalFactor": 2,
"legendFormat": "Context Switches",
"refId": "A"
},
{
"expr": "irate(erlang_vm_statistics_reductions_total{namespace=\"$namespace\", pod=\"$pod\"}[$interval])",
"format": "time_series",
"intervalFactor": 2,
"legendFormat": "Reductions",
"refId": "B"
}
],
"thresholds": [ ],
"timeFrom": null,
"timeShift": null,
"title": "Load",
"tooltip": {
"shared": true,
"sort": 0,
"value_type": "individual"
},
"type": "graph",
"xaxis": {
"buckets": null,
"mode": "time",
"name": null,
"show": true,
"values": [ ]
},
"yaxes": [
{
"format": "short",
"label": null,
"logBase": 1,
"max": null,
"min": 0,
"show": true
},
{
"format": "short",
"label": null,
"logBase": 1,
"max": null,
"min": 0,
"show": true
}
]
},
{
"aliasColors": { },
"bars": true,
"dashLength": 10,
"dashes": false,
"datasource": "Prometheus",
"fill": 1,
"fillGradient": 0,
"gridPos": {
"h": 5,
"w": 12,
"x": 0,
"y": 0
},
"id": 5,
"legend": {
"alignAsTable": false,
"avg": false,
"current": false,
"max": false,
"min": false,
"rightSide": false,
"show": true,
"sideWidth": null,
"total": false,
"values": false
},
"lines": false,
"linewidth": 1,
"links": [ ],
"nullPointMode": "null",
"percentage": false,
"pointradius": 5,
"points": false,
"renderer": "flot",
"repeat": null,
"seriesOverrides": [
{
"alias": "Bytes Reclaimed",
"bars": false,
"fill": 0,
"lines": true,
"yaxis": 2,
"zindex": 1
}
],
"spaceLength": 10,
"stack": false,
"steppedLine": false,
"targets": [
{
"expr": "irate(erlang_vm_statistics_garbage_collection_number_of_gcs{namespace=\"$namespace\", pod=\"$pod\"}[$interval])",
"format": "time_series",
"intervalFactor": 2,
"legendFormat": "Number of GCs",
"refId": "A"
},
{
"expr": "irate(erlang_vm_statistics_garbage_collection_bytes_reclaimed{namespace=\"$namespace\", pod=\"$pod\"}[$interval])",
"format": "time_series",
"intervalFactor": 2,
"legendFormat": "Bytes Reclaimed",
"refId": "B"
}
],
"thresholds": [ ],
"timeFrom": null,
"timeShift": null,
"title": "GC",
"tooltip": {
"shared": true,
"sort": 0,
"value_type": "individual"
},
"type": "graph",
"xaxis": {
"buckets": null,
"mode": "time",
"name": null,
"show": true,
"values": [ ]
},
"yaxes": [
{
"format": "short",
"label": null,
"logBase": 1,
"max": null,
"min": 0,
"show": true
},
{
"format": "bytes",
"label": null,
"logBase": 1,
"max": null,
"min": 0,
"show": true
}
]
},
{
"aliasColors": { },
"bars": true,
"dashLength": 10,
"dashes": false,
"datasource": "Prometheus",
"fill": 1,
"fillGradient": 0,
"gridPos": {
"h": 5,
"w": 12,
"x": 12,
"y": 0
},
"id": 6,
"legend": {
"alignAsTable": false,
"avg": false,
"current": false,
"max": false,
"min": false,
"rightSide": false,
"show": true,
"sideWidth": null,
"total": false,
"values": false
},
"lines": false,
"linewidth": 1,
"links": [ ],
"nullPointMode": "null",
"percentage": false,
"pointradius": 5,
"points": false,
"renderer": "flot",
"repeat": null,
"seriesOverrides": [
{
"alias": "Memory Limit",
"bars": false,
"color": "#890f02",
"fill": 0,
"lines": true,
"stack": false,
"zindex": 3
},
{
"alias": "Memory Requests",
"bars": false,
"color": "#f2495c",
"fill": 0,
"lines": true,
"stack": false,
"zindex": 3
},
{
"alias": "BEAM Total",
"color": "#3274d9",
"stack": false,
"zindex": 1
},
{
"alias": "Pod Usage",
"color": "#3f6833",
"stack": false,
"zindex": -2
},
{
"alias": "Pod RSS",
"stack": "A",
"zindex": -1
}
],
"spaceLength": 10,
"stack": true,
"steppedLine": false,
"targets": [
{
"expr": "kube_pod_container_resource_limits_memory_bytes{namespace=\"$namespace\", pod=\"$pod\"}",
"format": "time_series",
"intervalFactor": 2,
"legendFormat": "Memory Limit",
"refId": "A"
},
{
"expr": "kube_pod_container_resource_requests_memory_bytes{namespace=\"$namespace\", pod=\"$pod\"}",
"format": "time_series",
"intervalFactor": 2,
"legendFormat": "Memory Requests",
"refId": "B"
},
{
"expr": "sum(erlang_vm_memory_system_bytes_total{namespace=\"$namespace\", pod=\"$pod\"})",
"format": "time_series",
"intervalFactor": 2,
"legendFormat": "BEAM Total",
"refId": "C"
},
{
"expr": "max(container_memory_rss{namespace=\"$namespace\", pod=\"$pod\", container=\"\"})",
"format": "time_series",
"intervalFactor": 2,
"legendFormat": "Pod RSS",
"refId": "D"
},
{
"expr": "max(container_memory_cache{namespace=\"$namespace\", pod=\"$pod\", container=\"\"})",
"format": "time_series",
"intervalFactor": 2,
"legendFormat": "Pod Cache",
"refId": "E"
},
{
"expr": "max(container_memory_usage_bytes{namespace=\"$namespace\", pod=\"$pod\", container=\"\"})",
"format": "time_series",
"intervalFactor": 2,
"legendFormat": "Pod Usage",
"refId": "F"
}
],
"thresholds": [ ],
"timeFrom": null,
"timeShift": null,
"title": "Pod Memory",
"tooltip": {
"shared": true,
"sort": 0,
"value_type": "individual"
},
"type": "graph",
"xaxis": {
"buckets": null,
"mode": "time",
"name": null,
"show": true,
"values": [ ]
},
"yaxes": [
{
"format": "bytes",
"label": null,
"logBase": 1,
"max": null,
"min": 0,
"show": true
},
{
"format": "bytes",
"label": null,
"logBase": 1,
"max": null,
"min": 0,
"show": true
}
]
},
{
"aliasColors": { },
"bars": true,
"dashLength": 10,
"dashes": false,
"datasource": "Prometheus",
"fill": 1,
"fillGradient": 0,
"gridPos": {
"h": 5,
"w": 12,
"x": 12,
"y": 0
},
"id": 7,
"legend": {
"alignAsTable": false,
"avg": false,
"current": false,
"max": false,
"min": false,
"rightSide": false,
"show": true,
"sideWidth": null,
"total": false,
"values": false
},
"lines": false,
"linewidth": 1,
"links": [ ],
"nullPointMode": "null",
"percentage": false,
"pointradius": 5,
"points": false,
"renderer": "flot",
"repeat": null,
"seriesOverrides": [
{
"alias": "Input",
"color": "#73bf69"
},
{
"alias": "Output",
"color": "#1f60c4"
}
],
"spaceLength": 10,
"stack": false,
"steppedLine": false,
"targets": [
{
"expr": "irate(erlang_vm_statistics_bytes_received_total{namespace=\"$namespace\", pod=\"$pod\"}[$interval])",
"format": "time_series",
"intervalFactor": 2,
"legendFormat": "Input",
"refId": "A"
},
{
"expr": "-irate(erlang_vm_statistics_bytes_output_total{namespace=\"$namespace\", pod=\"$pod\"}[$interval])",
"format": "time_series",
"intervalFactor": 2,
"legendFormat": "Output",
"refId": "B"
}
],
"thresholds": [ ],
"timeFrom": null,
"timeShift": null,
"title": "IO",
"tooltip": {
"shared": true,
"sort": 0,
"value_type": "individual"
},
"type": "graph",
"xaxis": {
"buckets": null,
"mode": "time",
"name": null,
"show": true,
"values": [ ]
},
"yaxes": [
{
"format": "bytes",
"label": null,
"logBase": 1,
"max": null,
"min": null,
"show": true
},
{
"format": "bytes",
"label": null,
"logBase": 1,
"max": null,
"min": null,
"show": true
}
]
},
{
"aliasColors": { },
"bars": true,
"dashLength": 10,
"dashes": false,
"datasource": "Prometheus",
"fill": 1,
"fillGradient": 0,
"gridPos": {
"h": 5,
"w": 12,
"x": 12,
"y": 0
},
"id": 8,
"legend": {
"alignAsTable": false,
"avg": false,
"current": false,
"max": false,
"min": false,
"rightSide": false,
"show": true,
"sideWidth": null,
"total": false,
"values": false
},
"lines": false,
"linewidth": 1,
"links": [ ],
"nullPointMode": "null",
"percentage": false,
"pointradius": 5,
"points": false,
"renderer": "flot",
"repeat": null,
"seriesOverrides": [
{
"alias": "Run Queues Length",
"yaxis": 2,
"zindex": 1
}
],
"spaceLength": 10,
"stack": false,
"steppedLine": false,
"targets": [
{
"expr": "erlang_vm_process_count{namespace=\"$namespace\", pod=\"$pod\"}",
"format": "time_series",
"intervalFactor": 2,
"legendFormat": "Processes",
"refId": "A"
},
{
"expr": "erlang_vm_statistics_run_queues_length_total{namespace=\"$namespace\", pod=\"$pod\"}",
"format": "time_series",
"intervalFactor": 2,
"legendFormat": "Run Queues Length",
"refId": "B"
}
],
"thresholds": [ ],
"timeFrom": null,
"timeShift": null,
"title": "Processes",
"tooltip": {
"shared": true,
"sort": 0,
"value_type": "individual"
},
"type": "graph",
"xaxis": {
"buckets": null,
"mode": "time",
"name": null,
"show": true,
"values": [ ]
},
"yaxes": [
{
"format": "short",
"label": null,
"logBase": 1,
"max": null,
"min": 0,
"show": true
},
{
"format": "short",
"label": null,
"logBase": 1,
"max": null,
"min": 0,
"show": true
}
]
}
],
"refresh": "30s",
"rows": [ ],
"schemaVersion": 14,
"style": "dark",
"tags": [ ],
"templating": {
"list": [
{
"auto": false,
"auto_count": 300,
"auto_min": "10s",
"current": {
"text": "1m",
"value": "1m"
},
"hide": 0,
"label": "Interval",
"name": "interval",
"query": "1m,5m,10m,30m,1h,6h,12h,1d,7d,14d,30d",
"refresh": 2,
"type": "interval"
},
{
"allValue": null,
"current": { },
"datasource": "Prometheus",
"hide": 0,
"includeAll": false,
"label": "Namespace",
"multi": false,
"name": "namespace",
"options": [ ],
"query": "label_values(erlang_vm_process_count, namespace)",
"refresh": 1,
"regex": "",
"sort": 0,
"tagValuesQuery": "",
"tags": [ ],
"tagsQuery": "",
"type": "query",
"useTags": false
},
{
"allValue": null,
"current": { },
"datasource": "Prometheus",
"hide": 0,
"includeAll": false,
"label": "Service",
"multi": false,
"name": "service",
"options": [ ],
"query": "label_values(erlang_vm_process_count{namespace=\"$namespace\"}, service)",
"refresh": 1,
"regex": "",
"sort": 0,
"tagValuesQuery": "",
"tags": [ ],
"tagsQuery": "",
"type": "query",
"useTags": false
},
{
"allValue": null,
"current": { },
"datasource": "Prometheus",
"hide": 0,
"includeAll": false,
"label": "Pod",
"multi": false,
"name": "pod",
"options": [ ],
"query": "label_values(erlang_vm_process_count{namespace=\"$namespace\", service=\"$service\"}, pod)",
"refresh": 2,
"regex": "",
"sort": 0,
"tagValuesQuery": "",
"tags": [ ],
"tagsQuery": "",
"type": "query",
"useTags": false
}
]
},
"time": {
"from": "now-3h",
"to": "now"
},
"timepicker": {
"refresh_intervals": [
"5s",
"10s",
"30s",
"1m",
"5m",
"15m",
"30m",
"1h",
"2h",
"1d"
],
"time_options": [
"5m",
"15m",
"1h",
"6h",
"12h",
"24h",
"2d",
"7d",
"30d"
]
},
"timezone": "browser",
"title": "Erlang Instance Overview",
"version": 0
}

File diff suppressed because it is too large

@@ -0,0 +1,60 @@
local grafana = import 'grafonnet-lib/grafonnet/grafana.libsonnet';
local dashboard = grafana.dashboard;
local template = grafana.template;
local erlang = import 'erlang.libsonnet';
local datasource = 'Prometheus';
dashboard.new(
title='Erlang Instance Overview',
time_from='now-3h',
time_to='now',
refresh='30s',
graphTooltip='shared_crosshair',
)
.addTemplate(
template.interval(
name='interval',
label='Interval',
query='1m,5m,10m,30m,1h,6h,12h,1d,7d,14d,30d',
current='1m',
)
)
.addTemplate(
template.new(
name='namespace',
label='Namespace',
datasource=datasource,
query='label_values(erlang_vm_process_count, namespace)',
refresh='load',
)
)
.addTemplate(
template.new(
name='service',
label='Service',
datasource=datasource,
query='label_values(erlang_vm_process_count{namespace="$namespace"}, service)',
refresh='load',
)
)
.addTemplate(
template.new(
name='pod',
label='Pod',
datasource=datasource,
query='label_values(erlang_vm_process_count{namespace="$namespace", service="$service"}, pod)',
refresh='time',
)
)
.addPanels([
// left column
erlang.beamMemoryPanel(datasource) { gridPos: { h: 5, w: 12, x: 0, y: 0 } },
erlang.cpuPanel(datasource) { gridPos: { h: 5, w: 12, x: 0, y: 0 } },
erlang.loadPanel(datasource) { gridPos: { h: 5, w: 12, x: 0, y: 0 } },
erlang.gcPanel(datasource) { gridPos: { h: 5, w: 12, x: 0, y: 0 } },
// right column
erlang.memoryPanel(datasource) { gridPos: { h: 5, w: 12, x: 12, y: 0 } },
erlang.ioPanel(datasource) { gridPos: { h: 5, w: 12, x: 12, y: 0 } },
erlang.processesPanel(datasource) { gridPos: { h: 5, w: 12, x: 12, y: 0 } },
])

@@ -0,0 +1,278 @@
local grafana = import 'grafonnet-lib/grafonnet/grafana.libsonnet';
local graphPanel = grafana.graphPanel;
local prometheus = grafana.prometheus;
{
beamMemoryPanel(datasource)::
graphPanel.new(
title='BEAM Memory',
datasource=datasource,
bars=true,
lines=false,
stack=true,
format='bytes',
min=0,
)
.addTargets([
prometheus.target(
expr='erlang_vm_memory_bytes_total{namespace="$namespace", pod="$pod", kind="processes"}',
legendFormat='Processes Memory'
),
prometheus.target(
expr='erlang_vm_memory_system_bytes_total{namespace="$namespace", pod="$pod", usage="atom"}',
legendFormat='Atoms'
),
prometheus.target(
expr='erlang_vm_memory_system_bytes_total{namespace="$namespace", pod="$pod", usage="binary"}',
legendFormat='Binary'
),
prometheus.target(
expr='erlang_vm_memory_system_bytes_total{namespace="$namespace", pod="$pod", usage="code"}',
legendFormat='Code'
),
prometheus.target(
expr='erlang_vm_memory_system_bytes_total{namespace="$namespace", pod="$pod", usage="ets"}',
legendFormat='ETS'
),
]),
cpuPanel(datasource)::
graphPanel.new(
title='CPU',
datasource=datasource,
bars=true,
lines=false,
formatY1='percentunit',
formatY2='percentunit',
min=0,
)
.addTargets([
prometheus.target(
expr='kube_pod_container_resource_limits_cpu_cores{namespace="$namespace", pod="$pod"}',
legendFormat='CPU Limit'
),
prometheus.target(
expr='kube_pod_container_resource_requests_cpu_cores{namespace="$namespace", pod="$pod"}',
legendFormat='CPU Requests'
),
prometheus.target(
expr='irate(erlang_vm_statistics_runtime_milliseconds{namespace="$namespace", pod="$pod"}[$interval]) / on (namespace, pod) irate(erlang_vm_statistics_wallclock_time_milliseconds[$interval])',
legendFormat='BEAM CPU Time'
),
prometheus.target(
expr='sum(node_namespace_pod_container:container_cpu_usage_seconds_total:sum_rate{namespace="$namespace", pod="$pod"})',
legendFormat='Pod CPU Time'
),
prometheus.target(
expr='max(irate(container_cpu_cfs_throttled_periods_total{namespace="$namespace", pod="$pod", container=""}[$interval]) / on (namespace, pod, container, node, service) irate(container_cpu_cfs_periods_total[$interval]))',
legendFormat='CPU Throttling'
),
])
.addSeriesOverride({
alias: 'CPU Limit',
bars: false,
fill: 0,
zindex: 3,
lines: true,
color: '#890f02',
})
.addSeriesOverride({
alias: 'CPU Requests',
bars: false,
fill: 0,
zindex: 3,
lines: true,
color: '#f2495c',
})
.addSeriesOverride({
alias: 'BEAM CPU Time',
zindex: 2,
color: '#3f6833',
})
.addSeriesOverride({
alias: 'Pod CPU Time',
zindex: 1,
color: '#ef843c',
})
.addSeriesOverride({
alias: 'CPU Throttling',
zindex: 3,
yaxis: 2,
bars: false,
fill: 0,
lines: true,
color: '#b877d9',
}),
memoryPanel(datasource)::
graphPanel.new(
title='Pod Memory',
datasource=datasource,
bars=true,
lines=false,
format='bytes',
stack=true,
min=0,
)
.addTargets([
prometheus.target(
expr='kube_pod_container_resource_limits_memory_bytes{namespace="$namespace", pod="$pod"}',
legendFormat='Memory Limit'
),
prometheus.target(
expr='kube_pod_container_resource_requests_memory_bytes{namespace="$namespace", pod="$pod"}',
legendFormat='Memory Requests'
),
prometheus.target(
expr='sum(erlang_vm_memory_system_bytes_total{namespace="$namespace", pod="$pod"})',
legendFormat='BEAM Total'
),
prometheus.target(
expr='max(container_memory_rss{namespace="$namespace", pod="$pod", container=""})',
legendFormat='Pod RSS'
),
prometheus.target(
expr='max(container_memory_cache{namespace="$namespace", pod="$pod", container=""})',
legendFormat='Pod Cache'
),
prometheus.target(
expr='max(container_memory_usage_bytes{namespace="$namespace", pod="$pod", container=""})',
legendFormat='Pod Usage'
),
])
.addSeriesOverride({
alias: 'Memory Limit',
bars: false,
lines: true,
stack: false,
fill: 0,
zindex: 3,
color: '#890f02',
})
.addSeriesOverride({
alias: 'Memory Requests',
bars: false,
lines: true,
stack: false,
fill: 0,
zindex: 3,
color: '#f2495c',
})
.addSeriesOverride({
alias: 'BEAM Total',
zindex: 1,
color: '#3274d9',
stack: false,
})
.addSeriesOverride({
alias: 'Pod Usage',
zindex: -2,
color: '#3f6833',
stack: false,
})
.addSeriesOverride({
alias: 'Pod RSS',
zindex: -1,
stack: 'A',
}),
ioPanel(datasource)::
graphPanel.new(
title='IO',
datasource=datasource,
bars=true,
lines=false,
format='bytes',
)
.addTargets([
prometheus.target(
expr='irate(erlang_vm_statistics_bytes_received_total{namespace="$namespace", pod="$pod"}[$interval])',
legendFormat='Input'
),
prometheus.target(
expr='-irate(erlang_vm_statistics_bytes_output_total{namespace="$namespace", pod="$pod"}[$interval])',
legendFormat='Output'
),
])
.addSeriesOverride({
alias: 'Input',
color: '#73bf69',
})
.addSeriesOverride({
alias: 'Output',
color: '#1f60c4',
}),
loadPanel(datasource)::
graphPanel.new(
title='Load',
datasource=datasource,
min=0,
)
.addTargets([
prometheus.target(
expr='irate(erlang_vm_statistics_context_switches{namespace="$namespace", pod="$pod"}[$interval])',
legendFormat='Context Switches'
),
prometheus.target(
expr='irate(erlang_vm_statistics_reductions_total{namespace="$namespace", pod="$pod"}[$interval])',
legendFormat='Reductions'
),
])
.addSeriesOverride({
alias: 'Context Switches',
yaxis: 2,
}),
processesPanel(datasource)::
graphPanel.new(
title='Processes',
datasource=datasource,
bars=true,
lines=false,
min=0,
)
.addTargets([
prometheus.target(
expr='erlang_vm_process_count{namespace="$namespace", pod="$pod"}',
legendFormat='Processes'
),
prometheus.target(
expr='erlang_vm_statistics_run_queues_length_total{namespace="$namespace", pod="$pod"}',
legendFormat='Run Queues Length'
),
])
.addSeriesOverride({
alias: 'Run Queues Length',
yaxis: 2,
zindex: 1,
}),
gcPanel(datasource)::
graphPanel.new(
title='GC',
datasource=datasource,
formatY2='bytes',
bars=true,
lines=false,
min=0,
)
.addTargets([
prometheus.target(
expr='irate(erlang_vm_statistics_garbage_collection_number_of_gcs{namespace="$namespace", pod="$pod"}[$interval])',
legendFormat='Number of GCs'
),
prometheus.target(
expr='irate(erlang_vm_statistics_garbage_collection_bytes_reclaimed{namespace="$namespace", pod="$pod"}[$interval])',
legendFormat='Bytes Reclaimed'
),
])
.addSeriesOverride({
alias: 'Bytes Reclaimed',
yaxis: 2,
zindex: 1,
fill: 0,
bars: false,
lines: true,
}),
}

@@ -0,0 +1,206 @@
local grafana = import 'grafonnet-lib/grafonnet/grafana.libsonnet';
local dashboard = grafana.dashboard;
local template = grafana.template;
local row = grafana.row;
local erlang = import 'erlang.libsonnet';
local machinegun = import 'machinegun.libsonnet';
local datasource = 'Prometheus';
dashboard.new(
title='Machinegun Namespace Overview',
time_from='now-3h',
time_to='now',
refresh='30s',
graphTooltip='shared_crosshair',
)
.addTemplate(
template.interval(
name='interval',
label='Interval',
query='1m,5m,10m,30m,1h,6h,12h,1d,7d,14d,30d',
current='1m',
)
)
.addTemplate(
template.new(
name='namespace',
label='K8S Namespace',
datasource=datasource,
query='label_values(erlang_vm_process_count, namespace)',
refresh='load',
)
)
.addTemplate(
template.new(
name='mg_namespace',
label='MG Namespace',
datasource=datasource,
current='invoice',
query='label_values(mg_machine_lifecycle_changes_total{namespace="$namespace"}, exported_namespace)',
refresh='load',
)
)
.addTemplate(
template.custom(
name='service',
label='Service',
query='machinegun',
current='machinegun',
hide='all',
)
)
.addTemplate(
template.new(
name='pod',
label='Pod',
datasource=datasource,
query='label_values(erlang_vm_process_count{namespace="$namespace", service="$service"}, pod)',
refresh='time',
includeAll=true,
hide='all',
current='all',
)
)
.addTemplate(
template.new(
name='scheduler',
label='Scheduler Name',
datasource=datasource,
query='label_values(mg_scheduler_task_changes_total{namespace="$namespace", exported_namespace="$mg_namespace"}, name)',
refresh='time',
includeAll=true,
hide='all',
current='all',
)
)
.addTemplate(
template.new(
name='storage',
label='Storage Name',
datasource=datasource,
query='label_values(mg_storage_operation_changes_total{namespace="$namespace", exported_namespace="$mg_namespace"}, name)',
refresh='time',
includeAll=true,
hide='all',
current='all',
)
)
.addTemplate(
template.new(
name='storage_operation',
label='Storage Operation Name',
datasource=datasource,
query='label_values(mg_storage_operation_changes_total{namespace="$namespace", exported_namespace="$mg_namespace"}, operation)',
refresh='time',
includeAll=true,
hide='all',
current='all',
)
)
.addTemplate(
template.new(
name='riak_pool',
label='Riak Pool Name',
datasource=datasource,
query='label_values(mg_riak_pool_connections_in_use{namespace="$namespace", exported_namespace="$mg_namespace"}, name)',
refresh='time',
includeAll=true,
hide='all',
current='all',
)
)
.addTemplate(
template.new(
name='processing_impact',
label='Processing Impact',
datasource=datasource,
query='label_values(mg_machine_processing_changes_total{namespace="$namespace", exported_namespace="$mg_namespace"}, impact)',
refresh='time',
includeAll=true,
hide='all',
current='all',
)
)
.addPanels([
row.new(
title='`[[mg_namespace]]` Overview',
collapse=false,
) { gridPos: { h: 1, w: 24, x: 0, y: 0 } },
// left column
machinegun.timersLifecyclePanel(datasource) { gridPos: { h: 5, w: 12, x: 0, y: 1 } },
machinegun.machinesImpactPanel(datasource) { gridPos: { h: 5, w: 12, x: 0, y: 1 } },
// right column
machinegun.machinesLifecyclePanel(datasource) { gridPos: { h: 5, w: 12, x: 12, y: 1 } },
machinegun.machinesStartQueueUsagePanel(datasource) { gridPos: { h: 5, w: 12, x: 12, y: 1 } },
row.new(
title='BEAM on [[pod]]',
repeat='pod',
collapse=false,
) { gridPos: { h: 1, w: 24, x: 0, y: 6 } },
// left column
erlang.cpuPanel(datasource) { gridPos: { h: 5, w: 12, x: 0, y: 7 } },
erlang.loadPanel(datasource) { gridPos: { h: 5, w: 12, x: 0, y: 7 } },
erlang.gcPanel(datasource) { gridPos: { h: 5, w: 12, x: 0, y: 7 } },
// right column
erlang.memoryPanel(datasource) { gridPos: { h: 5, w: 12, x: 12, y: 7 } },
erlang.ioPanel(datasource) { gridPos: { h: 5, w: 12, x: 12, y: 7 } },
erlang.processesPanel(datasource) { gridPos: { h: 5, w: 12, x: 12, y: 7 } },
row.new(
title='`[[scheduler]]` Scheduler Overview',
repeat='scheduler',
collapse=true,
)
.addPanels([
// left column
machinegun.schedulerChangesPanel(datasource) { gridPos: { h: 5, w: 12, x: 0, y: 0 } },
machinegun.schedulerScanDelayPanel(datasource) { gridPos: { h: 5, w: 12, x: 0, y: 0 } },
machinegun.schedulerTaskDelayPanel(datasource) { gridPos: { h: 5, w: 12, x: 0, y: 0 } },
// right column
machinegun.schedulerQuotaPanel(datasource) { gridPos: { h: 5, w: 12, x: 12, y: 0 } },
machinegun.schedulerScanDurationPanel(datasource) { gridPos: { h: 5, w: 12, x: 12, y: 0 } },
machinegun.schedulerTaskProcessingDurationPanel(datasource) { gridPos: { h: 5, w: 12, x: 12, y: 0 } },
]) { gridPos: { h: 1, w: 24, x: 0, y: 8 } },
row.new(
title='`[[storage]]` Storage Overview',
repeat='storage',
collapse=true,
)
.addPanels([
// left column
machinegun.storageChangesPanel(datasource) { gridPos: { h: 5, w: 12, x: 0, y: 0 }, repeat: 'storage_operation' },
// right column
machinegun.storageDurationPanel(datasource) { gridPos: { h: 5, w: 12, x: 12, y: 0 }, repeat: 'storage_operation' },
]) { gridPos: { h: 1, w: 24, x: 0, y: 9 } },
row.new(
title='`[[riak_pool]]` Riak Pool Overview',
repeat='riak_pool',
collapse=true,
)
.addPanels([
// left column
machinegun.riakPoolConnectionsPanel(datasource) { gridPos: { h: 5, w: 12, x: 0, y: 0 } },
machinegun.riakPoolInUsePerRequestPanel(datasource) { gridPos: { h: 5, w: 12, x: 0, y: 0 } },
machinegun.riakPoolQueuedPerRequestPanel(datasource) { gridPos: { h: 5, w: 12, x: 0, y: 0 } },
machinegun.riakPoolConnectTimeoutPanel(datasource) { gridPos: { h: 5, w: 12, x: 0, y: 0 } },
machinegun.riakPoolKilledFreePanel(datasource) { gridPos: { h: 5, w: 12, x: 0, y: 0 } },
// right column
machinegun.riakPoolQueuePanel(datasource) { gridPos: { h: 5, w: 12, x: 12, y: 0 } },
machinegun.riakPoolFreePerRequestPanel(datasource) { gridPos: { h: 5, w: 12, x: 12, y: 0 } },
machinegun.riakPoolNoFreeConnetionPanel(datasource) { gridPos: { h: 5, w: 12, x: 12, y: 0 } },
machinegun.riakPoolQueueLimitPanel(datasource) { gridPos: { h: 5, w: 12, x: 12, y: 0 } },
machinegun.riakPoolKilledInUsePanel(datasource) { gridPos: { h: 5, w: 12, x: 12, y: 0 } },
]) { gridPos: { h: 1, w: 24, x: 0, y: 10 } },
row.new(
title='`[[processing_impact]]` Impact Processing',
repeat='processing_impact',
collapse=true,
)
.addPanels([
// left column
machinegun.processingStartedPanel(datasource) { gridPos: { h: 5, w: 12, x: 0, y: 0 } },
machinegun.processingDurationPanel(datasource) { gridPos: { h: 5, w: 12, x: 0, y: 0 } },
// right column
machinegun.processingFinishedPanel(datasource) { gridPos: { h: 5, w: 12, x: 12, y: 0 } },
]) { gridPos: { h: 1, w: 24, x: 0, y: 12 } },
])

@@ -0,0 +1,589 @@
local grafana = import 'grafonnet-lib/grafonnet/grafana.libsonnet';
local graphPanel = grafana.graphPanel;
local prometheus = grafana.prometheus;
local percentileColors = {
p50: 'dark-red',
p95: 'yellow',
p99: 'green',
};
{
timersLifecyclePanel(datasource)::
graphPanel.new(
title='Timers Lifecycle',
datasource=datasource,
bars=true,
lines=false,
stack=true,
min=0,
)
.addTargets([
prometheus.target(
expr='sum by (change)(irate(mg_timer_lifecycle_changes_total{namespace="$namespace", exported_namespace="$mg_namespace"}[$interval]))',
legendFormat='{{change}}',
),
]),
machinesLifecyclePanel(datasource)::
graphPanel.new(
title='Machines Lifecycle',
datasource=datasource,
bars=true,
lines=false,
stack=true,
min=0,
)
.addTargets([
prometheus.target(
expr='sum by (change)(irate(mg_machine_lifecycle_changes_total{namespace="$namespace", exported_namespace="$mg_namespace"}[$interval]))',
legendFormat='{{change}}',
),
]),
machinesImpactPanel(datasource)::
graphPanel.new(
title='Machines Process Impact',
datasource=datasource,
bars=true,
lines=false,
stack=true,
min=0,
)
.addTargets([
prometheus.target(
expr='sum by (impact)(irate(mg_machine_processing_changes_total{namespace="$namespace", exported_namespace="$mg_namespace", change="started"}[$interval]))',
legendFormat='{{impact}}',
),
]),
machinesStartQueueUsagePanel(datasource)::
graphPanel.new(
title='Machines Start Queue Usage',
datasource=datasource,
bars=false,
lines=false,
points=true,
pointradius=2,
min=0,
aliasColors=percentileColors,
)
.addTargets([
prometheus.target(
expr='histogram_quantile(0.5, sum by (le)(mg_worker_action_queue_usage_bucket{namespace="$namespace", exported_namespace="$mg_namespace"}))',
legendFormat='p50',
),
prometheus.target(
expr='histogram_quantile(0.95, sum by (le)(mg_worker_action_queue_usage_bucket{namespace="$namespace", exported_namespace="$mg_namespace"}))',
legendFormat='p95',
),
prometheus.target(
expr='histogram_quantile(0.99, sum by (le)(mg_worker_action_queue_usage_bucket{namespace="$namespace", exported_namespace="$mg_namespace"}))',
legendFormat='p99',
),
]),
schedulerChangesPanel(datasource)::
graphPanel.new(
title='Tasks',
datasource=datasource,
bars=true,
lines=false,
stack=true,
min=0,
)
.addTargets([
prometheus.target(
expr='sum by (change)(irate(mg_scheduler_task_changes_total{namespace="$namespace", exported_namespace="$mg_namespace", name="$scheduler"}[$interval]))',
legendFormat='{{change}}',
),
]),
schedulerQuotaPanel(datasource)::
graphPanel.new(
title='Quota Usage',
datasource=datasource,
bars=true,
lines=false,
stack=true,
min=0,
)
.addTargets([
prometheus.target(
expr='sum by (status)(mg_scheduler_task_quota_usage{namespace="$namespace", exported_namespace="$mg_namespace", name="$scheduler"})',
legendFormat='{{status}}',
),
])
.addSeriesOverride({
alias: 'reserved',
bars: false,
fill: 0,
lines: true,
linewidth: 2,
color: '#890f02',
zindex: 3,
})
.addSeriesOverride({
alias: 'active',
color: '#3f6833',
})
.addSeriesOverride({
alias: 'waiting',
color: '#eab839',
}),
schedulerTaskDelayPanel(datasource)::
graphPanel.new(
title='Task Delay',
datasource=datasource,
bars=false,
lines=false,
points=true,
pointradius=2,
min=0,
format='s',
aliasColors=percentileColors,
)
.addTargets([
prometheus.target(
expr='histogram_quantile(0.5, sum by (le)(mg_scheduler_task_processing_delay_seconds_bucket{namespace="$namespace", exported_namespace="$mg_namespace", name="$scheduler"}))',
legendFormat='p50',
),
prometheus.target(
expr='histogram_quantile(0.95, sum by (le)(mg_scheduler_task_processing_delay_seconds_bucket{namespace="$namespace", exported_namespace="$mg_namespace", name="$scheduler"}))',
legendFormat='p95',
),
prometheus.target(
expr='histogram_quantile(0.99, sum by (le)(mg_scheduler_task_processing_delay_seconds_bucket{namespace="$namespace", exported_namespace="$mg_namespace", name="$scheduler"}))',
legendFormat='p99',
),
]),
schedulerScanDelayPanel(datasource)::
graphPanel.new(
title='Scan Delay',
datasource=datasource,
bars=false,
lines=false,
points=true,
pointradius=2,
min=0,
format='s',
aliasColors=percentileColors,
)
.addTargets([
prometheus.target(
expr='histogram_quantile(0.5, sum by (le)(mg_scheduler_scan_delay_seconds_bucket{namespace="$namespace", exported_namespace="$mg_namespace", name="$scheduler"}))',
legendFormat='p50',
),
prometheus.target(
expr='histogram_quantile(0.95, sum by (le)(mg_scheduler_scan_delay_seconds_bucket{namespace="$namespace", exported_namespace="$mg_namespace", name="$scheduler"}))',
legendFormat='p95',
),
prometheus.target(
expr='histogram_quantile(0.99, sum by (le)(mg_scheduler_scan_delay_seconds_bucket{namespace="$namespace", exported_namespace="$mg_namespace", name="$scheduler"}))',
legendFormat='p99',
),
]),
schedulerScanDurationPanel(datasource)::
graphPanel.new(
title='Scan Duration',
datasource=datasource,
bars=false,
lines=false,
points=true,
pointradius=2,
min=0,
format='s',
aliasColors=percentileColors,
)
.addTargets([
prometheus.target(
expr='histogram_quantile(0.5, sum by (le)(mg_scheduler_scan_duration_seconds_bucket{namespace="$namespace", exported_namespace="$mg_namespace", name="$scheduler"}))',
legendFormat='p50',
),
prometheus.target(
expr='histogram_quantile(0.95, sum by (le)(mg_scheduler_scan_duration_seconds_bucket{namespace="$namespace", exported_namespace="$mg_namespace", name="$scheduler"}))',
legendFormat='p95',
),
prometheus.target(
expr='histogram_quantile(0.99, sum by (le)(mg_scheduler_scan_duration_seconds_bucket{namespace="$namespace", exported_namespace="$mg_namespace", name="$scheduler"}))',
legendFormat='p99',
),
]),
schedulerTaskProcessingDurationPanel(datasource)::
graphPanel.new(
title='Task Processing Duration',
datasource=datasource,
bars=false,
lines=false,
points=true,
pointradius=2,
min=0,
format='s',
aliasColors=percentileColors,
)
.addTargets([
prometheus.target(
expr='histogram_quantile(0.5, sum by (le)(mg_scheduler_task_processing_duration_seconds_bucket{namespace="$namespace", exported_namespace="$mg_namespace", name="$scheduler"}))',
legendFormat='p50',
),
prometheus.target(
expr='histogram_quantile(0.95, sum by (le)(mg_scheduler_task_processing_duration_seconds_bucket{namespace="$namespace", exported_namespace="$mg_namespace", name="$scheduler"}))',
legendFormat='p95',
),
prometheus.target(
expr='histogram_quantile(0.99, sum by (le)(mg_scheduler_task_processing_duration_seconds_bucket{namespace="$namespace", exported_namespace="$mg_namespace", name="$scheduler"}))',
legendFormat='p99',
),
]),
storageChangesPanel(datasource)::
graphPanel.new(
title='`[[storage_operation]]` Total',
datasource=datasource,
bars=true,
lines=false,
stack=true,
min=0,
)
.addTargets([
prometheus.target(
expr='sum by (change)(irate(mg_storage_operation_changes_total{change="finish", namespace="$namespace", exported_namespace="$mg_namespace", name="$storage", operation="$storage_operation"}[$interval]))',
legendFormat='finished',
),
prometheus.target(
expr='sum by (change)(irate(mg_storage_operation_changes_total{change="finish", namespace="$namespace", exported_namespace="$mg_namespace", name="$storage", operation="$storage_operation"}[$interval])) - sum by (change)(irate(mg_storage_operation_changes_total{change="start", namespace="$namespace", exported_namespace="$mg_namespace", name="$storage", operation="$storage_operation"}[$interval]))',
legendFormat='not finished',
),
])
.addSeriesOverride({
alias: 'finished',
color: '#7eb26d',
})
.addSeriesOverride({
alias: 'not finished',
color: '#f2495c',
}),
storageDurationPanel(datasource)::
graphPanel.new(
title='`[[storage_operation]]` Duration',
datasource=datasource,
bars=false,
lines=false,
points=true,
pointradius=2,
min=0,
format='s',
aliasColors=percentileColors,
)
.addTargets([
prometheus.target(
expr='histogram_quantile(0.5, sum by (le)(mg_storage_operation_duration_seconds_bucket{namespace="$namespace", exported_namespace="$mg_namespace", name="$storage", operation="$storage_operation"}))',
legendFormat='p50',
),
prometheus.target(
expr='histogram_quantile(0.95, sum by (le)(mg_storage_operation_duration_seconds_bucket{namespace="$namespace", exported_namespace="$mg_namespace", name="$storage", operation="$storage_operation"}))',
legendFormat='p95',
),
prometheus.target(
expr='histogram_quantile(0.99, sum by (le)(mg_storage_operation_duration_seconds_bucket{namespace="$namespace", exported_namespace="$mg_namespace", name="$storage", operation="$storage_operation"}))',
legendFormat='p99',
),
]),
riakPoolConnectionsPanel(datasource)::
graphPanel.new(
title='Connections',
datasource=datasource,
bars=true,
lines=false,
stack=true,
min=0,
)
.addTargets([
prometheus.target(
expr='sum(mg_riak_pool_connections_in_use{namespace="$namespace", exported_namespace="$mg_namespace", name="$riak_pool"})',
legendFormat='in use',
),
prometheus.target(
expr='sum(mg_riak_pool_connections_free{namespace="$namespace", exported_namespace="$mg_namespace", name="$riak_pool"})',
legendFormat='free',
),
prometheus.target(
expr='sum(mg_riak_pool_connections_limit{namespace="$namespace", exported_namespace="$mg_namespace", name="$riak_pool"})',
legendFormat='limit',
),
])
.addSeriesOverride({
alias: 'in use',
color: '#eab839',
})
.addSeriesOverride({
alias: 'free',
color: '#3f6833',
})
.addSeriesOverride({
alias: 'limit',
color: '#890f02',
fill: 0,
zindex: 3,
bars: false,
lines: true,
stack: false,
}),
riakPoolQueuePanel(datasource)::
graphPanel.new(
title='Pool Queue',
datasource=datasource,
bars=true,
lines=false,
stack=true,
min=0,
)
.addTargets([
prometheus.target(
expr='sum(mg_riak_pool_queued_requests{namespace="$namespace", exported_namespace="$mg_namespace", name="$riak_pool"})',
legendFormat='queued',
),
prometheus.target(
expr='sum(mg_riak_pool_queued_requests_limit{namespace="$namespace", exported_namespace="$mg_namespace", name="$riak_pool"})',
legendFormat='limit',
),
])
.addSeriesOverride({
alias: 'queued',
color: '#eab839',
})
.addSeriesOverride({
alias: 'limit',
color: '#890f02',
fill: 0,
zindex: 3,
bars: false,
lines: true,
stack: false,
}),
riakPoolInUsePerRequestPanel(datasource)::
graphPanel.new(
title='In Use Connections Per Request',
datasource=datasource,
bars=false,
lines=false,
points=true,
pointradius=2,
min=0,
aliasColors=percentileColors,
)
.addTargets([
prometheus.target(
expr='histogram_quantile(0.5, sum by (le)(mg_riak_pool_connections_in_use_per_request_bucket{namespace="$namespace", exported_namespace="$mg_namespace", name="$riak_pool"}))',
legendFormat='p50',
),
prometheus.target(
expr='histogram_quantile(0.95, sum by (le)(mg_riak_pool_connections_in_use_per_request_bucket{namespace="$namespace", exported_namespace="$mg_namespace", name="$riak_pool"}))',
legendFormat='p95',
),
prometheus.target(
expr='histogram_quantile(0.99, sum by (le)(mg_riak_pool_connections_in_use_per_request_bucket{namespace="$namespace", exported_namespace="$mg_namespace", name="$riak_pool"}))',
legendFormat='p99',
),
]),
riakPoolFreePerRequestPanel(datasource)::
graphPanel.new(
title='Free Connections Per Request',
datasource=datasource,
bars=false,
lines=false,
points=true,
pointradius=2,
min=0,
aliasColors=percentileColors,
)
.addTargets([
prometheus.target(
expr='histogram_quantile(0.5, sum by (le)(mg_riak_pool_connections_free_per_request_bucket{namespace="$namespace", exported_namespace="$mg_namespace", name="$riak_pool"}))',
legendFormat='p50',
),
prometheus.target(
expr='histogram_quantile(0.95, sum by (le)(mg_riak_pool_connections_free_per_request_bucket{namespace="$namespace", exported_namespace="$mg_namespace", name="$riak_pool"}))',
legendFormat='p95',
),
prometheus.target(
expr='histogram_quantile(0.99, sum by (le)(mg_riak_pool_connections_free_per_request_bucket{namespace="$namespace", exported_namespace="$mg_namespace", name="$riak_pool"}))',
legendFormat='p99',
),
]),
riakPoolQueuedPerRequestPanel(datasource)::
graphPanel.new(
title='Queued Requests Per Request',
datasource=datasource,
bars=false,
lines=false,
points=true,
pointradius=2,
min=0,
aliasColors=percentileColors,
)
.addTargets([
prometheus.target(
expr='histogram_quantile(0.5, sum by (le)(mg_riak_pool_queued_requests_per_request_bucket{namespace="$namespace", exported_namespace="$mg_namespace", name="$riak_pool"}))',
legendFormat='p50',
),
prometheus.target(
expr='histogram_quantile(0.95, sum by (le)(mg_riak_pool_queued_requests_per_request_bucket{namespace="$namespace", exported_namespace="$mg_namespace", name="$riak_pool"}))',
legendFormat='p95',
),
prometheus.target(
expr='histogram_quantile(0.99, sum by (le)(mg_riak_pool_queued_requests_per_request_bucket{namespace="$namespace", exported_namespace="$mg_namespace", name="$riak_pool"}))',
legendFormat='p99',
),
]),
riakPoolNoFreeConnetionPanel(datasource)::
graphPanel.new(
title='No Free Connection Errors Number',
datasource=datasource,
bars=true,
lines=false,
stack=true,
min=0,
)
.addTargets([
prometheus.target(
expr='sum(irate(mg_riak_pool_no_free_connection_errors_total{namespace="$namespace", exported_namespace="$mg_namespace", name="$riak_pool"}[$interval]))',
legendFormat='total',
),
]),
riakPoolConnectTimeoutPanel(datasource)::
graphPanel.new(
title='Connect Timeout Errors Number',
datasource=datasource,
bars=true,
lines=false,
stack=true,
min=0,
)
.addTargets([
prometheus.target(
expr='sum(irate(mg_riak_pool_connect_timeout_errors_total{namespace="$namespace", exported_namespace="$mg_namespace", name="$riak_pool"}[$interval]))',
legendFormat='total',
),
]),
riakPoolQueueLimitPanel(datasource)::
graphPanel.new(
title='Queue Limit Reached Errors Number',
datasource=datasource,
bars=true,
lines=false,
stack=true,
min=0,
)
.addTargets([
prometheus.target(
expr='sum(irate(mg_riak_pool_queue_limit_reached_errors_total{namespace="$namespace", exported_namespace="$mg_namespace", name="$riak_pool"}[$interval]))',
legendFormat='total',
),
]),
riakPoolKilledFreePanel(datasource)::
graphPanel.new(
title='Killed Free Connections Number',
datasource=datasource,
bars=true,
lines=false,
stack=true,
min=0,
)
.addTargets([
prometheus.target(
expr='sum(irate(mg_riak_pool_killed_free_connections_total{namespace="$namespace", exported_namespace="$mg_namespace", name="$riak_pool"}[$interval]))',
legendFormat='total',
),
]),
riakPoolKilledInUsePanel(datasource)::
graphPanel.new(
title='Killed In Use Connections Number',
datasource=datasource,
bars=true,
lines=false,
stack=true,
min=0,
)
.addTargets([
prometheus.target(
expr='sum(irate(mg_riak_pool_killed_in_use_connections_total{namespace="$namespace", exported_namespace="$mg_namespace", name="$riak_pool"}[$interval]))',
legendFormat='total',
),
]),
processingStartedPanel(datasource)::
graphPanel.new(
title='Machine Processing Started',
datasource=datasource,
bars=true,
lines=false,
stack=true,
min=0,
)
.addTargets([
prometheus.target(
expr='sum(irate(mg_machine_processing_changes_total{change="started", namespace="$namespace", exported_namespace="$mg_namespace", impact="$processing_impact"}[$interval]))',
legendFormat='total',
),
]),
processingFinishedPanel(datasource)::
graphPanel.new(
title='Machine Processing Finished',
datasource=datasource,
bars=true,
lines=false,
stack=true,
min=0,
)
.addTargets([
prometheus.target(
expr='sum(irate(mg_machine_processing_changes_total{change="finished", namespace="$namespace", exported_namespace="$mg_namespace", impact="$processing_impact"}[$interval]))',
legendFormat='total',
),
]),
processingDurationPanel(datasource)::
graphPanel.new(
title='Machine Processing Duration',
datasource=datasource,
bars=false,
lines=false,
points=true,
pointradius=2,
min=0,
format='s',
aliasColors=percentileColors,
)
.addTargets([
prometheus.target(
expr='histogram_quantile(0.5, sum by (le)(mg_machine_processing_duration_seconds_bucket{namespace="$namespace", exported_namespace="$mg_namespace", impact="$processing_impact"}))',
legendFormat='p50',
),
prometheus.target(
expr='histogram_quantile(0.95, sum by (le)(mg_machine_processing_duration_seconds_bucket{namespace="$namespace", exported_namespace="$mg_namespace", impact="$processing_impact"}))',
legendFormat='p95',
),
prometheus.target(
expr='histogram_quantile(0.99, sum by (le)(mg_machine_processing_duration_seconds_bucket{namespace="$namespace", exported_namespace="$mg_namespace", impact="$processing_impact"}))',
legendFormat='p99',
),
]),
}
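The percentile panels above all lean on PromQL's `histogram_quantile()` over cumulative `le` buckets. As a rough Python model (simplified: the real function also special-cases the `+Inf` bucket and non-monotonic data), it finds the bucket containing the q-th fraction of observations and interpolates linearly inside it:

```python
def histogram_quantile(q, buckets):
    """Approximate PromQL histogram_quantile().

    buckets: list of (upper_bound, cumulative_count), sorted by bound,
    as produced by `sum by (le)(..._bucket)`.
    """
    total = buckets[-1][1]
    if total == 0:
        return float("nan")
    rank = q * total
    prev_bound, prev_count = 0.0, 0
    for bound, count in buckets:
        if count >= rank:
            if count == prev_count:
                return bound
            # linear interpolation within the matching bucket
            return prev_bound + (bound - prev_bound) * (rank - prev_count) / (count - prev_count)
        prev_bound, prev_count = bound, count
    return buckets[-1][0]

# 100 observations: 10 below 0.1s, 90 below 0.5s, all below 1.0s
print(histogram_quantile(0.50, [(0.1, 10), (0.5, 90), (1.0, 100)]))  # ~0.3
print(histogram_quantile(0.99, [(0.1, 10), (0.5, 90), (1.0, 100)]))  # ~0.95
```

This is also why the dashboards sum buckets by `le` before taking the quantile: interpolation only works on a single cumulative histogram.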

@@ -0,0 +1,140 @@
# Look for reference at https://github.com/prometheus-community/helm-charts/blob/main/charts/kube-prometheus-stack/values.yaml#L2008
prometheus:
additionalServiceMonitors:
- name: "rbk-erlang-service"
selector:
matchLabels:
prometheus.metrics.erlang.enabled: "true"
namespaceSelector:
matchNames:
- default
endpoints:
- port: "api"
path: /metrics
scheme: http
- name: "rbk-java-service"
selector:
matchLabels:
prometheus.metrics.java.enabled: "true"
namespaceSelector:
matchNames:
- default
endpoints:
- port: "management"
path: /actuator/prometheus
scheme: http
grafana:
enabled: true
replicas: 1
create: true
## Use an existing ClusterRole/Role (depending on rbac.namespaced false/true)
# useExistingRole: name-of-some-(cluster)role
rbac:
create: true
pspEnabled: true
pspUseAppArmor: true
namespaced: false
extraClusterRoleRules:
- apiGroups: [""]
resources: ["configmaps", "secrets"]
verbs: ["get", "watch", "list"]
image:
repository: grafana/grafana
tag: 7.2.1
sha: ""
pullPolicy: IfNotPresent
{{- if .Values.elk.enabled }}
extraEmptyDirMounts:
- name: dashboard-dir
mountPath: /var/lib/grafana/dashboards/general
envValueFrom:
ELASTIC_PASS:
secretKeyRef:
name: rbkmoney-es-elastic-user
key: elastic
extraInitContainers:
- name: dashboard-autosync
image: alpine/git:v2.26.2
imagePullPolicy: IfNotPresent
args:
- clone
- -b
- dashboard/release
- https://github.com/rbkmoney/grafana-dashboards-common.git
- /git/dashboards
volumeMounts:
- name: dashboard-dir
mountPath: "/git/dashboards"
securityContext:
runAsUser: 0
extraContainerVolumes:
- name: sync-key
secret:
secretName: prometheus-grafana-env
items:
- key: synckey
path: synckey
mode: 0600
datasources:
datasources.yaml:
apiVersion: 1
datasources:
- name: rbkm-elasticsearch
type: elasticsearch
database: "filebeat-rbkmoney-processing-*"
url: https://rbkmoney-es-http:9200
basicAuth: true
basicAuthUser: elastic
jsonData:
timeField: "@timestamp"
esVersion: 70
tlsSkipVerify: true
secureJsonData:
basicAuthPassword: $ELASTIC_PASS
dashboardProviders:
dashboardproviders.yaml:
apiVersion: 1
providers:
- name: 'general'
orgId: 1
folder: ''
type: file
disableDeletion: false
editable: true
options:
path: /var/lib/grafana/dashboards
dashboards:
rbk-dashboards:
erlang-instance:
json: |
{{- readFile "dashboards/result/erlang-instance.json" | nindent 10 }}
machinegun-namespace:
json: |
{{- readFile "dashboards/result/machinegun-namespace.json" | nindent 10 }}
{{- end }}
grafana.ini:
paths:
data: /var/lib/grafana/data
logs: /var/log/grafana
plugins: /var/lib/grafana/plugins
provisioning: /etc/grafana/provisioning
analytics:
check_for_updates: true
log:
mode: console
grafana_net:
url: https://grafana.net
revisionHistoryLimit: 10

@@ -0,0 +1,9 @@
#!/bin/sh
set -ue
java \
"-XX:OnOutOfMemoryError=kill %p" -XX:+HeapDumpOnOutOfMemoryError \
-jar \
/opt/proxy-mocket-inspector/proxy-mocket-inspector.jar \
--server.port=8022 \
"${@}"

@@ -0,0 +1,41 @@
# -*- mode: yaml -*-
replicaCount: 1
image:
repository: docker.io/rbkmoney/proxy-mocket-inspector
tag: 0ea276f2bb2ff2d25ba69c3c729552b81a75ece2
pullPolicy: IfNotPresent
configMap:
data:
entrypoint.sh: |
{{- readFile "entrypoint.sh" | nindent 6 }}
volumes:
- name: config-volume
configMap:
name: {{ .Release.Name }}
defaultMode: 0755
volumeMounts:
- name: config-volume
mountPath: /opt/proxy-mocket-inspector/entrypoint.sh
subPath: entrypoint.sh
readOnly: true
runopts:
command: ["/opt/proxy-mocket-inspector/entrypoint.sh"]
livenessProbe:
httpGet:
path: /actuator/health
port: api
initialDelaySeconds: 30
timeoutSeconds: 3
readinessProbe:
httpGet:
path: /actuator/health
port: api
initialDelaySeconds: 30
timeoutSeconds: 3

@@ -0,0 +1,29 @@
cardPan, action, paymentSystem
4242424242424242, Success, Visa
5555555555554444, Success, MasterCard
586824160825533338, Success, Maestro
2201382000000013, Success, MIR
4111111111111111, Success, Visa
4012888888881881, 3-D Secure Success, Visa
5169147129584558, 3-D Secure Success, MasterCard
4987654321098769, 3-D Secure Failure, Visa
5123456789012346, 3-D Secure Failure, MasterCard
4342561111111118, 3-D Secure Timeout, Visa
5112000200000002, 3-D Secure Timeout, MasterCard
4000000000000002, Insufficient Funds, Visa
5100000000000412, Insufficient Funds, MasterCard
4222222222222220, Invalid Card, Visa
5100000000000511, Invalid Card, MasterCard
4003830171874018, CVV Match Fail, Visa
5496198584584769, CVV Match Fail, MasterCard
4000000000000069, Expired Card, Visa
5105105105105100, Expired Card, MasterCard
4111110000000112, Unknown Failure, Visa
5124990000000002, Unknown Failure, MasterCard
5000000000000009, Apple Pay Failure, Visa
4300000000000777, Apple Pay Success, Visa
2222405343248877, Google Pay Failure, MasterCard
2223007648726984, Samsung Pay Failure, MasterCard
5185731540006869, Samsung Pay Success, MasterCard
5204240250183519, Samsung Pay Success, MasterCard
9999999999999999, Success, Visa

@@ -0,0 +1,9 @@
#!/bin/sh
set -ue
java \
"-XX:OnOutOfMemoryError=kill %p" -XX:+HeapDumpOnOutOfMemoryError \
-jar \
/opt/proxy-mocketbank-mpi/proxy-mocketbank-mpi.jar \
--server.port=8080 \
"${@}"

@@ -0,0 +1,49 @@
# -*- mode: yaml -*-
replicaCount: 1
image:
repository: docker.io/rbkmoney/proxy-mocketbank-mpi
tag: e43b6f00eca01eb57a6e917704bff608de57336a
pullPolicy: IfNotPresent
configMap:
data:
entrypoint.sh: |
{{- readFile "entrypoint.sh" | nindent 6 }}
cards.csv: |
{{- readFile "cards.csv" | nindent 6 }}
volumes:
- name: config-volume
configMap:
name: {{ .Release.Name }}
defaultMode: 0755
volumeMounts:
- name: config-volume
mountPath: /opt/proxy-mocketbank-mpi/entrypoint.sh
subPath: entrypoint.sh
readOnly: true
- name: config-volume
mountPath: /opt/proxy-mocketbank-mpi/fixture/cards.csv
subPath: cards.csv
readOnly: true
runopts:
command: ["/opt/proxy-mocketbank-mpi/entrypoint.sh"]
service:
type: ClusterIP
ports:
- name: api
port: 8080
livenessProbe:
httpGet:
path: /actuator/health
port: api
readinessProbe:
httpGet:
path: /actuator/health
port: api

@@ -0,0 +1,29 @@
cardPan, action, paymentSystem
4242424242424242, Success, Visa
5555555555554444, Success, MasterCard
586824160825533338, Success, Maestro
2201382000000013, Success, MIR
4111111111111111, Success, Visa
4012888888881881, 3-D Secure Success, Visa
5169147129584558, 3-D Secure Success, MasterCard
4987654321098769, 3-D Secure Failure, Visa
5123456789012346, 3-D Secure Failure, MasterCard
4342561111111118, 3-D Secure Timeout, Visa
5112000200000002, 3-D Secure Timeout, MasterCard
4000000000000002, Insufficient Funds, Visa
5100000000000412, Insufficient Funds, MasterCard
4222222222222220, Invalid Card, Visa
5100000000000511, Invalid Card, MasterCard
4003830171874018, CVV Match Fail, Visa
5496198584584769, CVV Match Fail, MasterCard
4000000000000069, Expired Card, Visa
5105105105105100, Expired Card, MasterCard
4111110000000112, Unknown Failure, Visa
5124990000000002, Unknown Failure, MasterCard
5000000000000009, Apple Pay Failure, Visa
4300000000000777, Apple Pay Success, Visa
2222405343248877, Google Pay Failure, MasterCard
2223007648726984, Samsung Pay Failure, MasterCard
5185731540006869, Samsung Pay Success, MasterCard
5204240250183519, Samsung Pay Success, MasterCard
9999999999999999, Success, Visa

@@ -0,0 +1,13 @@
#!/bin/sh
set -ue
java \
"-XX:OnOutOfMemoryError=kill %p" -XX:+HeapDumpOnOutOfMemoryError \
-jar \
/opt/proxy-mocketbank/proxy-mocketbank.jar \
--server.secondary.ports=8080 \
--server.port=8022 \
--cds.client.storage.url=http://cds:8022/v2/storage \
--hellgate.client.adapter.url=http://hellgate:8022/v1/proxyhost/provider \
--adapter-mock-mpi.url=http://proxy-mocketbank-mpi:8080 \
"${@}"

@@ -0,0 +1,59 @@
[
{
"codeRegex": "^Unsupported\\sCard$",
"mapping": "authorization_failed:payment_tool_rejected:bank_card_rejected:issuer_not_found"
},
{
"codeRegex": "3-D Secure Failure",
"mapping": "preauthorization_failed:unknown"
},
{
"codeRegex": "3-D Secure Timeout",
"mapping": "preauthorization_failed:unknown"
},
{
"codeRegex": "Invalid Card",
"mapping": "authorization_failed:account_not_found"
},
{
"codeRegex": "CVV Match Fail",
"mapping": "authorization_failed:payment_tool_rejected:bank_card_rejected:cvv_invalid"
},
{
"codeRegex": "Expired Card",
"mapping": "authorization_failed:payment_tool_rejected:bank_card_rejected:card_expired"
},
{
"codeRegex": "Insufficient Funds",
"mapping": "authorization_failed:insufficient_funds"
},
{
"codeRegex": "Unknown",
"mapping": "authorization_failed:unknown"
},
{
"codeRegex": "Apple Pay Failure",
"mapping": "authorization_failed:unknown"
},
{
"codeRegex": "Google Pay Failure",
"mapping": "authorization_failed:unknown"
},
{
"codeRegex": "Samsung Pay Failure",
"mapping": "authorization_failed:unknown"
},
{
"codeRegex": "Unknown Failure",
"mapping": "authorization_failed:unknown"
},
{
"codeRegex": "error",
"descriptionRegex":"Expiration date not found",
"mapping": "authorization_failed:payment_tool_rejected:bank_card_rejected:card_expired"
},
{
"codeRegex": ".*",
"mapping": "authorization_failed:unknown"
}
]
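The errors.json above maps adapter error codes to structured failure codes by regex, with `.*` as the trailing catch-all. The actual matcher lives inside proxy-mocketbank, so the following is only an assumed model of its semantics: the first rule whose `codeRegex` (and, when present, `descriptionRegex`) matches wins.

```python
import re

def map_error(rules, code, description=""):
    """First-match lookup over errors.json-style rules (assumed semantics)."""
    for rule in rules:
        if not re.search(rule["codeRegex"], code):
            continue
        # descriptionRegex, when present, must also match
        if "descriptionRegex" in rule and not re.search(rule["descriptionRegex"], description):
            continue
        return rule["mapping"]
    return None

rules = [
    {"codeRegex": "Expired Card",
     "mapping": "authorization_failed:payment_tool_rejected:bank_card_rejected:card_expired"},
    {"codeRegex": ".*", "mapping": "authorization_failed:unknown"},
]
print(map_error(rules, "Expired Card"))
print(map_error(rules, "Some Other Error"))  # falls through to the catch-all
```

Because matching is first-wins, rule order in errors.json matters: the `.*` entry must stay last or it shadows everything after it.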

@@ -0,0 +1,61 @@
# -*- mode: yaml -*-
replicaCount: 1
image:
repository: docker.io/rbkmoney/proxy-mocketbank
tag: 91953e1e9874a851816474b47ad0f123c7c936d1
pullPolicy: IfNotPresent
configMap:
data:
entrypoint.sh: |
{{- readFile "entrypoint.sh" | nindent 6 }}
cards.csv: |
{{- readFile "cards.csv" | nindent 6 }}
errors.json: |
{{- readFile "errors.json" | nindent 6 }}
volumes:
- name: config-volume
configMap:
name: {{ .Release.Name }}
defaultMode: 0755
volumeMounts:
- name: config-volume
mountPath: /opt/proxy-mocketbank/entrypoint.sh
subPath: entrypoint.sh
readOnly: true
- name: config-volume
mountPath: /opt/proxy-mocketbank/fixture/errors.json
subPath: errors.json
readOnly: true
- name: config-volume
mountPath: /opt/proxy-mocketbank/fixture/cards.csv
subPath: cards.csv
readOnly: true
runopts:
command: ["/opt/proxy-mocketbank/entrypoint.sh"]
service:
type: ClusterIP
ports:
- name: api
port: 8022
- name: callback
port: 8080
livenessProbe:
httpGet:
path: /actuator/health
port: api
initialDelaySeconds: 30
timeoutSeconds: 3
readinessProbe:
httpGet:
path: /actuator/health
port: api
initialDelaySeconds: 30
timeoutSeconds: 3

config/riak/cm.yaml Normal file
@@ -0,0 +1,70 @@
#!/bin/bash
#
# Cluster start script to bootstrap a Riak cluster.
#
sleep 10
set -ex
if [[ -x /usr/sbin/riak ]]; then
export RIAK=/usr/sbin/riak
else
export RIAK=$RIAK_HOME/bin/riak
fi
export RIAK_CONF=/etc/riak/riak.conf
export USER_CONF=/etc/riak/user.conf
export RIAK_ADVANCED_CONF=/etc/riak/advanced.config
if [[ -x /usr/sbin/riak-admin ]]; then
export RIAK_ADMIN=/usr/sbin/riak-admin
else
export RIAK_ADMIN=$RIAK_HOME/bin/riak-admin
fi
export SCHEMAS_DIR=/etc/riak/schemas/
# Set ports for PB and HTTP
export PB_PORT=${PB_PORT:-8087}
export HTTP_PORT=${HTTP_PORT:-8098}
# CLUSTER_NAME is used to name the nodes and is the value used in the distributed cookie
export CLUSTER_NAME=${CLUSTER_NAME:-riak}
# The COORDINATOR_NODE is the first node in a cluster to which other nodes will eventually join
export COORDINATOR_NODE=${COORDINATOR_NODE:-$(hostname -s).riak-headless}
if [[ -n "$ipv6" ]]; then
export COORDINATOR_NODE_HOST=$(ping -c1 $COORDINATOR_NODE | awk '/^PING/ {print $3}' | sed -r 's/\((.*)\):/\1/g')
export COORDINATOR_NODE_HOST=${COORDINATOR_NODE_HOST:-'::1'}
else
export COORDINATOR_NODE_HOST=$(ping -c1 $COORDINATOR_NODE | awk '/^PING/ {print $3}' | sed -r 's/\((.*)\):/\1/g')
export COORDINATOR_NODE_HOST=${COORDINATOR_NODE_HOST:-'127.0.0.1'}
fi
# Use ping to discover our HOSTNAME because it's easier and more reliable than other methods
export HOST=${NODENAME:-$(hostname -s).riak-headless}
export HOSTIP=$(ping -c1 $HOST | awk '/^PING/ {print $3}' | sed -r 's/\((.*)\):/\1/g')
# Run all prestart scripts
PRESTART=$(find /etc/riak/prestart.d -name '*.sh' -print | sort)
for s in $PRESTART; do
. $s
done
# Start the node and wait until fully up
$RIAK start
$RIAK_ADMIN wait-for-service riak_kv
# Run all poststart scripts
POSTSTART=$(find /etc/riak/poststart.d -name '*.sh' -print | sort)
for s in $POSTSTART; do
. $s
done
# Trap SIGTERM and SIGINT and tail the log file indefinitely
tail -n 1024 -f /var/log/riak/console.log &
PID=$!
trap "$RIAK stop; kill $PID" SIGTERM SIGINT
# avoid log spamming and unnecessary exit once `riak ping` fails
set +ex
while :
do
if ! $RIAK ping >/dev/null 2>&1; then
exit 1
fi
sleep 10
done

config/riak/pre.yaml Normal file
@@ -0,0 +1,34 @@
#!/bin/bash
# Add standard config items
cat <<END >>$RIAK_CONF
nodename = $CLUSTER_NAME@$HOST
distributed_cookie = $CLUSTER_NAME
listener.protobuf.internal = $HOSTIP:$PB_PORT
listener.http.internal = $HOSTIP:$HTTP_PORT
mdc.cluster_manager = $HOSTIP:9080
handoff.ip = $HOSTIP
END
rm /etc/riak/advanced.config
cat << END > /etc/riak/vm.args
+scl false
+sfwi 500
+P 256000
+e 256000
-env ERL_CRASH_DUMP /var/log/riak/erl_crash.dump
-env ERL_FULLSWEEP_AFTER 0
+Q 262144
+A 64
-setcookie riak
-name $CLUSTER_NAME@$HOST
+K true
+W w
-smp enable
+zdbbl 32768
END
# Maybe add user config items
if [ -s $USER_CONF ]; then
cat $USER_CONF >>$RIAK_CONF
fi

config/riak/user.yaml Normal file
@@ -0,0 +1,2 @@
storage_backend = leveldb
retry_put_coordinator_failure = off

@@ -0,0 +1,86 @@
# -*- mode: yaml -*-
replicaCount: 1
image:
repository: docker.io/rbkmoney/riak-base
tag: f5b757c2ec73c7db1460c94a17a20a3b5799fde6
configMap:
data:
user.conf: |
{{- readFile "user.yaml" | nindent 6 }}
riak-cluster.sh: |
{{- readFile "cm.yaml" | nindent 6 }}
00-update-riak-conf.sh: |
{{- readFile "pre.yaml" | nindent 6 }}
service:
type: ClusterIP
headless: true
ports:
- name: http
port: 8098
- name: protobuf
port: 8087
livenessProbe:
httpGet: null
exec:
command: ["riak", "ping"]
initialDelaySeconds: 60
periodSeconds: 20
timeoutSeconds: 15
readinessProbe:
httpGet:
path: /types/default/props
port: http
initialDelaySeconds: 60
periodSeconds: 15
timeoutSeconds: 5
env:
- name: CLUSTER_NAME
value: "riak"
- name: COORDINATOR_NODE
value: {{ .Release.Name }}-0.{{ .Release.Name }}-headless
- name: WAIT_FOR_ERLANG
value: "400"
volumeMounts:
- name: config-volume
mountPath: /etc/riak/user.conf
subPath: user.conf
readOnly: true
- name: data
mountPath: /var/lib/riak
- name: config-volume
mountPath: /riak-cluster.sh
subPath: riak-cluster.sh
readOnly: true
- name: config-volume
mountPath: /etc/riak/prestart.d/00-update-riak-conf.sh
subPath: 00-update-riak-conf.sh
readOnly: true
volumes:
- name: config-volume
configMap:
name: {{ .Release.Name }}
defaultMode: 0755
- name: data
emptyDir: {}
storage:
accessModes: ["ReadWriteOnce"]
resources:
requests:
storage: 3Gi
podSecurityContext:
fsGroup: 102
securityContext:
capabilities:
add:
- "SYS_CHROOT"
- "NET_RAW"


@@ -0,0 +1,16 @@
#!/bin/sh
set -ue
java \
"-XX:OnOutOfMemoryError=kill %p" -XX:+HeapDumpOnOutOfMemoryError \
-jar \
/opt/shumway/shumway.jar \
--logging.config=/opt/shumway/logback.xml \
--spring.datasource.hikari.data-source-properties.prepareThreshold=0 \
--spring.datasource.hikari.leak-detection-threshold=5300 \
--spring.datasource.hikari.max-lifetime=300000 \
--spring.datasource.hikari.idle-timeout=30000 \
--spring.datasource.hikari.minimum-idle=2 \
--spring.datasource.hikari.maximum-pool-size=20 \
"${@}" \
--spring.config.additional-location=/vault/secrets/application.properties


@@ -0,0 +1,4 @@
<included>
<logger name="com.rbkmoney" level="INFO"/>
<logger name="com.rbkmoney.woody" level="INFO"/>
</included>


@@ -0,0 +1,95 @@
# -*- mode: yaml -*-
replicaCount: 1
image:
repository: docker.io/rbkmoney/shumway
tag: d5b74714437b1a1b11689a38297fd2a6c08e0db2
pullPolicy: IfNotPresent
runopts:
command: ["/opt/shumway/entrypoint.sh"]
configMap:
data:
entrypoint.sh: |
{{- readFile "entrypoint.sh" | nindent 6 }}
loggers.xml: |
{{- readFile "loggers.xml" | nindent 6 }}
logback.xml: |
{{- readFile "../logs/logback.xml" | nindent 6 }}
volumes:
- name: config-volume
configMap:
name: {{ .Release.Name }}
defaultMode: 0755
volumeMounts:
- name: config-volume
mountPath: /opt/shumway/entrypoint.sh
subPath: entrypoint.sh
readOnly: true
- name: config-volume
mountPath: /opt/shumway/logback.xml
subPath: logback.xml
readOnly: true
- name: config-volume
mountPath: /opt/shumway/loggers.xml
subPath: loggers.xml
readOnly: true
service:
type: ClusterIP
ports:
- name: api
port: 8022
- name: management
port: 8023
livenessProbe:
httpGet:
path: /actuator/health
port: management
readinessProbe:
httpGet:
path: /actuator/health
port: management
podAnnotations:
vault.hashicorp.com/role: "db-app"
vault.hashicorp.com/agent-inject: "true"
vault.hashicorp.com/agent-inject-secret-application.properties: "database/creds/db-app-shumway"
vault.hashicorp.com/agent-inject-template-application.properties: |
{{`{{- with secret "database/creds/db-app-shumway" -}}
spring.datasource.url=jdbc:postgresql://postgres-postgresql:5432/shumway?sslmode=disable
spring.datasource.username={{ .Data.username }}
spring.datasource.password={{ .Data.password }}
spring.flyway.url=jdbc:postgresql://postgres-postgresql:5432/shumway?sslmode=disable
spring.flyway.user={{ .Data.username }}
spring.flyway.password={{ .Data.password }}
{{- end }}`}}
metrics:
serviceMonitor:
enabled: true
namespace: {{ .Release.Namespace }}
additionalLabels:
release: prometheus
endpoints:
- port: "management"
path: /actuator/prometheus
scheme: http
ciliumPolicies:
- filters:
- port: 5432
type: TCP
name: postgres
namespace: {{ .Release.Namespace }}
- filters:
- port: 8200
type: TCP
name: vault
namespace: {{ .Release.Namespace }}


@@ -0,0 +1,3 @@
{{- if .Values.services.global.ipv6only }}
useIPv4: false
{{- end }}

Some files were not shown because too many files have changed in this diff