Merge pull request #1392 from getredash/012patches

Update documentation links to point at the new location.
Arik Fraimovich 2016-11-15 14:04:56 +02:00 committed by GitHub
commit db1a941459
29 changed files with 20 additions and 2018 deletions

View File

@ -9,7 +9,7 @@ The following is a set of guidelines for contributing to Redash. These are guide
- [Feature Roadmap](https://trello.com/b/b2LUHU7A/re-dash-roadmap)
- [Feature Requests](https://discuss.redash.io/c/feature-requests)
- [Gitter Chat](https://gitter.im/getredash/redash) or [Slack](https://slack.redash.io)
- [Documentation](http://docs.redash.io)
- [Documentation](https://redash.io/help/)
- [Blog](http://blog.redash.io/)
- [Twitter](https://twitter.com/getredash)
@ -18,7 +18,7 @@ The following is a set of guidelines for contributing to Redash. These are guide
---
## Table of Contents
[How can I contribute?](#how-can-i-contribute)
@ -28,12 +28,12 @@ The following is a set of guidelines for contributing to Redash. These are guide
- [Pull Requests](#pull-requests)
- [Documentation](#documentation)
- Design?
[Additional Notes](#additional-notes)
- [Release Method](#release-method)
- [Code of Conduct](#code-of-conduct)
## How can I contribute?
### Reporting Bugs
@ -43,26 +43,24 @@ When creating a new bug report, please make sure to:
- Search for existing issues first. If you find a previous report of your issue, please update the existing issue with additional information instead of creating a new one.
- If you are not sure if your issue is really a bug or just some configuration/setup problem, please start a discussion in [the support forum](https://discuss.redash.io/c/support) first. Unless you can provide clear steps to reproduce, it's probably better to start with a thread in the forum and later to open an issue.
- If you still decide to open an issue, please review the template and guidelines and include as much detail as possible.
### Suggesting Enhancements / Feature Requests
If you would like to suggest an enhancement or ask for a new feature:
- Please check [the roadmap](https://trello.com/b/b2LUHU7A/re-dash-roadmap) for an existing Trello card for what you want to suggest/ask. If there is one, feel free to upvote it to signal interest or add your comments.
- If there is no existing card, open a thread in [the forum](https://discuss.redash.io/c/feature-requests) to start a discussion about what you want to suggest. Try to provide as much detail and context as possible and include information about *the problem you want to solve* rather than only *your proposed solution*.
### Pull Requests
- **Code contributions are welcome**. For big changes or significant features, it's usually better to reach out first and discuss what you want to implement and how (we recommend reading: [Pull Request First](https://medium.com/practical-blend/pull-request-first-f6bb667a9b6#.ozlqxvj36)). This is to make sure that what you want to implement is aligned with our goals for the project and that no one else is already working on it.
- Include screenshots and animated GIFs in your pull request whenever possible.
- Please add [documentation](#documentation) for new features or changes in functionality along with the code.
- Please follow the existing code style. We use PEP8 for Python and sensible style for JavaScript.
### Documentation
The project's documentation can be found at [docs.redash.io](http://docs.redash.io/). The [documentation sources](https://github.com/getredash/redash/tree/master/docs) are managed along with the code and to contribute edits / new pages, you can use GitHub's interface. Click the "Edit on GitHub" link on the documentation page to quickly open the edit interface.
The pages are written in *reStructuredText* format, which is very similar to Markdown.
The project's documentation can be found at [https://redash.io/help/](https://redash.io/help/). The [documentation sources](https://github.com/getredash/website/tree/master/user-guide) are hosted on GitHub. To contribute edits / new pages, you can use GitHub's interface. Click the "Edit on GitHub" link on the documentation page to quickly open the edit interface.
## Additional Notes

View File

@ -1,32 +1,26 @@
More details about the future of re:dash: http://bit.ly/journey-first-step
---
<p align="center">
<img title="re:dash" src='http://redash.io/static/old_img/redash_logo.png' width="200px"/>
<img title="Redash" src='http://redash.io/static/old_img/redash_logo.png' width="200px"/>
</p>
<p align="center">
<img title="Build Status" src='https://circleci.com/gh/getredash/redash.png?circle-token=8a695aa5ec2cbfa89b48c275aea298318016f040'/>
</p>
[![Join the chat at https://gitter.im/getredash/redash](https://badges.gitter.im/getredash/redash.svg)](https://gitter.im/getredash/redash?utm_source=badge&utm_medium=badge&utm_campaign=pr-badge&utm_content=badge)
[![Documentation](https://img.shields.io/badge/docs-redash.io-brightgreen.svg)](http://docs.redash.io)
[![Documentation](https://img.shields.io/badge/docs-redash.io/help-brightgreen.svg)](https://redash.io/help/)
**_re:dash_** is our take on freeing the data within our company in a way that will better fit our culture and usage patterns.
**_Redash_** is our take on freeing the data within our company in a way that will better fit our culture and usage patterns.
Prior to **_re:dash_**, we tried to use traditional BI suites and discovered a set of bloated, technically challenged and slow tools/flows. What we were looking for was a more hacker'ish way to look at data, so we built one.
Prior to **_Redash_**, we tried to use traditional BI suites and discovered a set of bloated, technically challenged and slow tools/flows. What we were looking for was a more hacker'ish way to look at data, so we built one.
**_re:dash_** was built to allow fast and easy access to billions of records that we process and collect using Amazon Redshift ("petabyte scale data warehouse" that "speaks" PostgreSQL).
Today **_re:dash_** has support for querying multiple databases, including: Redshift, Google BigQuery, PostgreSQL, MySQL, Graphite,
**_Redash_** was built to allow fast and easy access to billions of records that we process and collect using Amazon Redshift ("petabyte scale data warehouse" that "speaks" PostgreSQL).
Today **_Redash_** has support for querying multiple databases, including: Redshift, Google BigQuery, PostgreSQL, MySQL, Graphite,
Presto, Google Spreadsheets, Cloudera Impala, Hive and custom scripts.
**_re:dash_** consists of two parts:
**_Redash_** consists of two parts:
1. **Query Editor**: think of [JS Fiddle](http://jsfiddle.net) for SQL queries. It's your way to share data in the organization in an open way, by sharing both the dataset and the query that generated it. This way everyone can peer review not only the resulting dataset but also the process that generated it. Also it's possible to fork it and generate new datasets and reach new insights.
2. **Dashboards/Visualizations**: once you have a dataset, you can create different visualizations out of it, and then combine several visualizations into a single dashboard. Currently it supports charts, pivot table and cohorts.
**_re:dash_** is a work in progress and has its rough edges and a way to go to fulfill its full potential. The Query Editor part is quite solid, but the visualizations need more work to enrich them and to make them more user friendly.
## Demo
<img src="https://cloud.githubusercontent.com/assets/71468/17391289/8e83878e-5a1d-11e6-8938-af9054a33b19.gif" width="60%"/>
@ -35,8 +29,8 @@ You can try out the demo instance: http://demo.redash.io/ (login with any Google
## Getting Started
* [Setting up re:dash instance](http://redash.io/deployment/setup.html) (includes links to ready made AWS/GCE images).
* [Documentation](http://docs.redash.io).
* [Setting up Redash instance](https://redash.io/help-onpremise/setup/setting-up-redash-instance.html) (includes links to ready made AWS/GCE images).
* [Documentation](https://redash.io/help/).
## Getting Help
@ -49,8 +43,8 @@ You can try out the demo instance: http://demo.redash.io/ (login with any Google
## Reporting Bugs and Contributing Code
* Want to report a bug or request a feature? Please open [an issue](https://github.com/getredash/redash/issues/new).
* Want to help us build **_re:dash_**? Fork the project, edit in a [dev environment](http://docs.redash.io/en/latest/dev/vagrant.html), and make a pull request. We need all the help we can get!
* Want to help us build **_Redash_**? Fork the project, edit in a [dev environment](https://redash.io/help-onpremise/setup/setting-up-development-environment-using-vagrant.html), and make a pull request. We need all the help we can get!
## License
See [LICENSE](https://github.com/getredash/redash/blob/master/LICENSE) file.
BSD-2-Clause.

View File

@ -1,192 +0,0 @@
# Makefile for Sphinx documentation
#
# You can set these variables from the command line.
SPHINXOPTS =
SPHINXBUILD = sphinx-build
PAPER =
BUILDDIR = _build
# User-friendly check for sphinx-build
ifeq ($(shell which $(SPHINXBUILD) >/dev/null 2>&1; echo $$?), 1)
$(error The '$(SPHINXBUILD)' command was not found. Make sure you have Sphinx installed, then set the SPHINXBUILD environment variable to point to the full path of the '$(SPHINXBUILD)' executable. Alternatively you can add the directory with the executable to your PATH. If you don't have Sphinx installed, grab it from http://sphinx-doc.org/)
endif
# Internal variables.
PAPEROPT_a4 = -D latex_paper_size=a4
PAPEROPT_letter = -D latex_paper_size=letter
ALLSPHINXOPTS = -d $(BUILDDIR)/doctrees $(PAPEROPT_$(PAPER)) $(SPHINXOPTS) .
# the i18n builder cannot share the environment and doctrees with the others
I18NSPHINXOPTS = $(PAPEROPT_$(PAPER)) $(SPHINXOPTS) .
.PHONY: help clean html dirhtml singlehtml pickle json htmlhelp qthelp devhelp epub latex latexpdf text man changes linkcheck doctest coverage gettext
help:
	@echo "Please use \`make <target>' where <target> is one of"
	@echo " html to make standalone HTML files"
	@echo " dirhtml to make HTML files named index.html in directories"
	@echo " singlehtml to make a single large HTML file"
	@echo " pickle to make pickle files"
	@echo " json to make JSON files"
	@echo " htmlhelp to make HTML files and a HTML help project"
	@echo " qthelp to make HTML files and a qthelp project"
	@echo " applehelp to make an Apple Help Book"
	@echo " devhelp to make HTML files and a Devhelp project"
	@echo " epub to make an epub"
	@echo " latex to make LaTeX files, you can set PAPER=a4 or PAPER=letter"
	@echo " latexpdf to make LaTeX files and run them through pdflatex"
	@echo " latexpdfja to make LaTeX files and run them through platex/dvipdfmx"
	@echo " text to make text files"
	@echo " man to make manual pages"
	@echo " texinfo to make Texinfo files"
	@echo " info to make Texinfo files and run them through makeinfo"
	@echo " gettext to make PO message catalogs"
	@echo " changes to make an overview of all changed/added/deprecated items"
	@echo " xml to make Docutils-native XML files"
	@echo " pseudoxml to make pseudoxml-XML files for display purposes"
	@echo " linkcheck to check all external links for integrity"
	@echo " doctest to run all doctests embedded in the documentation (if enabled)"
	@echo " coverage to run coverage check of the documentation (if enabled)"

clean:
	rm -rf $(BUILDDIR)/*

html:
	$(SPHINXBUILD) -b html $(ALLSPHINXOPTS) $(BUILDDIR)/html
	@echo
	@echo "Build finished. The HTML pages are in $(BUILDDIR)/html."

dirhtml:
	$(SPHINXBUILD) -b dirhtml $(ALLSPHINXOPTS) $(BUILDDIR)/dirhtml
	@echo
	@echo "Build finished. The HTML pages are in $(BUILDDIR)/dirhtml."

singlehtml:
	$(SPHINXBUILD) -b singlehtml $(ALLSPHINXOPTS) $(BUILDDIR)/singlehtml
	@echo
	@echo "Build finished. The HTML page is in $(BUILDDIR)/singlehtml."

pickle:
	$(SPHINXBUILD) -b pickle $(ALLSPHINXOPTS) $(BUILDDIR)/pickle
	@echo
	@echo "Build finished; now you can process the pickle files."

json:
	$(SPHINXBUILD) -b json $(ALLSPHINXOPTS) $(BUILDDIR)/json
	@echo
	@echo "Build finished; now you can process the JSON files."

htmlhelp:
	$(SPHINXBUILD) -b htmlhelp $(ALLSPHINXOPTS) $(BUILDDIR)/htmlhelp
	@echo
	@echo "Build finished; now you can run HTML Help Workshop with the" \
	".hhp project file in $(BUILDDIR)/htmlhelp."

qthelp:
	$(SPHINXBUILD) -b qthelp $(ALLSPHINXOPTS) $(BUILDDIR)/qthelp
	@echo
	@echo "Build finished; now you can run "qcollectiongenerator" with the" \
	".qhcp project file in $(BUILDDIR)/qthelp, like this:"
	@echo "# qcollectiongenerator $(BUILDDIR)/qthelp/redash.qhcp"
	@echo "To view the help file:"
	@echo "# assistant -collectionFile $(BUILDDIR)/qthelp/redash.qhc"

applehelp:
	$(SPHINXBUILD) -b applehelp $(ALLSPHINXOPTS) $(BUILDDIR)/applehelp
	@echo
	@echo "Build finished. The help book is in $(BUILDDIR)/applehelp."
	@echo "N.B. You won't be able to view it unless you put it in" \
	"~/Library/Documentation/Help or install it in your application" \
	"bundle."

devhelp:
	$(SPHINXBUILD) -b devhelp $(ALLSPHINXOPTS) $(BUILDDIR)/devhelp
	@echo
	@echo "Build finished."
	@echo "To view the help file:"
	@echo "# mkdir -p $$HOME/.local/share/devhelp/redash"
	@echo "# ln -s $(BUILDDIR)/devhelp $$HOME/.local/share/devhelp/redash"
	@echo "# devhelp"

epub:
	$(SPHINXBUILD) -b epub $(ALLSPHINXOPTS) $(BUILDDIR)/epub
	@echo
	@echo "Build finished. The epub file is in $(BUILDDIR)/epub."

latex:
	$(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) $(BUILDDIR)/latex
	@echo
	@echo "Build finished; the LaTeX files are in $(BUILDDIR)/latex."
	@echo "Run \`make' in that directory to run these through (pdf)latex" \
	"(use \`make latexpdf' here to do that automatically)."

latexpdf:
	$(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) $(BUILDDIR)/latex
	@echo "Running LaTeX files through pdflatex..."
	$(MAKE) -C $(BUILDDIR)/latex all-pdf
	@echo "pdflatex finished; the PDF files are in $(BUILDDIR)/latex."

latexpdfja:
	$(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) $(BUILDDIR)/latex
	@echo "Running LaTeX files through platex and dvipdfmx..."
	$(MAKE) -C $(BUILDDIR)/latex all-pdf-ja
	@echo "pdflatex finished; the PDF files are in $(BUILDDIR)/latex."

text:
	$(SPHINXBUILD) -b text $(ALLSPHINXOPTS) $(BUILDDIR)/text
	@echo
	@echo "Build finished. The text files are in $(BUILDDIR)/text."

man:
	$(SPHINXBUILD) -b man $(ALLSPHINXOPTS) $(BUILDDIR)/man
	@echo
	@echo "Build finished. The manual pages are in $(BUILDDIR)/man."

texinfo:
	$(SPHINXBUILD) -b texinfo $(ALLSPHINXOPTS) $(BUILDDIR)/texinfo
	@echo
	@echo "Build finished. The Texinfo files are in $(BUILDDIR)/texinfo."
	@echo "Run \`make' in that directory to run these through makeinfo" \
	"(use \`make info' here to do that automatically)."

info:
	$(SPHINXBUILD) -b texinfo $(ALLSPHINXOPTS) $(BUILDDIR)/texinfo
	@echo "Running Texinfo files through makeinfo..."
	make -C $(BUILDDIR)/texinfo info
	@echo "makeinfo finished; the Info files are in $(BUILDDIR)/texinfo."

gettext:
	$(SPHINXBUILD) -b gettext $(I18NSPHINXOPTS) $(BUILDDIR)/locale
	@echo
	@echo "Build finished. The message catalogs are in $(BUILDDIR)/locale."

changes:
	$(SPHINXBUILD) -b changes $(ALLSPHINXOPTS) $(BUILDDIR)/changes
	@echo
	@echo "The overview file is in $(BUILDDIR)/changes."

linkcheck:
	$(SPHINXBUILD) -b linkcheck $(ALLSPHINXOPTS) $(BUILDDIR)/linkcheck
	@echo
	@echo "Link check complete; look for any errors in the above output " \
	"or in $(BUILDDIR)/linkcheck/output.txt."

doctest:
	$(SPHINXBUILD) -b doctest $(ALLSPHINXOPTS) $(BUILDDIR)/doctest
	@echo "Testing of doctests in the sources finished, look at the " \
	"results in $(BUILDDIR)/doctest/output.txt."

coverage:
	$(SPHINXBUILD) -b coverage $(ALLSPHINXOPTS) $(BUILDDIR)/coverage
	@echo "Testing of coverage in the sources finished, look at the " \
	"results in $(BUILDDIR)/coverage/python.txt."

xml:
	$(SPHINXBUILD) -b xml $(ALLSPHINXOPTS) $(BUILDDIR)/xml
	@echo
	@echo "Build finished. The XML files are in $(BUILDDIR)/xml."

pseudoxml:
	$(SPHINXBUILD) -b pseudoxml $(ALLSPHINXOPTS) $(BUILDDIR)/pseudoxml
	@echo
	@echo "Build finished. The pseudo-XML files are in $(BUILDDIR)/pseudoxml."

View File

@ -1,110 +0,0 @@
# -*- coding: utf-8 -*-
#
# re:dash documentation build configuration file, created by
# sphinx-quickstart on Mon Jul 20 22:40:24 2015.
#
# This file is execfile()d with the current directory set to its
# containing dir.
#
# Note that not all possible configuration values are present in this
# autogenerated file.
#
# All configuration values have a default; values that are commented out
# serve to show the default.
import sys
import os
import shlex
# Add any Sphinx extension module names here, as strings. They can be
# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom
# ones.
extensions = []
# The suffix(es) of source filenames.
# You can specify multiple suffix as a list of string:
# source_suffix = ['.rst', '.md']
source_suffix = '.rst'
# The master toctree document.
master_doc = 'index'
# General information about the project.
project = u'Re:dash'
copyright = u'2013-2016, Arik Fraimovich'
author = u'Arik Fraimovich'
# The language for content autogenerated by Sphinx. Refer to documentation
# for a list of supported languages.
#
# This is also used if you do content translation via gettext catalogs.
# Usually you set "language" from the command line for these cases.
language = None
exclude_patterns = ['_build']
# The name of the Pygments (syntax highlighting) style to use.
pygments_style = 'sphinx'
# If true, `todo` and `todoList` produce output, else they produce nothing.
todo_include_todos = False
# -- Options for HTML output ----------------------------------------------
# The theme to use for HTML and HTML Help pages. See the documentation for
# a list of builtin themes.
import sphinx_rtd_theme
html_theme = "sphinx_rtd_theme"
html_theme_path = [sphinx_rtd_theme.get_html_theme_path()]
# The name for this set of Sphinx documents. If None, it defaults to
# "<project> v<release> documentation".
#html_title = None
# A shorter title for the navigation bar. Default is the same as html_title.
#html_short_title = None
# The name of an image file (relative to this directory) to place at the top
# of the sidebar.
#html_logo = None
# The name of an image file (within the static path) to use as favicon of the
# docs. This file should be a Windows icon file (.ico) being 16x16 or 32x32
# pixels large.
#html_favicon = None
# Add any paths that contain custom static files (such as style sheets) here,
# relative to this directory. They are copied after the builtin static files,
# so a file named "default.css" will overwrite the builtin "default.css".
html_static_path = ['_static']
# If true, links to the reST sources are added to the pages.
html_show_sourcelink = True
# If true, "Created using Sphinx" is shown in the HTML footer. Default is True.
html_show_sphinx = False
# If true, "(C) Copyright ..." is shown in the HTML footer. Default is True.
html_show_copyright = False
# Output file base name for HTML help builder.
htmlhelp_basename = 'redashdoc'
# -- Options for manual page output ---------------------------------------
# One entry per manual page. List of tuples
# (source start file, name, description, authors, manual section).
man_pages = [
    (master_doc, 'redash', u're:dash Documentation',
     [author], 1)
]
# -- Options for Texinfo output -------------------------------------------
# Grouping the document tree into Texinfo files. List of tuples
# (source start file, target name, title, author,
# dir menu entry, description, category)
texinfo_documents = [
    (master_doc, 'redash', u're:dash Documentation',
     author, 'redash', 'One line description of project.',
     'Miscellaneous'),
]

View File

@ -1,284 +0,0 @@
Supported Data Sources
######################
Re:dash supports several types of data sources, and if you set it up using the provided images, it should already have
the needed dependencies to use them all. Starting from version 0.7, you can manage data sources from the UI
by browsing to ``/data_sources`` on your instance.
If one of the listed data source types isn't available when trying to create a new data source, make sure that:
1. You installed the required dependencies.
2. If you've set a custom value for the ``REDASH_ENABLED_QUERY_RUNNERS`` setting, the data source's query runner is included in the list (see the example below).
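For example, a hypothetical value that enables only the PostgreSQL and MySQL runners (the module paths below are illustrative; list the runners you actually need):

.. code::

    export REDASH_ENABLED_QUERY_RUNNERS="redash.query_runner.pg,redash.query_runner.mysql"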
PostgreSQL / Redshift / Greenplum
---------------------------------
- **Options**:
- Database name (mandatory)
- User
- Password
- Host
- Port
- **Additional requirements**:
- None
MySQL
-----
- **Options**:
- Database name (mandatory)
- User
- Password
- Host
- Port
- **Additional requirements**:
- ``MySQL-python`` python package
Google BigQuery
---------------
- **Options**:
- Project ID (mandatory)
- JSON key file, generated when creating a service account (see `instructions <https://developers.google.com/identity/protocols/OAuth2ServiceAccount#creatinganaccount>`__).
- UDF Source URIs (see `more details <https://cloud.google.com/bigquery/user-defined-functions#api>`__ )
- References to JavaScript source files stored in Google Cloud Storage (comma separated)
- **Additional requirements**:
- ``google-api-python-client``, ``oauth2client`` and ``pyopenssl`` python packages (on Ubuntu it might require installing ``libffi-dev`` and ``libssl-dev`` as well).
Graphite
--------
- **Options**:
- URL (mandatory)
- User
- Password
- Verify SSL certificate
MongoDB
-------
- **Options**:
- Connection String (mandatory)
- Database name
- Replica set name
- **Additional requirements**:
- ``pymongo`` python package.
For information on how to write MongoDB queries, see :doc:`documentation </usage/mongodb_querying>`.
ElasticSearch
-------------
...
InfluxDB
--------
- **Options**:
- Url
- String of format ``influxdb://username:password@localhost:8086/databasename``
- **Additional requirements**:
- ``influxdb`` python package.
Presto
------
- **Options**:
- Host (mandatory)
- Address to a Presto coordinator.
- Port
- Port to a Presto coordinator. `8080` is the default port.
- Schema
- Default schema name of Presto. You can read other schemas by qualified name like `FROM myschema.table1`.
- Catalog
- Catalog (connector) name of Presto such as `hive-cdh4`, `hive-hadoop1`, etc.
- Username
- Username used to connect to Presto.
- **Additional requirements**:
- ``pyhive`` python package.
Hive
----
...
Impala
------
...
URL
---
A URL-based data source which requests URLs that return the :doc:`results JSON
format </dev/results_format>`.
Very useful in situations where you want to expose the data without
connecting directly to the database.
The query itself inside Re:dash will simply contain the URL to be
executed (e.g. http://myserver/path/myquery)
- **Options**:
- URL - set this if you want to limit queries to a certain base path.
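As an illustration, here is a toy endpoint built with Python 3's standard library that returns data in the expected results JSON format (a sketch only; the port and payload are made up for the example):

.. code:: python

    import json
    from http.server import BaseHTTPRequestHandler, HTTPServer

    class QueryHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            # Respond with the columns/rows structure described in the results format doc.
            body = json.dumps({
                "columns": [{"name": "value", "friendly_name": "value", "type": "integer"}],
                "rows": [{"value": 42}],
            }).encode("utf-8")
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)

    HTTPServer(("", 8000), QueryHandler).serve_forever()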
Google Spreadsheets
-------------------
- **Options**:
- JSON key file, generated when creating a service account (see `instructions <https://developers.google.com/identity/protocols/OAuth2ServiceAccount#creatinganaccount>`__).
- **Additional requirements**:
- ``gspread`` and ``oauth2client`` python packages.
Notes:
1. To be able to load the spreadsheet in Re:dash - share it with
your ServiceAccount's email (it can be found in the credentials json
file, for example
43242343247-fjdfakljr3r2@developer.gserviceaccount.com).
2. The query format is "DOC\_UUID\|SHEET\_NUM" (for example
"kjsdfhkjh4rsEFSDFEWR232jkddsfh\|0")
3. Alternatively, one can create a new Google BigQuery table using the Google Spreadsheet in question as a source, and then use Redash's BigQuery connector to query the spreadsheet indirectly. This way, the SQL used to query the spreadsheet (via BigQuery table) is far more flexible than the direct query of the type ("kjsdfhkjh4rsEFSDFEWR232jkddsfh\|0") mentioned above. (`BigQuery integrates with Google Drive <https://cloud.google.com/blog/big-data/2016/05/bigquery-integrates-with-google-drive>`__).
Python
------
**Execute other queries, manipulate and compute with Python code**
This is a special query runner that will execute the provided Python code as the query. It is useful for various scenarios such as
merging data from different data sources, doing data transformation/manipulation that isn't trivial with SQL, merging
with remote data or using data analysis libraries such as Pandas (see `example query <https://gist.github.com/arikfr/be7c2888520c44cf4f0f>`__).
While the Python query runner uses a sandbox (RestrictedPython), it's not 100% secure and the security depends on the
modules you allow to import. We recommend enabling the Python query runner only in a trusted environment (meaning: behind
a VPN and with users you trust).
- **Options**:
- Allowed Modules in a comma separated list (optional). **NOTE:**
You MUST make sure these modules are installed on the machine
running the Celery workers.
Notes:
- For security, the Python query runner is disabled by default.
To enable it, add ``redash.query_runner.python`` to the ``REDASH_ADDITIONAL_QUERY_RUNNERS`` environment variable. If you used
the bootstrap script, or one of the provided images, add the following line to the ``/opt/redash/.env`` file: ``export REDASH_ADDITIONAL_QUERY_RUNNERS=redash.query_runner.python``.
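As a sketch of what such a query can look like (helper functions like ``add_result_column`` and ``add_result_row`` are provided by the runner's sandbox; treat the exact names as assumptions and see the example query linked above for authoritative usage):

.. code:: python

    # Runs inside the Python query runner; `result` is the dict
    # that Re:dash reads back and displays as the query result.
    result = {}

    add_result_column(result, "name", "Name", "string")
    add_result_column(result, "count", "Count", "integer")

    for name, count in [("a", 1), ("b", 2)]:
        add_result_row(result, {"name": name, "count": count})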
Vertica
-------
- **Options**:
- Database (mandatory)
- User
- Password
- Host
- Port
- **Additional requirements**:
- ``vertica-python`` python package
Oracle
------
- **Options**
- DSN Service name
- User
- Password
- Host
- Port
- **Additional requirements**
- ``cx_Oracle`` python package. This requires the installation of the Oracle `instant client <http://www.oracle.com/technetwork/database/features/instant-client/index-097480.html>`__.
Treasure Data
-------------
- **Options**
- Type (TreasureData)
- API Key
- Database Name
- Type (Presto/Hive[default])
- **Additional requirements**
- Must have account on https://console.treasuredata.com
Documentation: https://docs.treasuredata.com/articles/redash
Microsoft SQL Server
--------------------
- **Options**:
- Database (mandatory)
- User #TODO: DB users only? What about domain users?
- Password
- Server
- Port
- **Notes**:
- Data type support is currently quite limited.
- Complex and new types are converted to strings in ``Re:dash``
- Coerce into simpler types if needed using ``CAST()``
- Known conversion issues for:
- DATE
- TIME
- DATETIMEOFFSET
- **Additional requirements**:
- ``freetds-dev`` C library
- ``pymssql`` python package, requires FreeTDS to be installed first
JIRA (JQL)
----------
- **Options**:
- URL (your JIRA instance url)
- Username
- Password
For information on how to write JIRA/JQL queries, see :doc:`documentation </usage/jira_querying>`.

View File

@ -1,11 +0,0 @@
Developer Information
=====================
.. toctree::
:maxdepth: 2
:glob:
dev/vagrant
dev/*

View File

@ -1,94 +0,0 @@
Query Execution Model
#####################
Introduction
============
The first data source used with Re:dash was Redshift. Because
we had billions of records in Redshift, and some queries were costly to
re-run, from the get-go there was the idea of caching query results in
Re:dash.
This was to relieve stress on the Redshift cluster and also to improve
the user experience.
How do queries get executed and cached in Re:dash?
==================================================
Server
------
To make sure each query is executed only once at any given time, we
translate the query to a ``query hash``, using the following code:
.. code:: python

    import hashlib
    import re

    COMMENTS_REGEX = re.compile("/\*.*?\*/")

    def gen_query_hash(sql):
        sql = COMMENTS_REGEX.sub("", sql)
        sql = "".join(sql.split()).lower()
        return hashlib.md5(sql.encode('utf-8')).hexdigest()
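Because comments, whitespace and case are stripped before hashing, differently formatted versions of the same query map to the same hash. A quick illustration:

.. code:: python

    # Both normalize to "select1" before hashing.
    assert gen_query_hash("SELECT 1") == gen_query_hash("select  1 /* same query */")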
When query execution is done, the result gets stored in the
``query_results`` table. We also check for all queries in the
``queries`` table that have the same query hash and update their
reference to the query result we just saved
(`code <https://github.com/getredash/redash/blob/master/redash/models.py#L235>`__).
Client
------
The client (UI) will execute queries in two scenarios:
1. (automatically) When opening a query page of a query that doesn't
have a result yet.
2. (manually) When the user clicks on "Execute".
In each case the client does a POST request to ``/api/query_results``
with the following parameters: ``query`` (the query text),
``data_source_id`` (data source to execute the query with) and ``ttl``.
When loading a cached result, ``ttl`` will be the one set for the query
(if it was set). This is a relic from previous versions, and I'm not
sure if it's really used anymore, as usually we fetch the query result
using its id.
When loading a non cached result, ``ttl`` will be 0 which will "force"
the server to execute the query.
As a response to ``/api/query_results`` the server will send either the
query results (in case of a cached query) or the job id of the currently
executing query. When a job id is received, the client starts polling on
this id until a query result is received (this is encapsulated in the
``Query`` and ``QueryResult`` services).
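To illustrate the flow, here is a minimal client sketch in Python (it assumes the ``requests`` package and the response shapes described above; the instance address is hypothetical and exact payloads may differ between versions):

.. code:: python

    import time
    import requests

    BASE = "http://demo.redash.io"  # hypothetical instance address

    def run_query(query, data_source_id, ttl=0):
        # ttl=0 "forces" the server to execute the query instead of serving a cached result.
        payload = requests.post(BASE + "/api/query_results",
                                json={"query": query, "data_source_id": data_source_id, "ttl": ttl}).json()
        if "query_result" in payload:
            return payload["query_result"]  # cached result returned directly
        job_id = payload["job"]["id"]       # otherwise, poll the job until it completes
        while True:
            job = requests.get("{}/api/jobs/{}".format(BASE, job_id)).json()["job"]
            if job.get("query_result_id"):
                return requests.get("{}/api/query_results/{}".format(BASE, job["query_result_id"])).json()
            time.sleep(1)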
Ideas on how to implement query parameters
==========================================
Client side only implementation
-------------------------------
(This was actually implemented; see pull request `#363 <https://github.com/getredash/redash/pull/363>`__ for details.)
The basic idea of how to implement parameterized queries is to treat the
query as a template and merge it with parameters taken from the query string
or the UI (or both).
When the caching facility isn't required (with queries that return in a
reasonable time frame) the implementation can be completely client side
and the backend can be "blind" to the parameters - it just receives the
final query to execute and returns the result.
As one improvement over this, we can let the UI/user specify the TTL
value when making the request to ``/api/query_results``, in which case
caching will be available too, while not having to make the server aware
of the parameters.
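A minimal sketch of such a client-side merge (the parameter syntax here is illustrative, not necessarily the one Re:dash uses):

.. code:: python

    from string import Template

    def render_query(template, params):
        # Substitute user-supplied parameters into the query text;
        # the backend only ever sees the final query.
        return Template(template).safe_substitute(params)

    query = "SELECT * FROM events WHERE created_at > '$start_date' LIMIT $limit"
    print(render_query(query, {"start_date": "2016-01-01", "limit": 100}))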
Hybrid
------
Another option would be to store the list of possible parameters for a
query, with their default/optional values. In that case, the server can
prefetch all the options and cache them to provide faster results to the
client.

View File

@ -1,78 +0,0 @@
Data Source Results Format
==========================
All data sources in Re:dash return the following results in JSON format:
.. code:: javascript

    {
        "columns" : [
            {
                "name" : "COLUMN_NAME",            // Required: a unique identifier of the column name in this result
                "friendly_name" : "FRIENDLY_NAME", // Required: friendly name of the column. Currently unused, so can be the same as _name_.
                "type" : "VALUE_TYPE"              // Supported types: integer, float, boolean, string (default), datetime (ISO-8601 text format). If unknown, use "string".
            },
            ...
        ],
        "rows" : [
            {
                // name is the column name as it appears in the columns above.
                // VALUE is a valid JSON value. For dates it's an ISO-8601 string.
                "name" : VALUE,
                "name2" : VALUE2
            },
            ...
        ]
    }
Example:
.. code:: javascript

    {
        "columns": [
            {
                "name": "date",
                "type": "date",
                "friendly_name": "date"
            },
            {
                "name": "day_number",
                "type": "integer",
                "friendly_name": "day_number"
            },
            {
                "name": "value",
                "type": "integer",
                "friendly_name": "value"
            },
            {
                "name": "total",
                "type": "integer",
                "friendly_name": "total"
            }
        ],
        "rows": [
            {
                "value": 40832,
                "total": 53141,
                "day_number": 0,
                "date": "2014-01-30"
            },
            {
                "value": 27296,
                "total": 53141,
                "day_number": 1,
                "date": "2014-01-30"
            },
            {
                "value": 22982,
                "total": 53141,
                "day_number": 2,
                "date": "2014-01-30"
            }
        ]
    }
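As a sketch, the same structure can be produced programmatically and serialized with standard JSON tooling (the helper below is illustrative, not part of Re:dash):

.. code:: python

    import json

    result = {"columns": [], "rows": []}

    def add_column(result, name, friendly_name, col_type):
        result["columns"].append(
            {"name": name, "friendly_name": friendly_name, "type": col_type})

    add_column(result, "date", "date", "date")
    add_column(result, "value", "value", "integer")
    result["rows"].append({"date": "2014-01-30", "value": 40832})

    print(json.dumps(result, indent=2))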

View File

@ -1,44 +0,0 @@
SAML Authentication and Authorization
#####################################
Authentication
==============
Add the REDASH_SAML_METADATA_URL config value to your .env file; it needs
to point to the SAML provider's metadata url, e.g. https://app.onelogin.com/saml/metadata/

If you don't want to communicate with the IdP server to fetch the metadata,
put the metadata file on the Redash server as a local file,
and add REDASH_SAML_LOCAL_METADATA_PATH instead of REDASH_SAML_METADATA_URL, e.g. /opt/redash/metadata.xml

There is also an optional REDASH_SAML_CALLBACK_SERVER_NAME, which contains the
server name of the Redash server for the callbacks from the SAML provider (e.g. demo.redash.io)

If you want to specify the nameid format, add the REDASH_SAML_NAMEID_FORMAT config value,
e.g. urn:oasis:names:tc:SAML:1.1:nameid-format:unspecified;
the default is urn:oasis:names:tc:SAML:2.0:nameid-format:transient

If you want to specify the entityid in the AuthnRequest,
add the REDASH_SAML_ENTITY_ID config value, e.g. http://demo.redash.io/saml/callback
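Putting these together, a hypothetical ``.env`` snippet (values are examples only; use the ones matching your IdP):

.. code::

    export REDASH_SAML_METADATA_URL="https://app.onelogin.com/saml/metadata/"
    export REDASH_SAML_CALLBACK_SERVER_NAME="demo.redash.io"
    export REDASH_SAML_NAMEID_FORMAT="urn:oasis:names:tc:SAML:1.1:nameid-format:unspecified"
    export REDASH_SAML_ENTITY_ID="http://demo.redash.io/saml/callback"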
On the SAML provider side, example configuration for OneLogin is:
SAML Consumer URL: http://demo.redash.io/saml/login
SAML Audience: http://demo.redash.io/saml/callback
SAML Recipient: http://demo.redash.io/saml/callback
Example configuration for Okta is:
Single Sign On URL: http://demo.redash.io/saml/callback
Recipient URL: http://demo.redash.io/saml/callback
Destination URL: http://demo.redash.io/saml/callback
with parameters 'FirstName' and 'LastName', both configured to be included in the SAML assertion.
Authorization
=============
To manage group assignments in Redash using your SAML provider, configure the SAML response to include
an attribute with the key 'RedashGroups', whose value is the names of groups in Redash.
Example configuration for Okta is:
In the Group Attribute Statements -
Name: RedashGroups
Filter: Starts with: this-is-a-group-in-redash

View File

@ -1,23 +0,0 @@
Setting up development environment (using Vagrant)
==================================================
To simplify contribution there is a `Vagrant
box <https://vagrantcloud.com/redash/boxes/dev>`__ available with all
the needed software to run Re:dash for development (use it only for
development; for demo purposes there are the
`redash/demo <https://vagrantcloud.com/redash/boxes/demo>`__ box and the
AWS/GCE images).
To get started with this box:
1. Make sure you have a recent version of
`Vagrant <https://www.vagrantup.com/>`__ installed.
2. Clone the Re:dash repository:
``git clone https://github.com/getredash/redash.git``.
3. Change directory into the repository (``cd redash``).
4. To execute tests, run ``./bin/vagrant_ctl.sh test``
5. To run the app, run ``./bin/vagrant_ctl.sh start``.
This might take some time the first time you run it,
as it downloads the Vagrant virtual box.
Now the server should be available on your host on port 9001 and you
can login with username admin and password admin.

View File

@ -1,57 +0,0 @@
.. image:: http://redash.io/static/old_img/redash_logo.png
:width: 200px
Open Source Data Collaboration and Visualization Platform
============================================================
**Re:dash** is our take on freeing the data within our company in a way that will better fit our culture and usage patterns.
Prior to **Re:dash**, we tried to use traditional BI suites and discovered a set of bloated, technically challenged and slow tools/flows. What we were looking for was a more hacker'ish way to look at data, so we built one.
**Re:dash** was built to allow fast and easy access to billions of records, that we process and collect using Amazon Redshift ("petabyte scale data warehouse" that "speaks" PostgreSQL).
Today **Re:dash** has support for querying multiple databases, including: Redshift, Google BigQuery, Google Spreadsheets, PostgreSQL, MySQL, Graphite and custom scripts.
Features
########
1. **Query Editor**: think of `JS Fiddle`_ for SQL queries. It's your way to share data in the organization in an open way, by sharing both the dataset and the query that generated it. This way everyone can peer review not only the resulting dataset but also the process that generated it.
2. **Visualizations**: once you have a dataset, you can create different visualizations out of it. Currently it supports charts, pivot table and cohorts.
3. **Dashboards**: combine several visualizations into a single dashboard.
Demo
####
.. figure:: https://cloud.githubusercontent.com/assets/71468/17391289/8e83878e-5a1d-11e6-8938-af9054a33b19.gif
:alt: Screenshots
You can try out the demo instance: `http://demo.redash.io`_ (login with any Google account).
.. _http://demo.redash.io: http://demo.redash.io
.. _JS Fiddle: http://jsfiddle.net
Getting Started
###############
:doc:`Setting up Re:dash instance </setup>` (includes links to ready made AWS/GCE images).
Getting Help
############
* Source: https://github.com/getredash/redash
* Issues: https://github.com/getredash/redash/issues
* Discussion Forum: https://discuss.redash.io/
* Slack: http://slack.redash.io/
* Gitter (chat): https://gitter.im/getredash/redash
TOC
###
.. toctree::
:maxdepth: 2
setup
upgrade
datasources
usage
dev
misc

View File

@ -1,10 +0,0 @@
Miscellaneous
=============
.. toctree::
:maxdepth: 2
:glob:
misc/*

View File

@ -1,74 +0,0 @@
How To: Backup your Re:dash database and restore it on a different server
===========================================================================
**Note:** This guide assumes that the default database name (redash) has not been changed.
1. Check the size of your redash database. This can be done by creating a query within redash itself against the 'Re:dash metadata' data source.
.. code::
select t1.datname AS db_name, pg_size_pretty(pg_database_size(t1.datname)) as db_size
from pg_database t1
where t1.datname = 'redash'
2. Check the amount of available disk space on your existing server.
.. code::
df -hT
3. Backup the existing redash database.
.. code::
sudo -u redash pg_dump redash | gzip > redash_backup.gz
4. Transfer the backup to the new server.
5. `Perform a clean install of Re:dash <http://docs.redash.io/en/latest/setup.html>`__ on the new server.
6. Check the amount of available disk space on the new server.
.. code::
df -hT
7. Login as postgres user on the new server.
.. code::
sudo -u postgres -i
8. Drop the current redash database, create a new database named redash, and then restore the backup into the new database.
.. code::
dropdb redash
createdb -T template0 redash
gunzip -c redash_backup.gz | psql redash
9. Set a new password of your choosing for the 'redash_reader' user (since the new installation generated a random password).
.. code::
psql -c "ALTER ROLE redash_reader WITH PASSWORD 'yourpasswordgoeshere';"
**Note:** You must then navigate to the 'Re:dash metadata' data source (/data_sources/1) in the new Re:dash installation and change the password to match the one entered above.
10. Grant permissions on the redash database to the redash_reader user.
.. code::
psql -c "grant select(id,name,type) ON data_sources to redash_reader;" redash
psql -c "grant select(id,name) ON users to redash_reader;" redash
psql -c "grant select on events, queries, dashboards, widgets, visualizations, query_results to redash_reader;" redash
Create a new query in redash (using Re:dash metadata as the data source) to test that everything is working as expected.

View File

@ -1,50 +0,0 @@
How To: Create a Google Developers Project
==========================================
1. Go to the `Google Developers
Console <https://console.developers.google.com/>`__.
2. Select a project, or create a new one by clicking Create Project:
1. In the Project name field, type in a name for your project.
2. In the Project ID field, optionally type in a project ID for your
project or use the one that the console has created for you. This
ID must be unique world-wide.
3. Click the **Create** button and wait for the project to be
created.
4. Click on the new project name in the list to start editing the
project.
3. In the left sidebar, select the **APIs** item below "APIs & auth". A
list of Google web services appears.
4. Find the **Google+ API** service and set its status to **ON**—notice
that this action moves the service to the top of the list.
5. In the sidebar under "APIs & auth", select **Credentials** and in that screen choose the **OAuth consent screen** tab
- Choose an Email Address and specify a Product Name.
6. In the sidebar under "APIs & auth", select **Credentials**.
7. Click the **Add Credentials** button and choose **OAuth 2.0 Client ID**.
- In the **Application type** section of the dialog, select **Web
application**.
- In the **Authorized JavaScript origins** field, enter the origin
for your app. You can enter multiple origins to use with multiple
Re:dash instances. Wildcards are not allowed. In the example below,
we assume your Re:dash instance address is *redash.example.com*:
::
http://redash.example.com
https://redash.example.com
- In the Authorized redirect URI field, enter the redirect URI
callback:
::
http://redash.example.com/oauth/google_callback
- Click the ``Create`` button.
8. In the resulting **Client ID for web application** section, copy the
**Client ID** and **Client secret** to your ``.env`` file.

View File

@ -1,141 +0,0 @@
How To: Encrypt your Re:dash installation with a free SSL certificate from Let's Encrypt
==========================================================================================
**Note:** The steps below were tested on Ubuntu 14.04, but *should* work with any Debian-based distro.
`Let's Encrypt <https://letsencrypt.org/>`__ is a new certificate authority sponsored by major tech companies including Mozilla, Google, Cisco, and Facebook. Unlike traditional CA authorities, Let's Encrypt allows you to generate and renew an SSL certificate quickly and **at no cost**.
1. Open port 443 in your security group (if using AWS or GCE).
2. Update package lists, install git, and clone the letsencrypt repository.
.. code::
sudo apt-get update
sudo apt-get install git
sudo git clone https://github.com/letsencrypt/letsencrypt /opt/letsencrypt
3. Stop nginx and redash, then ensure that no processes are still listening on port 80.
.. code::
sudo supervisorctl stop redash_server
sudo service nginx stop
netstat -na | grep ':80.*LISTEN'
4. Generate your letsencrypt certificate.
.. code::
cd /opt/letsencrypt
sudo pip install urllib3[secure] --upgrade
./letsencrypt-auto certonly --standalone
In most cases you'll want to enter 'example.com www.example.com' when prompted for your domain so that you can use the certificate on https://example.com and https://www.example.com.
5. Optionally generate a stronger Diffie-Hellman ephemeral parameter. Without this step, you will not achieve higher than a B score on `SSLLabs <https://www.ssllabs.com/ssltest/>`__. Please note that on a low-end server (VPS or micro/small GCE instance) this step can take approximately 20-30 minutes.
.. code::
cd /etc/ssl/certs
sudo openssl dhparam -out dhparam.pem 3072
6. Back up the existing nginx redash config, delete it, and then create a new version with the code supplied below.
.. code::
sudo cp /etc/nginx/sites-available/redash /etc/nginx/sites-available/redash.bak
sudo rm /etc/nginx/sites-available/redash
sudo nano /etc/nginx/sites-available/redash
.. code:: nginx
    upstream redash_servers {
        server 127.0.0.1:5000;
    }

    server {
        listen 80;

        # Allow accessing /ping without https. Useful when placing behind load balancer.
        location /ping {
            proxy_set_header Host $http_host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_pass http://redash_servers;
        }

        location / {
            # Enforce SSL.
            return 301 https://$host$request_uri;
        }
    }

    server {
        listen 443 ssl;
        ssl on;

        # Make sure to set paths to your certificate .pem and .key files.
        ssl_certificate /etc/letsencrypt/live/YOURDOMAIN.TLD/fullchain.pem;
        ssl_certificate_key /etc/letsencrypt/live/YOURDOMAIN.TLD/privkey.pem;
        ssl_dhparam /etc/ssl/certs/dhparam.pem;

        # Use secure protocols and ciphers which are compatible with modern browsers
        ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
        ssl_prefer_server_ciphers on;
        ssl_ciphers AES256+EECDH:AES256+EDH;
        ssl_session_cache shared:SSL:20m;

        # Enforce strict transport security
        add_header Strict-Transport-Security "max-age=31536000; includeSubdomains;";

        access_log /var/log/nginx/redash.access.log;

        gzip on;
        gzip_types *;
        gzip_proxied any;

        location / {
            proxy_set_header Host $http_host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto $scheme;
            proxy_pass http://redash_servers;
            proxy_redirect off;
        }
    }
7. Start the nginx and redash servers again.
.. code::
sudo service nginx start
sudo supervisorctl start redash_server
8. Verify the installation by running a `SSLLabs test <https://www.ssllabs.com/ssltest/>`__. This guide *should* yield an A+ score. If everything is working as expected, optionally delete the old redash nginx config:
.. code::
sudo rm /etc/nginx/sites-available/redash.bak
**Important Note:** letsencrypt certificates only remain valid for 90 days. To renew your certificate, simply follow steps 3 and 4 again:
.. code::
sudo supervisorctl stop redash_server
sudo service nginx stop
netstat -na | grep ':80.*LISTEN'
cd /opt/letsencrypt
./letsencrypt-auto certonly --standalone
sudo service nginx start
sudo supervisorctl start redash_server

View File

@ -1,8 +0,0 @@
HTTPS proxy pass for Nginx over HTTP, without changing the Re:dash setup to SSL
================================================================================

Webserver HTTPS >> Re:dash HTTP

When you have a web server with SSL (Apache, HAProxy, another nginx, etc.) you can set up an HTTPS proxy pass from the web server to Re:dash's nginx. Follow the steps below:

You must change ``proxy_set_header X-Forwarded-Proto`` from ``$scheme`` to ``https``; this change forces nginx to set https for all requests.
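For example, the ``location /`` block from the SSL setup guide would become (a sketch; only the ``X-Forwarded-Proto`` line changes):

.. code:: nginx

    location / {
        proxy_set_header Host $http_host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        # Hard-coded instead of $scheme, since SSL terminates at the upstream web server.
        proxy_set_header X-Forwarded-Proto https;
        proxy_pass http://redash_servers;
        proxy_redirect off;
    }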

View File

@ -1,66 +0,0 @@
SSL (HTTPS) Setup
=================
If you used the provided images or the bootstrap script, to start using
SSL with your instance you need to:
1. Update the nginx config file (``/etc/nginx/sites-available/redash``)
with SSL configuration (see below an example). Make sure to upload
the certificate to the server, and set the paths correctly in the new
config.
2. Open port 443 in your security group (if using AWS or GCE).
.. code:: nginx
    upstream redash_servers {
        server 127.0.0.1:5000;
    }

    server {
        listen 80;

        # Allow accessing /ping without https. Useful when placing behind load balancer.
        location /ping {
            proxy_set_header Host $http_host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_pass http://redash_servers;
        }

        location / {
            # Enforce SSL.
            return 301 https://$host$request_uri;
        }
    }

    server {
        listen 443 ssl;

        # Make sure to set paths to your certificate .pem and .key files.
        ssl on;
        ssl_certificate /path-to/cert.pem; # or crt
        ssl_certificate_key /path-to/cert.key;

        # Specifies that we don't want to use SSLv2 (insecure) or SSLv3 (exploitable)
        ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
        # Uses the server's ciphers rather than the client's
        ssl_prefer_server_ciphers on;
        # Specifies which ciphers are okay and which are not okay. List taken from https://raymii.org/s/tutorials/Strong_SSL_Security_On_nginx.html
        ssl_ciphers "EECDH+AESGCM:EDH+AESGCM:ECDHE-RSA-AES128-GCM-SHA256:AES256+EECDH:DHE-RSA-AES128-GCM-SHA256:AES256+EDH:ECDHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-SHA384:ECDHE-RSA-AES128-SHA256:ECDHE-RSA-AES256-SHA:ECDHE-RSA-AES128-SHA:DHE-RSA-AES256-SHA256:DHE-RSA-AES128-SHA256:DHE-RSA-AES256-SHA:DHE-RSA-AES128-SHA:ECDHE-RSA-DES-CBC3-SHA:EDH-RSA-DES-CBC3-SHA:AES256-GCM-SHA384:AES128-GCM-SHA256:AES256-SHA256:AES128-SHA256:AES256-SHA:AES128-SHA:DES-CBC3-SHA:HIGH:!aNULL:!eNULL:!EXPORT:!DES:!MD5:!PSK:!RC4";

        access_log /var/log/nginx/redash.access.log;

        gzip on;
        gzip_types *;
        gzip_proxied any;

        location / {
            proxy_set_header Host $http_host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto $scheme;
            proxy_pass http://redash_servers;
            proxy_redirect off;
        }
    }

View File

@ -1,3 +0,0 @@
sphinx
sphinx-autobuild
sphinx_rtd_theme

View File

@ -1,63 +0,0 @@
Settings
########
Much of the functionality of Re:dash can be changed with settings. Settings are read by ``/redash/settings.py`` from environment variables, which (for most installs) can be set in ``/opt/redash/current/.env``.
The following is a list of settings and what they control:
- **REDASH_NAME**: name of the site, used in page titles, *default "Re:dash"*
- **REDASH_REDIS_URL**: *default "redis://localhost:6379/0"*
- **REDASH_PROXIES_COUNT**: *default "1"*
- **REDASH_STATSD_HOST**: *default "127.0.0.1"*
- **REDASH_STATSD_PORT**: *default "8125"*
- **REDASH_STATSD_PREFIX**: *default "redash"*
- **REDASH_STATSD_USE_TAGS**: whether to use tags in StatsD metrics (InfluxDB's format), *default false*
- **REDASH_DATABASE_URL**: *default "postgresql://postgres"*
- **REDASH_CELERY_BROKER**: *default REDIS_URL*
- **REDASH_CELERY_BACKEND**: *default CELERY_BROKER*
- **REDASH_CELERY_TASK_RESULT_EXPIRES**: How many seconds to keep Celery task results in cache (in seconds), *default 3600*
- **REDASH_HEROKU_CELERY_WORKER_COUNT**: *default 2*
- **REDASH_QUERY_RESULTS_CLEANUP_ENABLED**: *default "true"*
- **REDASH_QUERY_RESULTS_CLEANUP_COUNT**: *default "100"*
- **REDASH_QUERY_RESULTS_CLEANUP_MAX_AGE**: *default "7"*
- **REDASH_SCHEMAS_REFRESH_SCHEDULE**: how often to refresh the data sources schemas (in minutes), *default 30*
- **REDASH_AUTH_TYPE**: *default "api_key"*
- **REDASH_PASSWORD_LOGIN_ENABLED**: *default "true"*
- **REDASH_ENFORCE_HTTPS**: *default "false"*
- **REDASH_MULTI_ORG**: *default "false"*
- **REDASH_GOOGLE_CLIENT_ID**: *default ""*
- **REDASH_GOOGLE_CLIENT_SECRET**: *default ""*
- **REDASH_SAML_METADATA_URL**: *default ""*
- **REDASH_SAML_LOCAL_METADATA_PATH**: *default ""*
- **REDASH_SAML_CALLBACK_SERVER_NAME**: *default ""*
- **REDASH_SAML_NAMEID_FORMAT**: *default ""*
- **REDASH_SAML_ENTITY_ID**: *default ""*
- **REDASH_STATIC_ASSETS_PATH**: *default "../rd_ui/app/"*
- **REDASH_JOB_EXPIRY_TIME**: *default 3600 * 6*
- **REDASH_COOKIE_SECRET**: *default "c292a0a3aa32397cdb050e233733900f"*
- **REDASH_LOG_LEVEL**: *default "INFO"*
- **REDASH_MAIL_SERVER**: *default "localhost"*
- **REDASH_MAIL_PORT**: *default 25*
- **REDASH_MAIL_USE_TLS**: *default "false"*
- **REDASH_MAIL_USE_SSL**: *default "false"*
- **REDASH_MAIL_USERNAME**: *default None*
- **REDASH_MAIL_PASSWORD**: *default None*
- **REDASH_MAIL_DEFAULT_SENDER**: *default None*
- **REDASH_MAIL_MAX_EMAILS**: *default None*
- **REDASH_MAIL_ASCII_ATTACHMENTS**: *default "false"*
- **REDASH_HOST**: *default ""*
- **REDASH_CORS_ACCESS_CONTROL_ALLOW_ORIGIN**: *default ""*
- **REDASH_CORS_ACCESS_CONTROL_ALLOW_CREDENTIALS**: *default "false"*
- **REDASH_CORS_ACCESS_CONTROL_REQUEST_METHOD**: *default "GET, POST, PUT"*
- **REDASH_CORS_ACCESS_CONTROL_ALLOW_HEADERS**: *default "Content-Type"*
- **REDASH_ENABLED_QUERY_RUNNERS**: *default ",".join(default_query_runners)*
- **REDASH_ADDITIONAL_QUERY_RUNNERS**: *default ""*
- **REDASH_SENTRY_DSN**: *default ""*
- **REDASH_ALLOW_SCRIPTS_IN_USER_INPUT**: disable sanitization of text input, allowing full HTML, *default "true"*
- **REDASH_DATE_FORMAT**: *default "DD/MM/YY"*
- **REDASH_FEATURE_ALLOW_ALL_TO_EDIT**: *default "true"*
- **REDASH_FEATURE_SHOW_QUERY_RESULTS_COUNT**: disable/enable showing count of query results in status, *default "true"*
- **REDASH_VERSION_CHECK**: *default "true"*
- **REDASH_FEATURE_DISABLE_REFRESH_QUERIES**: disable scheduled query execution, *default "false"*
- **REDASH_BIGQUERY_HTTP_TIMEOUT**: *default "600"*
- **REDASH_SCHEMA_RUN_TABLE_SIZE_CALCULATIONS**: *default "false"*
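For reference, ``settings.py`` reads these with the usual environment-variable pattern. A simplified sketch (the real code also parses integers and lists, and the exact helpers are assumptions):

.. code:: python

    import os

    def parse_boolean(value):
        # Environment variables are strings; map "true"/"false" to booleans.
        return value.lower() == "true"

    NAME = os.environ.get("REDASH_NAME", "Re:dash")
    ENFORCE_HTTPS = parse_boolean(os.environ.get("REDASH_ENFORCE_HTTPS", "false"))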

View File

@ -1,246 +0,0 @@
Setting up Re:dash instance
###########################
The `provisioning
script <https://raw.githubusercontent.com/getredash/redash/master/setup/ubuntu/bootstrap.sh>`__
works on Ubuntu 12.04, Ubuntu 14.04 and Debian Wheezy. This script
installs all needed dependencies and creates a basic setup.
To ease the process, there are also images for AWS, Google Compute
Cloud and Docker. These images were created with the same provisioning script using Packer.
Create an instance
==================
AWS
---
Launch the instance from the pre-baked AMI (for small deployments,
t2.small should be enough):
- us-east-1: `ami-3ff16228 <https://console.aws.amazon.com/ec2/home?region=us-east-1#LaunchInstanceWizard:ami=ami-3ff16228>`__
- us-west-1: `ami-fdc6869d <https://console.aws.amazon.com/ec2/home?region=us-west-1#LaunchInstanceWizard:ami=ami-fdc6869d>`__
- us-west-2: `ami-670cc507 <https://console.aws.amazon.com/ec2/home?region=us-west-2#LaunchInstanceWizard:ami=ami-670cc507>`__
- eu-west-1: `ami-5f95fb2c <https://console.aws.amazon.com/ec2/home?region=eu-west-1#LaunchInstanceWizard:ami=ami-5f95fb2c>`__
- eu-central-1: `ami-8f1ee9e0 <https://console.aws.amazon.com/ec2/home?region=eu-central-1#LaunchInstanceWizard:ami=ami-8f1ee9e0>`__
- sa-east-1: `ami-3113845d <https://console.aws.amazon.com/ec2/home?region=sa-east-1#LaunchInstanceWizard:ami=ami-3113845d>`__
- ap-northeast-1: `ami-b30ec9d2 <https://console.aws.amazon.com/ec2/home?region=ap-northeast-1#LaunchInstanceWizard:ami=ami-b30ec9d2>`__
- ap-northeast-2: `ami-8f29e3e1 <https://console.aws.amazon.com/ec2/home?region=ap-northeast-2#LaunchInstanceWizard:ami=ami-8f29e3e1>`__
- ap-southeast-2: `ami-acac99cf <https://console.aws.amazon.com/ec2/home?region=ap-southeast-2#LaunchInstanceWizard:ami=ami-acac99cf>`__
- ap-southeast-1: `ami-b5b26cd6 <https://console.aws.amazon.com/ec2/home?region=ap-southeast-1#LaunchInstanceWizard:ami=ami-b5b26cd6>`__
(the above AMIs are of version: 0.11.1)
When launching the instance make sure to use a security group that **only** allows incoming traffic on port 22 (SSH), 80 (HTTP) and 443 (HTTPS). These AMIs are based on Ubuntu so you will need to use the user ``ubuntu`` when connecting to the instance via SSH.
Now proceed to `"Setup" <#setup>`__.
Google Compute Engine
---------------------
First, you need to add the images to your account:
.. code:: bash
$ gcloud compute images create "redash-091-b1377" --source-uri gs://redash-images/redash.0.9.1.b1377.tar.gz
Next you need to launch an instance using this image (n1-standard-1
instance type is recommended).
If you plan on using Re:dash with BigQuery, you can use a dedicated image which comes with BigQuery preconfigured
(using instance permissions):
.. code:: bash
$ gcloud compute images create "redash-091-b1377-bq" --source-uri gs://redash-images/redash.0.9.1.b1377-bq.tar.gz
Note that you need to launch this instance with BigQuery access:
.. code:: bash
$ gcloud compute instances create <your_instance_name> --image redash-091-b1377-bq --scopes storage-ro,bigquery
(the same can be done from the web interface, just make sure to enable
BigQuery access)
Please note that currently the Google Compute Engine images are for version 0.9.1. After creating the instance, please
run the :doc:`upgrade process <upgrade>` and then proceed to `"Setup" <#setup>`__.
Docker Compose
--------------
1. Make sure you have a Docker machine up and running.
2. Make sure your current working directory is the root of this GitHub repository.
3. Run ``docker-compose up postgres``.
4. Run ``./setup/docker/create_database.sh``. This will access the postgres container and set up the database.
5. Run ``docker-compose up``
6. Run ``docker-machine ls``, take note of the ip for the Docker machine you are using, and open the web browser.
7. Visit that Docker machine IP at port 80, and you should see a Re:dash login screen.
Now proceed to `"Setup" <#setup>`__.
Heroku
------
Due to the nature of Heroku deployments, upgrading to a newer version of Redash
requires performing the steps outlined on the `"How to Upgrade" <http://docs.redash.io/en/latest/upgrade.html>`__ page.
1. Install `Heroku CLI <https://toolbelt.heroku.com/>`__.
2. Create Heroku App::
$ heroku apps:create <app name>
3. Set application buildpacks::
$ heroku buildpacks:set heroku/python
$ heroku buildpacks:add --index 1 heroku/nodejs
4. Add Postgres and Redis addons::
$ heroku addons:create heroku-postgresql:hobby-dev
$ heroku addons:create heroku-redis:hobby-dev
5. Update the cookie secret (**Important** otherwise anyone can sign new cookies and impersonate users. You may be able to run the command ``pwgen 32 -1`` to generate a random string)::
$ heroku config:set REDASH_COOKIE_SECRET='<create a secret token and put here>'
6. Push the repository to Heroku::
$ git push heroku master
7. Create database tables::
$ heroku run ./manage.py database create_tables
8. Create admin user::
$ heroku run ./manage.py users create --admin "Admin" admin
9. Start worker process::
$ heroku ps:scale worker=1
Other
-----
Download the provisioning script and run it on your machine. Note that:
1. You need to run the script as root.
2. It was tested only on Ubuntu 12.04, Ubuntu 14.04 and Debian Wheezy.
3. It's designed to run on a "clean" machine. If you're running this script on a machine that is used for other purposes, you might want to tweak it to your needs (like removing the ``apt-get dist-upgrade`` call at the beginning of it).
Setup
=====
Once you have created the instance with either the image or the script, you
should have a running Re:dash instance with everything you need to get
started. Re:dash should be available using the server IP or DNS name
you assigned to it. You can point your browser to this address and log in
with the user "admin" (password: "admin"). But to make it useful, there are
a few more steps that you need to do manually to complete the setup:
First ssh to your instance and change directory to ``/opt/redash``. If
you're using the GCE image, switch to root (``sudo su``).
Users & Google Authentication setup
-----------------------------------
Most of the settings you need to edit are in the ``/opt/redash/.env``
file.
1. Update the cookie secret (important! otherwise anyone can sign new
   cookies and impersonate users): change "veryverysecret" in the line
   ``export REDASH_COOKIE_SECRET=veryverysecret`` to something else (you
   can run ``pwgen 32 -1`` to generate a random string).
2. By default we create an admin user with the password "admin". You
   can change this password by opening the ``/users/me#password`` page after
   logging in as admin.
3. If you want to use Google OAuth to authenticate users, you need to
create a Google Developers project (see :doc:`instructions </misc/google_developers_project>`)
and then add the needed configuration in the ``.env`` file:
.. code::
export REDASH_GOOGLE_CLIENT_ID=""
export REDASH_GOOGLE_CLIENT_SECRET=""
4. Configure the domain(s) you want to allow to use with Google Apps, by running the command:
.. code::
cd /opt/redash/current
sudo -u redash bin/run ./manage.py org set_google_apps_domains {{domains}}
If you're passing multiple domains, separate them with commas (for example: ``example.com,example.org``).
5. Restart the web server to apply the configuration changes:
``sudo supervisorctl restart redash_server``.
6. Once you have Google OAuth enabled, you can log in using your Google
   Apps account. If you want to grant admin permissions to some users,
   you can do so by adding them to the admin group (from the ``/groups`` page).
7. If you don't use Google OAuth or just need username/password logins,
   you can create additional users by opening the ``/users/new`` page, or from the command line as shown below.
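A sketch mirroring the Heroku step above (the name and email address are placeholders):

.. code:: bash

    cd /opt/redash/current
    sudo -u redash bin/run ./manage.py users create "John Doe" john@example.com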
Datasources
-----------
To make Re:dash truly useful, you need to set up your data sources in it. Browse to ``/data_sources`` on your instance
to create a new data source connection.
See the :doc:`documentation </datasources>` for the different options.
Your instance comes ready with the dependencies needed to set up the supported sources.
Mail Configuration
------------------
For the system to be able to send emails (for example, when alerts trigger), you need to set the mail server to use and the
host name of your Re:dash server. If you're using one of our images, you can do this by editing the ``.env`` file:
.. code::
# Note that not all values are required, as they have default values.
export REDASH_MAIL_SERVER="" # default: localhost
export REDASH_MAIL_PORT="" # default: 25
export REDASH_MAIL_USE_TLS="" # default: false
export REDASH_MAIL_USE_SSL="" # default: false
export REDASH_MAIL_USERNAME="" # default: None
export REDASH_MAIL_PASSWORD="" # default: None
export REDASH_MAIL_DEFAULT_SENDER="" # Email address to send from
export REDASH_HOST="" # base address of your Re:dash instance, for example: "https://demo.redash.io"
Once you've updated the configuration, restart all services with ``sudo supervisorctl restart all``.
- Note that not all values are required, as there are default values.
- It's recommended to use a mail service such as `Amazon SES <https://aws.amazon.com/ses/>`__ or `Mailgun <http://www.mailgun.com/>`__ to ensure deliverability.
To test the email configuration, you can run ``bin/run ./manage.py send_test_mail`` (from ``/opt/redash/current``).
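For example, from a shell on your instance (assuming the standard install location):

.. code:: bash

    cd /opt/redash/current
    bin/run ./manage.py send_test_mail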
How to upgrade?
---------------
It's recommended to upgrade your Re:dash instance once in a while to
benefit from bug fixes and new features. See :doc:`here </upgrade>` for full upgrade
instructions (including the Fabric script).
Configuration
-------------
For a full list of environment variables, see :doc:`the settings page </settings>`.
Notes
=====
- If this is a production setup, you should enforce HTTPS and make sure
you set the cookie secret (see :doc:`instructions </misc/ssl>`).

View File

@ -1,37 +0,0 @@
How to Upgrade
##############
It's recommended to upgrade your Re:dash instance once there are new
releases, to benefit from new features and bug fixes. The upgrade
process is relatively simple, and assuming you used one of the base
images we provide, you can just use the
`Fabric <http://www.fabfile.org/>`__ script provided here:
https://gist.github.com/arikfr/440d1403b4aeb76ebaf8.
How to run the Fabric script
============================
1. Install Fabric: ``pip install fabric requests`` (needed only once)
2. Download the ``fabfile.py`` from the gist.
3. Run the script:
``fab -H{your Re:dash host} -u{the ssh user for this host} -i{path to key file for passwordless login} deploy_latest_release``
``-i`` is optional and only needed if you're using private-key-based authentication (and didn't add the key file to your authentication agent or set its path in your SSH config).
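A hypothetical invocation (the host, user and key path are placeholders):

.. code:: bash

    $ fab -Hdemo.redash.example.com -uubuntu -i~/.ssh/id_rsa deploy_latest_release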
What the Fabric script does
===========================
Even if you didn't use the image, it's very likely you can reuse most of
this script with small modifications. What this script does is:
1. Find the URL of the latest release tarball (from `GitHub releases
page <http://github.com/getredash/redash/releases>`__).
2. Download it.
3. Create a new directory for this version (for example:
   ``/opt/redash/redash.0.5.0.b685``).
4. Unpack the tarball (``tar -C {dir} -xvf {tarball path}``).
5. Link the ``/opt/redash/.env`` file into this directory.
6. Apply any new migrations.
7. Link ``/opt/redash/current`` to the new version.
8. Install any new requirements: ``sudo pip install -r requirements.txt``.
9. Restart the web server and Celery workers.
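Even without Fabric, the same steps can be done by hand. A rough sketch, assuming the standard ``/opt/redash`` layout (the version string is illustrative, and the tarball URL comes from the releases page):

.. code:: bash

    VERSION=redash.0.5.0.b685                     # illustrative version string
    sudo mkdir /opt/redash/$VERSION
    sudo tar -C /opt/redash/$VERSION -xvf $VERSION.tar.gz
    sudo ln -nfs /opt/redash/.env /opt/redash/$VERSION/.env
    # apply any new migrations here (see the release notes)
    sudo ln -nfs /opt/redash/$VERSION /opt/redash/current
    cd /opt/redash/current && sudo pip install -r requirements.txt
    sudo supervisorctl restart all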

View File

@ -1,10 +0,0 @@
Usage
=====
.. toctree::
:maxdepth: 2
:glob:
usage/*

View File

@ -1,72 +0,0 @@
ElasticSearch: Querying
#######################
ElasticSearch currently supports only simple Lucene-style queries (like
Kibana, but without the aggregations).
Full-blown JSON-based ElasticSearch queries (including aggregations)
will be added later.
Simple query example:
=====================
- Query the index named "twitter"
- Filter by "user:kimchy"
- Return the fields: "@timestamp", "tweet" and "user"
- Return up to 15 results
- Sort by @timestamp ascending
.. code:: json
{
"index" : "twitter",
"query" : "user:kimchy",
"fields" : ["@timestamp", "tweet", "user"],
"size" : 15,
"sort" : "@timestamp:asc"
}
Simple query on a logstash ElasticSearch instance:
==================================================
- Query the index named "logstash-2015.04.\*" (in this case, it's all of
  April 2015)
- Filter by type:events AND eventName:UserUpgrade AND channel:selfserve
- Return fields: "@timestamp", "userId", "channel", "utm\_source",
"utm\_medium", "utm\_campaign", "utm\_content"
- Return up to 250 results
- Sort by @timestamp ascending
.. code:: json
{
"index" : "logstash-2015.04.*",
"query" : "type:events AND eventName:UserUpgrade AND channel:selfserve",
"fields" : ["@timestamp", "userId", "channel", "utm_source", "utm_medium", "utm_campaign", "utm_content"],
"size" : 250,
"sort" : "@timestamp:asc"
}
Simple query on an ElasticSearch instance:
==========================================
- Query the index named "twitter"
- Filter by "user" equal to "kimchy"
- Return the fields: "@timestamp", "tweet" and "user"
- Return up to 15 results
- Sort by @timestamp ascending
.. code:: json
{
"index" : "twitter",
"query" : {
"match": {
"user" : "kimchy"
}
},
"fields" : ["@timestamp", "tweet", "user"],
"size" : 15,
"sort" : "@timestamp:asc"
}

View File

@ -1,35 +0,0 @@
JIRA (JQL): Querying
####################
*Simple query, just return issues with no filtering:*
.. code:: json
{
}
*Return only specific fields:*
.. code:: json
{
"fields": "summary,priority"
}
*Return only specific fields and filter by priority:*
.. code:: json
{
"fields": "summary,priority",
"jql": "priority=medium"
}
*Count the number of issues with priority=medium:*
.. code:: json
{
"queryType": "count",
"jql": "priority=medium"
}

View File

@ -1,106 +0,0 @@
Ongoing Maintenance and Basic Operations
########################################
Configuration and logs
======================
The supervisor config can be found in
``/opt/redash/supervisord/supervisord.conf``.
There you can see the names of its programs (``redash_celery``,
``redash_server``) and the location of their logs.
Restart
=======
* **Restart all processes**: ``sudo supervisorctl restart all``.
* **Restart the Web server**: ``sudo supervisorctl restart redash_server``.
* **Restart Celery workers**: ``sudo supervisorctl restart redash_celery``.
Restarting Celery Workers & the Queries Queue
---------------------------------------------
If you are handling a problem and need to stop the currently
running queries and reset the queue, follow the steps below.
1. Stop Celery: ``sudo supervisorctl stop redash_celery`` (Celery might
   take some time to stop if it's in the middle of running a query).
2. Flush redis: ``redis-cli flushall``.
3. Start Celery: ``sudo supervisorctl start redash_celery``.
Changing the Number of Workers
==============================
By default, Celery will start a worker per CPU core. Because most of
Re:dash's tasks are IO-bound, the real limit on the number of workers you
can use depends on the amount of memory your machine has. It's
recommended to increase the number of workers to support more concurrent
queries.
1. Open the supervisord configuration file:
``/opt/redash/supervisord/supervisord.conf``
2. Edit the ``[program:redash_celery]`` section and add the ``-c`` parameter
   with the number of concurrent workers you need to the *command* value.
3. Restart supervisord to apply new configuration:
``sudo /etc/init.d/redash_supervisord restart``.
DB
==
Backup Re:dash's DB:
--------------------
Uncompressed backup: ``sudo -u redash pg_dump > backup_filename.sql``
Compressed backup: ``sudo -u redash pg_dump redash | gzip > backup_filename.gz``
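To restore a plain-text backup into the ``redash`` database (a sketch; on a live instance, stop the services first):

.. code:: bash

    sudo -u redash psql redash < backup_filename.sql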
Version
=======
See current version:
``bin/run ./manage.py version``
Monitoring
==========
Re:dash ships by default with an HTTP handler that gives you useful information about the
health of your application. The endpoint is ``/status.json`` and requires a super admin
API key if you're not already logged in. This API key can be obtained from
the dedicated tab in your profile.
You'll find below an example output of this endpoint:
.. code-block:: json
{
"dashboards_count": 30,
"manager": {
"last_refresh_at": "1465392784.433638",
"outdated_queries_count": 1,
"query_ids": "[34]",
"queues": {
"queries": {
"data_sources": "Redshift data, re:dash metadata, MySQL data, MySQL read-only, Redshift read-only",
"size": 1
},
"scheduled_queries": {
"data_sources": "Redshift data, re:dash metadata, MySQL data, MySQL read-only, Redshift read-only",
"size": 0
}
}
},
"queries_count": 204,
"query_results_count": 11161,
"redis_used_memory": "6.09M",
"unused_query_results_count": 32,
"version": "0.10.0+b1774",
"widgets_count": 176,
"workers": []
}
If you plan to hit this endpoint without being logged in, you'll need to provide your API key as a query parameter. Example endpoint with an API key: ``/status.json?api_key=fooBarqsLlGJQIs3maPErUxKuxwWGIpDXoSzQsx7xdv``
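For example, with ``curl`` (the domain and key are placeholders):

.. code:: bash

    $ curl "https://redash.example.com/status.json?api_key=<your API key>"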

View File

@ -1,98 +0,0 @@
MongoDB: Querying
#################
Simple query example:
=====================
.. code:: json
{
"collection" : "my_collection",
"query" : {
"date" : {
"$gt" : "ISODate(\"2015-01-15 11:41\")",
},
"type" : 1
},
"fields" : {
"_id" : 1,
"name" : 2
},
"sort" : [
{
"name" : "date",
"direction" : -1
}
]
}
Live example on the demo instance:
http://demo.redash.io/queries/394/source.
Aggregation
===========
The aggregation support uses a syntax similar to the one used in PyMongo; however,
to preserve the correct order of sorting, it uses a regular list for the "$sort"
operation, which is converted into a SON (sorted dictionary) object before
execution.
Aggregation query example:
.. code:: json
{
"collection" : "things",
"aggregate" : [
{
"$unwind" : "$tags"
},
{
"$group" : {
"_id" : "$tags",
"count" : { "$sum" : 1 }
}
},
{
"$sort" : [
{
"name" : "count",
"direction" : -1
},
{
"name" : "_id",
"direction" : -1
}
]
}
]
}
Live examples on the demo instance:
1. http://demo.redash.io/queries/393/source
2. http://demo.redash.io/queries/387/source
MongoDB Extended JSON Support
=============================
We support `MongoDB Extended JSON <https://docs.mongodb.com/manual/reference/mongodb-extended-json/>`__ along with our own extension - ``$humanTime``:
.. code:: json
{
"collection": "date_test",
"query": {
"lastModified": {
"$gt": {
"$humanTime": "3 years ago"
}
}
},
"limit": 100
}
It accepts human-readable strings like the above ("3 years ago", "yesterday", etc.) or timestamps.
Live example on the demo instance: http://demo.redash.io/queries/2112/source.

View File

@ -1,34 +0,0 @@
Permissions Model
#################
In version 0.9.0 we introduced a new permissions model based on groups. Each user joins the ``Default`` group by default, but
can be a member of any number of groups.
Group membership defines the actions you're allowed to take (although currently there is no UI to edit group action permissions),
as well as which data sources you have access to (for this there is a UI).
How does it work?
=================
* Each user belongs to one or more groups. By default each user joins the ``Default`` group, so the common
  data sources should be associated with this group.
* Each data source will be associated with one or more groups. Each connection to a group defines
  whether the group has full access to this data source (view existing queries and run new ones) or view-only access,
  which allows only viewing existing queries and results.
* Any dashboard can contain visualizations from any data source (as long as the creating user has access to them). When
  a user who doesn't have access to some visualization (because they don't have access to the data source) opens a dashboard,
  they will see that there is a visualization there but won't see its details.
.. figure:: https://cloud.githubusercontent.com/assets/71468/12002946/dc5032ca-ab16-11e5-90e7-aae9234a596b.png
Dashboard widget with a visualization the user doesn't have access to.
If a user has access to at least one widget on a dashboard, they can see this dashboard in the list of all dashboards.
What if I want to limit the user to only some tables?
=====================================================
The idea is to leverage your database's security model: create a database user with access only to the tables/columns you
want to expose. Create a data source that uses this user, and then associate it with a group of users who need
this level of access.

View File

@ -1,44 +0,0 @@
Special Features
#################
Re:dash has a lot of very useful features, and most of them are easy to discover through the UI. This page covers the less well-known ones.
Queries
========
It is possible to have filters for query results and visualizations. Filters let you restrict the results to one or more values. Filters are enabled by following a naming convention for columns.
If you want to focus only on a specific value, you will need to alias your column to ``<columnName>::filter``. Here is an example:
.. code:: sql
select action as "action::filter", count (0) as "actions count"
from events
group by action
You can see this query and the rendered UI `here <http://demo.redash.io/queries/143/source#table>`_.
If you are interested in multi-filters (meaning that you can select multiple values), you will need to alias your column to ``<columnName>::multi-filter``. Here is an example:
.. code:: sql
select action as "action::multi-filter", count (0) as "actions count"
from events
group by action
You can see this query and the rendered UI `here <http://demo.redash.io/queries/144/source#table>`_.
Note that you can use ``__filter`` or ``__multiFilter`` (without double quotes) if your database doesn't support ``::`` in column names (such as BigQuery).
Dashboards
==========
It is possible to group multiple dashboards in the dashboards menu. To do this, you need to follow a naming convention, using a colon (``:``) to separate the dashboard group from the actual dashboard name. For example, if you name two dashboards ``Foo: Bar`` and ``Foo: Baz``, they will be grouped under the ``Foo`` namespace in the dropdown menu.
If you've got queries that have filters and you want to apply filters at the dashboard level (applying to all queries), you need to set a flag. You can do this through the admin interface at ``/admin/dashboard``, or manually by setting the ``dashboard_filters_enabled`` column of the ``dashboards`` table to ``TRUE`` in the Re:dash database.
Exporting query results to CSV or JSON
======================================
Query results can be automatically exported to CSV or JSON by using your API key. Your API key can be found when viewing your profile, from the top right menu in the navigation bar.
The format of the URL is the following: ``https://<redash_domain>/api/queries/<query_id>/results.(csv|json)?api_key=<your_api_key>``. Here is a working example: `<http://demo.redash.io/api/queries/63/results.json?api_key=874fcd93ce4b6ef87a9aad41c712bcd5d17cdc8f>`_.
Using this URL you can easily import query results directly into Google Spreadsheets, using the ``importdata`` function. For example: ``=importdata("...")``.
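Similarly, you can pull results from the command line with ``curl`` (the domain, query id and API key are placeholders):

.. code:: bash

    $ curl -o results.csv "https://redash.example.com/api/queries/<query_id>/results.csv?api_key=<your API key>"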

View File

@ -2,7 +2,7 @@
<a href="http://redash.io">Re:dash</a> <span ng-bind="version"></span> <small ng-if="newVersionAvailable" ng-cloak class="ng-cloak"><a href="http://version.redash.io/">(New Re:dash version available)</a></small>
<ul class="f-menu">
<li><a href="http://docs.redash.io/">Documentation</a></li>
<li><a href="https://redash.io/help/">Documentation</a></li>
<li><a href="http://github.com/getredash/redash">Contribute</a></li>
</ul>
</footer>