What it takes to run Toucan
Toucan Toco is a web application built “cloud first”: our packages and deployment scripts are very easy to use on-premises, as long as you are used to running web services.
Toucan Toco application packages are used in our own cloud deployments hundreds of times a day. That said, we know that not every organisation we work with is used to web service technologies and processes, which is why we provide our partners and technical points of contact with a Toucan Toco installer certification. This certification is delivered after a day of training (2x4h) where trainees install Toucan Toco several times with different settings for various use cases.
Please contact your Toucan Toco sales representative to organise this training. If you do not have resources to spare for this, we will put you in contact with our certified partners. We also provide a self-assessment form to determine your level of familiarity with the Toucan Toco stack and deployment environment.
Resource needs are defined by several factors:
- CPU: depends directly on the maximum number of users connected at the same time, the complexity of the preprocessing scripts, whether all services are hosted on the same server…
- Memory: the size of all data that will be crunched during each update (our recommendation is three times this amount, so that preprocessing scripts can pivot and compute extra data)
- Storage: a typical installation uses around 1 GB, but additional space should be planned for the database, data source files and assets
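As a sketch of the memory rule of thumb above (the function name and the 4 GB figure are purely illustrative):

```python
def recommended_memory_gb(update_data_gb: float) -> float:
    """Recommend RAM as three times the data crunched during each
    update, leaving headroom for preprocessing scripts to pivot
    and compute extra data (sizing rule from the list above)."""
    return 3 * update_data_gb

# Hypothetical project crunching 4 GB of data per update
print(recommended_memory_gb(4))  # → 12
```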
For single projects, our recommendation and typical configuration are:
- CPU: Intel® Xeon® E3 1245 v5 (4C/8T @3.5 Ghz)
- Memory: 32 GB DDR4 ECC
- Storage: 250 GB SSD
Of course, the right sizing depends directly on your data and usage.
The installation and upgrade processes require Internet access to a specific list of domains during the whole procedure.
The On-Premise node needs to be able to reach the following domains on ports 80 (HTTP) and 443 (HTTPS):
- deb.nodesource.com (for Debian OS family only)
- rpm.nodesource.com (for RedHat OS family only)
- keyserver.ubuntu.com (for Debian OS family only)
- packages.microsoft.com (only when using Azure Microsoft SQL Server)
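The whitelist above depends on your OS family and database choice; the selection logic can be sketched as follows (the function and parameter names are illustrative, the domain list is taken from this section):

```python
def domains_to_whitelist(os_family: str, uses_azure_mssql: bool = False) -> list:
    """Return the domains the on-premise node must be able to reach
    on ports 80 and 443 during installation or upgrade."""
    domains = []
    if os_family == "debian":
        domains += ["deb.nodesource.com", "keyserver.ubuntu.com"]
    elif os_family == "redhat":
        domains += ["rpm.nodesource.com"]
    if uses_azure_mssql:
        domains.append("packages.microsoft.com")
    return domains

print(domains_to_whitelist("debian"))
# → ['deb.nodesource.com', 'keyserver.ubuntu.com']
```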
This access is restricted both in time and in the list of domains, and can be disabled right after the installation.
Please note that if you choose to configure the Toucan Toco backend to send emails (such as password resets) via Sendgrid, you will also need to permanently whitelist api.sendgrid.com on ports 80 and 443.
Toucan Toco sends emails, for example to set up the accounts of new users. You will need:
- either access to an SMTP service
- or a Sendgrid account
To be able to reach the Toucan Toco backend and frontend, you need two DNS records that resolve to the nodes where you will install the stack. For example:
- toucantoco.example.com -> resolves to the frontend server
- api-toucantoco.example.com -> resolves to the backend server
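For illustration, the two records could look like this in a BIND-style zone file (the hostnames are the examples above; the IP addresses are placeholders from the documentation range):

```
toucantoco.example.com.      IN  A  192.0.2.10   ; frontend server
api-toucantoco.example.com.  IN  A  192.0.2.11   ; backend server
```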
Check the requirements
We provide a version of our backend stack as a Docker image that you only need to load and run.
The container is structured as follows:
The container holds the whole Toucan Toco stack to make its installation and use easier (you don’t need a specific version of docker-compose or a container orchestration system like Kubernetes).
However, you can configure the container to use external MongoDB/Redis services; in that case, these services are not started inside the container:
Please note that only the Nginx HTTP port (80) is exposed, and it is not secured.
For production use, you will need an HTTPS reverse proxy in front of the Toucan Toco container.
Here is one possible implementation:
This way, you expose the container’s HTTP port only to your reverse proxy, according to your own rules and security policies.
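As an illustration only, a minimal Nginx HTTPS reverse proxy in front of the container could look like this (the server name, certificate paths and upstream port mapping are assumptions to adapt to your environment):

```nginx
server {
    listen 443 ssl;
    server_name toucantoco.example.com;

    # Placeholder certificate paths -- use your own
    ssl_certificate     /etc/ssl/certs/toucantoco.example.com.crt;
    ssl_certificate_key /etc/ssl/private/toucantoco.example.com.key;

    location / {
        # Forward to the container's exposed HTTP port
        # (assumed published on 127.0.0.1:80 here)
        proxy_pass http://127.0.0.1:80;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-Proto https;
    }
}
```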
Modularity and Scaling
Several parts of the Toucan Toco backend are totally modular.
The main parts concerned are the MongoDB and Redis services, for which you can choose:
- to let the deploy script install them directly on your target node,
- or to plug the Toucan Toco stack into your own redundant services.
This modular approach makes it easy to scale the stack, for example as follows:
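For illustration, plugging the container into external redundant services could look like the sketch below; the image name and the environment variable names are hypothetical — refer to your Toucan Toco deployment package for the actual settings:

```shell
# Hypothetical invocation -- names are illustrative only
docker run -d \
  -p 127.0.0.1:80:80 \
  -e MONGODB_URI="mongodb://mongo1.internal:27017,mongo2.internal:27017/?replicaSet=rs0" \
  -e REDIS_URL="redis://redis.internal:6379" \
  toucantoco/backend
```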