This repository has been archived by the owner on Oct 30, 2020. It is now read-only.

Log Storage set up How To

clamprecht edited this page Jan 20, 2012 · 2 revisions

How to set up IndexTank's storage service (LogStorage)

To set up the storage service you have to do the following:

  1. Select one or more machines to run the service
  2. Install the software requirements on all machines
  3. Configure your network so your api can access certain ports on these machines
  4. Configure your network so all of your workers can access certain ports on these machines
  5. Install the storage software on these machines
  6. Configure the master server
  7. Configure the slave servers (optional)
  8. Configure your network so your slaves have rsync (ssh) access to the master (optional)
  9. Start the LogWriterServer on all your servers
  10. Create the Service objects in the db with name "storage"
  11. Start the IndexesLogServer on all your servers
  12. Create the Service object in the db with name "recovery"
  13. Start the synchronization scripts on the slaves (optional)

Install the software requirements

  • Install daemontools or your preferred method for running services.
  • Install Java 6 (Sun's VM is recommended).

Configure network for api access

The api will need to connect to the writer processes on the servers. The suggested port for the writers is 15000, but you can choose whichever you like; just make sure the api machine has access to it.

Configure network for worker access

The workers will need to connect to the reader processes on the master server. The suggested port for the readers is 15100, but you can choose whichever you like; just make sure the worker machines have access to it.
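A quick TCP probe is enough to sanity-check both of these network rules from the api and worker machines. Here's a minimal sketch; the master address 10.0.0.1 and the default ports are assumptions, so substitute your own:

```python
import socket

def can_connect(host, port, timeout=3.0):
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# From the api machine:  can_connect('10.0.0.1', 15000)  # writer port
# From a worker machine: can_connect('10.0.0.1', 15100)  # reader port
```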

Install the storage software on these machines

  • Choose the user that will be running the service.
  • Copy the storage folder (from the repo) to its home.
  • Build and package the indextank-engine jar with dependencies.
  • Place the jar inside ~/storage/lib.
  • Create the following directories with full access for that user: /data/storage and /data/logs
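The steps above boil down to a few commands. This is only a sketch run as the service user; the repo checkout path and the exact jar name are assumptions, so use whatever your build actually produces:

```shell
# Copy the storage folder from your repo checkout to the service user's home
cp -r /path/to/repo/storage ~/

# Place the jar with dependencies where the service scripts expect it
mkdir -p ~/storage/lib
cp /path/to/repo/target/indextank-engine-*-jar-with-dependencies.jar ~/storage/lib/

# Create the data directories with full access for the service user
sudo mkdir -p /data/storage /data/logs
sudo chown -R "$USER" /data/storage /data/logs
```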

Configure the master server

You will need to choose one of the machines to be the master. If you're running a single machine, that's your master.

  • Run touch /data/master
  • Run touch /data/safe_to_read (you know it's safe to read because it's your initial master)

Performance notes: The live logs are written to /data/storage/raw/live and gradually moved to /data/storage/raw/history once they are dealt. The dealt segments are written under /data/storage/indexes. For better api latency, it is recommended to write the raw logs to a separate physical drive, ideally reserved exclusively for this task. It is also recommended to use the cfq scheduler on that drive to take advantage of the ionice directives in the service scripts. These directives prioritize write requests from the writer over read requests from the dealer and reader in the IndexesLogServer. This matters because write requests are synchronous within an api request and thus add to the total cost of the request. However, writes are synced to disk in parallel every second rather than synchronously within the call, so the benefit may not be significant.
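The scheduler and priority setup described above might look like the following. The device name /dev/sdb and the exact priority values are assumptions for illustration; the real ionice directives live in the service scripts in the repo:

```shell
# Select the cfq scheduler on the raw-log drive so ionice classes take effect
# (assumes the drive is /dev/sdb; check with: cat /sys/block/sdb/queue/scheduler)
echo cfq > /sys/block/sdb/queue/scheduler

# Favor the writer over the dealer/reader using best-effort (-c2) priorities:
# 0 is the highest priority, 7 the lowest
ionice -c2 -n0 -p "$(pgrep -f LogWriterServer)"
ionice -c2 -n7 -p "$(pgrep -f IndexesLogServer)"
```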

Configure the slave servers

This step is optional, it's only needed if you want to have redundant storage. TO BE DOCUMENTED.

Configure network for slave -> master rsync access

This step is optional, it's only needed if you want to have redundant storage. TO BE DOCUMENTED.

Start the LogWriterServer process

The recommended method is the supervise mechanism provided by daemontools. Sample service scripts are available in the repository under the server_configurations/storage/etc/service/storage-writer folder; you may want to adjust the port numbers or other parameters defined inside the run script. If you want to use an alternate method to manage your services, or just run them manually, look at the run script to understand exactly how the writer is started.
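For orientation, a daemontools run script for the writer would have roughly this shape. This is a sketch, not the repo's script: the service user, jar name, main class, and flags here are assumptions, so defer to the copy in server_configurations/storage/etc/service/storage-writer:

```shell
#!/bin/sh
# Hypothetical run script for supervise; the real one is in the repo.
exec 2>&1
exec setuidgid storage \
  java -cp /home/storage/storage/lib/indextank-engine-jar-with-dependencies.jar \
  LogWriterServer --port 15000
```

With daemontools, symlinking this service directory under /service makes svscan pick it up and keep the writer running.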

Create "storage" Service objects

The storage cluster is defined in the database via the Service django model, which maps to the "storefront_service" table. You can create these objects either through django models or directly in the database. There should be one Service object with name="storage" per storage machine; the host should be an address by which the api can reach the machine, and the port should be the writer's port. Here's a sample python session to create a Service object from the storefront code:

$ cd storefront
$ python manage.py shell
> from storefront.models import Service
> s = Service()
> s.name = 'storage'
> s.type = 'master'    # this currently has no use
> s.host = '10.0.0.1'  # this is the address of the storage master
> s.port = 15000
> s.save()

Start the IndexesLogServer process

The recommended method is the supervise mechanism provided by daemontools. Sample service scripts are available in the repository under the server_configurations/storage/etc/service/storage-indexes folder; you may want to adjust the port numbers or other parameters defined inside the run script. If you want to use an alternate method to manage your services, or just run them manually, look at the run script to understand exactly how the IndexesLogServer is started.

Create "recovery" Service object

For index recovery, the Service model should define the access point for the reader. A single object should be created with name="recovery", with host pointing to the master machine and port set to the reader's port.

$ cd storefront
$ python manage.py shell
> from storefront.models import Service
> s = Service()
> s.name = 'recovery'
> s.host = '10.0.0.1'
> s.port = 15100
> s.save()

Notes: this information is loaded and sent to the index engines when they are started. If the master changes during a recovery process (thus requiring this object to be updated), ongoing recoveries will keep trying to fetch further pages from a machine that is possibly dead until the index engine eventually gives up. One possible solution is to use a host that can be dynamically repointed on master promotion. For instance, if Amazon's Elastic IPs are used, promoting a master could also entail moving the recovery elastic ip to it. Alternatively, a custom DNS entry could be updated on master promotion. Another way is via the workers' /etc/hosts files, though that requires every worker to be updated on master promotion. Finally, the issue can simply be ignored: deploys that are recovering are aborted and new ones spawned.
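The /etc/hosts option can be scripted. Below is a sketch of the rewrite step only; the hostname storage-master is a hypothetical name, and in practice the updated file would still have to be pushed to every worker on promotion:

```python
def repoint_host(hosts_text, hostname, new_ip):
    """Return hosts-file content with `hostname` mapped to `new_ip`,
    replacing any existing entry for it (or appending one)."""
    out, replaced = [], False
    for line in hosts_text.splitlines():
        fields = line.split()
        # A hosts line is "<ip> <name> [<name>...]"; skip comments
        if fields and not fields[0].startswith("#") and hostname in fields[1:]:
            out.append(f"{new_ip}\t{hostname}")
            replaced = True
        else:
            out.append(line)
    if not replaced:
        out.append(f"{new_ip}\t{hostname}")
    return "\n".join(out) + "\n"

# On promotion, each worker would rewrite /etc/hosts:
#   content = open('/etc/hosts').read()
#   open('/etc/hosts', 'w').write(repoint_host(content, 'storage-master', new_ip))
```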

Start slave synchronization scripts

This step is optional, it's only needed if you want to have redundant storage. TO BE DOCUMENTED.