From 74f9c9aa8d44a62b093c0503c85566b3d0fc5276 Mon Sep 17 00:00:00 2001
From: Dai Feng
Date: Tue, 18 Dec 2018 23:54:32 +0800
Subject: [PATCH] For branch next, add an expression function named
 FirstDifference, wh… (#1458)
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

* For branch next, add an expression function named FirstDifference, which calculates the first difference of a time series. I noticed there is already a MovingAverage calculation, so I thought I could enrich the set of mathematical functions with this addition (a sketch of the computation follows below).

* add some unit tests for FirstDifference

Bump version to 2.5.0-SNAPSHOT.

Fix a compilation error about missing FirstDifference (#1471)

Signed-off-by: Chris Larsen

Bugfix of FsckOptions. (#1464)

Signed-off-by: Chris Larsen

CORE: (#1472)
- Add RpcResponder for handling callbacks asynchronously

UTILS:
- Add two convenience methods in Config

Signed-off-by: Chris Larsen

fix #1581 by correcting an edge case in TsdbQuery.getScanEndTimeSeconds() (#1582)

Dockerfile that works without a script. (#1739)

replace FOREVER with a valid value in table creation (#1967)

Co-authored-by: Ion DULGHERU

Jackson has a serious security problem in 2.9.5, which will cause RCE (#2034)

* Jackson has a serious security problem in 2.9.5, which will cause RCE
  https://github.com/FasterXML/jackson-databind/issues/2295

* Jackson has a serious security problem in 2.9.5, which will cause RCE
  https://github.com/FasterXML/jackson-databind/issues/2295

Co-authored-by: chi-chi weng <949409306@qq.com>

Pr 1663 (#1966)

* Make UniqueIdRpc aware of the mode

* Update the javadoc on the new method and rename test methods to be more descriptive

Co-authored-by: Simon Matic Langford

Re-introduce query timeouts. (#2035)

Co-authored-by: Itamar Turner-Trauring

Updating maven central URLs and versions to match what is available now (#2039)

Fixes #1899
Fixes #1941

always write cli tools to stdout (#1488)

Signed-off-by: Chris Larsen

Fix OpenTSDB#1632 (#1634)

Add "check_tsd_v2" script (#1567)

The enhanced check_tsd script evaluates each individual metric group separately when given a filter.

Collect stats from the meta cache plugin if configured (#1649)

Fix SaltScanner race condition on spans maps (#1651)

* Fix SaltScanner race condition on spans maps
* Fix 1.6 compatibility

Synchronise the KVs list for scanner results

Synchronises the list that holds the KeyValues produced by the scanner callbacks. The list is accessed from multiple threads at a time and wasn't thread-safe, causing inconsistent results and partial loss of data in the response.

Relates to: #1753
Resolves: #1760

Allow the rollup downsample and series aggregators to be different

Fix TestSaltScannerHistogram; it looks like the method was renamed and the unit tests were not adjusted.

ExplicitTags filtering with FuzzyFilters

Fix PR 1896 with the fuzzy filter list so that it honors the regex filter and properly ignores rows that don't match the explicit filter. Also sort the fuzzy filter list in ascending order and use a static comparator instead of instantiating one on each call.
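As context for the FirstDifference entry above: the first difference of a series replaces each point with its delta from the previous point. Below is a minimal, self-contained sketch of that computation, mirroring the convention used by the firstDiff() helper later in this patch, where the first output point is pinned to 0.0 because the first sample has no predecessor. The class and method here are illustrative only, not the actual OpenTSDB expression API.

import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class FirstDifferenceSketch {

  /**
   * Returns the first difference of a series: out[0] is 0.0 by convention
   * (the first sample has no predecessor), and out[i] = in[i] - in[i - 1].
   */
  static List<Double> firstDiff(final List<Double> values) {
    final List<Double> diff = new ArrayList<Double>(values.size());
    if (values.isEmpty()) {
      return diff;
    }
    diff.add(0.0); // no predecessor for the first point
    for (int i = 1; i < values.size(); i++) {
      diff.add(values.get(i) - values.get(i - 1));
    }
    return diff;
  }

  public static void main(final String[] args) {
    // [3, 5, 4, 4] -> [0.0, 2.0, -1.0, 0.0]
    System.out.println(firstDiff(Arrays.asList(3.0, 5.0, 4.0, 4.0)));
  }
}

The patch's implementation additionally keeps the original timestamps alongside the deltas, so the output series has the same length and time axis as the input.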
Test rollup filter fix for #1083

Fix concurrent result reporting from scanners

Fixes a concurrency bug where scanners report their results into a shared map and would overwrite each other's results.

Resolves: #1753

Update Maven jar URLs to use HTTPS

Remove excess param in javadoc for RpcHandler

Fix check_tsd_v2 (#1937)

* renamed the instance name of the logger

  The previous name was copied from another script; cosmetic change only.

* Change behaviour of the --ignore-recent option

  The previous behaviour was to fetch data from OpenTSDB from --duration seconds ago to time.now(), and then try to remove timestamps that fell within the last --ignore-recent seconds; however, the logic was flawed and it actually included only those seconds. Furthermore, OpenTSDB supports setting an "end" parameter, so we use this to fetch only the data we want. For example, -d 180 -I 80 renders query parameters that look like `?start=180s-ago&end=80s-ago`. Keeps it simple. Also added debug logging to output the actual query sent to OpenTSDB when the --debug option is enabled.

* fixed the logic of the --percent-over parameter

  The previous behaviour didn't work due to flawed logic that would set "crit" or "warn" to True regardless. This change fixes that.

* better output from logging

  Added log messages that are consistent across alerting scenarios and changed the format of some floats. Fixed a log message that displayed the "crit" value where it should have shown the "warn" value.

* Fixed a bug in the logic that parses results

  Removed an if statement that `continue`d the for-loop if a result was not already a `crit` or `warn`; however, this check also made the logic skip the test for whether no values were returned by OpenTSDB when the -A flag was specified to alert in such scenarios.

* changed the check for the timestamp type

  The previous behaviour was to check if a timestamp could be cast as a float, which is a bit weird because OpenTSDB will return integers. I doubt that OpenTSDB would return a timestamp that is not an integer to begin with, so I suspect this check is redundant, but leaving it in for now regardless, as per the discussion in the PR.

Rename maxScannerUidtoStringTime into maxScannerUidToStringTime (#1875)

Fix the missing index from #1754 in the salt scanner.

Force Sunday as the first day of the week.

Tweak TestTsdbQueryQueries to pass on older Java versions.

Fix the min case for doubles in AggregationIterator.

Fix the Screw Driver config.

Fix UT for JDK8

PR for SD config.

Fix: Rollup queries with the count aggregator produce unexpected results (#1895)

Co-authored-by: Tony Di Nucci

Fixed function description

Fixes #841 (#2040)

Added tracking of metrics which are null due to auto_metric being disabled

Fixes #786 (#2042)

Add support for splitting rollup queries (#1853)

* Add an SLA config flag for rollup intervals

  Adds a configuration option for rollup intervals to specify their maximum acceptable delay. Queries that cover the time between now and that maximum delay will need to query other tables for that time interval.

* Add a global config flag to enable splitting queries

  Adds a global config flag to enable splitting queries that would hit the rollup table when that rollup table has a delay SLA configured. In that case, this feature allows splitting a query into two: one that gets the data from the rollup table up to the time where it is guaranteed to be available, and one that gets the rest from the raw table.

* Add a new SplitRollupQuery

  Adds a SplitRollupQuery class that supports splitting a rollup query into two separate queries. This is useful for when a rollup table is filled by, e.g.,
a batch job that processes the data from the previous day on a daily basis. Rollup data for yesterday will then only be available some time today. This delay SLA can be configured on a per-table basis; the delay specifies by how much time the table may lag behind real time. If a query comes in that would read data from that blackout period, where data is only available in the raw table but not yet guaranteed to be in the rollup table, the incoming query can be split into two using the SplitRollupQuery class. It wraps one query that reads from the rollup table up to the last timestamp guaranteed to be available under the SLA, and one that gets the remaining data from the raw table.

* Extract an AbstractQuery

  Extracts an AbstractQuery from the TsdbQuery implementation, since we'd like to reuse some parts of it in other Query classes (in this case SplitRollupQuery).

* Extract an AbstractSpanGroup

* Avoid NullPointerException when setting the start time

  Avoids a NullPointerException that happened when we tried to set the start time on a query that would be eligible to split but, due to the SLA config, only hit the raw table anyway.

* Scale timestamps to milliseconds for split queries

  Scales all timestamps for split queries to milliseconds. It's important to maintain consistent units between all the partial queries that make up the bigger one.

* Fix starting-time error for split queries

  Fixes a bug that would happen when the start time of a query aligned perfectly with the time configured in the SLA for the delay of a rollup table. For a defined SLA of, e.g., 1 day, if the start time of the query was exactly 1 day ago, the end time of the rollup part of the query would be updated and then be equal to its start time. That isn't allowed and causes a query exception.
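To make the split behaviour above concrete, here is a minimal sketch of the boundary decision, assuming the delay SLA is expressed in seconds. The class, method, and return convention below are illustrative only, not the actual SplitRollupQuery API from this patch.

import java.util.Arrays;

public class SplitBoundarySketch {

  /**
   * Decides how a [startSec, endSec] query is served when the rollup table
   * may lag behind real time by rollupDelaySec. Returns
   * {rollupStart, rollupEnd, rawStart, rawEnd}; -1 marks an unused range.
   */
  static long[] splitQuery(final long startSec, final long endSec,
                           final long nowSec, final long rollupDelaySec) {
    // Data newer than this boundary is only guaranteed to be in the raw table.
    final long boundarySec = nowSec - rollupDelaySec;
    if (endSec <= boundarySec) {
      // The whole range is old enough to be served from the rollup table.
      return new long[] { startSec, endSec, -1, -1 };
    }
    if (startSec >= boundarySec) {
      // The whole range falls inside the blackout period; this branch also
      // covers a start time that aligns exactly with the boundary, so the
      // rollup part never degenerates into a zero-length query.
      return new long[] { -1, -1, startSec, endSec };
    }
    // Split: rollup data up to the boundary, raw data from the boundary on.
    return new long[] { startSec, boundarySec, boundarySec, endSec };
  }

  public static void main(final String[] args) {
    final long now = 1_600_000_000L;
    final long day = 86_400L;
    // A query starting exactly one SLA period ago is served from raw only.
    System.out.println(Arrays.toString(splitQuery(now - day, now, now, day)));
    // A query over the last week splits at now - 1 day.
    System.out.println(Arrays.toString(splitQuery(now - 7 * day, now, now, day)));
  }
}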
--- tools/docker/Dockerfile => Dockerfile | 8 +- Makefile.am | 3 + configure.ac | 3 +- src/core/IncomingDataPoints.java | 71 +--- src/core/Query.java | 2 +- src/core/RpcResponder.java | 110 ++++++ src/core/SaltScanner.java | 22 +- src/core/TSDB.java | 82 ++++- src/core/TsdbQuery.java | 295 ++++++++------- src/create_table.sh | 2 +- src/query/expression/ExpressionFactory.java | 1 + src/query/expression/FirstDifference.java | 101 +++++ src/tsd/PutDataPointRpc.java | 107 +++--- src/tsd/RpcManager.java | 5 +- src/tsd/UniqueIdRpc.java | 38 ++ src/utils/Config.java | 35 ++ test/core/TestRpcResponsder.java | 65 ++++ test/core/TestTsdbQuery.java | 20 + .../query/expression/TestFirstDifference.java | 348 ++++++++++++++++++ test/tsd/TestUniqueIdRpc.java | 91 ++++- .../apache/commons-math3-3.4.1.jar.md5 | 2 +- third_party/jackson/include.mk | 2 +- .../jackson-annotations-2.9.10.jar.md5 | 1 + .../jackson/jackson-core-2.9.10.jar.md5 | 1 + .../jackson/jackson-databind-2.9.10.jar.md5 | 1 + third_party/kryo/asm-4.0.jar.md5.1 | 1 + third_party/kryo/include.mk | 12 +- third_party/kryo/kryo-3.0.0.jar.md5 | 1 + third_party/kryo/kryo-4.0.0.jar.md5 | 1 + third_party/kryo/minlog-1.3.jar.md5 | 1 + .../kryo/reflectasm-1.10.0-shaded.jar.md5 | 1 + tools/docker/docker.sh | 16 - 32 files changed, 1146 insertions(+), 303 deletions(-) rename tools/docker/Dockerfile => Dockerfile (88%) create mode 100644 src/core/RpcResponder.java create mode 100644 src/query/expression/FirstDifference.java create mode 100644 test/core/TestRpcResponsder.java create mode 100644 test/query/expression/TestFirstDifference.java create mode 100644 third_party/jackson/jackson-annotations-2.9.10.jar.md5 create mode 100644 third_party/jackson/jackson-core-2.9.10.jar.md5 create mode 100644 third_party/jackson/jackson-databind-2.9.10.jar.md5 create mode 100644 third_party/kryo/asm-4.0.jar.md5.1 create mode 100644 third_party/kryo/kryo-3.0.0.jar.md5 create mode 100644 third_party/kryo/kryo-4.0.0.jar.md5 create mode 100644 third_party/kryo/minlog-1.3.jar.md5 create mode 100644 third_party/kryo/reflectasm-1.10.0-shaded.jar.md5 delete mode 100755 tools/docker/docker.sh diff --git a/tools/docker/Dockerfile b/Dockerfile similarity index 88% rename from tools/docker/Dockerfile rename to Dockerfile index c9410133e1..35dd48583c 100644 --- a/tools/docker/Dockerfile +++ b/Dockerfile @@ -2,7 +2,7 @@ FROM java:openjdk-8-alpine MAINTAINER jonathan.creasy@gmail.com -ENV VERSION 2.3.0-RC1 +ENV VERSION 2.5.0-SNAPSHOT ENV WORKDIR /usr/share/opentsdb ENV LOGDIR /var/log/opentsdb ENV DATADIR /data/opentsdb @@ -32,10 +32,10 @@ ENV TSDB_PORT 4244 WORKDIR $WORKDIR -ADD libs $WORKDIR/libs -ADD logback.xml $WORKDIR +ADD third_party/*/*.jar $WORKDIR/libs/ +ADD src/logback.xml $WORKDIR ADD tsdb-$VERSION.jar $WORKDIR -ADD opentsdb.conf $ETCDIR/opentsdb.conf +ADD src/opentsdb.conf $ETCDIR/opentsdb.conf VOLUME ["/etc/openstsdb"] VOLUME ["/data/opentsdb"] diff --git a/Makefile.am b/Makefile.am index 9bf563d304..443bd4c10f 100644 --- a/Makefile.am +++ b/Makefile.am @@ -81,6 +81,7 @@ tsdb_SRC := \ src/core/RequestBuilder.java \ src/core/RowKey.java \ src/core/RowSeq.java \ + src/core/RpcResponder.java \ src/core/iRowSeq.java \ src/core/SaltScanner.java \ src/core/SeekableView.java \ @@ -126,6 +127,7 @@ tsdb_SRC := \ src/query/expression/ExpressionReader.java \ src/query/expression/Expressions.java \ src/query/expression/ExpressionTree.java \ + src/query/expression/FirstDifference.java \ src/query/expression/HighestCurrent.java \ src/query/expression/HighestMax.java \ 
src/query/expression/IntersectionIterator.java \ @@ -322,6 +324,7 @@ test_SRC := \ test/core/TestRateSpan.java \ test/core/TestRowKey.java \ test/core/TestRowSeq.java \ + test/core/TestRpcResponsder.java \ test/core/TestSaltScanner.java \ test/core/TestSeekableViewChain.java \ test/core/TestSpan.java \ diff --git a/configure.ac b/configure.ac index 20b3399356..3f3569ea36 100644 --- a/configure.ac +++ b/configure.ac @@ -14,7 +14,8 @@ # along with this library. If not, see . # Semantic Versioning (see http://semver.org/). -AC_INIT([opentsdb], [2.4.1-SNAPSHOT], [opentsdb@googlegroups.com]) +AC_INIT([opentsdb], [2.5.0-SNAPSHOT], [opentsdb@googlegroups.com]) + AC_CONFIG_AUX_DIR([build-aux]) AM_INIT_AUTOMAKE([foreign]) diff --git a/src/core/IncomingDataPoints.java b/src/core/IncomingDataPoints.java index 0456e47be2..108e641a6c 100644 --- a/src/core/IncomingDataPoints.java +++ b/src/core/IncomingDataPoints.java @@ -18,6 +18,7 @@ import java.util.Date; import java.util.List; import java.util.Map; +import java.util.concurrent.atomic.AtomicLong; import com.stumbleupon.async.Callback; import com.stumbleupon.async.Deferred; @@ -45,6 +46,11 @@ final class IncomingDataPoints implements WritableDataPoints { */ static final Histogram putlatency = new Histogram(16000, (short) 2, 100); + /** + * Keep track of the number of UIDs that came back null with auto_metric disabled. + */ + static final AtomicLong auto_metric_rejection_count = new AtomicLong(); + /** The {@code TSDB} instance we belong to. */ private final TSDB tsdb; @@ -137,9 +143,14 @@ static byte[] rowKeyTemplate(final TSDB tsdb, final String metric, short pos = (short) Const.SALT_WIDTH(); - copyInRowKey(row, pos, - (tsdb.config.auto_metric() ? tsdb.metrics.getOrCreateId(metric) - : tsdb.metrics.getId(metric))); + byte[] metric_id = (tsdb.config.auto_metric() ? tsdb.metrics.getOrCreateId(metric) + : tsdb.metrics.getId(metric)); + + if(!tsdb.config.auto_metric() && metric_id == null) { + auto_metric_rejection_count.incrementAndGet(); + } + + copyInRowKey(row, pos, metric_id); pos += metric_width; pos += Const.TIMESTAMP_BYTES; @@ -151,60 +162,6 @@ static byte[] rowKeyTemplate(final TSDB tsdb, final String metric, return row; } - /** - * Returns a partially initialized row key for this metric and these tags. The - * only thing left to fill in is the base timestamp. - * - * @since 2.0 - */ - static Deferred rowKeyTemplateAsync(final TSDB tsdb, - final String metric, final Map tags) { - final short metric_width = tsdb.metrics.width(); - final short tag_name_width = tsdb.tag_names.width(); - final short tag_value_width = tsdb.tag_values.width(); - final short num_tags = (short) tags.size(); - - int row_size = (Const.SALT_WIDTH() + metric_width + Const.TIMESTAMP_BYTES - + tag_name_width * num_tags + tag_value_width * num_tags); - final byte[] row = new byte[row_size]; - - // Lookup or create the metric ID. - final Deferred metric_id; - if (tsdb.config.auto_metric()) { - metric_id = tsdb.metrics.getOrCreateIdAsync(metric, metric, tags); - } else { - metric_id = tsdb.metrics.getIdAsync(metric); - } - - // Copy the metric ID at the beginning of the row key. - class CopyMetricInRowKeyCB implements Callback { - public byte[] call(final byte[] metricid) { - copyInRowKey(row, (short) Const.SALT_WIDTH(), metricid); - return row; - } - } - - // Copy the tag IDs in the row key. 
- class CopyTagsInRowKeyCB implements - Callback, ArrayList> { - public Deferred call(final ArrayList tags) { - short pos = (short) (Const.SALT_WIDTH() + metric_width); - pos += Const.TIMESTAMP_BYTES; - for (final byte[] tag : tags) { - copyInRowKey(row, pos, tag); - pos += tag.length; - } - // Once we've resolved all the tags, schedule the copy of the metric - // ID and return the row key we produced. - return metric_id.addCallback(new CopyMetricInRowKeyCB()); - } - } - - // Kick off the resolution of all tags. - return Tags.resolveOrCreateAllAsync(tsdb, metric, tags) - .addCallbackDeferring(new CopyTagsInRowKeyCB()); - } - public void setSeries(final String metric, final Map tags) { checkMetricAndTags(metric, tags); try { diff --git a/src/core/Query.java b/src/core/Query.java index 455f641a25..5cec6c7c80 100644 --- a/src/core/Query.java +++ b/src/core/Query.java @@ -197,7 +197,7 @@ public Deferred configureFromQuery(final TSQuery query, * way we get this one data point is by aggregating all the data points of * that interval together using an {@link Aggregator}. This enables you * to compute things like the 5-minute average or 10 minute 99th percentile. - * @param interval Number of seconds wanted between each data point. + * @param interval Number of milliseconds wanted between each data point. * @param downsampler Aggregation function to use to group data points * within an interval. */ diff --git a/src/core/RpcResponder.java b/src/core/RpcResponder.java new file mode 100644 index 0000000000..97e7b22bb8 --- /dev/null +++ b/src/core/RpcResponder.java @@ -0,0 +1,110 @@ +// This file is part of OpenTSDB. +// Copyright (C) 2010-2017 The OpenTSDB Authors. +// +// This program is free software: you can redistribute it and/or modify it +// under the terms of the GNU Lesser General Public License as published by +// the Free Software Foundation, either version 2.1 of the License, or (at your +// option) any later version. This program is distributed in the hope that it +// will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty +// of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU Lesser +// General Public License for more details. You should have received a copy +// of the GNU Lesser General Public License along with this program. If not, +// see . +package net.opentsdb.core; + +import com.google.common.util.concurrent.ThreadFactoryBuilder; +import java.util.concurrent.ExecutorService; +import java.util.concurrent.Executors; +import java.util.concurrent.TimeUnit; + +import net.opentsdb.utils.Config; +import org.slf4j.Logger; +import org.slf4j.LoggerFactory; + +/** + * + * This class is responsible for building the results of requests and + * responding to clients asynchronously. + * + * It can reduce the number of requests stacking up in AsyncHBase, especially + * put requests. When an HBase RPC has completed, the "AsyncHBase I/O worker" + * just decodes the response and then hands the callback to this class + * asynchronously. We should occupy workers for as short a time as possible + * so that they can remove RPCs from the in-flight state more quickly.
+ * + */ +public class RpcResponder { + + private static final Logger LOG = LoggerFactory.getLogger(RpcResponder.class); + + public static final String TSD_RESPONSE_ASYNC_KEY = "tsd.core.response.async"; + public static final boolean TSD_RESPONSE_ASYNC_DEFAULT = true; + + public static final String TSD_RESPONSE_WORKER_NUM_KEY = + "tsd.core.response.worker.num"; + public static final int TSD_RESPONSE_WORKER_NUM_DEFAULT = 10; + + private final boolean async; + private ExecutorService responders; + private volatile boolean running = true; + + RpcResponder(final Config config) { + async = config.getBoolean(TSD_RESPONSE_ASYNC_KEY, + TSD_RESPONSE_ASYNC_DEFAULT); + + if (async) { + int threads = config.getInt(TSD_RESPONSE_WORKER_NUM_KEY, + TSD_RESPONSE_WORKER_NUM_DEFAULT); + responders = Executors.newFixedThreadPool(threads, + new ThreadFactoryBuilder() + .setNameFormat("OpenTSDB Responder #%d") + .setDaemon(true) + .setUncaughtExceptionHandler(new ExceptionHandler()) + .build()); + } + + LOG.info("RpcResponder mode: {}", async ? "async" : "sync"); + } + + public void response(Runnable run) { + if (async) { + if (running) { + responders.execute(run); + } else { + throw new IllegalStateException("RpcResponder is closing or closed."); + } + } else { + run.run(); + } + } + + public void close() { + // Only invoked from the TSDB shutdown path when async mode is enabled, + // so the executor is non-null here. + if (running) { + running = false; + responders.shutdown(); + } + + boolean completed; + try { + completed = responders.awaitTermination(5, TimeUnit.MINUTES); + } catch (InterruptedException e) { + completed = false; + } + + if (!completed) { + LOG.warn( + "There are still some results that have not been returned to the clients."); + } + } + + public boolean isAsync() { + return async; + } + + private class ExceptionHandler implements Thread.UncaughtExceptionHandler { + @Override + public void uncaughtException(Thread t, Throwable e) { + LOG.error("Uncaught exception in thread: " + t.getName(), e); + } + } +} diff --git a/src/core/SaltScanner.java b/src/core/SaltScanner.java index b5da4e2da7..dc0cc0de15 100644 --- a/src/core/SaltScanner.java +++ b/src/core/SaltScanner.java @@ -491,7 +491,8 @@ final class ScannerCB implements Callback> rows) final List> lookups = filters != null && !filters.isEmpty() ? new ArrayList>(rows.size()) : null; - + + // Fail the query when the timeout is exceeded. + if (this.query_timeout > 0 && fetch_time > (this.query_timeout * 1000000)) { + try { + close(false); + handleException( + new QueryException(HttpResponseStatus.REQUEST_ENTITY_TOO_LARGE, + "Sorry, your query timed out. Time limit: " + + this.query_timeout + " ms, fetch time: " + + (double)(fetch_time)/1000000 + " ms. Please try filtering " + + "using more tags or decreasing your time range.")); + return false; + } catch (Exception e) { + LOG.error("Sorry, Scanner is closed: " + scanner, e); + return false; + } + } + + // validation checking before processing the next set of results. It's // kinda funky but we want to allow queries to sneak through that were // just a *tad* over the limits so that's why we don't check at the diff --git a/src/core/TSDB.java b/src/core/TSDB.java index b3e5633cad..5f17d6e5f1 100644 --- a/src/core/TSDB.java +++ b/src/core/TSDB.java @@ -101,9 +101,27 @@ public final class TSDB { /** The operation mode (role) of the TSD.
*/ public enum OperationMode { - READWRITE, - READONLY, - WRITEONLY + READWRITE(true, true), + READONLY(true, false), + WRITEONLY(false, true); + + private final boolean read; + private final boolean write; + + OperationMode(boolean read, boolean write) { + this.read = read; + this.write = write; + } + + /** Whether this mode allows reading */ + public boolean isRead() { + return read; + } + + /** Whether this mode allows writing */ + public boolean isWrite() { + return write; + } } /** Client for the HBase cluster to use. */ @@ -134,6 +152,9 @@ public enum OperationMode { /** Timer used for various tasks such as idle timeouts or query timeouts */ private final HashedWheelTimer timer; + /** RpcResponder for doing response asynchronously*/ + private final RpcResponder rpcResponder; + /** * Row keys that need to be compacted. * Whenever we write a new data point to a row, we add the row key to this @@ -351,7 +372,10 @@ public TSDB(final HBaseClient client, final Config config) { // set any extra tags from the config for stats StatsCollector.setGlobalTags(config); - + + + rpcResponder = new RpcResponder(config); + LOG.debug(config.dumpConfiguration()); } @@ -816,6 +840,13 @@ public void collectStats(final StatsCollector collector) { collector.clearExtraTag("class"); } + collector.addExtraTag("class", "IncomingDataPoints"); + try { + collector.record("uid.autometric.rejections", IncomingDataPoints.auto_metric_rejection_count, "method=put"); + } finally { + collector.clearExtraTag("class"); + } + collector.addExtraTag("class", "TSDB"); try { collector.record("datapoints.added", datapoints_added, "type=all"); @@ -1673,20 +1704,43 @@ public String toString() { } } + final class RpcResponsderShutdown implements Callback { + @Override + public Object call(Object arg) throws Exception { + try { + TSDB.this.rpcResponder.close(); + } catch (Exception e) { + LOG.error( + "Run into unknown exception while closing RpcResponder.", e); + } finally { + return arg; + } + } + } + final class HClientShutdown implements Callback, ArrayList> { - public Deferred call(final ArrayList args) { + public Deferred call(final ArrayList args) { + Callback nextCallback; if (storage_exception_handler != null) { - return client.shutdown().addBoth(new SEHShutdown()); + nextCallback = new SEHShutdown(); + } else { + nextCallback = new FinalShutdown(); + } + + if (TSDB.this.rpcResponder.isAsync()) { + client.shutdown().addBoth(new RpcResponsderShutdown()); } - return client.shutdown().addBoth(new FinalShutdown()); + + return client.shutdown().addBoth(nextCallback); } - public String toString() { + + public String toString() { return "shutdown HBase client"; } } final class ShutdownErrback implements Callback { - public Object call(final Exception e) { + public Object call(final Exception e) { final Logger LOG = LoggerFactory.getLogger(ShutdownErrback.class); if (e instanceof DeferredGroupException) { final DeferredGroupException ge = (DeferredGroupException) e; @@ -1700,13 +1754,14 @@ public Object call(final Exception e) { } return new HClientShutdown().call(null); } - public String toString() { + + public String toString() { return "shutdown HBase client after error"; } } final class CompactCB implements Callback> { - public Object call(ArrayList compactions) throws Exception { + public Object call(ArrayList compactions) throws Exception { return null; } } @@ -2216,4 +2271,9 @@ final Deferred delete(final byte[] key, final byte[][] qualifiers) { return client.delete(new DeleteRequest(table, key, FAMILY, qualifiers)); } + /** 
Do response by RpcResponder */ + public void response(Runnable run) { + rpcResponder.response(run); + } + } diff --git a/src/core/TsdbQuery.java b/src/core/TsdbQuery.java index de2a3779a2..f2f1424df2 100644 --- a/src/core/TsdbQuery.java +++ b/src/core/TsdbQuery.java @@ -98,22 +98,22 @@ final class TsdbQuery extends AbstractQuery { /** End time (UNIX timestamp in seconds) on 32 bits ("unsigned" int). */ private long end_time = UNSET; - + /** Whether or not to delete the queried data */ private boolean delete; /** ID of the metric being looked up. */ private byte[] metric; - + /** Row key regex to pass to HBase if we have tags or TSUIDs */ private String regex; - + /** Whether or not to enable the fuzzy row filter for Hbase */ private boolean enable_fuzzy_filter; - + /** Whether or not the user wants to use the fuzzy filter */ private boolean override_fuzzy_filter; - + /** * Tags by which we must group the results. * Each element is a tag ID. @@ -132,7 +132,7 @@ final class TsdbQuery extends AbstractQuery { /** Specifies the various options for rate calculations */ private RateOptions rate_options; - + /** Aggregator function to use. */ private Aggregator aggregator; @@ -141,55 +141,55 @@ final class TsdbQuery extends AbstractQuery { /** Rollup interval and aggregator, null if not applicable. */ private RollupQuery rollup_query; - + /** Map of RollupInterval objects in the order of next best match * like 1d, 1h, 10m, 1m, for rollup of 1d. */ private List best_match_rollups; - + /** How to use the rollup data */ private ROLLUP_USAGE rollup_usage = ROLLUP_USAGE.ROLLUP_NOFALLBACK; - - /** Search the query on pre-aggregated table directly instead of post fetch + + /** Search the query on pre-aggregated table directly instead of post fetch * aggregation. */ private boolean pre_aggregate; - + /** Optional list of TSUIDs to fetch and aggregate instead of a metric */ private List tsuids; - + /** An index that links this query to the original sub query */ private int query_index; - + /** Tag value filters to apply post scan */ private List filters; - + /** An object for storing stats in regarding the query. May be null */ private QueryStats query_stats; - + /** Whether or not to match series with ONLY the given tags */ private boolean explicit_tags; - + private List percentiles; - + private boolean show_histogram_buckets; - + /** Set at filter resolution time to determine if we can use multi-gets */ private boolean use_multi_gets; /** Set by the user if they want to bypass multi-gets */ private boolean override_multi_get; - + /** Whether or not to use the search plugin for multi-get resolution. */ private boolean multiget_with_search; - + /** Whether or not to fall back on query failure. */ private boolean search_query_failure; - + /** The maximum number of bytes allowed per query. */ private long max_bytes = 0; - + /** The maximum number of data points allowed per query. */ private long max_data_points = 0; - + /** * Enum for rollup fallback control. 
* @since 2.4 @@ -199,7 +199,7 @@ public static enum ROLLUP_USAGE { ROLLUP_NOFALLBACK, //Use rollup data, and don't fallback on no data ROLLUP_FALLBACK, //Use rollup data and fallback to next best match on data ROLLUP_FALLBACK_RAW; //Use rollup data and fallback to raw on no data - + /** * Parse and transform a string to ROLLUP_USAGE object * @param str String to be parsed @@ -207,7 +207,7 @@ public static enum ROLLUP_USAGE { */ public static ROLLUP_USAGE parse(String str) { ROLLUP_USAGE def = ROLLUP_NOFALLBACK; - + if (str != null) { try { def = ROLLUP_USAGE.valueOf(str.toUpperCase()); @@ -217,10 +217,10 @@ public static ROLLUP_USAGE parse(String str) { + "uses raw data but don't fallback on no data"); } } - + return def; } - + /** * Whether to fallback to next best match or raw * @return true means fall back else false @@ -249,15 +249,15 @@ public String getRollupTable() { return "raw"; } } - - /** Search the query on pre-aggregated table directly instead of post fetch - * aggregation. - * @since 2.4 + + /** Search the query on pre-aggregated table directly instead of post fetch + * aggregation. + * @since 2.4 */ public boolean isPreAggregate() { return this.pre_aggregate; } - + /** * Sets the start time for the query * @param timestamp Unix epoch timestamp in seconds or milliseconds @@ -266,7 +266,7 @@ public boolean isPreAggregate() { */ @Override public void setStartTime(final long timestamp) { - if (timestamp < 0 || ((timestamp & Const.SECOND_MASK) != 0 && + if (timestamp < 0 || ((timestamp & Const.SECOND_MASK) != 0 && timestamp > 9999999999999L)) { throw new IllegalArgumentException("Invalid timestamp: " + timestamp); } else if (end_time != UNSET && timestamp >= getEndTime()) { @@ -751,7 +751,7 @@ private void findGroupBys() { use_multi_gets = false; } } - + // make sure the multi-get cardinality doesn't exceed our limit (or disable // multi-gets) if ((use_multi_gets && override_multi_get)) { @@ -780,7 +780,7 @@ public Deferred runAsync() throws HBaseException { if (rollup_usage != null && rollup_usage.fallback()) { result.addCallback(new FallbackRollupOnEmptyResult()); } - + return result; } @@ -789,7 +789,7 @@ public Deferred runHistogramAsync() throws HBaseException { if (!isHistogramQuery()) { throw new RuntimeException("Should never be here"); } - + Deferred result = null; if (use_multi_gets && override_multi_get) { result = findHistogramSpansWithMultiGetter() @@ -798,10 +798,10 @@ public Deferred runHistogramAsync() throws HBaseException { result = findHistogramSpans() .addCallback(new HistogramGroupByAndAggregateCB()); } - + return result; } - + @Override public boolean isHistogramQuery() { if ((this.percentiles != null && this.percentiles.size() > 0) || show_histogram_buckets) { @@ -862,12 +862,12 @@ private Deferred> findSpans() throws HBaseException { final TreeMap spans = // The key is a row key from HBase. new TreeMap(new SpanCmp( (short)(Const.SALT_WIDTH() + metric_width))); - + // Copy only the filters that should trigger a tag resolution. 
If this list // is empty due to literals or a wildcard star, then we'll save a TON of // UID lookups final List scanner_filters; - if (filters != null) { + if (filters != null) { scanner_filters = new ArrayList(filters.size()); for (final TagVFilter filter : filters) { if (filter.postScan()) { @@ -877,7 +877,7 @@ private Deferred> findSpans() throws HBaseException { } else { scanner_filters = null; } - + if (Const.SALT_WIDTH() > 0) { final List scanners = new ArrayList(Const.SALT_BUCKETS()); for (int i = 0; i < Const.SALT_BUCKETS(); i++) { @@ -885,18 +885,18 @@ private Deferred> findSpans() throws HBaseException { } scan_start_time = DateTime.nanoTime(); return new SaltScanner(tsdb, metric, scanners, spans, scanner_filters, - delete, rollup_query, query_stats, query_index, null, + delete, rollup_query, query_stats, query_index, null, max_bytes, max_data_points).scan(); } else { final List scanners = new ArrayList(1); scanners.add(getScanner(0)); scan_start_time = DateTime.nanoTime(); return new SaltScanner(tsdb, metric, scanners, spans, scanner_filters, - delete, rollup_query, query_stats, query_index, null, max_bytes, + delete, rollup_query, query_stats, query_index, null, max_bytes, max_data_points).scan(); } } - + private Deferred> findSpansWithMultiGetter() throws HBaseException { final short metric_width = tsdb.metrics.width(); final TreeMap spans = // The key is a row key from HBase. @@ -909,16 +909,16 @@ private Deferred> findSpansWithMultiGetter() throws HBas tableToBeScanned(), spans, null, 0, rollup_query, query_stats, query_index, 0, false, search_query_failure).fetch(); } - + /** * Finds all the {@link HistogramSpan}s that match this query. * This is what actually scans the HBase table and loads the data into * {@link HistogramSpan}s. - * + * * @return A map from HBase row key to the {@link HistogramSpan} for that row key. * Since a {@link HistogramSpan} actually contains multiple HBase rows, the row key * stored in the map has its timestamp zero'ed out. - * + * * @throws HBaseException if there was a problem communicating with HBase to * perform the search. * @throws IllegalArgumentException if bad data was retreived from HBase. @@ -926,7 +926,7 @@ private Deferred> findSpansWithMultiGetter() throws HBas private Deferred> findHistogramSpans() throws HBaseException { final short metric_width = tsdb.metrics.width(); final TreeMap histSpans = new TreeMap(new SpanCmp(metric_width)); - + // Copy only the filters that should trigger a tag resolution. 
If this list // is empty due to literals or a wildcard star, then we'll save a TON of // UID lookups @@ -950,37 +950,37 @@ private Deferred> findHistogramSpans() throws H scanners.add(getScanner(i)); } scan_start_time = DateTime.nanoTime(); - return new SaltScanner(tsdb, metric, scanners, null, scanner_filters, - delete, rollup_query, query_stats, query_index, histSpans, + return new SaltScanner(tsdb, metric, scanners, null, scanner_filters, + delete, rollup_query, query_stats, query_index, histSpans, max_bytes, max_data_points).scanHistogram(); } else { scanners = Lists.newArrayList(getScanner()); scan_start_time = DateTime.nanoTime(); - return new SaltScanner(tsdb, metric, scanners, null, scanner_filters, - delete, rollup_query, query_stats, query_index, histSpans, + return new SaltScanner(tsdb, metric, scanners, null, scanner_filters, + delete, rollup_query, query_stats, query_index, histSpans, max_bytes, max_data_points).scanHistogram(); } } - + private Deferred> findHistogramSpansWithMultiGetter() throws HBaseException { final short metric_width = tsdb.metrics.width(); // The key is a row key from HBase final TreeMap histSpans = new TreeMap(new SpanCmp(metric_width)); scan_start_time = System.nanoTime(); - return new MultiGetQuery(tsdb, this, metric, row_key_literals_list, + return new MultiGetQuery(tsdb, this, metric, row_key_literals_list, getScanStartTimeSeconds(), getScanEndTimeSeconds(), tableToBeScanned(), null, histSpans, 0, rollup_query, query_stats, query_index, 0, false, search_query_failure).fetchHistogram(); } - + /** * Callback that should be attached the the output of * {@link TsdbQuery#findSpans} to group and sort the results. */ - private class GroupByAndAggregateCB implements + private class GroupByAndAggregateCB implements Callback>{ - + /** * Creates the {@link SpanGroup}s to form the final results of this query. * @param spans The {@link Span}s found for this query ({@link #findSpans}). @@ -991,32 +991,32 @@ private class GroupByAndAggregateCB implements @Override public DataPoints[] call(final SortedMap spans) throws Exception { if (query_stats != null) { - query_stats.addStat(query_index, QueryStat.QUERY_SCAN_TIME, + query_stats.addStat(query_index, QueryStat.QUERY_SCAN_TIME, (System.nanoTime() - TsdbQuery.this.scan_start_time)); } - + if (spans == null || spans.size() <= 0) { if (query_stats != null) { query_stats.addStat(query_index, QueryStat.GROUP_BY_TIME, 0); } return NO_RESULT; } - + // The raw aggregator skips group bys and ignores downsampling if (aggregator == Aggregators.NONE) { final SpanGroup[] groups = new SpanGroup[spans.size()]; int i = 0; for (final Span span : spans.values()) { final SpanGroup group = new SpanGroup( - tsdb, + tsdb, getScanStartTimeSeconds(), getScanEndTimeSeconds(), - null, - rate, + null, + rate, rate_options, aggregator, downsampler, - getStartTime(), + getStartTime(), getEndTime(), query_index, rollup_query, @@ -1026,7 +1026,7 @@ public DataPoints[] call(final SortedMap spans) throws Exception { } return groups; } - + if (group_bys == null) { // We haven't been asked to find groups, so let's put all the spans // together in the same group. 
@@ -1037,7 +1037,7 @@ public DataPoints[] call(final SortedMap spans) throws Exception { rate, rate_options, aggregator, downsampler, - getStartTime(), + getStartTime(), getEndTime(), query_index, rollup_query, @@ -1047,7 +1047,7 @@ public DataPoints[] call(final SortedMap spans) throws Exception { } return new SpanGroup[] { group }; } - + // Maps group value IDs to the SpanGroup for those values. Say we've // been asked to group by two things: foo=* bar=* Then the keys in this // map will contain all the value IDs combinations we've seen. If the @@ -1117,7 +1117,7 @@ public DataPoints[] call(final SortedMap spans) throws Exception { * Callback that should be attached the the output of * {@link TsdbQuery#findHistogramSpans} to group and sort the results. */ - private class HistogramGroupByAndAggregateCB implements + private class HistogramGroupByAndAggregateCB implements Callback>{ /** @@ -1129,10 +1129,10 @@ private class HistogramGroupByAndAggregateCB implements */ public DataPoints[] call(final SortedMap spans) throws Exception { if (query_stats != null) { - query_stats.addStat(query_index, QueryStat.QUERY_SCAN_TIME, + query_stats.addStat(query_index, QueryStat.QUERY_SCAN_TIME, (System.nanoTime() - TsdbQuery.this.scan_start_time)); } - + final long group_build = System.nanoTime(); if (spans == null || spans.size() <= 0) { if (query_stats != null) { @@ -1148,33 +1148,32 @@ public DataPoints[] call(final SortedMap spans) throws Ex // } else { // query_tags = null; // } - + final ArrayList result_dp_groups = new ArrayList(); // The raw aggregator skips group bys and ignores downsampling if (aggregator == Aggregators.NONE) { for (final HistogramSpan span : spans.values()) { - final HistogramSpanGroup group = new HistogramSpanGroup(tsdb, + final HistogramSpanGroup group = new HistogramSpanGroup(tsdb, getScanStartTimeSeconds(), getScanEndTimeSeconds(), null, null, downsampler, - getStartTime(), + getStartTime(), getEndTime(), query_index, RollupQuery.isValidQuery(rollup_query), query_tags); group.add(span); - + // create histogram data points to data points adaptor for each percentile calculation if (null != percentiles && percentiles.size() > 0) { List percentile_datapoints_list = generateHistogramPercentileDataPoints(group); if (null != percentile_datapoints_list && percentile_datapoints_list.size() > 0) result_dp_groups.addAll(percentile_datapoints_list); } - - - // create bucket metric + + // create bucket metric if (show_histogram_buckets) { List bucket_datapoints_list = generateHistogramBucketDataPoints(group); if (null != bucket_datapoints_list && bucket_datapoints_list.size() > 0) { @@ -1182,7 +1181,7 @@ public DataPoints[] call(final SortedMap spans) throws Ex } } } // end for - + int i = 0; DataPoints[] result = new DataPoints[result_dp_groups.size()]; for (DataPoints item : result_dp_groups) { @@ -1190,7 +1189,7 @@ public DataPoints[] call(final SortedMap spans) throws Ex } return result; } - + if (group_bys == null) { // We haven't been asked to find groups, so let's put all the spans // together in the same group. 
@@ -1206,25 +1205,25 @@ public DataPoints[] call(final SortedMap spans) throws Ex RollupQuery.isValidQuery(rollup_query), query_tags); if (query_stats != null) { - query_stats.addStat(query_index, QueryStat.GROUP_BY_TIME, + query_stats.addStat(query_index, QueryStat.GROUP_BY_TIME, (System.nanoTime() - group_build)); } - + // create histogram data points to data points adaptor for each percentile calculation if (null != percentiles && percentiles.size() > 0) { List percentile_datapoints_list = generateHistogramPercentileDataPoints(group); if (null != percentile_datapoints_list && percentile_datapoints_list.size() > 0) result_dp_groups.addAll(percentile_datapoints_list); } - - // create bucket metric + + // create bucket metric if (show_histogram_buckets) { List bucket_datapoints_list = generateHistogramBucketDataPoints(group); if (null != bucket_datapoints_list && bucket_datapoints_list.size() > 0) { result_dp_groups.addAll(bucket_datapoints_list); } } - + int i = 0; DataPoints[] result = new DataPoints[result_dp_groups.size()]; for (DataPoints item : result_dp_groups) { @@ -1232,7 +1231,7 @@ public DataPoints[] call(final SortedMap spans) throws Ex } return result; } - + // Maps group value IDs to the SpanGroup for those values. Say we've // been asked to group by two things: foo=* bar=* Then the keys in this // map will contain all the value IDs combinations we've seen. If the @@ -1267,22 +1266,22 @@ public DataPoints[] call(final SortedMap spans) throws Ex + " which is unexpected. Query=" + this); continue; } - + //LOG.info("Span belongs to group " + Arrays.toString(group) + ": " + Arrays.toString(row)); HistogramSpanGroup thegroup = groups.get(group); if (thegroup == null) { - thegroup = new HistogramSpanGroup(tsdb, + thegroup = new HistogramSpanGroup(tsdb, getScanStartTimeSeconds(), getScanEndTimeSeconds(), null, HistogramAggregation.SUM, // only SUM is applicable for histogram metric - downsampler, - getStartTime(), + downsampler, + getStartTime(), getEndTime(), query_index, RollupQuery.isValidQuery(rollup_query), query_tags); - + // Copy the array because we're going to keep `group' and overwrite // its contents. So we want the collection to have an immutable copy. 
final byte[] group_copy = new byte[group.length]; @@ -1291,13 +1290,12 @@ public DataPoints[] call(final SortedMap spans) throws Ex } thegroup.add(entry.getValue()); } - + if (query_stats != null) { - query_stats.addStat(query_index, QueryStat.GROUP_BY_TIME, + query_stats.addStat(query_index, QueryStat.GROUP_BY_TIME, (System.nanoTime() - group_build)); } - - + for (final Map.Entry entry : groups.entrySet()) { // create histogram data points to data points adaptor for each percentile calculation if (null != percentiles && percentiles.size() > 0) { @@ -1305,8 +1303,8 @@ public DataPoints[] call(final SortedMap spans) throws Ex if (null != percentile_datapoints_list && percentile_datapoints_list.size() > 0) result_dp_groups.addAll(percentile_datapoints_list); } - - // create bucket metric + + // create bucket metric if (show_histogram_buckets) { List bucket_datapoints_list = generateHistogramBucketDataPoints(entry.getValue()); if (null != bucket_datapoints_list && bucket_datapoints_list.size() > 0) { @@ -1314,7 +1312,7 @@ public DataPoints[] call(final SortedMap spans) throws Ex } } } // end for - + int i = 0; DataPoints[] result = new DataPoints[result_dp_groups.size()]; for (DataPoints item : result_dp_groups) { @@ -1355,11 +1353,11 @@ private List generateHistogramBucketDataPoints(final HistogramSpanGr return result_dp_groups; } } - + /** * Scan the tables again with the next best rollup match, on empty result set */ - private class FallbackRollupOnEmptyResult implements + private class FallbackRollupOnEmptyResult implements Callback, DataPoints[]>{ /** @@ -1371,13 +1369,13 @@ private class FallbackRollupOnEmptyResult implements */ public Deferred call(final DataPoints[] datapoints) throws Exception { //TODO review this logic during spatial aggregation implementation - + if (datapoints == NO_RESULT && RollupQuery.isValidQuery(rollup_query)) { //There are no datapoints for this query and it is a rollup query //but not the default interval (default interval means raw). //This will prevent a redundant scan on raw data in the presence of //the default rollup interval - + //If the rollup usage is to fall back directly to raw data //then nullify the rollup query, so that the recursive scan will use //raw data and this will not be called again because of the isValid check @@ -1389,12 +1387,12 @@ public Deferred call(final DataPoints[] datapoints) throws Excepti } else if (best_match_rollups != null && best_match_rollups.size() > 0) { RollupInterval interval = best_match_rollups.remove(0); - + if (interval.isDefaultInterval()) { transformRollupQueryToDownSampler(); } else { - rollup_query = new RollupQuery(interval, + rollup_query = new RollupQuery(interval, rollup_query.getRollupAgg(), rollup_query.getSampleIntervalInMS(), aggregator); @@ -1405,13 +1403,13 @@ else if (best_match_rollups != null && best_match_rollups.size() > 0) { // downsampler = rollup_query.getRollupAgg(); // TODO - default fill downsampler = new DownsamplingSpecification( - rollup_query.getSampleIntervalInMS(), + rollup_query.getSampleIntervalInMS(), rollup_query.getRollupAgg(), - (downsampler != null ? downsampler.getFillPolicy() : + (downsampler != null ? downsampler.getFillPolicy() : FillPolicy.ZERO)); } } - + return runAsync(); } return Deferred.fromResult(NO_RESULT); @@ -1421,11 +1419,11 @@ else if (best_match_rollups != null && best_match_rollups.size() > 0) { } } } - + /** * Returns a scanner set for the given metric (from {@link #metric} or from - * the first TSUID in the {@link #tsuids}s list.
If one or more tags are - * provided, it calls into {@link #createAndSetFilter} to setup a row key + * the first TSUID in the {@link #tsuids}s list. If one or more tags are + * provided, it calls into {@link #createAndSetFilter} to setup a row key * filter. If one or more TSUIDs have been provided, it calls into * {@link #createAndSetTSUIDFilter} to setup a row key filter. * @return A scanner to use for fetching data points @@ -1433,11 +1431,11 @@ else if (best_match_rollups != null && best_match_rollups.size() > 0) { protected Scanner getScanner() throws HBaseException { return getScanner(0); } - + /** * Returns a scanner set for the given metric (from {@link #metric} or from - * the first TSUID in the {@link #tsuids}s list. If one or more tags are - * provided, it calls into {@link #createAndSetFilter} to setup a row key + * the first TSUID in the {@link #tsuids}s list. If one or more tags are + * provided, it calls into {@link #createAndSetFilter} to setup a row key * filter. If one or more TSUIDs have been provided, it calls into * {@link #createAndSetTSUIDFilter} to setup a row key filter. * @param salt_bucket The salt bucket to scan over when salting is enabled. @@ -1445,27 +1443,27 @@ protected Scanner getScanner() throws HBaseException { */ protected Scanner getScanner(final int salt_bucket) throws HBaseException { final short metric_width = tsdb.metrics.width(); - + // set the metric UID based on the TSUIDs if given, or the metric UID if (tsuids != null && !tsuids.isEmpty()) { final String tsuid = tsuids.get(0); final String metric_uid = tsuid.substring(0, metric_width * 2); metric = UniqueId.stringToUid(metric_uid); } - + final boolean is_rollup = RollupQuery.isValidQuery(rollup_query); - + // We search at least one row before and one row after the start & end // time we've been given as it's quite likely that the exact timestamp // we're looking for is in the middle of a row. Plus, a number of things // rely on having a few extra data points before & after the exact start // & end dates in order to do proper rate calculation or downsampling near // the "edges" of the graph. - final Scanner scanner = QueryUtil.getMetricScanner(tsdb, salt_bucket, metric, + final Scanner scanner = QueryUtil.getMetricScanner(tsdb, salt_bucket, metric, (int) getScanStartTimeSeconds(), end_time == UNSET ? -1 // Will scan until the end (0xFFF...). 
- : (int) getScanEndTimeSeconds(), - tableToBeScanned(), + : (int) getScanEndTimeSeconds(), + tableToBeScanned(), TSDB.FAMILY()); if(tsdb.getConfig().use_otsdb_timestamp()) { long stTime = (getScanStartTimeSeconds() * 1000); @@ -1498,7 +1496,7 @@ protected Scanner getScanner(final int salt_bucket) throws HBaseException { new BinaryPrefixComparator(rollup_query.getRollupAgg().toString() .getBytes(Const.ASCII_CHARSET)))); rollup_filters.add(new QualifierFilter(CompareFilter.CompareOp.EQUAL, - new BinaryPrefixComparator(new byte[] { + new BinaryPrefixComparator(new byte[] { (byte) tsdb.getRollupConfig().getIdForAggregator( rollup_query.getRollupAgg().toString()) }))); @@ -1510,7 +1508,7 @@ protected Scanner getScanner(final int salt_bucket) throws HBaseException { new BinaryPrefixComparator(rollup_query.getRollupAgg().toString() .getBytes(Const.ASCII_CHARSET)))); filters.add(new QualifierFilter(CompareFilter.CompareOp.EQUAL, - new BinaryPrefixComparator(new byte[] { + new BinaryPrefixComparator(new byte[] { (byte) tsdb.getRollupConfig().getIdForAggregator( rollup_query.getRollupAgg().toString()) }))); @@ -1527,10 +1525,10 @@ protected Scanner getScanner(final int salt_bucket) throws HBaseException { (byte) tsdb.getRollupConfig().getIdForAggregator("sum") }))); filters.add(new QualifierFilter(CompareFilter.CompareOp.EQUAL, - new BinaryPrefixComparator(new byte[] { + new BinaryPrefixComparator(new byte[] { (byte) tsdb.getRollupConfig().getIdForAggregator("count") }))); - + if (existing != null) { final List combined = new ArrayList(2); combined.add(existing); @@ -1545,14 +1543,14 @@ protected Scanner getScanner(final int salt_bucket) throws HBaseException { } /** - * Identify the table to be scanned based on the roll up and pre-aggregate + * Identify the table to be scanned based on the roll up and pre-aggregate * query parameters * @return table name as byte array * @since 2.4 */ private byte[] tableToBeScanned() { final byte[] tableName; - + if (RollupQuery.isValidQuery(rollup_query)) { if (pre_aggregate) { tableName= rollup_query.getRollupInterval().getGroupbyTable(); @@ -1567,10 +1565,10 @@ else if (pre_aggregate) { else { tableName = tsdb.dataTable(); } - + return tableName; } - + /** Returns the UNIX timestamp from which we must start scanning. */ long getScanStartTimeSeconds() { // Begin with the raw query start time. @@ -1580,15 +1578,15 @@ long getScanStartTimeSeconds() { if ((start & Const.SECOND_MASK) != 0L) { start /= 1000L; } - + // if we have a rollup query, we have different row key start times so find // the base time from which we need to search if (rollup_query != null) { - long base_time = RollupUtils.getRollupBasetime(start, + long base_time = RollupUtils.getRollupBasetime(start, rollup_query.getRollupInterval()); if (rate) { // scan one row back so we can get the first rate value. - base_time = RollupUtils.getRollupBasetime(base_time - 1, + base_time = RollupUtils.getRollupBasetime(base_time - 1, rollup_query.getRollupInterval()); } return base_time; @@ -1614,24 +1612,25 @@ long getScanStartTimeSeconds() { } /** Returns the UNIX timestamp at which we must stop scanning. */ - long getScanEndTimeSeconds() { + @VisibleForTesting + protected long getScanEndTimeSeconds() { // Begin with the raw query end time. long end = getEndTime(); // Convert to seconds if we have a query in ms. if ((end & Const.SECOND_MASK) != 0L) { end /= 1000L; - if (end - (end * 1000) < 1) { + if (end == 0) { // handle an edge case where a user may request a ms time between // 0 and 1 seconds. 
Just bump it a second. end++; } } - + if (rollup_query != null) { - return RollupUtils.getRollupBasetime(end + - (rollup_query.getRollupInterval().getIntervalSeconds() * - rollup_query.getRollupInterval().getIntervals()), + return RollupUtils.getRollupBasetime(end + + (rollup_query.getRollupInterval().getIntervalSeconds() * + rollup_query.getRollupInterval().getIntervals()), rollup_query.getRollupInterval()); } @@ -1682,13 +1681,13 @@ long getScanEndTimeSeconds() { * @param scanner The scanner on which to add the filter. */ private void createAndSetFilter(final Scanner scanner) { - QueryUtil.setDataTableScanFilter(scanner, group_bys, row_key_literals, - explicit_tags, enable_fuzzy_filter, + QueryUtil.setDataTableScanFilter(scanner, group_bys, row_key_literals, + explicit_tags, enable_fuzzy_filter, (end_time == UNSET ? -1 // Will scan until the end (0xFFF...). : (int) getScanEndTimeSeconds())); } - + /** * Sets the server-side regexp filter on the scanner. * This will compile a list of the tagk/v pairs for the TSUIDs to prevent @@ -1702,9 +1701,9 @@ private void createAndSetTSUIDFilter(final Scanner scanner) { } scanner.setKeyRegexp(regex, CHARSET); } - + /** - * Return the query index that maps this datapoints to the original subquery + * Return the query index that maps this datapoints to the original subquery * @return index of the query in the TSQuery class * @since 2.4 */ @@ -1712,7 +1711,7 @@ private void createAndSetTSUIDFilter(final Scanner scanner) { public int getQueryIdx() { return query_index; } - + /** * set the index that link this query to the original index. * @param idx query index idx @@ -1721,30 +1720,30 @@ public int getQueryIdx() { public void setQueryIdx(int idx) { query_index = idx; } - + /** * Transform downsampler properties to rollup properties, if the rollup * is enabled at configuration level and down sampler is set. - * It falls back to raw data and down sampling if there is no + * It falls back to raw data and down sampling if there is no * RollupInterval is configured against this down sample interval * @param group_by The group by aggregator. * @param str_interval String representation of the interval, for logging * @since 2.4 */ - public void transformDownSamplerToRollupQuery(final Aggregator group_by, + public void transformDownSamplerToRollupQuery(final Aggregator group_by, final String str_interval) { - + if (downsampler != null && downsampler.getInterval() > 0) { if (tsdb.getRollupConfig() != null) { try { best_match_rollups = tsdb.getRollupConfig(). getRollupInterval(downsampler.getInterval() / 1000, str_interval); - //It is thread safe as each thread will be working on unique + //It is thread safe as each thread will be working on unique // TsdbQuery object - //RollupConfig.getRollupInterval guarantees that, + //RollupConfig.getRollupInterval guarantees that, // it always return a non-empty list // TODO - rollup_query = new RollupQuery(best_match_rollups.remove(0), + rollup_query = new RollupQuery(best_match_rollups.remove(0), downsampler.getFunction(), downsampler.getInterval(), group_by); } diff --git a/src/create_table.sh b/src/create_table.sh index 1cbe666319..917a4ff3de 100755 --- a/src/create_table.sh +++ b/src/create_table.sh @@ -23,7 +23,7 @@ COMPRESSION=`echo "$COMPRESSION" | tr a-z A-Z` # This can save a lot of storage space. 
DATA_BLOCK_ENCODING=${DATA_BLOCK_ENCODING-'DIFF'} DATA_BLOCK_ENCODING=`echo "$DATA_BLOCK_ENCODING" | tr a-z A-Z` -TSDB_TTL=${TSDB_TTL-'FOREVER'} +TSDB_TTL=${TSDB_TTL-'2147483647'} case $COMPRESSION in (NONE|LZO|GZIP|SNAPPY) :;; # Known good. diff --git a/src/query/expression/ExpressionFactory.java b/src/query/expression/ExpressionFactory.java index e0fbdd44e8..43358e6eb6 100644 --- a/src/query/expression/ExpressionFactory.java +++ b/src/query/expression/ExpressionFactory.java @@ -37,6 +37,7 @@ public final class ExpressionFactory { available_functions.put("highestMax", new HighestMax()); available_functions.put("shift", new TimeShift()); available_functions.put("timeShift", new TimeShift()); + available_functions.put("firstDiff", new FirstDifference()); } /** Don't instantiate me! */ diff --git a/src/query/expression/FirstDifference.java b/src/query/expression/FirstDifference.java new file mode 100644 index 0000000000..6dc75bc088 --- /dev/null +++ b/src/query/expression/FirstDifference.java @@ -0,0 +1,101 @@ +// This file is part of OpenTSDB. +// Copyright (C) 2015 The OpenTSDB Authors. +// +// This program is free software: you can redistribute it and/or modify it +// under the terms of the GNU Lesser General Public License as published by +// the Free Software Foundation, either version 2.1 of the License, or (at your +// option) any later version. This program is distributed in the hope that it +// will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty +// of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU Lesser +// General Public License for more details. You should have received a copy +// of the GNU Lesser General Public License along with this program. If not, +// see . +package net.opentsdb.query.expression; + +import java.util.ArrayList; + +import java.util.List; + +import net.opentsdb.core.DataPoint; +import net.opentsdb.core.DataPoints; +import net.opentsdb.core.IllegalDataException; +import net.opentsdb.core.MutableDataPoint; +import net.opentsdb.core.SeekableView; +import net.opentsdb.core.TSQuery; +import net.opentsdb.core.Aggregators.Interpolation; + +/** + * Implements a difference function, calculates the first difference of a given series + * + * @since 2.3 + */ +public class FirstDifference implements net.opentsdb.query.expression.Expression { + + @Override + public DataPoints[] evaluate(final TSQuery data_query, + final List query_results, final List params) { + if (data_query == null) { + throw new IllegalArgumentException("Missing time series query"); + } + if (query_results == null || query_results.isEmpty()) { + return new DataPoints[]{}; + } + + + int num_results = 0; + for (final DataPoints[] results : query_results) { + num_results += results.length; + } + final DataPoints[] results = new DataPoints[num_results]; + + int ix = 0; + // one or more sub queries (m=...&m=...&m=...) 
+ for (final DataPoints[] sub_query_result : query_results) { + // group bys (m=sum:foo{host=*}) + for (final DataPoints dps : sub_query_result) { + results[ix++] = firstDiff(dps); + } + } + + return results; + + } + + /** + * Returns the first difference of the given data points + * + * @param points The data points to difference + * @return The resulting data points + */ + private DataPoints firstDiff(final DataPoints points) { + final List<DataPoint> dps = new ArrayList<DataPoint>(); + final SeekableView view = points.iterator(); + List<Double> nums = new ArrayList<Double>(); + List<Long> times = new ArrayList<Long>(); + while (view.hasNext()) { + DataPoint pt = view.next(); + nums.add(pt.toDouble()); + times.add(pt.timestamp()); + } + List<Double> diff = new ArrayList<Double>(); + diff.add(0.0); + for (int j = 1; j < nums.size(); j++) { + diff.add(nums.get(j) - nums.get(j - 1)); + } + for (int k = 0; k < diff.size(); k++) { + dps.add(MutableDataPoint.ofDoubleValue(times.get(k), diff.get(k))); + } + final DataPoint[] dp_array = new DataPoint[dps.size()]; + dps.toArray(dp_array); + return new PostAggregatedDataPoints(points, dp_array); + } + + @Override + public String writeStringField(final List<String> query_params, + final String inner_expression) { + return "firstDiff(" + inner_expression + ")"; + } + +} \ No newline at end of file diff --git a/src/tsd/PutDataPointRpc.java b/src/tsd/PutDataPointRpc.java index a8dd330ea5..938d4fe47c 100644 --- a/src/tsd/PutDataPointRpc.java +++ b/src/tsd/PutDataPointRpc.java @@ -616,60 +616,65 @@ class GroupCB implements Callback<Object, ArrayList<Boolean>> { public GroupCB(final int queued) { this.queued = queued; } - + @Override public Object call(final ArrayList<Boolean> results) { - if (sending_response.get()) { - if (LOG.isDebugEnabled()) { - LOG.debug("Put data point call " + query + " was marked as timedout"); - } - return null; - } else { - sending_response.set(true); - if (timeout != null) { - timeout.cancel(); - } - } - int good_writes = 0; - int failed_writes = 0; - for (final boolean result : results) { - if (result) { - ++good_writes; - } else { - ++failed_writes; - } - } - - final int failures = dps.size() - queued; - if (!show_summary && !show_details) { - if (failures + failed_writes > 0) { - query.sendReply(HttpResponseStatus.BAD_REQUEST, - query.serializer().formatErrorV1( - new BadRequestException(HttpResponseStatus.BAD_REQUEST, - "One or more data points had errors", - "Please see the TSD logs or append \"details\" to the put request"))); - } else { - query.sendReply(HttpResponseStatus.NO_CONTENT, "".getBytes()); - } - } else { - final HashMap<String, Object> summary = new HashMap<String, Object>(); - if (sync_timeout > 0) { - summary.put("timeouts", 0); - } - summary.put("success", results.isEmpty() ?
queued : good_writes); - summary.put("failed", failures + failed_writes); - if (show_details) { - summary.put("errors", details); - } - - if (failures > 0) { - query.sendReply(HttpResponseStatus.BAD_REQUEST, - query.serializer().formatPutV1(summary)); - } else { - query.sendReply(query.serializer().formatPutV1(summary)); + tsdb.response(new Runnable() { + @Override + public void run() { + if (sending_response.get()) { + if (LOG.isDebugEnabled()) { + LOG.debug("Put data point call " + query + " was marked as timed out"); + } + return; + } else { + sending_response.set(true); + if (timeout != null) { + timeout.cancel(); + } + } + int good_writes = 0; + int failed_writes = 0; + for (final boolean result : results) { + if (result) { + ++good_writes; + } else { + ++failed_writes; + } + } + + final int failures = dps.size() - queued; + if (!show_summary && !show_details) { + if (failures + failed_writes > 0) { + query.sendReply(HttpResponseStatus.BAD_REQUEST, + query.serializer().formatErrorV1( + new BadRequestException(HttpResponseStatus.BAD_REQUEST, + "One or more data points had errors", + "Please see the TSD logs or append \"details\" to the put request"))); + } else { + query.sendReply(HttpResponseStatus.NO_CONTENT, "".getBytes()); + } + } else { + final HashMap<String, Object> summary = new HashMap<String, Object>(); + if (sync_timeout > 0) { + summary.put("timeouts", 0); + } + summary.put("success", results.isEmpty() ? queued : good_writes); + summary.put("failed", failures + failed_writes); + if (show_details) { + summary.put("errors", details); + } + + if (failures > 0) { + query.sendReply(HttpResponseStatus.BAD_REQUEST, + query.serializer().formatPutV1(summary)); + } else { + query.sendReply(query.serializer().formatPutV1(summary)); + } + } } - } - + }); + return null; } @Override diff --git a/src/tsd/RpcManager.java b/src/tsd/RpcManager.java index d757d686dc..508e9154f3 100644 --- a/src/tsd/RpcManager.java +++ b/src/tsd/RpcManager.java @@ -305,7 +305,7 @@ private void initializeBuiltinRpcs(final OperationMode mode, http.put("api/rollup", rollups); http.put("api/histogram", histos); http.put("api/tree", new TreeRpc()); - http.put("api/uid", new UniqueIdRpc()); + http.put("api/uid", new UniqueIdRpc(mode)); } break; case READONLY: @@ -321,6 +321,7 @@ private void initializeBuiltinRpcs(final OperationMode mode, http.put("api/query", new QueryRpc()); http.put("api/search", new SearchRpc()); http.put("api/suggest", suggest_rpc); + http.put("api/uid", new UniqueIdRpc(mode)); } break; @@ -347,7 +348,7 @@ private void initializeBuiltinRpcs(final OperationMode mode, http.put("api/rollup", rollups); http.put("api/histogram", histos); http.put("api/tree", new TreeRpc()); - http.put("api/uid", new UniqueIdRpc()); + http.put("api/uid", new UniqueIdRpc(mode)); } } diff --git a/src/tsd/UniqueIdRpc.java b/src/tsd/UniqueIdRpc.java index a9057866f8..c82aec0f13 100644 --- a/src/tsd/UniqueIdRpc.java +++ b/src/tsd/UniqueIdRpc.java @@ -47,6 +47,12 @@ */ final class UniqueIdRpc implements HttpRpc { + private final TSDB.OperationMode mode; + + public UniqueIdRpc(TSDB.OperationMode mode) { + this.mode = mode; + } + @Override public void execute(TSDB tsdb, HttpQuery query) throws IOException { @@ -87,6 +93,10 @@ public void execute(TSDB tsdb, HttpQuery query) throws IOException { * @param query The query for this request */ private void handleAssign(final TSDB tsdb, final HttpQuery query) { + if (!mode.isWrite()) { + throw new BadRequestException(HttpResponseStatus.NOT_FOUND, "Operation not allowed", + "This operation is not allowed in ro
mode."); + } // only accept GET And POST if (query.method() != HttpMethod.GET && query.method() != HttpMethod.POST) { throw new BadRequestException(HttpResponseStatus.METHOD_NOT_ALLOWED, @@ -162,6 +172,10 @@ private void handleUIDMeta(final TSDB tsdb, final HttpQuery query) { final HttpMethod method = query.getAPIMethod(); // GET if (method == HttpMethod.GET) { + if (!mode.isRead()) { + throw new BadRequestException(HttpResponseStatus.NOT_FOUND, "Operation not allowed", + "This operation is not allowed in wo mode."); + } final String uid = query.getRequiredQueryStringParam("uid"); final UniqueIdType type = UniqueId.stringToUniqueIdType( @@ -178,6 +192,10 @@ private void handleUIDMeta(final TSDB tsdb, final HttpQuery query) { } // POST } else if (method == HttpMethod.POST || method == HttpMethod.PUT) { + if (!mode.isWrite()) { + throw new BadRequestException(HttpResponseStatus.NOT_FOUND, "Operation not allowed", + "This operation is not allowed in ro mode."); + } final UIDMeta meta; if (query.hasContent()) { @@ -224,6 +242,10 @@ public Deferred call(Boolean success) throws Exception { } // DELETE } else if (method == HttpMethod.DELETE) { + if (!mode.isWrite()) { + throw new BadRequestException(HttpResponseStatus.NOT_FOUND, "Operation not allowed", + "This operation is not allowed in ro mode."); + } final UIDMeta meta; if (query.hasContent()) { @@ -261,6 +283,10 @@ private void handleTSMeta(final TSDB tsdb, final HttpQuery query) { final HttpMethod method = query.getAPIMethod(); // GET if (method == HttpMethod.GET) { + if (!mode.isRead()) { + throw new BadRequestException(HttpResponseStatus.NOT_FOUND, "Operation not allowed", + "This operation is not allowed in wo mode."); + } String tsuid = null; if (query.hasQueryStringParam("tsuid")) { @@ -313,6 +339,10 @@ private void handleTSMeta(final TSDB tsdb, final HttpQuery query) { } // POST / PUT } else if (method == HttpMethod.POST || method == HttpMethod.PUT) { + if (!mode.isWrite()) { + throw new BadRequestException(HttpResponseStatus.NOT_FOUND, "Operation not allowed", + "This operation is not allowed in ro mode."); + } final TSMeta meta; if (query.hasContent()) { @@ -431,6 +461,10 @@ public Boolean call(Boolean exists) throws Exception { } // DELETE } else if (method == HttpMethod.DELETE) { + if (!mode.isWrite()) { + throw new BadRequestException(HttpResponseStatus.NOT_FOUND, "Operation not allowed", + "This operation is not allowed in ro mode."); + } final TSMeta meta; if (query.hasContent()) { @@ -491,6 +525,10 @@ private UIDMeta parseUIDMetaQS(final HttpQuery query) { * @param query The query for this request */ private void handleRename(final TSDB tsdb, final HttpQuery query) { + if (!mode.isWrite()) { + throw new BadRequestException(HttpResponseStatus.NOT_FOUND, "Operation not allowed", + "This operation is not allowed in ro mode."); + } // only accept GET and POST if (query.method() != HttpMethod.GET && query.method() != HttpMethod.POST) { throw new BadRequestException(HttpResponseStatus.METHOD_NOT_ALLOWED, diff --git a/src/utils/Config.java b/src/utils/Config.java index 723df36c7a..fe05ea3a92 100644 --- a/src/utils/Config.java +++ b/src/utils/Config.java @@ -21,6 +21,7 @@ import java.util.Map; import java.util.Properties; +import net.opentsdb.core.RpcResponder; import org.slf4j.Logger; import org.slf4j.LoggerFactory; @@ -340,6 +341,23 @@ public final int getInt(final String property) { return Integer.parseInt(sanitize(properties.get(property))); } + /** + * Returns the given property as an integer. 
+ * If no such property is specified, or if the specified value is not a valid + * int, then default_val is returned. + * + * @param property The property to load + * @param default_val default value + * @return A parsed integer or default_val. + */ + public final int getInt(final String property, final int default_val) { + try { + return getInt(property); + } catch (Exception e) { + return default_val; + } + } + /** * Returns the given string trimed or null if is null * @param string The string be trimmed of @@ -420,6 +438,23 @@ public final boolean getBoolean(final String property) { return false; } + /** + * Returns the given property as a boolean. + * If no such property is specified, or if the specified value is not a valid + * boolean, then default_val is returned. + * + * @param property The property to load + * @param default_val default value + * @return A parsed boolean or default_val. + */ + public final boolean getBoolean(final String property, final boolean default_val) { + try { + return getBoolean(property); + } catch (Exception e) { + return default_val; + } + } + /** * Returns the directory name, making sure the end is an OS dependent slash * @param property The property to load diff --git a/test/core/TestRpcResponsder.java b/test/core/TestRpcResponsder.java new file mode 100644 index 0000000000..5ddbf73baf --- /dev/null +++ b/test/core/TestRpcResponsder.java @@ -0,0 +1,65 @@ +// This file is part of OpenTSDB. +// Copyright (C) 2010-2017 The OpenTSDB Authors. +// +// This program is free software: you can redistribute it and/or modify it +// under the terms of the GNU Lesser General Public License as published by +// the Free Software Foundation, either version 2.1 of the License, or (at your +// option) any later version. This program is distributed in the hope that it +// will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty +// of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU Lesser +// General Public License for more details. You should have received a copy +// of the GNU Lesser General Public License along with this program. If not, +// see <http://www.gnu.org/licenses/>.
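+// Verifies graceful shutdown of RpcResponder: close() should let queued responses finish and reject work submitted afterwards.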
+package net.opentsdb.core; + + +import net.opentsdb.utils.Config; +import org.jboss.netty.util.internal.ThreadLocalRandom; +import org.junit.Assert; +import org.junit.Test; + +import java.util.concurrent.atomic.AtomicInteger; + +public class TestRpcResponsder { + + private final AtomicInteger complete_counter = new AtomicInteger(0); + + @Test(timeout = 60000) + public void testGracefulShutdown() throws InterruptedException { + RpcResponder rpcResponder = new RpcResponder(new Config()); + + final int n = 100; + for (int i = 0; i < n; i++) { + rpcResponder.response(new MockResponseProcess()); + } + + Thread.sleep(500); + rpcResponder.close(); + + try { + rpcResponder.response(new MockResponseProcess()); + Assert.fail("Expected an IllegalStateException"); + } catch (IllegalStateException ignore) { + } + + Assert.assertEquals(n, complete_counter.get()); + } + + private class MockResponseProcess implements Runnable { + + @Override + public void run() { + long duration = ThreadLocalRandom.current().nextInt(5000); + while (duration > 0) { + try { + Thread.sleep(100); + } catch (InterruptedException ignore) { + } + duration -= 100; + } + complete_counter.incrementAndGet(); + } + } + + +} diff --git a/test/core/TestTsdbQuery.java b/test/core/TestTsdbQuery.java index e564e62560..888e184873 100644 --- a/test/core/TestTsdbQuery.java +++ b/test/core/TestTsdbQuery.java @@ -27,6 +27,8 @@ import net.opentsdb.uid.NoSuchUniqueName; import net.opentsdb.utils.DateTime; +import org.jboss.netty.util.internal.ThreadLocalRandom; +import org.junit.Assert; import org.junit.Before; import org.junit.Test; import org.junit.runner.RunWith; @@ -102,6 +104,24 @@ public void setEndTime() throws Exception { assertEquals(1356998400L, query.getEndTime()); } + @Test + public void getScanEndTimeSeconds() { + long now = System.currentTimeMillis() / 1000; + long baseTime = now - (now % Const.MAX_TIMESPAN); + long expectedEndScanTime = baseTime + Const.MAX_TIMESPAN; + + for (int i = 0; i < 3600; i++) { + long sec = baseTime + i; + long ms = sec * 1000 + ThreadLocalRandom.current().nextInt(1000); + query.setEndTime(sec); + Assert.assertEquals("EndTime=" + sec, expectedEndScanTime, + query.getScanEndTimeSeconds()); + query.setEndTime(ms); + Assert.assertEquals("EndTime=" + ms, expectedEndScanTime, + query.getScanEndTimeSeconds()); + } + } + @Test (expected = IllegalStateException.class) public void getStartTimeNotSet() throws Exception { query.getStartTime(); diff --git a/test/query/expression/TestFirstDifference.java b/test/query/expression/TestFirstDifference.java new file mode 100644 index 0000000000..031f0e5e34 --- /dev/null +++ b/test/query/expression/TestFirstDifference.java @@ -0,0 +1,348 @@ +// This file is part of OpenTSDB. +// Copyright (C) 2015 The OpenTSDB Authors. +// +// This program is free software: you can redistribute it and/or modify it +// under the terms of the GNU Lesser General Public License as published by +// the Free Software Foundation, either version 2.1 of the License, or (at your +// option) any later version. This program is distributed in the hope that it +// will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty +// of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU Lesser +// General Public License for more details. You should have received a copy +// of the GNU Lesser General Public License along with this program. If not, +// see <http://www.gnu.org/licenses/>.
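+// Exercises firstDiff over long and double series, with positive and negative slopes, plus the null and empty argument edge cases.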
+package net.opentsdb.query.expression; + +import static org.junit.Assert.assertEquals; +import static org.junit.Assert.assertFalse; +import static org.junit.Assert.assertNotNull; +import static org.junit.Assert.assertTrue; +import static org.junit.Assert.fail; +import static org.mockito.Mockito.mock; +import static org.mockito.Mockito.when; + +import java.util.ArrayList; +import java.util.Collections; +import java.util.List; + +import net.opentsdb.core.DataPoint; +import net.opentsdb.core.DataPoints; +import net.opentsdb.core.SeekableView; +import net.opentsdb.core.SeekableViewsForTest; +import net.opentsdb.core.TSQuery; + +import org.junit.Before; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.powermock.api.mockito.PowerMockito; +import org.powermock.core.classloader.annotations.PowerMockIgnore; +import org.powermock.core.classloader.annotations.PrepareForTest; +import org.powermock.modules.junit4.PowerMockRunner; + +import com.stumbleupon.async.Deferred; + +@RunWith(PowerMockRunner.class) +@PowerMockIgnore({"javax.management.*", "javax.xml.*", + "ch.qos.*", "org.slf4j.*", + "com.sun.*", "org.xml.*"}) +@PrepareForTest({TSQuery.class}) +public class TestFirstDifference { + + private static long START_TIME = 1356998400000L; + private static int INTERVAL = 60000; + private static int NUM_POINTS = 5; + private static String METRIC = "sys.cpu"; + + private TSQuery data_query; + private SeekableView view; + private DataPoints dps; + private DataPoints[] group_bys; + private List<DataPoints[]> query_results; + private List<String> params; + private net.opentsdb.query.expression.FirstDifference func; + + @Before + public void before() throws Exception { + view = SeekableViewsForTest.generator(START_TIME, INTERVAL, + NUM_POINTS, true, 1, 1); + data_query = mock(TSQuery.class); + when(data_query.startTime()).thenReturn(START_TIME); + when(data_query.endTime()).thenReturn(START_TIME + (INTERVAL * NUM_POINTS)); + + dps = PowerMockito.mock(DataPoints.class); + when(dps.iterator()).thenReturn(view); + when(dps.metricNameAsync()).thenReturn(Deferred.fromResult(METRIC)); + + group_bys = new DataPoints[]{dps}; + + query_results = new ArrayList<DataPoints[]>(1); + query_results.add(group_bys); + + params = new ArrayList<String>(1); + func = new net.opentsdb.query.expression.FirstDifference(); + } + + @Test + public void evaluatePositiveGroupByLong() throws Exception { + SeekableView view2 = SeekableViewsForTest.generator(START_TIME, INTERVAL, + NUM_POINTS, true, 10, 1); + DataPoints dps2 = PowerMockito.mock(DataPoints.class); + when(dps2.iterator()).thenReturn(view2); + when(dps2.metricNameAsync()).thenReturn(Deferred.fromResult("sys.mem")); + group_bys = new DataPoints[]{dps, dps2}; + query_results.clear(); + query_results.add(group_bys); + + final DataPoints[] results = func.evaluate(data_query, query_results, params); + + assertEquals(2, results.length); + assertEquals(METRIC, results[0].metricName()); + assertEquals("sys.mem", results[1].metricName()); + + long ts = START_TIME; + long v = 0; + for (DataPoint dp : results[0]) { + assertEquals(ts, dp.timestamp()); + assertFalse(dp.isInteger()); + assertEquals(v, dp.doubleValue(), 0.001); + ts += INTERVAL; + v = 1; + } + ts = START_TIME; + v = 0; + for (DataPoint dp : results[1]) { + assertEquals(ts, dp.timestamp()); + assertFalse(dp.isInteger()); + assertEquals(v, dp.doubleValue(), 0.001); + ts += INTERVAL; + v = 1; + } + } + + @Test + public void evaluatePositiveGroupByDouble() throws Exception { + SeekableView view2 = SeekableViewsForTest.generator(START_TIME, INTERVAL, +
NUM_POINTS, false, 10, 1); + DataPoints dps2 = PowerMockito.mock(DataPoints.class); + when(dps2.iterator()).thenReturn(view2); + when(dps2.metricNameAsync()).thenReturn(Deferred.fromResult("sys.mem")); + group_bys = new DataPoints[]{dps, dps2}; + query_results.clear(); + query_results.add(group_bys); + + final DataPoints[] results = func.evaluate(data_query, query_results, params); + + assertEquals(2, results.length); + assertEquals(METRIC, results[0].metricName()); + assertEquals("sys.mem", results[1].metricName()); + + long ts = START_TIME; + double v = 0; + for (DataPoint dp : results[0]) { + assertEquals(ts, dp.timestamp()); + assertFalse(dp.isInteger()); + assertEquals(v, dp.doubleValue(), 0.001); + ts += INTERVAL; + v = 1; + } + ts = START_TIME; + v = 0; + for (DataPoint dp : results[1]) { + assertEquals(ts, dp.timestamp()); + assertFalse(dp.isInteger()); + assertEquals(v, dp.doubleValue(), 0.001); + ts += INTERVAL; + v = 1; + } + } + + @Test + public void evaluatePositiveGroupBy1point5Double() throws Exception { + SeekableView view2 = SeekableViewsForTest.generator(START_TIME, INTERVAL, + NUM_POINTS, false, 10, 1.5); + DataPoints dps2 = PowerMockito.mock(DataPoints.class); + when(dps2.iterator()).thenReturn(view2); + when(dps2.metricNameAsync()).thenReturn(Deferred.fromResult("sys.mem")); + group_bys = new DataPoints[]{dps, dps2}; + query_results.clear(); + query_results.add(group_bys); + + final DataPoints[] results = func.evaluate(data_query, query_results, params); + + assertEquals(2, results.length); + assertEquals(METRIC, results[0].metricName()); + assertEquals("sys.mem", results[1].metricName()); + + long ts = START_TIME; + double v = 0; + for (DataPoint dp : results[0]) { + assertEquals(ts, dp.timestamp()); + assertFalse(dp.isInteger()); + assertEquals(v, dp.doubleValue(), 0.001); + ts += INTERVAL; + v = 1; + } + ts = START_TIME; + v = 0; + for (DataPoint dp : results[1]) { + assertEquals(ts, dp.timestamp()); + assertFalse(dp.isInteger()); + assertEquals(v, dp.doubleValue(), 0.001); + ts += INTERVAL; + v = 1.5; + } + } + + @Test + public void evaluateNegativeGroupByLong() throws Exception { + SeekableView view2 = SeekableViewsForTest.generator(START_TIME, INTERVAL, + NUM_POINTS, true, -10, -1); + DataPoints dps2 = PowerMockito.mock(DataPoints.class); + when(dps2.iterator()).thenReturn(view2); + when(dps2.metricNameAsync()).thenReturn(Deferred.fromResult("sys.mem")); + group_bys = new DataPoints[]{dps, dps2}; + query_results.clear(); + query_results.add(group_bys); + + final DataPoints[] results = func.evaluate(data_query, query_results, params); + + assertEquals(2, results.length); + assertEquals(METRIC, results[0].metricName()); + assertEquals("sys.mem", results[1].metricName()); + + long ts = START_TIME; + long v = 0; + for (DataPoint dp : results[0]) { + assertEquals(ts, dp.timestamp()); + assertFalse(dp.isInteger()); + assertEquals(v, dp.doubleValue(), 0.001); + ts += INTERVAL; + v = 1; + } + ts = START_TIME; + v = 0; + for (DataPoint dp : results[1]) { + assertEquals(ts, dp.timestamp()); + assertFalse(dp.isInteger()); + assertEquals(v, dp.doubleValue(), 0.001); + ts += INTERVAL; + v = -1; + } + } + + @Test + public void evaluateNegativeGroupByDouble() throws Exception { + SeekableView view2 = SeekableViewsForTest.generator(START_TIME, INTERVAL, + NUM_POINTS, false, -10, -1); + DataPoints dps2 = PowerMockito.mock(DataPoints.class); + when(dps2.iterator()).thenReturn(view2); + when(dps2.metricNameAsync()).thenReturn(Deferred.fromResult("sys.mem")); + group_bys = new
DataPoints[]{dps, dps2}; + query_results.clear(); + query_results.add(group_bys); + + final DataPoints[] results = func.evaluate(data_query, query_results, params); + + assertEquals(2, results.length); + assertEquals(METRIC, results[0].metricName()); + assertEquals("sys.mem", results[1].metricName()); + + long ts = START_TIME; + double v = 0; + for (DataPoint dp : results[0]) { + assertEquals(ts, dp.timestamp()); + assertFalse(dp.isInteger()); + assertEquals(v, dp.doubleValue(), 0.001); + ts += INTERVAL; + v = 1; + } + ts = START_TIME; + v = 0; + for (DataPoint dp : results[1]) { + assertEquals(ts, dp.timestamp()); + assertFalse(dp.isInteger()); + assertEquals(v, dp.doubleValue(), 0.001); + ts += INTERVAL; + v = -1; + } + } + + @Test + public void evaluateNegativeSubQuerySeries() throws Exception { + params.add("1"); + SeekableView view2 = SeekableViewsForTest.generator(START_TIME, INTERVAL, + NUM_POINTS, true, -10, -1); + DataPoints dps2 = PowerMockito.mock(DataPoints.class); + when(dps2.iterator()).thenReturn(view2); + when(dps2.metricNameAsync()).thenReturn(Deferred.fromResult("sys.mem")); + group_bys = new DataPoints[]{dps, dps2}; + query_results.clear(); + query_results.add(group_bys); + + final DataPoints[] results = func.evaluate(data_query, query_results, params); + + assertEquals(2, results.length); + assertEquals(METRIC, results[0].metricName()); + assertEquals("sys.mem", results[1].metricName()); + + long ts = START_TIME; + long v = 0; + for (DataPoint dp : results[0]) { + assertEquals(ts, dp.timestamp()); + assertFalse(dp.isInteger()); + assertEquals(v, dp.doubleValue(), 0.001); + ts += INTERVAL; + v = 1; + } + ts = START_TIME; + v = 0; + for (DataPoint dp : results[1]) { + assertEquals(ts, dp.timestamp()); + assertFalse(dp.isInteger()); + assertEquals(v, dp.doubleValue(), 0.001); + ts += INTERVAL; + v = -1; + } + } + + @Test(expected = IllegalArgumentException.class) + public void evaluateNullQuery() throws Exception { + params.add("1"); + func.evaluate(null, query_results, params); + } + + @Test + public void evaluateNullResults() throws Exception { + params.add("1"); + final DataPoints[] results = func.evaluate(data_query, null, params); + assertEquals(0, results.length); + } + + @Test + public void evaluateNullParams() throws Exception { + assertNotNull(func.evaluate(data_query, query_results, null)); + } + + @Test + public void evaluateEmptyResults() throws Exception { + params.add("1"); + final DataPoints[] results = func.evaluate(data_query, + Collections.<DataPoints[]>emptyList(), params); + assertEquals(0, results.length); + } + + @Test + public void evaluateEmptyParams() throws Exception { + assertNotNull(func.evaluate(data_query, query_results, params)); + } + + @Test + public void writeStringField() throws Exception { + params.add("1"); + assertEquals("firstDiff(inner_expression)", + func.writeStringField(params, "inner_expression")); + assertEquals("firstDiff(null)", func.writeStringField(params, null)); + assertEquals("firstDiff()", func.writeStringField(params, "")); + assertEquals("firstDiff(inner_expression)", + func.writeStringField(null, "inner_expression")); + } +} diff --git a/test/tsd/TestUniqueIdRpc.java b/test/tsd/TestUniqueIdRpc.java index 6969b8f565..b90e1e6ceb 100644 --- a/test/tsd/TestUniqueIdRpc.java +++ b/test/tsd/TestUniqueIdRpc.java @@ -64,7 +64,7 @@ public final class TestUniqueIdRpc { private UniqueId tag_names = mock(UniqueId.class); private UniqueId tag_values = mock(UniqueId.class); private MockBase storage; - private UniqueIdRpc rpc = new UniqueIdRpc(); +
private UniqueIdRpc rpc = new UniqueIdRpc(TSDB.OperationMode.READWRITE); @Before public void before() throws Exception { @@ -89,6 +89,15 @@ public void notImplemented() throws Exception { } // Test /api/uid/assign ---------------------- + + @Test (expected = BadRequestException.class) + public void assignReadOnlyMode() throws Exception { + setupAssign(); + rpc = new UniqueIdRpc(TSDB.OperationMode.READONLY); + HttpQuery query = NettyMocks.getQuery(tsdb, + "/api/uid/assign?metric=sys.cpu.0"); + this.rpc.execute(tsdb, query); + } @Test public void assignQsMetricSingle() throws Exception { @@ -540,6 +549,14 @@ public void renameBadMethod() throws Exception { rpc.execute(tsdb, query); } + @Test (expected = BadRequestException.class) + public void renameReadOnlyMode() throws Exception { + rpc = new UniqueIdRpc(TSDB.OperationMode.READONLY); + HttpQuery query = NettyMocks.postQuery(tsdb, "/api/uid/rename", + "{\"metric\":\"sys.cpu.1\",\"name\":\"sys.cpu.2\"}"); + this.rpc.execute(tsdb, query); + } + @Test public void renamePostMetric() throws Exception { HttpQuery query = NettyMocks.postQuery(tsdb, "/api/uid/rename", @@ -686,6 +703,15 @@ public void renameRenameException() throws Exception { } // Test /api/uid/uidmeta -------------------- + + @Test (expected = BadRequestException.class) + public void uidGetWriteOnlyMode() throws Exception { + rpc = new UniqueIdRpc(TSDB.OperationMode.WRITEONLY); + setupUID(); + HttpQuery query = NettyMocks.getQuery(tsdb, + "/api/uid/uidmeta?type=metric&uid=000001"); + rpc.execute(tsdb, query); + } @Test public void uidGet() throws Exception { @@ -719,6 +745,15 @@ public void uidGetNSU() throws Exception { "/api/uid/uidmeta?type=metric&uid=000002"); rpc.execute(tsdb, query); } + + @Test (expected = BadRequestException.class) + public void uidPostReadOnlyMode() throws Exception { + rpc = new UniqueIdRpc(TSDB.OperationMode.READONLY); + setupUID(); + HttpQuery query = NettyMocks.postQuery(tsdb, "/api/uid/uidmeta", + "{\"uid\":\"000001\",\"type\":\"metric\",\"displayName\":\"Hello!\"}"); + rpc.execute(tsdb, query); + } @Test public void uidPost() throws Exception { @@ -770,6 +805,15 @@ public void uidPostQS() throws Exception { rpc.execute(tsdb, query); assertEquals(HttpResponseStatus.OK, query.response().getStatus()); } + + @Test (expected = BadRequestException.class) + public void uidPutReadOnlyMode() throws Exception { + rpc = new UniqueIdRpc(TSDB.OperationMode.READONLY); + setupUID(); + HttpQuery query = NettyMocks.putQuery(tsdb, "/api/uid/uidmeta", + "{\"uid\":\"000001\",\"type\":\"metric\",\"displayName\":\"Hello!\"}"); + rpc.execute(tsdb, query); + } @Test public void uidPut() throws Exception { @@ -821,6 +865,15 @@ public void uidPutQS() throws Exception { rpc.execute(tsdb, query); assertEquals(HttpResponseStatus.OK, query.response().getStatus()); } + + @Test (expected = BadRequestException.class) + public void uidDeleteReadOnlyMode() throws Exception { + rpc = new UniqueIdRpc(TSDB.OperationMode.READONLY); + setupUID(); + HttpQuery query = NettyMocks.deleteQuery(tsdb, "/api/uid/uidmeta", + "{\"uid\":\"000001\",\"type\":\"metric\",\"displayName\":\"Hello!\"}"); + rpc.execute(tsdb, query); + } @Test public void uidDelete() throws Exception { @@ -857,6 +910,15 @@ public void uidDeleteQS() throws Exception { } // Test /api/uid/tsmeta ---------------------- + + @Test (expected = BadRequestException.class) + public void tsuidGetWriteOnlyMode() throws Exception { + rpc = new UniqueIdRpc(TSDB.OperationMode.WRITEONLY); + setupTSUID(); + HttpQuery query =
NettyMocks.getQuery(tsdb, + "/api/uid/tsmeta?tsuid=000001000001000001"); + rpc.execute(tsdb, query); + } @Test public void tsuidGet() throws Exception { @@ -943,6 +1005,15 @@ public void tsuidGetMissingTSUID() throws Exception { rpc.execute(tsdb, query); } + @Test (expected = BadRequestException.class) + public void tsuidPostReadOnlyMode() throws Exception { + rpc = new UniqueIdRpc(TSDB.OperationMode.READONLY); + setupTSUID(); + HttpQuery query = NettyMocks.postQuery(tsdb, "/api/uid/tsmeta", + "{\"tsuid\":\"000001000001000001\", \"displayName\":\"Hello World\"}"); + rpc.execute(tsdb, query); + } + @Test public void tsuidPost() throws Exception { setupTSUID(); @@ -989,6 +1060,15 @@ public void tsuidPostQSNoTSUID() throws Exception { "/api/uid/tsmeta?display_name=42&method_override=post"); rpc.execute(tsdb, query); } + + @Test (expected = BadRequestException.class) + public void tsuidPutReadOnlyMode() throws Exception { + rpc = new UniqueIdRpc(TSDB.OperationMode.READONLY); + setupTSUID(); + HttpQuery query = NettyMocks.putQuery(tsdb, "/api/uid/tsmeta", + "{\"tsuid\":\"000001000001000001\", \"displayName\":\"Hello World\"}"); + rpc.execute(tsdb, query); + } @Test public void tsuidPut() throws Exception { @@ -1036,6 +1116,15 @@ public void tsuidPutQSNoTSUID() throws Exception { "/api/uid/tsmeta?display_name=42&method_override=put"); rpc.execute(tsdb, query); } + + @Test (expected = BadRequestException.class) + public void tsuidDeleteReadOnlyMode() throws Exception { + rpc = new UniqueIdRpc(TSDB.OperationMode.READONLY); + setupTSUID(); + HttpQuery query = NettyMocks.deleteQuery(tsdb, "/api/uid/tsmeta", + "{\"tsuid\":\"000001000001000001\", \"displayName\":\"Hello World\"}"); + rpc.execute(tsdb, query); + } @Test public void tsuidDelete() throws Exception { diff --git a/third_party/apache/commons-math3-3.4.1.jar.md5 b/third_party/apache/commons-math3-3.4.1.jar.md5 index 9939ae9c15..a26a157f84 100644 --- a/third_party/apache/commons-math3-3.4.1.jar.md5 +++ b/third_party/apache/commons-math3-3.4.1.jar.md5 @@ -1 +1 @@ -14a218d0ee57907dd2c7ef944b6c0afd +14a218d0ee57907dd2c7ef944b6c0afd \ No newline at end of file diff --git a/third_party/jackson/include.mk b/third_party/jackson/include.mk index 7a0a904ce5..06a62b68e1 100644 --- a/third_party/jackson/include.mk +++ b/third_party/jackson/include.mk @@ -13,7 +13,7 @@ # You should have received a copy of the GNU Lesser General Public License # along with this library. If not, see <http://www.gnu.org/licenses/>.
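+# 2.9.10 picks up the jackson-databind fix for the deserialization RCE tracked upstream as issue 2295.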
-JACKSON_VERSION := 2.9.5 +JACKSON_VERSION := 2.9.10 JACKSON_ANNOTATIONS_VERSION = $(JACKSON_VERSION) JACKSON_ANNOTATIONS := third_party/jackson/jackson-annotations-$(JACKSON_ANNOTATIONS_VERSION).jar diff --git a/third_party/jackson/jackson-annotations-2.9.10.jar.md5 b/third_party/jackson/jackson-annotations-2.9.10.jar.md5 new file mode 100644 index 0000000000..78c26afb02 --- /dev/null +++ b/third_party/jackson/jackson-annotations-2.9.10.jar.md5 @@ -0,0 +1 @@ +26c2b6f7bc704ccadc64c83995e0ff7f \ No newline at end of file diff --git a/third_party/jackson/jackson-core-2.9.10.jar.md5 b/third_party/jackson/jackson-core-2.9.10.jar.md5 new file mode 100644 index 0000000000..89a33946ea --- /dev/null +++ b/third_party/jackson/jackson-core-2.9.10.jar.md5 @@ -0,0 +1 @@ +d62d9b1d1d83dd553e678bc8fce8f809 \ No newline at end of file diff --git a/third_party/jackson/jackson-databind-2.9.10.jar.md5 b/third_party/jackson/jackson-databind-2.9.10.jar.md5 new file mode 100644 index 0000000000..a8536777d9 --- /dev/null +++ b/third_party/jackson/jackson-databind-2.9.10.jar.md5 @@ -0,0 +1 @@ +ff43d79c624b0f7d465542fee6648474 \ No newline at end of file diff --git a/third_party/kryo/asm-4.0.jar.md5.1 b/third_party/kryo/asm-4.0.jar.md5.1 new file mode 100644 index 0000000000..7a92e07c69 --- /dev/null +++ b/third_party/kryo/asm-4.0.jar.md5.1 @@ -0,0 +1 @@ +322d8f88c5111af612df838c0191cd7e \ No newline at end of file diff --git a/third_party/kryo/include.mk b/third_party/kryo/include.mk index 9b5a1ca9fb..72694248f1 100644 --- a/third_party/kryo/include.mk +++ b/third_party/kryo/include.mk @@ -13,16 +13,16 @@ # You should have received a copy of the GNU Lesser General Public License # along with this library. If not, see <http://www.gnu.org/licenses/>. -KRYO_VERSION := 2.21.1 +KRYO_VERSION := 3.0.0 KRYO := third_party/kryo/kryo-$(KRYO_VERSION).jar -KRYO_BASE_URL := https://repo1.maven.org/maven2/com/esotericsoftware/kryo/kryo/$(KRYO_VERSION) +KRYO_BASE_URL := https://repo1.maven.org/maven2/com/esotericsoftware/kryo/$(KRYO_VERSION) $(KRYO): $(KRYO).md5 set dummy "$(KRYO_BASE_URL)" "$(KRYO)"; shift; $(FETCH_DEPENDENCY) -REFLECTASM_VERSION := 1.07 +REFLECTASM_VERSION := 1.10.0 REFLECTASM := third_party/kryo/reflectasm-$(REFLECTASM_VERSION)-shaded.jar -REFLECTASM_BASE_URL := https://repo1.maven.org/maven2/com/esotericsoftware/reflectasm/reflectasm/$(REFLECTASM_VERSION) +REFLECTASM_BASE_URL := https://repo1.maven.org/maven2/com/esotericsoftware/reflectasm/$(REFLECTASM_VERSION) $(REFLECTASM): $(REFLECTASM).md5 set dummy "$(REFLECTASM_BASE_URL)" "$(REFLECTASM)"; shift; $(FETCH_DEPENDENCY) @@ -34,9 +34,9 @@ ASM_BASE_URL := https://repo1.maven.org/maven2/org/ow2/asm/asm/$(ASM_VERSION) $(ASM): $(ASM).md5 set dummy "$(ASM_BASE_URL)" "$(ASM)"; shift; $(FETCH_DEPENDENCY) -MINLOG_VERSION := 1.2 +MINLOG_VERSION := 1.3 MINLOG := third_party/kryo/minlog-$(MINLOG_VERSION).jar -MINLOG_BASE_URL := https://repo1.maven.org/maven2/com/esotericsoftware/minlog/minlog/$(MINLOG_VERSION) +MINLOG_BASE_URL := https://repo1.maven.org/maven2/com/esotericsoftware/minlog/$(MINLOG_VERSION) $(MINLOG): $(MINLOG).md5 set dummy "$(MINLOG_BASE_URL)" "$(MINLOG)"; shift; $(FETCH_DEPENDENCY) diff --git a/third_party/kryo/kryo-3.0.0.jar.md5 b/third_party/kryo/kryo-3.0.0.jar.md5 new file mode 100644 index 0000000000..28ce336010 --- /dev/null +++ b/third_party/kryo/kryo-3.0.0.jar.md5 @@ -0,0 +1 @@ +720adc0fa9b1ebfa789c6ceda3ffa990 \ No newline at end of file diff --git a/third_party/kryo/kryo-4.0.0.jar.md5 b/third_party/kryo/kryo-4.0.0.jar.md5 new file mode 100644 index 0000000000..ba8eac67ec
--- /dev/null +++ b/third_party/kryo/kryo-4.0.0.jar.md5 @@ -0,0 +1 @@ +e817940f2e49280c3e5ad063f38e7884 \ No newline at end of file diff --git a/third_party/kryo/minlog-1.3.jar.md5 b/third_party/kryo/minlog-1.3.jar.md5 new file mode 100644 index 0000000000..da08b47e84 --- /dev/null +++ b/third_party/kryo/minlog-1.3.jar.md5 @@ -0,0 +1 @@ +b4e9b84eaea9750fe58ac3e196c7ed9b \ No newline at end of file diff --git a/third_party/kryo/reflectasm-1.10.0-shaded.jar.md5 b/third_party/kryo/reflectasm-1.10.0-shaded.jar.md5 new file mode 100644 index 0000000000..9f1c813651 --- /dev/null +++ b/third_party/kryo/reflectasm-1.10.0-shaded.jar.md5 @@ -0,0 +1 @@ +779472dd799c5e9b1469e14b13c73061 \ No newline at end of file diff --git a/tools/docker/docker.sh b/tools/docker/docker.sh deleted file mode 100755 index 17686a0e61..0000000000 --- a/tools/docker/docker.sh +++ /dev/null @@ -1,16 +0,0 @@ -#!/bin/bash -x -BUILDROOT=./build; -TOOLS=./tools -DOCKER=$BUILDROOT/docker; -rm -r $DOCKER; -mkdir -p $DOCKER; -SOURCE_PATH=$BUILDROOT; -DEST_PATH=$DOCKER/libs; -mkdir -p $DEST_PATH; -cp ${TOOLS}/docker/Dockerfile ${DOCKER}; -cp ${BUILDROOT}/../src/opentsdb.conf ${DOCKER}; -cp ${BUILDROOT}/../src/logback.xml ${DOCKER}; -#cp ${BUILDROOT}/../src/mygnuplot.sh ${DOCKER}; -cp ${SOURCE_PATH}/tsdb-2.3.0-RC1.jar ${DOCKER}; -cp ${SOURCE_PATH}/third_party/*/*.jar ${DEST_PATH}; -docker build -t opentsdb/opentsdb $DOCKER
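For reference, the transform firstDiff applies is a plain first difference over each series' values. A minimal standalone sketch of the same arithmetic, using bare double arrays in place of OpenTSDB's DataPoints (the class and method names below are illustrative only, not part of this patch):

import java.util.Arrays;

public final class FirstDiffSketch {

  // Mirrors firstDiff(): out[0] is pinned to 0 and out[t] = in[t] - in[t-1].
  static double[] firstDiff(final double[] in) {
    final double[] out = new double[in.length];
    for (int t = 1; t < in.length; t++) {
      out[t] = in[t] - in[t - 1];
    }
    return out;
  }

  public static void main(final String[] args) {
    // Prints [0.0, 2.0, -3.0, 0.0]
    System.out.println(Arrays.toString(firstDiff(new double[]{5, 7, 4, 4})));
  }
}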