How to import data from a database into Elasticsearch 2.3.4 using elasticsearch-jdbc 2.3.4?
I want to import data from the database into Elasticsearch. Can you help me out?
I followed the instructions, but I ran into an issue.
My config file is:
oracle-connection-properties.sh
#!/bin/sh
# This example is a template to connect to Oracle.
# The JDBC URL and SQL must be replaced by working ones.
DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )"
bin=${DIR}/../bin
lib=${DIR}/../lib
echo '
{
    "type" : "jdbc",
    "jdbc" : {
        "url" : "jdbc:oracle:thin:@//ec2----.eu-west-1.compute.amazonaws.com:1521/XE",
        "connection_properties" : {
            "oracle.jdbc.TcpNoDelay" : false,
            "useFetchSizeWithLongColumn" : false,
            "oracle.net.CONNECT_TIMEOUT" : 10000,
            "oracle.jdbc.ReadTimeout" : 50000
        },
        "user" : "pin",
        "password" : "****",
        "sql" : "select poid_id0,name,descr from product_t"
    }
}
' | java \
    -cp "${lib}/*" \
    -Dlog4j.configurationFile=${bin}/log4j2.xml \
    org.xbib.tools.Runner \
    org.xbib.tools.JDBCImporter
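As a side note, the log output below reports settings (index=catalogue_data2, type=catalogue_info2, elasticsearch.host, elasticsearch.port) that do not appear in the snippet above, so the pasted JSON looks incomplete. Here is a minimal sketch of what a fuller script might look like, assuming the layout documented in the elasticsearch-jdbc README and the default Elasticsearch 2.x transport port 9300 (the importer connects with a transport client, which talks to the transport port rather than the HTTP port 9200):

#!/bin/sh
# Hypothetical, fuller variant of the script above. The "index"/"type" values are copied from
# the importer log below; the nested "elasticsearch" block and port 9300 (the ES 2.x transport
# port) are assumptions based on the elasticsearch-jdbc documentation, not part of the original post.
DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )"
bin=${DIR}/../bin
lib=${DIR}/../lib
echo '
{
    "type" : "jdbc",
    "jdbc" : {
        "url" : "jdbc:oracle:thin:@//ec2----.eu-west-1.compute.amazonaws.com:1521/XE",
        "user" : "pin",
        "password" : "****",
        "sql" : "select poid_id0,name,descr from product_t",
        "index" : "catalogue_data2",
        "type" : "catalogue_info2",
        "elasticsearch" : {
            "cluster" : "elasticsearch",
            "host" : "localhost",
            "port" : 9300
        }
    }
}
' | java \
    -cp "${lib}/*" \
    -Dlog4j.configurationFile=${bin}/log4j2.xml \
    org.xbib.tools.Runner \
    org.xbib.tools.JDBCImporter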
Error details:
[09:42:31,311][INFO ][importer.jdbc ][pool-2-thread-1] strategy standard: settings = {connection_properties.oracle.jdbc.ReadTimeout=50000, connection_properties.oracle.jdbc.TcpNoDelay=false, connection_properties.oracle.net.CONNECT_TIMEOUT=10000, connection_properties.useFetchSizeWithLongColumn=false, elasticsearch.cluster=elasticsearch, elasticsearch.host=localhost, elasticsearch.port=9200, index=catalogue_data2, index_settings.index.number_of_replica=0, index_settings.index.number_of_shards=1, max_bulk_actions=20000, max_concurrent_bulk_requests=10, password=, sql=select poid_id0,name,descr from product_t, type=catalogue_info2, url=jdbc:oracle:thin:@//ec2----**.eu-west-1.compute.amazonaws.com:1521/XE, user=pin}, context = org.xbib.elasticsearch.jdbc.strategy.standard.StandardContext@3090a924
[09:42:31,325][INFO ][importer.jdbc.context.standard][pool-2-thread-1] found sink class org.xbib.elasticsearch.jdbc.strategy.standard.StandardSink@34fc0dbd
[09:42:31,330][INFO ][importer.jdbc.context.standard][pool-2-thread-1] found source class org.xbib.elasticsearch.jdbc.strategy.standard.StandardSource@4a8523ef
[09:42:31,361][INFO ][org.xbib.elasticsearch.helper.client.BaseTransportClient][pool-2-thread-1] creating transport client on Linux OpenJDK 64-Bit Server VM Oracle Corporation 1.8.0_131-b11 25.131-b11 with effective settings {autodiscover=false, client.transport.ignore_cluster_name=false, client.transport.nodes_sampler_interval=5s, client.transport.ping_timeout=5s, cluster.name=elasticsearch, flush_interval=5s, host.0=localhost, max_actions_per_request=20000, max_concurrent_requests=10, max_volume_per_request=10mb, name=importer, port=9200, sniff=false}
[09:42:31,380][INFO ][org.elasticsearch.plugins][pool-2-thread-1] [importer] modules [], plugins [helper], sites []
[09:42:31,773][INFO ][org.xbib.elasticsearch.helper.client.BaseTransportClient][pool-2-thread-1] trying to connect to [localhost/127.0.0.1:9200]
[09:42:36,868][INFO ][org.elasticsearch.org.xbib.elasticsearch.helper.client.TransportClient][pool-2-thread-1] [importer] failed to get node info for {#transport#-1}{127.0.0.1}{localhost/127.0.0.1:9200}, disconnecting...
org.elasticsearch.transport.ReceiveTimeoutTransportException: [][localhost/127.0.0.1:9200][cluster:monitor/nodes/liveness] request_id [0] timed out after [5001ms]
at org.elasticsearch.transport.TransportService$TimeoutHandler.run(TransportService.java:679) ~[elasticsearch-2.3.4.jar:2.3.4]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [?:1.8.0_131]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [?:1.8.0_131]
at java.lang.Thread.run(Thread.java:748) [?:1.8.0_131]
[09:42:36,874][ERROR][importer.jdbc ][pool-2-thread-1] error while processing request: no cluster nodes available, check settings {autodiscover=false, client.transport.ignore_cluster_name=false, client.transport.nodes_sampler_interval=5s, client.transport.ping_timeout=5s, cluster.name=elasticsearch, flush_interval=5s, host.0=localhost, max_actions_per_request=20000, max_concurrent_requests=10, max_volume_per_request=10mb, name=importer, port=9200, sniff=false}
org.elasticsearch.client.transport.NoNodeAvailableException: no cluster nodes available, check settings {autodiscover=false, client.transport.ignore_cluster_name=false, client.transport.nodes_sampler_interval=5s, client.transport.ping_timeout=5s, cluster.name=elasticsearch, flush_interval=5s, host.0=localhost, max_actions_per_request=20000, max_concurrent_requests=10, max_volume_per_request=10mb, name=importer, port=9200, sniff=false}
at org.xbib.elasticsearch.helper.client.BulkTransportClient.init(BulkTransportClient.java:164) ~[elasticsearch-helper-2.3.4.0.jar:?]
at org.xbib.elasticsearch.helper.client.ClientBuilder.toBulkTransportClient(ClientBuilder.java:113) ~[elasticsearch-helper-2.3.4.0.jar:?]
at org.xbib.elasticsearch.jdbc.strategy.standard.StandardSink.createClient(StandardSink.java:348) ~[elasticsearch-jdbc-2.3.4.1.jar:?]
at org.xbib.elasticsearch.jdbc.strategy.standard.StandardSink.beforeFetch(StandardSink.java:100) ~[elasticsearch-jdbc-2.3.4.1.jar:?]
at org.xbib.elasticsearch.jdbc.strategy.standard.StandardContext.beforeFetch(StandardContext.java:183) ~[elasticsearch-jdbc-2.3.4.1.jar:?]
at org.xbib.elasticsearch.jdbc.strategy.standard.StandardContext.execute(StandardContext.java:164) ~[elasticsearch-jdbc-2.3.4.1.jar:?]
at org.xbib.tools.JDBCImporter.process(JDBCImporter.java:203) ~[elasticsearch-jdbc-2.3.4.1.jar:?]
at org.xbib.tools.JDBCImporter.newRequest(JDBCImporter.java:189) [elasticsearch-jdbc-2.3.4.1.jar:?]
at org.xbib.tools.JDBCImporter.newRequest(JDBCImporter.java:53) [elasticsearch-jdbc-2.3.4.1.jar:?]
at org.xbib.pipeline.AbstractPipeline.call(AbstractPipeline.java:50) [elasticsearch-jdbc-2.3.4.1.jar:?]
at org.xbib.pipeline.AbstractPipeline.call(AbstractPipeline.java:16) [elasticsearch-jdbc-2.3.4.1.jar:?]
at java.util.concurrent.FutureTask.run(FutureTask.java:266) [?:1.8.0_131]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [?:1.8.0_131]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [?:1.8.0_131]
at java.lang.Thread.run(Thread.java:748) [?:1.8.0_131]
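The timeout above occurs while the importer's transport client pings localhost:9200. A couple of quick checks that may help narrow this down, assuming a local node with default ports (these commands are illustrative and not part of the original report):

# Confirm the node is up via the HTTP port; should return cluster name and version JSON
curl -s http://localhost:9200
# Confirm the transport port the importer needs is actually listening (9300 by default in ES 2.x)
netstat -tlnp | grep -E ':(9200|9300)'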