Hive service not working in presto #24180

Open
vikashmalakar275 opened this issue Dec 3, 2024 · 2 comments
@vikashmalakar275

Your Environment

  • Presto version used:
  • Storage (HDFS/S3/GCS..):
  • Data source and connector used:
  • Deployment (Cloud or On-prem):
  • Pastebin link to the complete debug logs:

Expected Behavior

Current Behavior

Possible Solution

Steps to Reproduce

Screenshots (if appropriate)

Context

@imjalpreet
Member

@vikashmalakar275 Can you please add an issue description?

@vikashmalakar275
Author

I have followed the steps below.

Working on CentOS:

docker run -it --name prestotesting -p 8047:8047 -p 8080:8080 -p 8043:8043 -p 31010:31010 -p 10000:10000 -p 9083:9083 --detach prestodb/presto:latest
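Before continuing, it can be worth checking that the container is up and that Presto itself has started (look for the "======== SERVER STARTED ========" line that also appears in the logs further below):

docker ps --filter name=prestotesting
docker logs -f prestotesting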

Log in as root:
docker exec -it -u 0 prestotesting /bin/bash

Follow all of the steps below as the root user:

yum update
yum install -y openssh
yum install -y openssh-server
yum install -y nano
yum install -y wget
yum install -y openssh-clients
yum install -y sudo
yum install -y hostname

Generate the missing host keys
Run the following command to generate the default SSH host keys:

ssh-keygen -A

This will create the required keys in /etc/ssh/.

Start the SSH service (note that -D keeps sshd in the foreground, so either run it in a separate shell or drop -D to let it daemonize):
/usr/sbin/sshd -D

Install OpenJDK 8 (the JAVA_HOME used later assumes it lives under /usr/local/openjdk-8).
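If the image does not already ship a JDK 8, something like the following might work; the yum package name java-1.8.0-openjdk-devel is an assumption, and JAVA_HOME would then typically point at /usr/lib/jvm/java-1.8.0-openjdk rather than /usr/local/openjdk-8:

yum install -y java-1.8.0-openjdk-devel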

mkdir /opt/hadoop
mkdir /opt/hive

cd /opt/hadoop
wget https://dlcdn.apache.org/hadoop/common/hadoop-3.4.0/hadoop-3.4.0.tar.gz
tar -xvf hadoop-3.4.0.tar.gz

cd /opt/hive
wget https://downloads.apache.org/hive/hive-4.0.1/apache-hive-4.0.1-bin.tar.gz
tar -xvf apache-hive-4.0.1-bin.tar.gz

export HADOOP_HOME=/opt/hadoop/hadoop-3.4.0
export JAVA_HOME=/usr/local/openjdk-8
export HIVE_HOME=/opt/hive/apache-hive-4.0.1-bin
export HDFS_NAMENODE_USER=prestouser
export HDFS_DATANODE_USER=prestouser
export HDFS_SECONDARYNAMENODE_USER=prestouser

cd $HIVE_HOME/lib
wget https://repo1.maven.org/maven2/org/apache/tez/tez-dag/0.10.4/tez-dag-0.10.4.jar

chmod -R 777 /opt

Log in as prestouser:

Follow all the steps below as prestouser.

If prestouser is not present, log in as root and create it first:
adduser prestouser
su - prestouser

Then get a shell as prestouser with:
docker exec -it -u prestouser prestotesting /bin/bash

ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
chmod 0600 ~/.ssh/authorized_keys
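Before starting HDFS, it may be worth confirming that passwordless SSH to localhost works, since start-dfs.sh relies on it (the -o option just auto-accepts the host key on first connect):

ssh -o StrictHostKeyChecking=no localhost hostname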

export HADOOP_HOME=/opt/hadoop/hadoop-3.4.0
export JAVA_HOME=/usr/local/openjdk-8
export HIVE_HOME=/opt/hive/apache-hive-4.0.1-bin
export HDFS_NAMENODE_USER=prestouser
export HDFS_DATANODE_USER=prestouser
export HDFS_SECONDARYNAMENODE_USER=prestouser
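Since these environment variables are needed by the Hadoop and Hive scripts in every new shell, one option is to persist them for prestouser (a sketch, assuming bash is the login shell):

cat >> ~/.bashrc <<'EOF'
export HADOOP_HOME=/opt/hadoop/hadoop-3.4.0
export JAVA_HOME=/usr/local/openjdk-8
export HIVE_HOME=/opt/hive/apache-hive-4.0.1-bin
export HDFS_NAMENODE_USER=prestouser
export HDFS_DATANODE_USER=prestouser
export HDFS_SECONDARYNAMENODE_USER=prestouser
EOF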

Edit $HADOOP_HOME/etc/hadoop/hadoop-env.sh and add:
export JAVA_HOME=/usr/local/openjdk-8

Edit $HIVE_HOME/conf/hive-site.xml (fs.defaultFS and the proxyuser settings are core-site.xml properties, so they also go in $HADOOP_HOME/etc/hadoop/core-site.xml) and add:

<property>
  <name>fs.defaultFS</name>
  <value>hdfs://localhost:9000</value>
</property>
<property>
  <name>hadoop.proxyuser.prestouser.hosts</name>
  <value>*</value>
  <description>The superuser can connect from any host to impersonate a user</description>
</property>
<property>
  <name>hadoop.proxyuser.prestouser.groups</name>
  <value>*</value>
  <description>Allow the superuser to impersonate any member of any group</description>
</property>

In the core-site.xml above, hadoop.proxyuser.prestouser.hosts and hadoop.proxyuser.prestouser.groups follow the format hadoop.proxyuser.<username>.hosts / hadoop.proxyuser.<username>.groups; in my case the user is "root".

Set up the schema:
$HIVE_HOME/bin/schematool -dbType derby -initSchema
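If the schema initialization succeeds, Derby should create a metastore_db directory in the current working directory (metastore_db is the Derby default database name, assuming the default connection URL), which can be confirmed with:

ls -d metastore_db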

Start the services:
$HADOOP_HOME/bin/hdfs namenode -format
$HADOOP_HOME/sbin/start-dfs.sh
$HIVE_HOME/bin/hive --service metastore & (don't run this command again if already running)
$HIVE_HOME/bin/hive --service hiveserver2 &

For debugging, the Hive logs are in /tmp/root/hive.log.
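Before pointing Presto at the metastore, the running daemons can be sanity-checked with jps, which ships with the JDK; I would expect to see NameNode, DataNode, SecondaryNameNode, and two RunJar entries (the metastore and HiveServer2):

$JAVA_HOME/bin/jps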

$HIVE_HOME/bin/beeline -n prestouser -u "jdbc:hive2://localhost:10000/default;transportMode=http;httpPath=cliservice"
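Once beeline connects, a quick smoke test of the metastore could be something like this (the table name smoke_test is just an example):

CREATE TABLE smoke_test (id INT);
SHOW TABLES;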

Add the Hive catalog to Presto:
path = /opt/presto-server/etc/catalog

Create the files hive.properties & sen.properties with:

connector.name=hive-hadoop2
hive.metastore.uri=thrift://localhost:9083
hive.allow-drop-table=true
hive.allow-rename-table=true
hive.time-zone=UTC
hive.metastore-cache-ttl=0s
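After restarting the Presto server so it picks up the new catalog, the connector can be exercised from the Presto CLI (assuming a presto-cli executable is available inside the container; otherwise the CLI jar has to be downloaded separately, and the server port is taken from the defaults above):

presto-cli --server localhost:8080 --catalog hive --schema default
SHOW SCHEMAS;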

Error (Presto server log output):

$HIVE_HOME/bin/beeline -n prestouser -u "jdbc:hive2://localhost:10000/default;transportMode=http;httpPath=cliservice"

2024-12-02 11:15:52 2024-12-02T05:45:52.015Z    INFO    main    com.facebook.presto.server.PrestoServer ======== SERVER STARTED ========
2024-12-02 11:20:04 2024-12-02T05:50:04.205Z    INFO    dispatcher-query-3      com.facebook.presto.event.QueryMonitor  TIMELINE: Query 20241202_055010_00000_57acd :: Transaction:[ad43f26d-1f8a-4517-8a26-3996961e5e17] :: elapsed 1078ms :: planning 361ms :: scheduling 556ms :: running 86ms :: finishing 75ms :: begin 2024-12-02T05:50:02.984Z :: end 2024-12-02T05:50:04.062Z
2024-12-02 11:20:42 2024-12-02T05:50:42.596Z    INFO    dispatcher-query-1      com.facebook.presto.event.QueryMonitor  TIMELINE: Query 20241202_055051_00001_57acd :: Transaction:[0ec33e43-569a-4c72-82cb-bfc8b872f749] :: elapsed 143ms :: planning 29ms :: scheduling 46ms :: running 37ms :: finishing 31ms :: begin 2024-12-02T05:50:42.418Z :: end 2024-12-02T05:50:42.561Z
2024-12-02 11:20:54 2024-12-02T05:50:54.203Z    INFO    dispatcher-query-6      com.facebook.presto.event.QueryMonitor  TIMELINE: Query 20241202_055102_00002_57acd :: Transaction:[b1e8cc2a-a700-4705-91f6-a81ec3a48a31] :: elapsed 316ms :: planning 43ms :: scheduling 145ms :: running 59ms :: finishing 69ms :: begin 2024-12-02T05:50:53.823Z :: end 2024-12-02T05:50:54.139Z
2024-12-02 11:21:25 2024-12-02T05:51:25.726Z    INFO    dispatcher-query-4      com.facebook.presto.event.QueryMonitor  TIMELINE: Query 20241202_055103_00003_57acd :: Transaction:[e0c71252-e8da-4a88-b793-77e0ebb59745] :: elapsed 32387ms :: planning 32345ms :: scheduling 42ms :: running 0ms :: finishing 42ms :: begin 2024-12-02T05:50:54.173Z :: end 2024-12-02T05:51:26.560Z
2024-12-02 11:21:25 2024-12-02T05:51:25.726Z    ERROR   dispatcher-query-4      com.facebook.presto.execution.StateMachine      Error notifying state change listener for finalQueryInfo-20241202_055103_00003_57acd
2024-12-02 11:21:25 java.lang.NullPointerException
2024-12-02 11:21:25     at com.facebook.presto.execution.SqlQueryExecution.lambda$pruneFinishedQueryInfo$7(SqlQueryExecution.java:758)
2024-12-02 11:21:25     at java.base/java.util.concurrent.atomic.AtomicReference.getAndUpdate(AtomicReference.java:187)
2024-12-02 11:21:25     at com.facebook.presto.execution.SqlQueryExecution.pruneFinishedQueryInfo(SqlQueryExecution.java:757)
2024-12-02 11:21:25     at com.facebook.presto.execution.QueryTracker.lambda$expireQuery$2(QueryTracker.java:201)
2024-12-02 11:21:25     at java.base/java.util.Optional.ifPresent(Optional.java:183)
2024-12-02 11:21:25     at com.facebook.presto.execution.QueryTracker.expireQuery(QueryTracker.java:200)
2024-12-02 11:21:25     at com.facebook.presto.execution.SqlQueryManager.lambda$createQuery$5(SqlQueryManager.java:311)
2024-12-02 11:21:25     at com.facebook.presto.execution.QueryStateMachine.lambda$addQueryInfoStateChangeListener$18(QueryStateMachine.java:985)
2024-12-02 11:21:25     at com.facebook.presto.execution.StateMachine.fireStateChangedListener(StateMachine.java:229)
2024-12-02 11:21:25     at com.facebook.presto.execution.StateMachine.lambda$fireStateChanged$0(StateMachine.java:221)
2024-12-02 11:21:25     at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
2024-12-02 11:21:25     at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
2024-12-02 11:21:25     at java.base/java.lang.Thread.run(Thread.java:829)
2024-12-02 11:21:25 
2024-12-02 11:21:25

My goal is to run both Presto and the Hive service in the same container in order to test the Presto ODBC driver, but I am not able to achieve that.

If you have steps for this setup, can you please share them?
