
SNOW-1778959: Binary value '14843418AF05FDC3C69C' is too long and would be truncated #1952

Open
amergey opened this issue Oct 31, 2024 · 3 comments
Assignees
Labels
bug status-triage Issue is under initial triage

Comments

@amergey

amergey commented Oct 31, 2024

Please answer these questions before submitting your issue.
In order to accurately debug the issue this information is required. Thanks!

  1. What version of JDBC driver are you using?
    3.19.0

  2. What operating system and processor architecture are you using?
    Ubuntu 22.04.5

  3. What version of Java are you using?
    JDK 17

  4. What did you do?

Execute an insert in a table with a JDBC prepared statement binding a byte[] value into a binary column

create or replace HYBRID TABLE DM_DATALOC_TEST_AM.TSTUS_JDBC_STEP_INST_REPO_TEST.WF_ATTACHMENT (
	ID VARCHAR(36) NOT NULL COMMENT 'VARCHAR',
	WORKFLOW_INSTANCE_ID NUMBER(38,0) COMMENT 'NUMBER',
	STEP_INSTANCE_ID NUMBER(38,0) COMMENT 'NUMBER',
	FILE_NAME VARCHAR(128) NOT NULL COMMENT 'VARCHAR',
	CONTENT BINARY(8388608) NOT NULL COMMENT 'BLOB',
	constraint PKWF_ATTACHMENT primary key (ID)
);

insert into WF_ATTACHMENT (
    ID,
    STEP_INSTANCE_ID,
    FILE_NAME,
    CONTENT
)
values (
    ?,
    ?,
    ?,
    ?
)
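The Java side of the reproduction is not shown in the report, but based on the description ("a JDBC prepared statement binding a byte[] value into a binary column") it would look roughly like the sketch below. The UUID, step-instance id, and file name are illustrative values taken from the bind payload in the logs further down; WORKFLOW_INSTANCE_ID is left unset, as in the failing insert.

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import java.util.UUID;

public class BinaryInsertRepro {
    // Same statement as in the report; WORKFLOW_INSTANCE_ID is intentionally omitted.
    static final String SQL =
        "insert into WF_ATTACHMENT (ID, STEP_INSTANCE_ID, FILE_NAME, CONTENT) "
        + "values (?, ?, ?, ?)";

    // Binds a byte[] to the BINARY(8388608) column via setObject,
    // matching the reporter's description. Parameter values are illustrative.
    static void insertAttachment(Connection conn, byte[] content) throws SQLException {
        try (PreparedStatement ps = conn.prepareStatement(SQL)) {
            ps.setString(1, UUID.randomUUID().toString()); // ID
            ps.setLong(2, 102L);                           // STEP_INSTANCE_ID
            ps.setString(3, "file 2");                     // FILE_NAME
            ps.setObject(4, content);                      // 10-byte CONTENT
            ps.executeUpdate();
        }
    }
}
```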

It fails with

Binary value '418E3F4240F7038405B8' is too long and would be truncated

The odd thing is: if I debug my Java program and set a breakpoint to execute step by step, it succeeds.

A normal run fails every time, while a debug run with a breakpoint succeeds every time, so this is probably some kind of race condition.

  5. What did you expect to see?

It should succeed: the table's BINARY column is declared at the maximum size (8388608 bytes), and I am only binding a 10-byte array.

  6. Can you set logging to DEBUG and collect the logs?
  [net.snowflake.client.core.SFStatement] DEBUG : ExecuteQuery: insert into WF_ATTACHMENT (
    ID,
    STEP_INSTANCE_ID,
    FILE_NAME,
    CONTENT
)
values (
    ?,
    ?,
    ?,
    ?
)
[net.snowflake.client.core.SFStatement] DEBUG : JDBC conservative memory usage is enabled.
[net.snowflake.client.core.SFStatement] DEBUG : conservativeResultChunkSize changed from 0 to 48
[net.snowflake.client.core.QueryContextCache] DEBUG :  Cache Entry: id: 0 readTimestamp: 1730391796448935 priority: 0
[net.snowflake.client.core.QueryContextCache] DEBUG :  Cache Entry: id: 213157808 readTimestamp: 1730391796423668 priority: 1
[net.snowflake.client.core.StmtUtil] DEBUG : queryContextDTO: {"entries":[{"id":0,"timestamp":1730391796448935,"priority":0,"context":null},{"id":213157808,"timestamp":1730391796423668,"priority":1,"context":{}}]}
[net.snowflake.client.core.StmtUtil] DEBUG : JSON: {"sqlText":"insert into WF_ATTACHMENT (\n    ID,\n    STEP_INSTANCE_ID,\n    FILE_NAME,\n    CONTENT\n)\nvalues (\n    ?,\n    ?,\n    ?,\n    ?\n)","sequenceId":263,"bindings":{"1":{"type":"TEXT","fmt":null,"schema":null,"value":"337a51ad-5ba9-4899-b26d-73038b87642e"},"2":{"type":"FIXED","fmt":null,"schema":null,"value":"102"},"3":{"type":"TEXT","fmt":null,"schema":null,"value":"file 2"},"4":{"type":"BINARY","fmt":null,"schema":null,"value":"14843418AF05FDC3C69C"}},"bindStage":null,"describeOnly":false,"parameters":{"CLIENT_RESULT_CHUNK_SIZE":48},"queryContextDTO":{"entries":[{"id":0,"timestamp":1730391796448935,"priority":0,"context":null},{"id":213157808,"timestamp":1730391796423668,"priority":1,"context":{}}]},"describedJobId":null,"querySubmissionTime":1730391796540,"isInternal":false,"asyncExec":false}
[net.snowflake.client.core.HttpUtil] DEBUG : Executing request with OCSP enabled: null
[net.snowflake.client.core.HttpUtil] DEBUG : Pool: [leased: 1; pending: 0; available: 2; max: 300] Executing: POST https://host.snowflakecomputing.com:443/queries/v1/query-request?requestId=090b9de2-89f7-4053-9b57-3a3fcfc9917b HTTP/1.1
[net.snowflake.client.jdbc.RestRequest] DEBUG : [requestId=090b9de2-89f7-4053-9b57-3a3fcfc9917b] Executing rest request: POST https://host.snowflakecomputing.com:443/queries/v1/query-request?requestId=090b9de2-89f7-4053-9b57-3a3fcfc9917b HTTP/1.1, retry timeout: 0, socket timeout: 0, max retries: 7, inject socket timeout: 0, canceling: false, without cookies: false, include retry parameters: true, include request guid: true, retry http 403: false, no retry: false
[net.snowflake.client.jdbc.RestRequest] DEBUG : [requestId=090b9de2-89f7-4053-9b57-3a3fcfc9917b] Retry count: 0, max retries: 7, retry timeout: 0 s, backoff: 1000 ms. Attempting request: POST https://host.snowflakecomputing.com:443/queries/v1/query-request?requestId=090b9de2-89f7-4053-9b57-3a3fcfc9917b HTTP/1.1
[net.snowflake.client.jdbc.RestRequest] DEBUG : [requestId=090b9de2-89f7-4053-9b57-3a3fcfc9917b] Request POST https://host.snowflakecomputing.com:443/queries/v1/query-request?requestId=090b9de2-89f7-4053-9b57-3a3fcfc9917b HTTP/1.1 guid: 5cdd11fc-acaf-4ff4-bd8a-09f2ca52864c
[net.snowflake.client.jdbc.RestRequest] DEBUG : HTTP response code for request POST https://host.snowflakecomputing.com:443/telemetry/send HTTP/1.1: 200
[net.snowflake.client.jdbc.RestRequest] DEBUG : Execution of request POST https://host.snowflakecomputing.com:443/telemetry/send HTTP/1.1 took 152 ms with total of 0 retries
[net.snowflake.client.core.HttpUtil] DEBUG : Pool: [leased: 1; pending: 0; available: 2; max: 300] Request returned for: POST https://host.snowflakecomputing.com:443/telemetry/send HTTP/1.1 took 153 ms
[net.snowflake.client.core.SFBaseSession] DEBUG : Sending telemetry took 153 ms. Batch size: 2
[net.snowflake.client.jdbc.RestRequest] DEBUG : [requestId=090b9de2-89f7-4053-9b57-3a3fcfc9917b] HTTP response code for request POST https://host.snowflakecomputing.com:443/queries/v1/query-request?requestId=090b9de2-89f7-4053-9b57-3a3fcfc9917b HTTP/1.1: 200
[net.snowflake.client.jdbc.RestRequest] DEBUG : [requestId=090b9de2-89f7-4053-9b57-3a3fcfc9917b] Execution of request POST https://host.snowflakecomputing.com:443/queries/v1/query-request?requestId=090b9de2-89f7-4053-9b57-3a3fcfc9917b HTTP/1.1 took 250 ms with total of 0 retries
[net.snowflake.client.core.HttpUtil] DEBUG : Pool: [leased: 0; pending: 0; available: 3; max: 300] Request returned for: POST https://host.snowflakecomputing.com:443/queries/v1/query-request?requestId=090b9de2-89f7-4053-9b57-3a3fcfc9917b HTTP/1.1 took 250 ms
[net.snowflake.client.jdbc.SnowflakeSQLException] DEBUG : Snowflake exception: Binary value '14843418AF05FDC3C69C' is too long and would be truncated, sqlState: 22000, vendorCode: 100145, queryId: 01b80fb7-0004-18dc-000c-b48704462f56
[net.snowflake.client.jdbc.SnowflakeStatementV1] TRACE : close()
[net.snowflake.client.core.SFStatement] TRACE : void close()
@amergey amergey added the bug label Oct 31, 2024
@github-actions github-actions bot changed the title Binary value '418E3F4240F7038405B8' is too long and would be truncated SNOW-1778959: Binary value '418E3F4240F7038405B8' is too long and would be truncated Oct 31, 2024
@amergey amergey changed the title SNOW-1778959: Binary value '418E3F4240F7038405B8' is too long and would be truncated SNOW-1778959: Binary value '14843418AF05FDC3C69C' is too long and would be truncated Oct 31, 2024
@sfc-gh-wfateem sfc-gh-wfateem self-assigned this Oct 31, 2024
@sfc-gh-wfateem sfc-gh-wfateem added the status-triage Issue is under initial triage label Oct 31, 2024
@sfc-gh-wfateem
Collaborator

@amergey thanks for raising this issue.

That error is returned from the backend; it's not a JDBC driver issue unless the driver is sending different data than you expected it to send.

Does that binary data 418E3F4240F7038405B8 look right to you in terms of what you tried to send with the JDBC driver, or has this been modified?

If the error is showing you a value that you're not expecting, then can you provide an example code snippet showing how you generated this data and how you prepared your statement in your Java code?

Otherwise, if the data matches what you sent, then it's not a JDBC issue. You will need to open a support case to have that looked at.

@sfc-gh-wfateem sfc-gh-wfateem added status-information_needed Additional information is required from the reporter and removed status-triage Issue is under initial triage labels Oct 31, 2024
@amergey
Author

amergey commented Oct 31, 2024

@sfc-gh-wfateem thanks for your answer.
Actually I call setObject on the prepared statement with a 10-byte array as the parameter, to be set as the binary column value. The JDBC driver transforms the byte array into 418E3F4240F7038405B8, which corresponds to the array value passed as argument, converted into a hex string.
So in a sense it is the expected value (or not), depending on how the JDBC driver is supposed to treat bytes in a prepared statement.
A direct SQL execution through the Snowflake frontend, or through a regular (non-prepared) statement, works; that's why I suspected an issue related to the JDBC driver (either with binary binds in prepared statements, or in the communication between the driver and the backend). But if you think it is a backend issue, then I am sorry to have opened this and I will open a support case instead.
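For reference, the byte[]-to-hex-string serialization described above can be approximated in plain Java. This is only a sketch that mirrors the value seen in the logs, not the driver's actual code:

```java
public class HexDump {
    // Approximates how a byte[] bind ends up as a hex string in the
    // request payload: each byte becomes two uppercase hex digits.
    static String toHex(byte[] bytes) {
        StringBuilder sb = new StringBuilder(bytes.length * 2);
        for (byte b : bytes) {
            sb.append(String.format("%02X", b));
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        // The 10-byte array from the logged bind payload.
        byte[] tenBytes = {0x14, (byte) 0x84, 0x34, 0x18, (byte) 0xAF,
                           0x05, (byte) 0xFD, (byte) 0xC3, (byte) 0xC6, (byte) 0x9C};
        System.out.println(toHex(tenBytes)); // prints 14843418AF05FDC3C69C
    }
}
```

Note that the hex encoding doubles the length: a 10-byte array becomes a 20-character string, which matters if the backend checks the encoded length against the column size.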

@sfc-gh-dszmolka sfc-gh-dszmolka added the status-triage Issue is under initial triage label Nov 1, 2024
@sfc-gh-wfateem sfc-gh-wfateem removed the status-information_needed Additional information is required from the reporter label Nov 1, 2024
@sfc-gh-wfateem
Collaborator

Thanks for the response @amergey.
Yes, the key point here is that you were able to insert the same data into the binary column outside of the JDBC driver.
I'll take a look.

3 participants