batch_get_cmd and batch_get_cmd_status do not work as intended #1066
-
Hi! I was attempting to use batch_get_cmd and batch_get_cmd_status to retrieve a file from a batch and then download it. However, when I run batch_get_cmd_status after batch_get_cmd, the resources field in the response is empty even though the get was successful. I made sure to pass the batch_get_cmd_req_id returned by batch_get_cmd to batch_get_cmd_status. My code is also similar to the code in discussion #579. Thanks in advance!
-
Hi @lostkid456 - Can you provide us with a sanitized version of the code you're executing?
-
Hi @lostkid456 - The following code works in my test environment with two changes to the code you provided.
Give this a try and let us know the result.
import os
import logging

from falconpy import RealTimeResponse


def get_custom_script_files(
    rtr: RealTimeResponse,
    batch_id: str,
    hostname_session_mapping: dict[str, str],
    output_directory: str,
):
    batch_get = rtr.batch_get_command(
        batch_id=batch_id,
        file_path="/path/to/target/file",
        host_timeout_duration="2m",
    )
    print(batch_get)
    batch_get_cmd_req_id = batch_get["body"]["batch_get_cmd_req_id"]
    response = rtr.batch_get_command_status(
        batch_get_cmd_req_id=batch_get_cmd_req_id, timeout=120
    )
    for key in response["body"]["resources"]:
        file_sha = response["body"]["resources"][key]["sha256"]
        session_id = response["body"]["resources"][key]["session_id"]
        file_name = f"{hostname_session_mapping[session_id]}.7z"
        with open(os.path.join(output_directory, file_name), "wb") as file:
            download_response = rtr.get_extracted_file_contents(
                session_id=session_id, sha256=file_sha, filename=file_name
            )
            file.write(download_response)


# For debugging purposes, I'm providing a singular ID but a list can also be provided here.
DEBUG_AID = "AID_OR_LIST_OF_AIDS_HERE"

logging.basicConfig(level=logging.DEBUG)

# Construct an instance of the RTR Service Class and enable debug logging.
# NOTE: I am leveraging Environment Authentication in this example. If you do not
# have FALCON_CLIENT_ID and FALCON_CLIENT_SECRET defined as environment variables
# you will need to provide these using the client_id and client_secret keywords.
rtr = RealTimeResponse(debug=True)
# Initialize a RTR session with each of the hosts in our list of AIDs.
session_init = rtr.batch_init_sessions(host_ids=DEBUG_AID)
# Retrieve the batch ID for the entire RTR batch.
batch_id = session_init["body"]["batch_id"]
# Retrieve the dictionary of sessions returned for the RTR batch.
sessions = session_init["body"]["resources"]
# Create a mapping of session ID to AID to use to save our retrieved files.
mapping = {
    session_detail["session_id"]: host_aid
    for host_aid, session_detail in sessions.items()
}
# Retrieve the files for each host in our batch, saving them to the current folder.
get_custom_script_files(rtr, batch_id, mapping, "")
# Delete our open sessions.
for session_id in mapping:
    rtr.delete_session(session_id)
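One thing worth guarding against: if a host's upload to the cloud hasn't finished yet, its entry under resources may not include a usable sha256. A defensive variant of the download loop could skip those entries and come back to them later. This is just a sketch, with download_completed_files as a placeholder name, assuming the same imports as above:

def download_completed_files(
    rtr: RealTimeResponse,
    status_response: dict,
    hostname_session_mapping: dict[str, str],
    output_directory: str,
):
    # Only download entries whose upload has completed (a sha256 is present).
    for resource in status_response["body"]["resources"].values():
        file_sha = resource.get("sha256")
        if not file_sha:
            continue  # upload still in progress (or failed) for this host
        session_id = resource["session_id"]
        file_name = f"{hostname_session_mapping[session_id]}.7z"
        contents = rtr.get_extracted_file_contents(
            session_id=session_id, sha256=file_sha, filename=file_name
        )
        with open(os.path.join(output_directory, file_name), "wb") as file:
            file.write(contents)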
-
Hi @lostkid456 - I couldn't recreate the exact behavior you are seeing, but I did get a 400 error when first trying the code as shown above. I've made a few minor changes and this is now working in my test environment. A couple of notes are called out as comments in the code below.
Try this out and let us know if you continue to have questions.

import os

from falconpy import RealTimeResponse, RealTimeResponseAdmin

# NOTE: TEST_SCRIPT (the cloud script name), TEMP_DIR (the remote folder the
# script writes to), RESULT_PATH (the local output folder) and DEBUG_AID (the
# AID or list of AIDs to target) are defined elsewhere in my test harness.


def run_custom_script(rtr: RealTimeResponseAdmin, batch_id: str) -> int:
    print("Running custom script...")
    response = rtr.batch_admin_command(
        base_command="runscript",
        batch_id=batch_id,
        command_string=f"runscript -CloudFile={TEST_SCRIPT} "  # I used a different script name
                       f"-CommandLine=```{TEMP_DIR}```",
        timeout=240,
    )
    print("Custom script finished running")
    return response["status_code"]


def get_custom_script_files(
    rtr: RealTimeResponse,
    batch_id: str,
    hostname_session_mapping: dict[str, str],
    output_directory: str,
):
    batch_get = rtr.batch_get_command(
        batch_id=batch_id,
        # file_path=TEMP_DIR + ".zip",  # In my testing, this is missing the created zip file name
        file_path=TEMP_DIR + "/1066.zip",  # My test creates a zip called "1066.zip"
        host_timeout_duration="2m",
    )
    batch_get_cmd_req_id = batch_get["body"]["batch_get_cmd_req_id"]
    response = rtr.batch_get_command_status(
        batch_get_cmd_req_id=batch_get_cmd_req_id, timeout=120
    )
    for key in response["body"]["resources"]:
        file_sha = response["body"]["resources"][key]["sha256"]
        session_id = response["body"]["resources"][key]["session_id"]
        file_name = f"{hostname_session_mapping[session_id]}.7z"
        with open(os.path.join(output_directory, file_name), "wb") as file:
            download_response = rtr.get_extracted_file_contents(
                session_id=session_id, sha256=file_sha, filename=file_name
            )
            file.write(download_response)


# We don't need to create a standalone auth object
# auth = OAuth2(
#     client_id=os.getenv("CSK_CLIENT_ID"),
#     client_secret=os.getenv("CSK_CLIENT_SECRET"),
#     ssl_verify=False,
# )
rtr = RealTimeResponse(
    client_id=os.getenv("CSK_CLIENT_ID"),
    client_secret=os.getenv("CSK_CLIENT_SECRET"),
    debug=True,  # I turned on debugging so I could see the requests
)
# Any Service Class can be leveraged to authenticate
admin_rtr = RealTimeResponseAdmin(auth_object=rtr)
# Initialize a RTR session with each of the hosts in our list of AIDs.
session_init = rtr.batch_init_sessions(host_ids=DEBUG_AID)
# Retrieve the batch ID for the entire RTR batch.
batch_id = session_init["body"]["batch_id"]
# Retrieve the dictionary of sessions returned for the RTR batch.
sessions = session_init["body"]["resources"]
# Create a mapping of session ID to AID to use to save our retrieved files.
mapping = {
    session_detail["session_id"]: host_aid
    for host_aid, session_detail in sessions.items()
}
if run_custom_script(rtr=admin_rtr, batch_id=batch_id) != 201:
    print("Executing script on target hosts failed")
else:
    print("Retrieving results")
    get_custom_script_files(
        rtr=rtr,
        batch_id=batch_id,
        hostname_session_mapping=mapping,  # I changed this to be the mapping dict we created
        output_directory=RESULT_PATH,
    )
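One last note: the snippet above doesn't close the sessions it opens. As in the earlier example, they can be removed once the files have been retrieved:

# Delete our open sessions once the retrievals are complete.
for session_id in mapping:
    rtr.delete_session(session_id)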
-
How big is the file? It might not have completed the upload to the cloud yet. Can you wrap this call in a loop that repeats until resources is populated (or you decide to time out)?
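Something like this minimal sketch would do it (wait_for_batch_get is just an illustrative name, and the attempt count and sleep interval are arbitrary choices):

import time

def wait_for_batch_get(rtr, batch_get_cmd_req_id, attempts=12, interval=10):
    # Poll batch_get_command_status until resources is populated or we give up.
    response = {}
    for _ in range(attempts):
        response = rtr.batch_get_command_status(
            batch_get_cmd_req_id=batch_get_cmd_req_id, timeout=120
        )
        if response["body"].get("resources"):
            break  # at least one host's file has finished uploading
        time.sleep(interval)  # file may still be uploading to the CrowdStrike cloud
    return response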