key error = "Operation-Location" #264

Open · katehoward360 opened this issue Jul 16, 2021 · 1 comment
@katehoward360:

I modified the code to accept local images using: https://stackoverflow.com/questions/63907566/using-local-image-for-read-3-0-azure-cognitive-service-computer-vision

I then modified the code to iterate over multiple local images (around 100) and detect text in each. However, I get a KeyError for "Operation-Location" after a few iterations (5-8, varying with every rerun). How do I go about fixing it?
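For reference, a minimal sketch of the kind of loop described above (the folder path, glob pattern, and credentials are placeholders I have assumed, not the actual code from this issue); the lookup of the "Operation-Location" header is the line that raises the KeyError when the POST response does not include that header:

import glob
import requests

endpoint = 'PASTE_YOUR_COMPUTER_VISION_ENDPOINT_HERE'
subscription_key = 'PASTE_YOUR_COMPUTER_VISION_SUBSCRIPTION_KEY_HERE'
text_recognition_url = endpoint + "/vision/v3.1/read/analyze"
headers = {'Ocp-Apim-Subscription-Key': subscription_key,
           'Content-Type': 'application/octet-stream'}

# Iterate over a folder of local images (hypothetical path).
for path in glob.glob('images/*.jpg'):
    with open(path, 'rb') as f:
        data = f.read()
    response = requests.post(text_recognition_url, headers=headers, data=data)
    # This lookup raises KeyError when the response carries no
    # "Operation-Location" header (i.e. when the submission was not accepted).
    operation_url = response.headers["Operation-Location"]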

fsharpn00b self-assigned this Jul 20, 2021
@fsharpn00b (Collaborator):

Hi @katehoward360,

First, I very humbly apologize for my delayed reply.

I have modified the quickstart per the Stack Overflow answer you linked to (that is, to use a local image instead of a remote one). I also added a loop to detect text in the image 20 times instead of once. I have posted the code below. Unfortunately I have not been able to reproduce the error you are seeing. When you have time, can you please post the source code you are using (with your Computer Vision endpoint and subscription key removed)?

Thank you!
fsharpn00b

import json
import os
import sys
import requests
import time
# If you are using a Jupyter Notebook, uncomment the following line.
# %matplotlib inline
import matplotlib.pyplot as plt
from matplotlib.patches import Polygon
from PIL import Image
from io import BytesIO

# Add your Computer Vision endpoint and subscription key here.
endpoint = 'PASTE_YOUR_COMPUTER_VISION_ENDPOINT_HERE'
subscription_key = 'PASTE_YOUR_COMPUTER_VISION_SUBSCRIPTION_KEY_HERE'

text_recognition_url = endpoint + "/vision/v3.1/read/analyze"

###
# Use local image instead per
# https://stackoverflow.com/a/63912027
###

# Set image_url to the URL of an image that you want to recognize.
#image_url = "https://raw.githubusercontent.com/MicrosoftDocs/azure-docs/master/articles/cognitive-services/Computer-vision/Images/readsample.jpg"

#headers = {'Ocp-Apim-Subscription-Key': subscription_key}
#data = {'url': image_url}
#response = requests.post(
#    text_recognition_url, headers=headers, json=data)
#response.raise_for_status()

headers = {'Ocp-Apim-Subscription-Key': subscription_key,
           'Content-Type': 'application/octet-stream'}
with open('readsample.jpg', 'rb') as f:
    data = f.read()

###
# Add for loop to submit image multiple times
###

for x in range(20):
    print('Iteration ' + str(x) + '...\n')

    response = requests.post(
        text_recognition_url, headers=headers, data=data)

    # Extracting text requires two API calls: one call to submit the
    # image for processing, the other to retrieve the text found in the image.

    # Holds the URI used to retrieve the recognized text.
    operation_url = response.headers["Operation-Location"]

    # The recognized text isn't immediately available, so poll to wait for completion.
    analysis = {}
    poll = True
    while poll:
        response_final = requests.get(
            response.headers["Operation-Location"], headers=headers)
        analysis = response_final.json()

        print(json.dumps(analysis, indent=4))

        time.sleep(1)
        if "analyzeResult" in analysis:
            poll = False
        if "status" in analysis and analysis['status'] == 'failed':
            poll = False

    polygons = []
    if "analyzeResult" in analysis:
        # Extract the recognized text, with bounding boxes.
        polygons = [(line["boundingBox"], line["text"])
                    for line in analysis["analyzeResult"]["readResults"][0]["lines"]]

    # Display the image and overlay it with the extracted text.

    ###
    # Use local image instead per
    # https://stackoverflow.com/a/63912027
    ###

    # image = Image.open(BytesIO(requests.get(image_url).content))
    image = Image.open('readsample.jpg')

    ax = plt.imshow(image)
    for polygon in polygons:
        vertices = [(polygon[0][i], polygon[0][i + 1])
                    for i in range(0, len(polygon[0]), 2)]
        text = polygon[1]
        patch = Polygon(vertices, closed=True, fill=False, linewidth=2, color='y')
        ax.axes.add_patch(patch)
        plt.text(vertices[0][0], vertices[0][1], text, fontsize=20, va="top")
    plt.show()
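One possibility worth checking (this is my assumption, not something confirmed in this thread): the "Operation-Location" header is only returned when the submission succeeds, so if one of the POST calls is rejected (for example, throttled with HTTP 429 after several requests in quick succession), the header lookup raises the KeyError you describe. A defensive sketch that surfaces the underlying HTTP error and backs off before retrying, using the Retry-After header when one is present:

import time
import requests

def submit_image(text_recognition_url, headers, data, max_retries=3):
    """Submit an image and return the Operation-Location URL,
    backing off and retrying if the service throttles the request (HTTP 429)."""
    for attempt in range(max_retries):
        response = requests.post(text_recognition_url, headers=headers, data=data)
        if response.status_code == 429:
            # Wait before retrying; use Retry-After if the service sent one.
            delay = int(response.headers.get("Retry-After", 5))
            time.sleep(delay)
            continue
        # Raise a descriptive HTTPError instead of a bare KeyError
        # if the submission failed for some other reason.
        response.raise_for_status()
        return response.headers["Operation-Location"]
    raise RuntimeError("Submission was throttled on every attempt; giving up.")

Calling submit_image(...) in place of the bare requests.post(...) in the loop above should make it clearer whether the failures are throttling or something else.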
