Save all the images from the page #699
codescooper started this conversation in General
Replies: 1 comment
-
You can use:

```python
import requests
from facebook_scraper import get_posts
from urllib.parse import urlparse
import os

os.makedirs("images", exist_ok=True)

for post in get_posts("Nintendo"):
    for url in post["images"]:
        filename = os.path.basename(urlparse(url).path)
        print(f"Fetching {filename}")
        r = requests.get(url)
        with open(f"images/{filename}", "wb") as f:
            f.write(r.content)
```
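A slightly more defensive variant of the same idea is sketched below. The `pages` argument of `get_posts` and the `timeout` value are assumptions you would tune for your own page, and `"Nintendo"` is just the example account from the snippet above.

```python
import requests
from facebook_scraper import get_posts
from urllib.parse import urlparse
import os

os.makedirs("images", exist_ok=True)

# pages=5 is an assumed limit on how many pages of the feed to scrape.
for post in get_posts("Nintendo", pages=5):
    for url in post.get("images") or []:
        filename = os.path.basename(urlparse(url).path)
        target = os.path.join("images", filename)
        if os.path.exists(target):
            continue  # already downloaded, so the script is safe to re-run
        try:
            r = requests.get(url, timeout=30)  # assumed 30 s timeout
            r.raise_for_status()
        except requests.RequestException as exc:
            print(f"Skipping {filename}: {exc}")
            continue
        with open(target, "wb") as f:
            f.write(r.content)
        print(f"Saved {target}")
```

Skipping files that already exist makes the script safe to restart if the scrape is interrupted partway through.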
-
Hello, thanks for this script.
The task I have to achieve is to take all the pictures from our page by scraping and save them into a folder.
All I have seen so far is how to get the URL of the image.
Can you tell me how I can save the corresponding image locally?
Sorry for my bad English ^^