Milestone 3 Report
- Rukiye Aslan
- Osman Yasin Baştuğ
- Kamil Deniz Coşkuner
- Muhammed Erkam Gökcepınar
- Halil İbrahim Kasapoğlu
- Mahmut Buğra Mert
- Furkan Şenkal
- İrem Nur Yıldırım
- Executive Summary
- Project Status
- Links
- Challenges Encountered
- Lessons Learned
- Individual Contribution Report
- Honor Code Statements
As we continue to develop SemanticFlix, our film-focused social media app, this third milestone report documents our team’s advancements in implementation, testing, and refining our application. This milestone reflects on our teamwork, the tools and processes we utilized, and the lessons learned during this phase. We evaluate the effectiveness of our strategies and pinpoint areas for improvement to ensure continued progress. The report includes detailed contributions from each team member, focusing on the roles they played in the development stages since the last milestone. By detailing these contributions, we aim to showcase the practical application of our skills in a collaborative setting and the lessons we've learned that will influence our approach moving forward.
This report also serves as a comprehensive record of our development journey with SemanticFlix. We present our updated application, which demonstrates key functionalities such as user registration, login, film exploration, and social interactions among users. It also outlines our next steps in enhancing the platform’s capabilities to enrich the user experience and expand our service offerings.
We have completed the following since the last milestone:
- Developed and integrated additional features for both web and mobile platforms.
- Deployed the updated application using Docker.
- Updated and enhanced our UML diagrams to reflect the current state of the application.
- Implemented unit tests and performed extensive testing to ensure the reliability of new features.
- Addressed and resolved several challenges encountered during development.
- User authentication (login/signup, email verification, logout)
- Film exploration and semantic search functionalities for films, actors, directors
- User profile management
- Initial social interaction features (creating posts)
- Frontend and backend integration
- Mobile app development (APK creation and deployment)
Descriptions and screenshots of the basic functionalities of our app are accessible here.
- GitHub Code Repository: Repository
- Release Tag: Group2-Practice-App-Release-v0.2
- Deployed Application: Deployed Application
- APK File: APK
- Documented API: Swagger API
- Final SRS: SRS Document
- Final Design Diagrams:
- Project Plan: Project Plan
- Meeting Notes: Meeting Notes
- Instructions for Building and Deploying using Docker: Quickstart
- Testing Information:
- Username: testuser
- Password: testpassword
- Slow Semantic Queries: Initially, the Wikidata API struggled with the volume of entities, causing slow query responses. We resolved this by integrating QLever, a more efficient SPARQL engine.
- CORS Settings: Configuring proper CORS settings for frontend-backend communication was challenging due to modern browsers' security measures. We resolved most issues, but some persist on certain networks; in particular, users connected to eduroam still encounter CORS errors while using our app.
- Django Migrations: Django's migration structure caused problems when we changed the user model provided by django.contrib.auth; we had to delete all previous migrations and regenerate them.
- Deployment Platform: We initially used Digital Ocean's App Platform but switched to a droplet for better control and logging, despite the more complex setup process.
- DB Connection: The database connection on the deployed app sometimes failed, and we had to restart the Docker container manually to re-initialize the database.
- Create Post: We initially couldn't figure out authorization; it turned out we were simply using the wrong token.
- Link to Backend: eduroam was blocking Swagger, so we sometimes assumed our previously working code was broken, which slowed our progress.
- Third-Party Library Compatibility: Ensuring stability and compatibility with the latest React Native versions was challenging.
- Debugging: Debugging React Native apps required extra effort compared to traditional web development.
- Environment Setup: Initial setup of React Native CLI encountered many user-specific issues, requiring patience and problem-solving.
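As a rough illustration of the QLever integration mentioned among the challenges above, a SPARQL query can be sent to a QLever endpoint over plain HTTP. The endpoint URL and helper names below are illustrative sketches, not the project's actual code:

```python
import json
import urllib.parse
import urllib.request

# Illustrative endpoint: QLever hosts a public Wikidata instance, but the
# exact URL used in the project may differ.
QLEVER_ENDPOINT = "https://qlever.cs.uni-freiburg.de/api/wikidata"

def build_sparql_request(query: str, endpoint: str = QLEVER_ENDPOINT):
    """Build the URL and headers for a SPARQL GET request."""
    url = endpoint + "?" + urllib.parse.urlencode({"query": query})
    headers = {"Accept": "application/sparql-results+json"}
    return url, headers

def run_sparql(query: str, endpoint: str = QLEVER_ENDPOINT):
    """Send the query and return the parsed JSON bindings (needs network)."""
    url, headers = build_sparql_request(query, endpoint)
    req = urllib.request.Request(url, headers=headers)
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["results"]["bindings"]
```

Because QLever speaks the standard SPARQL protocol, switching from the Wikidata endpoint is mostly a matter of changing the URL.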
- VSCode: Reliable for both frontend and backend development across different operating systems.
- GitHub Desktop: Useful for handling branches and commits initially, but we switched to the terminal after issues with branch deletion.
- React and React Native: Provided ease of implementation and extensive resources, despite initial setup challenges for React Native.
- Django: Enhanced backend development with its MVC architecture and ORM, supported by clear documentation.
- Docker: Crucial for consistent deployment across stages, with Docker Compose simplifying multi-service builds.
- Digital Ocean: Provided scalable infrastructure, with benefits from a free starter pack and SSH connectivity.
- Discord and WhatsApp: Facilitated team communication and organization, with Discord used for development channels and WhatsApp for urgent messages.
- Pull Requests and Naming: Improved issue-related naming for branches and maintained reviewer checks for pull requests.
- Teaming: Distributed workload evenly and ensured reviewer presence for all work.
- Meeting Notes: Maintained thorough documentation with moderators, note-takers, and wiki editors.
- Issue Tracking: Enhanced tracking and management of tasks with new issue templates.
- Contributions:
- Pull Requests:
- Issues:
- Wiki Documentation:
  - Personal Wiki Page - Rukiye Aslan
- Descriptions of Third-Party APIs and API Functions: I implemented post and like endpoints in the backend as follows:
  - Post Endpoints:
    - `GET /post/`: Retrieve a list of all posts in the database.
    - `POST /post/`: Create a new post.
    - `GET /post/search/{search_query}/`: Search for posts based on a given query.
  - Like Endpoints:
    - `POST /like/`: Like a post.
    - `GET /like/like-count/{_id}/`: Retrieve the like count for a specific post.
- Sample API Call Transcripts:

  - Get All Posts:

    Request:

    ```
    curl -X 'GET' \
      'http://example.com/api/post/' \
      -H 'accept: application/json' \
      -H 'Authorization: Bearer token' \
      -H 'X-CSRFToken: token'
    ```

    Response:

    ```
    HTTP/1.1 200 OK
    Content-Type: application/json

    [
      {
        "title": "What an incredible film",
        "content": "Nice film with an amazing story ending!! Eagerly waiting the next film",
        "film": "Unsung Hero",
        "author_username": "rukiye"
      },
      {
        "title": "Der Atem des Himmels",
        "content": "I watched the movie last night. It was amazing!!",
        "film": "Der Atem des Himmels",
        "author_username": "subartug"
      }
    ]
    ```

  - Create a Post:

    Request:

    ```
    POST /post/
    Content-Type: application/json

    {
      "title": "New Post",
      "content": "This is the content of the new post.",
      "film": "Q123"
    }
    ```

    Response:

    ```
    HTTP/1.1 201 Created
    Content-Type: application/json

    {
      "_id": "1",
      "title": "New Post",
      "content": "This is the content of the new post.",
      "author": "1",
      "film": "Q123",
      "created_at": "2024-05-17T15:00:00Z",
      "updated_at": "2024-05-17T15:00:00Z"
    }
    ```

  - Search a Post:

    Request:

    ```
    GET /post/search/great/ \
      -H 'accept: application/json' \
      -H 'Authorization: Bearer token' \
      -H 'X-CSRFToken: token'
    ```

    Response:

    ```
    HTTP/1.1 200 OK
    Content-Type: application/json

    [
      {
        "title": "Manchester by the Sea",
        "content": "great film",
        "film": "Manchester by the Sea",
        "author_username": "rukiye.123"
      }
    ]
    ```

  - Get Like Count for a Post:

    Request:

    ```
    GET /like/like-count/1
    ```

    Response:

    ```
    HTTP/1.1 200 OK
    Content-Type: application/json

    {
      "post_id": "60c72b2f5f1b2c001f8e4e6f",
      "like_count": 42
    }
    ```
Unit Tests: I implemented unit tests for the post and like endpoints described above. Each unit test first checks the response status to ensure the request didn't fail, then performs additional checks such as verifying the response format. Code snippets for the unit tests follow:
```python
# Import paths are illustrative; the project's actual modules may differ.
from django.contrib.auth.models import User
from django.urls import reverse
from rest_framework import status
from rest_framework.test import APITestCase

from .models import Post


class PostViewSetTests(APITestCase):
    def setUp(self):
        self.user = User.objects.create_user(username='testuser', password='testpassword')
        self.client.force_authenticate(user=self.user)
        self.post = Post.objects.create(title='Test Post', content='Test Content',
                                        film='Test Film', author=self.user)

    def test_get_all_posts(self):
        url = reverse('create_post')
        response = self.client.get(url)
        self.assertEqual(response.status_code, status.HTTP_200_OK)
        self.assertEqual(response.data[0]['title'], 'Test Post')

    def test_create_post(self):
        url = reverse('create_post')
        data = {'title': 'New Post', 'content': 'New Content', 'film': 'New Film'}
        response = self.client.post(url, data, format='json')
        self.assertEqual(response.status_code, status.HTTP_201_CREATED)

    def test_search_post(self):
        url = reverse('search-post', args=['Test'])
        response = self.client.get(url)
        self.assertEqual(response.status_code, status.HTTP_200_OK)
        self.assertEqual(len(response.data), 1)
        self.assertEqual(response.data[0]['title'], 'Test Post')
```
Here are the code snippets for unit tests of like endpoints:
```python
# Import paths are illustrative; the project's actual modules may differ.
from django.contrib.auth.models import User
from django.urls import reverse
from rest_framework import status
from rest_framework.test import APITestCase

from .models import Like, Post


class LikeViewSetTests(APITestCase):
    def setUp(self):
        self.user = User.objects.create_user(username='testuser', password='testpassword')
        self.client.force_authenticate(user=self.user)
        self.post = Post.objects.create(title='Test Post', content='Test Content',
                                        film='Test Film', author=self.user)
        self.like = Like.objects.create(user=self.user, post=self.post)

    def test_get_like(self):
        url = reverse('like-detail', args=[self.like._id])
        response = self.client.get(url)
        self.assertEqual(response.status_code, status.HTTP_200_OK)
        self.assertEqual(response.data['post'], self.post._id)

    def test_create_like(self):
        url = reverse('create_like')
        data = {'post': self.post._id}
        response = self.client.post(url, data, format='json')
        self.assertEqual(response.status_code, status.HTTP_201_CREATED)

    def test_get_like_count(self):
        url = reverse('like-count', args=[self.post._id])
        response = self.client.get(url)
        self.assertEqual(response.status_code, status.HTTP_200_OK)
        self.assertEqual(response.data['like_count'], 1)
```
- Database Management:
- Managed database migrations using Django's migration framework to ensure smooth schema updates.
- Resolved conflicts and issues related to schema changes and migrations, maintaining database integrity.
- Django Customizations:
- Implemented middleware for logging and request validation, improving monitoring and security.
- Made adjustments to the Django REST framework to handle custom serialization and validation logic.
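The logging and request-validation middleware mentioned above can be sketched as follows, assuming Django's standard middleware protocol (a class instantiated once with `get_response` and called once per request). The class name and size limit are illustrative, not the project's actual middleware:

```python
import logging

logger = logging.getLogger(__name__)

class RequestLoggingMiddleware:
    """Log each request and reject oversized bodies (illustrative sketch)."""

    MAX_BODY_BYTES = 1_000_000  # illustrative validation threshold

    def __init__(self, get_response):
        # Django calls this once at startup with the next handler in the chain
        self.get_response = get_response

    def __call__(self, request):
        # Log the request line, then apply a simple validation check
        logger.info("%s %s", request.method, request.path)
        length = int(request.headers.get("Content-Length") or 0)
        if length > self.MAX_BODY_BYTES:
            raise ValueError("request body too large")
        return self.get_response(request)
```

Registering such a class in `MIDDLEWARE` in settings.py makes it run for every request.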
- Challenges:
- Django-MySQL Connection: Maintaining a stable connection with MySQL was challenging; changes in my local MySQL setup sometimes affected the project even though we had dockerized the database connection.
- Changing User Model Mid-Project and Migrations: Initially, we used the User model from django.contrib.auth, which met our needs for the first milestone. However, for this milestone, I needed to add fields to the user model, so I wrote a custom user model. Django's migration structure caused problems with the database tables, requiring me to delete all previous migrations and create new ones from scratch.
- Timing Issues: The milestone deadline was very close to the previous one, and my teammates and I had many other deadlines during these two weeks. This made it difficult to manage our time and complete all the required work for this milestone, resulting in some unmet project requirements.
- Testing Some Features: As a backend team member, I implemented post and like endpoints, thinking they would meet the needs of the mobile and frontend teams. However, some minor changes were needed, so continuous communication with these teams was essential. I am grateful to have had teammates who provided valuable feedback.
- Pull Requests:
- Issues:
- Contributions:
- Updated the navigation bar with a better search bar.
- Updated design elements for a nicer UI.
- Opened all front-end issues and planned how and by whom they would be done.
- Arranged meetings with front-end and backend team members.
- Planned the issue and pull request naming conventions.
- Developed the film details page (commit).
- Developed the post page (commit).
- Implemented search for films (commit).
- Connected the post page with authorization (commit).
- Updated the main page to show films with posters and an upgraded search bar.
- Contributions:
  - Attended meetings, initialized web development, organized frontend file structure, and reviewed code.
- Challenges:
  - Starting the backend server locally for testing.
I am a member of our project's mobile team. I initialized the mobile app from scratch, implemented the login and signup pages, and connected authentication to our database. I brought search and the film, actor, and director pages to the app using the Wikidata endpoints prepared by the backend team, and added post creation functionality. I also created the APK file for our release.
-
- Description: The mobile app should allow the user to create posts about films, and those posts should be saved in our app's database.
-
- Description: The main page has to be functional, so some design features should be added, such as a bottom navigation bar.
-
- Description: One of the crucial features of our app is showing movie details, so the movie page needed to be created with information about the film such as the poster, cast, and director. The actor and director pages also had to be accessible from the movie page, so they were implemented as well.
-
- Description: Profile page changes made by my teammate @furkansenkal and the create-post function were merged.
-
- Description: Movie, actor, and director pages were added and merged with the new features added by @furkansenkal.
-
- Description: The latest version of the mobile app was merged into the mobile dev branch to prepare for the release.
Since I worked on mobile development, my contribution to this part was small. In communication with the backend team, we decided which responses these functions should return, and as the mobile team we implemented the app accordingly.
I initialized the mobile app with the React Native framework and solved the initialization errors before milestone 2. Up to milestone 3, I maintained the mobile development environment and fixed some Gradle errors. I also created the APK file of our mobile app for the release.
Our app requires authorization to view movie pages, create posts, and perform searches. At the start of mobile development, we didn't use the access token given to the user on login, so fetching the token and using it in the app was challenging. With the help of our teammate @rukiyeaslan, we solved the problem and created a config file to store the token.
Since the mobile app consists of many pages, passing data between them can be troublesome. Even when data is passed to a page correctly, using it there is hard to handle: when different types of navigators are used, the data must be passed in at the top level of that navigator. I spent a lot of effort on this issue; after doing some research and watching tutorials, I can now use the data wherever I need it.
-
I took part in the development of the Wikidata API integration and the deployment process. My implementation contributions concern writing efficient SPARQL queries and API endpoints for those services. For deployment, I contributed to the dockerization of the services, setting up the DigitalOcean droplets, and deploying the services to them.
-
- Description: The backend needs to be able to get an actor's details using a proper SPARQL query.
- Description: The backend needs to be able to get film details using a proper SPARQL query. We decided to use the TMDB API to get film posters, so the queries had to be edited to retrieve the TMDB id of each film, and a service to fetch posters from the TMDB API had to be implemented.
- Description: I discovered that our get-film-details endpoint fails when queried for a film with cast members. After investigating, I found that the issue was caused by rate limiting of the Wikidata API, and I proposed a solution.
- Description: Added actor and director details SPARQL queries/endpoints. With this change, the backend can now get the details (name, birthdate, description, image, and related films) of an actor or director using their Wikidata id.
- Description: Added TMDB integration and edited the film details endpoint to get posters using the new SPARQL response with the TMDB id.
- Description: Resolved a critical issue due to rate limiting of the Wikidata API. Optimized SPARQL queries to reduce the number of requests, and added functionality to respect the 429 status code and retry the request after a certain amount of time.
- TMDB API: Used to get the posters of the films.
  - Implemented Functions:
    - `get_movie_poster()`: Returns the poster of the film with the given TMDB id.
- QLever REST API: Used for queries that might be too demanding for the Wikidata SPARQL endpoint; it is a faster SPARQL engine that still operates on the same Wikidata data.
  - Implemented Functions:
    - `recently_released_films()`: Returns the recently released films.
- Implemented Functions:

  1. [Get Director Details]: API endpoint for retrieving director details from Wikidata.

     Request:

     ```
     curl -X 'POST' \
       'http://207.154.242.6:8020/get-director-details/' \
       -H 'accept: */*' \
       -H 'Content-Type: application/json' \
       -H 'X-CSRFToken: 7mpWnDgwstKXIBbDlb8uqcyOvbWs9w3p9Ub7sLTmuhaEdZx3xpGSQXXsUFlQ4DRN' \
       -d '{ "entity_id": "Q25191" }'
     ```

     Response:

     ```
     {
       "name": "Christopher Nolan",
       "description": "British-American filmmaker (born 1970)",
       "image": "http://commons.wikimedia.org/wiki/Special:FilePath/Christopher%20Nolan%20Cannes%202018.jpg",
       "dob": "1970-07-30T00:00:00Z",
       "films": [
         { "id": "http://www.wikidata.org/entity/Q13417189", "label": "Interstellar" },
         ...
         { "id": "http://www.wikidata.org/entity/Q429969", "label": "Insomnia" }
       ]
     }
     ```

  2. [Get Actor Details]: API endpoint for retrieving actor details from Wikidata.

     Request:

     ```
     curl -X 'POST' \
       'http://207.154.242.6:8020/get-actor-details/' \
       -H 'accept: */*' \
       -H 'Content-Type: application/json' \
       -H 'X-CSRFToken: 7mpWnDgwstKXIBbDlb8uqcyOvbWs9w3p9Ub7sLTmuhaEdZx3xpGSQXXsUFlQ4DRN' \
       -d '{ "entity_id": "Q38111" }'
     ```

     Response:

     ```
     {
       "name": "Leonardo DiCaprio",
       "description": "American actor and film producer (born 1974)",
       "image": "http://commons.wikimedia.org/wiki/Special:FilePath/Leonardo%20DiCaprio%20in%202023%20%28cropped%29.png",
       "dob": "1974-11-11T00:00:00Z",
       "films": [
         { "id": "http://www.wikidata.org/entity/Q47300912", "label": "Once Upon a Time in Hollywood" },
         ...
         { "id": "http://www.wikidata.org/entity/Q22272", "label": "Santa Barbara" }
       ]
     }
     ```

  3. [Get Film Details]: API endpoint for retrieving film details from Wikidata.

     Request:

     ```
     curl -X 'POST' \
       'http://207.154.242.6:8020/get-film-details/' \
       -H 'accept: */*' \
       -H 'Content-Type: application/json' \
       -H 'X-CSRFToken: 7mpWnDgwstKXIBbDlb8uqcyOvbWs9w3p9Ub7sLTmuhaEdZx3xpGSQXXsUFlQ4DRN' \
       -d '{ "entity_id": "Q108839994" }'
     ```

     Response:

     ```
     {
       "label": "Oppenheimer",
       "description": "2023 film by Christopher Nolan",
       "image": null,
       "poster": "https://image.tmdb.org/t/p/w500/8Gxv8gSFCU0XGDykEGv7zR1n2ua.jpg",
       "genres": [
         { "id": "http://www.wikidata.org/entity/Q130232", "label": "drama film" },
         ...
         { "id": "http://www.wikidata.org/entity/Q21010853", "label": "historical film" }
       ],
       "directors": [
         { "id": "http://www.wikidata.org/entity/Q25191", "label": "Christopher Nolan" }
       ],
       "castMembers": [
         { "id": "http://www.wikidata.org/entity/Q55294", "label": "Kenneth Branagh" },
         ...
         { "id": "http://www.wikidata.org/entity/Q125145908", "label": "Olli Haaskivi" }
       ]
     }
     ```
-
From milestone 2 to milestone 3, I developed mainly the 3 endpoints described above and wrote unit tests for them; they can be found in the project's tests.py file, and snippets are provided below. These tests check whether each endpoint returns the expected response for a given input and whether the response is in the correct format.
```python
# Import paths are illustrative; the project's actual modules may differ.
import json

from django.test import TestCase
from rest_framework import status
from rest_framework.test import APIClient


class TestDirectorDetailsEndpoint(TestCase):
    def setUp(self):
        self.client = APIClient()

    def test_get_director_details(self):
        url = '/get-director-details/'
        data = {"entity_id": "Q25191"}
        response = self.client.post(url, data, format='json')
        self.assertEqual(response.status_code, status.HTTP_200_OK)
        response_data = json.loads(response.content)[0]
        self.assertIn('name', response_data)
        self.assertIn('description', response_data)
        self.assertIn('image', response_data)
        self.assertIn('dob', response_data)
        self.assertIn('films', response_data)


class TestActorDetailsEndpoint(TestCase):
    def setUp(self):
        self.client = APIClient()

    def test_get_actor_details(self):
        url = '/get-actor-details/'
        data = {"entity_id": "Q38111"}
        response = self.client.post(url, data, format='json')
        self.assertEqual(response.status_code, status.HTTP_200_OK)
        response_data = json.loads(response.content)[0]
        self.assertIn('name', response_data)
        self.assertIn('description', response_data)
        self.assertIn('image', response_data)
        self.assertIn('dob', response_data)
        self.assertIn('films', response_data)


class TestFilmDetailsEndpoint(TestCase):
    def setUp(self):
        self.client = APIClient()

    def test_get_film_details(self):
        url = '/get-film-details/'
        data = {"entity_id": "Q108839994"}
        response = self.client.post(url, data, format='json')
        self.assertEqual(response.status_code, status.HTTP_200_OK)
        response_data = json.loads(response.content)[0]
        self.assertIn('label', response_data)
        self.assertIn('description', response_data)
        self.assertIn('image', response_data)
        self.assertIn('poster', response_data)
        self.assertIn('genres', response_data)
        self.assertIn('directors', response_data)
        self.assertIn('castMembers', response_data)
```
I set up the DigitalOcean droplets, deployed the services to them, and contributed to the dockerization of the services, but this work was done prior to the 2nd milestone. Since the deployment was already working correctly, I made no major enhancements on that side for this milestone; I simply maintained the deployment environment and pulled the latest changes to the droplets as my teammates pushed changes to the dev branch.
-
I noticed that our Wikidata requests sometimes behaved unstably and we got no response on the frontend. After investigating, I found that the issue was caused by rate limiting of the Wikidata API, and I proposed a solution: I optimized the SPARQL queries to reduce the number of requests, and added functionality to respect the 429 status code and retry the request after a certain amount of time.
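The retry behavior described here can be sketched as follows. `send` is a stand-in for the actual HTTP call, and the backoff values are illustrative, not the project's exact settings:

```python
import time

def request_with_retry(send, max_retries=3, default_wait=1.0):
    """Call send() (returning (status, headers, body)) and retry on 429.

    Honors the Retry-After header when the server provides one; otherwise
    backs off exponentially starting from default_wait seconds.
    """
    wait = default_wait
    for attempt in range(max_retries + 1):
        status, headers, body = send()
        if status != 429:
            return status, body
        if attempt == max_retries:
            break  # give up after the last allowed retry
        # Prefer the server's hint; fall back to our own backoff schedule
        delay = float(headers.get("Retry-After", wait))
        time.sleep(delay)
        wait *= 2
    return status, body
```

In the real service, `send` would wrap the SPARQL HTTP request to Wikidata.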
Wikidata does not provide a comprehensive list of film posters, but posters are certainly needed on the frontend: they make a huge difference in the user experience. İrem and I investigated the issue and found that the TMDB API provides a comprehensive list of film posters, and, conveniently, nearly all film entities in Wikidata have a TMDB id. So we decided to use the TMDB API to get the posters. I implemented the TMDB integration and edited the film details endpoint to get posters using the new SPARQL response with the TMDB id.
Sending multiple SPARQL queries carelessly can cause rate limiting issues. To prevent this, I realized one should carefully engineer the queries to minimize the number of requests. For example, when getting film details with cast members, we first retrieved the cast members' ids in one request and then sent a separate request for each of them to get the names shown in the cast members section of the film details page. Sending a query per cast member overwhelmed the Wikidata API, making it throw 429 errors. I later figured out that it is possible to batch these into a single query by using a subquery and returning the name list in another column.
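The batching idea can also be sketched with a single `VALUES` block that fetches many labels in one request instead of one request per entity. The helper name and exact query shape below are illustrative:

```python
def batched_label_query(entity_ids, lang="en"):
    """Build one SPARQL query fetching labels for many Wikidata entities
    (ids like 'Q55294') at once, instead of one query per entity."""
    values = " ".join(f"wd:{eid}" for eid in entity_ids)
    return f"""
PREFIX wd: <http://www.wikidata.org/entity/>
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
SELECT ?entity ?label WHERE {{
  VALUES ?entity {{ {values} }}
  ?entity rdfs:label ?label .
  FILTER (lang(?label) = "{lang}")
}}
"""
```

One such query replaces N label lookups, which keeps the request count well under the rate limit even for films with large casts.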
- Contributions:
  - Sequence diagrams creation, frontend development, project plan update, meeting notes, and demo presentation.
- Most Important Issues:
- Most Important Pull Requests:
- API
- I worked on frontend development, so I made only a small contribution to the APIs. I communicated with the backend team to choose which responses the functions should return, and as the frontend team we built the app according to those decisions.
- Challenges Faced and Their Solutions
- I struggled a lot while linking the frontend with the backend. Swagger was not working over eduroam, and I kept forgetting this, so most of the time my endpoint requests failed and I tried to debug the code until I realized I was connected to eduroam. This often demoralized me and slowed my progress.
- I think React is hard to learn in a short time, and no one on our frontend team had prior React experience, so understanding its functionality, especially handling requests, was often hard. I used online resources and asked my teammates for help.
-
I contributed to the creation of our project's mobile application. My contributions include preparing mock data, initializing the main and profile pages, adding a filter-by-genre feature to the movies page and connecting it to the backend, and updating the profile page with edit-profile functionality and a "my posts" section. I also reviewed my teammate @m-erkam's work and played an active role in solving application bugs and design issues.
- Mobile app development, user scenarios update, use case diagrams creation, and milestone report preparation.
- Profile page updated so that users can see their information, posts, film lists, etc. properly. #173
- Filter buttons created and the genre filter connected to the relevant backend endpoint. #210
- Description: Profile page changes done by me and the create-post function done by my teammate @m-erkam were merged by @m-erkam.
- Description: Filter-by-genre functionality was added to the movies page, connected to the backend, and merged with the new features added by @m-erkam.
- Description: Added filter-by-genre functionality to the movies page and connected it to the backend.
- Description: Added the edit profile page and made the user's posts visible on the profile page.
-
-
I made only a small contribution to this section because I worked on mobile development. We chose which responses the functions should return in communication with the backend team, and as the mobile team we built the app in line with those decisions.
-
Since my Windows username, which I could not change due to my Windows education membership, contained Turkish characters, I had problems installing and using the React Native CLI and Android Studio. The solution would have been to format my computer, but since I could not do that in the middle of the semester, I borrowed my roommate's old computer and used it to develop the mobile app.
While adding filters to the movies page, I had to brainstorm and research for a while to adapt the backend functions to the mobile app's design and make them work smoothly.
I am on the backend team. For milestone 3, we prioritized adding Wikidata API calls and enhancing the semantic search capability of our app, so I wrote several SPARQL queries as well as a query for getting posters from the OMDB API. I also added the genre label field to the search query for displaying movies' genres, wrote unit tests in tests.py under src/app for the login and get-film-info endpoints that I had written earlier, and discussed and developed solutions for the problems we continuously encountered during the deployment and dockerization processes.
-
- Description: We decided to implement filtering functionality. I did some research and analyzed our use case design and requirements to determine what the endpoint should include to enable filtering; I then created another issue to add some specific fields to the response, as in #184.
-
- Description: This was for the Wikidata queries' additional response fields to be added to the app for the 3rd milestone.
-
- Description: Added entity id to director call response.
- #185
  - Description: I wrote the SPARQL query here for enabling filtering by movie genre and displaying posters on the main page.
- #196
  - Description: I first wrote another function for searching director names in Wikidata; after discussing with the frontend team, I added their Wikidata entity ids to the query result so their pages display properly in the app.
- #186
  - Description: Although this was not a game changer, I solved some problems related to the query result here: the same films were showing up multiple times and there was a mismatch in their release dates, so I ordered and constrained the dates and added some DISTINCT constraints to the query.
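As a rough sketch of such a genre filter (the exact clause used in the project may differ), a fragment like the one below restricts films via Wikidata's genre property P136; the helper name is illustrative:

```python
def genre_filter_clause(genre_label):
    """Return an illustrative SPARQL fragment restricting ?film to a genre.

    Property P136 (genre) and the variable names follow the main-page
    query; this is a sketch, not the project's exact clause.
    """
    return f"""
    ?film wdt:P136 ?genre .
    ?genre rdfs:label ?genreLabel .
    FILTER (lang(?genreLabel) = "en" &&
            LCASE(STR(?genreLabel)) = LCASE("{genre_label}"))
    """
```

The fragment would be spliced into the main SELECT query before GROUP BY, so only films whose genre label matches the requested one survive.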
```python
import os

import requests
from dotenv import load_dotenv

load_dotenv()
omdb_api_key = os.getenv('OMDB_API_KEY')


def get_movie_details(imdb_id, api_key: str = omdb_api_key):
    url = f"http://www.omdbapi.com/?i={imdb_id}&apikey={api_key}"
    response = requests.get(url)
    if response.status_code == 200:
        data = response.json()
        details = {}
        # Extract the Rotten Tomatoes rating if present
        if 'Ratings' in data:
            for rating in data['Ratings']:
                if rating['Source'] == 'Rotten Tomatoes':
                    details['rotten_rating'] = rating['Value']
        # Extract the poster URL if present
        if 'Poster' in data:
            details['Poster URL'] = data['Poster']
        return details
    else:
        return None
```
- Main Page Movie Rendering Wikidata call:
```python
from datetime import datetime


def recently_released_and_info(self, limit):
    current_date = datetime.now().isoformat()
    # SPARQL query: recently released English-language films with
    # genre and IMDb id, one row per film
    SPARQL = f"""
    PREFIX xsd: <http://www.w3.org/2001/XMLSchema#>
    PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
    PREFIX wd: <http://www.wikidata.org/entity/>
    PREFIX wdt: <http://www.wikidata.org/prop/direct/>
    SELECT ?film ?filmLabel (SAMPLE(?publicationDate) AS ?earliestPublicationDate)
           (SAMPLE(?genreLabel) AS ?sampleGenreLabel) (SAMPLE(?imdbID) AS ?sampleImdbID) WHERE {{
        ?film wdt:P31 wd:Q11424;         # Instance of film
              wdt:P364 wd:Q1860;         # Original language is English
              wdt:P577 ?publicationDate; # Publication date
              wdt:P136 ?genre;           # Genre
              wdt:P345 ?imdbID;          # IMDb ID
        FILTER (?publicationDate < "{current_date}"^^xsd:dateTime)
        SERVICE wikibase:label {{
            bd:serviceParam wikibase:language "en".
            ?film rdfs:label ?filmLabel.
            ?genre rdfs:label ?genreLabel.
        }}
    }}
    GROUP BY ?film ?filmLabel
    ORDER BY DESC(?earliestPublicationDate)
    LIMIT {limit}
    """
    results = self.execute_query(SPARQL)
    for binding in results['results']['bindings']:
        imdb_id = binding['sampleImdbID']['value']
        # Fetch OMDb details once per film instead of once per field,
        # and tolerate a failed lookup (get_movie_details returns None)
        details = get_movie_details(imdb_id) or {}
        binding['poster_url'] = details.get('Poster URL', "No poster found")
        binding['ratings'] = details.get('rotten_rating', "No rating found")
    return results
```
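Since OMDb details may be requested for the same IMDb id many times across calls, memoizing the lookup keeps API traffic down. A sketch using functools.lru_cache, with a stand-in fetcher instead of the real OMDb HTTP call:

```python
from functools import lru_cache

calls = {"n": 0}  # counts real fetches, to show the cache works

def _fetch_from_omdb(imdb_id):
    """Stand-in for the real OMDb HTTP call; the URL is illustrative."""
    calls["n"] += 1
    return {"Poster URL": f"https://img.example/{imdb_id}.jpg"}

@lru_cache(maxsize=512)
def cached_movie_details(imdb_id):
    """Memoized wrapper: OMDb is queried at most once per IMDb id."""
    return _fetch_from_omdb(imdb_id)
```

Wrapping the real `get_movie_details` the same way would cut repeated poster and rating lookups for popular films to a single API request each.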
class TestLoginFunction(TestCase):
def setUp(self):
self.client = APIClient()
def test_login_success(self):
# Test case for successful login
url='/login/'
request_json = {
"username": "irem17",
"password": "{app_user_pass}"
}
response = self.client.post(url, request_json, format='json')
print(response)
self.assertTrue('access' in response)
self.assertTrue('refresh' in response)
    def test_login_failure(self):
        # Test case for failed login; the endpoint responds with:
        # {"detail": "No active account found with the given credentials"}
        url = '/login/'
        request_json = {
            "username": "invalid_username",
            "password": "invalid_password"
        }
        response = self.client.post(url, request_json, format='json')
        self.assertIn('detail', response.data)
class TestGetFilmInfo(TestCase):
    def setUp(self):
        self.client = APIClient()

    def test_get_film_info(self):
        # Test case for a successful get-film-info request.
        # A single item of the expected response looks like:
        # {
        #     "id": "http://www.wikidata.org/entity/Q124370507",
        #     "label": "Thelma the Unicorn",
        #     "publicationDate": "2024-05-17T00:00:00Z",
        #     "genreLabel": "",
        #     "imdbID": "",
        #     "poster_url": "https://m.media-amazon.com/images/M/MV5BNDFmZGNhNjYtYjI2Ni00ZDIwLTlmOTItYjBjMDhiZWRiMjk5XkEyXkFqcGdeQXVyODAyNTM3NjQ@._V1_SX300.jpg",
        #     "rating": "No rating found"
        # }
        url = '/get-film-info/'
        request_json = {"limit": 10}
        response = self.client.post(url, request_json, format='json')
        self.assertEqual(response.status_code, status.HTTP_200_OK)
        response_data = json.loads(response.content)
        self.assertEqual(len(response_data), 10)
        for field in ('id', 'label', 'publicationDate', 'genreLabel',
                      'imdbID', 'poster_url', 'rating'):
            self.assertIn(field, response_data[0])
Because I was requesting too many fields over a large space of film entities, the Wikidata service was returning timeout errors; I solved this by parametrizing the limit on the number of results each query requests.
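One way to make such a parametrized limit robust is to retry the query with a progressively smaller `LIMIT` whenever the endpoint times out. The sketch below is hypothetical, not the project's code; it assumes a `run_query` callable that takes a limit and raises `TimeoutError` on a Wikidata timeout:

```python
def query_with_retry(run_query, limit, min_limit=5, attempts=3):
    """Retry a SPARQL query, halving the LIMIT after each timeout.

    `run_query(limit)` is a hypothetical callable that executes the query
    and raises TimeoutError when the endpoint gives up.
    """
    for _ in range(attempts):
        try:
            return run_query(limit)
        except TimeoutError:
            if limit <= min_limit:
                raise  # already at the smallest window; give up
            limit = max(min_limit, limit // 2)  # shrink the result window
    return run_query(limit)  # final attempt at the reduced limit
```

Halving the limit trades completeness for responsiveness, which fits a "recently released" feed where only the first page of results is shown anyway.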
On the remote machine we could not connect properly to our dockerized database and could not perform any operation other than reading data. After researching for solutions, we ran the following command in the MySQL shell:
GRANT ALL PRIVILEGES ON *.* TO 'root'@'%' WITH GRANT OPTION;
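For completeness, the full sequence looks roughly as follows. The container name `mysql-db` is an assumption for illustration (substitute the actual service name from `docker ps`), and `FLUSH PRIVILEGES` ensures the grant tables are reloaded immediately:

```shell
# Open a MySQL shell inside the database container (container name assumed)
docker exec -it mysql-db mysql -u root -p

# Inside the MySQL shell: allow root to connect from any host,
# then reload the grant tables so the change takes effect
GRANT ALL PRIVILEGES ON *.* TO 'root'@'%' WITH GRANT OPTION;
FLUSH PRIVILEGES;
```

Note that granting root access from `'%'` (any host) is convenient for development but too permissive for production; a dedicated application user with narrower privileges would be safer.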
Related to the submission of all the project deliverables for the project of the CMPE 352 course, during the 2024 Spring semester, reported in this report, I, Halil İbrahim Kasapoğlu, declare that:

- I am a student in the Computer Engineering program at Boğaziçi University and am registered for the CMPE 352 course during the 2024 Spring semester.
- All the material that I am submitting related to my project (including but not limited to the project repository, the final project report, and supplementary documents) have been exclusively prepared by myself.
- I have prepared this material individually without the assistance of anyone else with the exception of permitted peer assistance which I have explicitly disclosed in this report.

Related to the submission of all the project deliverables for the project of the CMPE 352 course, during the 2024 Spring semester, reported in this report, I, Rukiye Aslan, declare that:

- I am a student in the Computer Engineering program at Boğaziçi University and am registered for the CMPE 352 course during the 2024 Spring semester.
- All the material that I am submitting related to my project (including but not limited to the project repository, the final project report, and supplementary documents) have been exclusively prepared by myself.
- I have prepared this material individually without the assistance of anyone else with the exception of permitted peer assistance which I have explicitly disclosed in this report.

Related to the submission of all the project deliverables for the project of the CMPE 352 course, during the 2024 Spring semester, reported in this report, I, Osman Yasin Baştuğ, declare that:

- I am a student in the Computer Engineering program at Boğaziçi University and am registered for the CMPE 352 course during the 2024 Spring semester.
- All the material that I am submitting related to my project (including but not limited to the project repository, the final project report, and supplementary documents) have been exclusively prepared by myself.
- I have prepared this material individually without the assistance of anyone else with the exception of permitted peer assistance which I have explicitly disclosed in this report.

Related to the submission of all the project deliverables for the project of the CMPE 352 course, during the 2024 Spring semester, reported in this report, I, Furkan Şenkal, declare that:

- I am a student in the Computer Engineering program at Boğaziçi University and am registered for the CMPE 352 course during the 2024 Spring semester.
- All the material that I am submitting related to my project (including but not limited to the project repository, the final project report, and supplementary documents) have been exclusively prepared by myself.
- I have prepared this material individually without the assistance of anyone else with the exception of permitted peer assistance which I have explicitly disclosed in this report.

Related to the submission of all the project deliverables for the project of the CMPE 352 course, during the 2024 Spring semester, reported in this report, I, Muhammed Erkam Gökcepınar, declare that:

- I am a student in the Computer Engineering program at Boğaziçi University and am registered for the CMPE 352 course during the 2024 Spring semester.
- All the material that I am submitting related to my project (including but not limited to the project repository, the final project report, and supplementary documents) have been exclusively prepared by myself.
- I have prepared this material individually without the assistance of anyone else with the exception of permitted peer assistance which I have explicitly disclosed in this report.

Related to the submission of all the project deliverables for the project of the CMPE 352 course, during the 2024 Spring semester, reported in this report, I, Mahmut Buğra Mert, declare that:

- I am a student in the Computer Engineering program at Boğaziçi University and am registered for the CMPE 352 course during the 2024 Spring semester.
- All the material that I am submitting related to my project (including but not limited to the project repository, the final project report, and supplementary documents) have been exclusively prepared by myself.
- I have prepared this material individually without the assistance of anyone else with the exception of permitted peer assistance which I have explicitly disclosed in this report.

Related to the submission of all the project deliverables for the project of the CMPE 352 course, during the 2024 Spring semester, reported in this report, I, İrem Nur Yıldırım, declare that:

- I am a student in the Computer Engineering program at Boğaziçi University and am registered for the CMPE 352 course during the 2024 Spring semester.
- All the material that I am submitting related to my project (including but not limited to the project repository, the final project report, and supplementary documents) have been exclusively prepared by myself.
- I have prepared this material individually without the assistance of anyone else with the exception of permitted peer assistance which I have explicitly disclosed in this report.