
Releases: kavicastelo/Ethical-AI-driven-Geographic-Analytics-Platform

version 1.6.0

24 Jan 21:16

Release Notes v1.6.0

Overview

Version 1.6.0 is a major update to the project. This version introduces the following changes:

  • New APIs for handling user data
  • New APIs for handling platform data
  • New APIs for handling messaging and contact data

What's New

  • This is a new API release
  • An extended version of the 1.2.0 release
  • Uses MongoDB as the database
  • Added 12 new models and controllers
  • Every model has its own CRUD endpoints
  • Added new endpoints to handle messaging and contact data
  • Read the docs to learn more
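
As a rough illustration of what calling one of the new CRUD endpoints might look like, the sketch below builds a JSON POST request with Python's standard library. The host, route, and payload fields are hypothetical — the real endpoint names are in the docs.

```python
import json
from urllib import request

# Hypothetical route and payload — consult the docs for the real endpoints.
payload = {"name": "Jane Doe", "email": "jane@example.com"}
req = request.Request(
    "http://localhost:8080/api/v1/users",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
    method="POST",
)

# request.urlopen(req) would send it; here we only inspect the request object.
print(req.method, req.full_url)
```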

What's Changed

  • This release runs on a server
  • Localhost is no longer available out of the box
  • The release includes a comparison guide for setting up the project on localhost if needed

What's Fixed

  • This release runs on a server
  • This release handles the Python environment in a more suitable way
  • Added a requirements.txt file
  • Added a Dockerfile covering Java and Python deployment step by step

Full Changelog: 1.4.0...1.6.0

version 1.4.0

15 Jan 09:46
Pre-release

Release Notes v1.4.0

What's New

  • This release is entirely frontend design work
  • An extended version of the 1.3.0 release
  • Implemented router guards to prevent unauthorized access to the dashboard and admin panel
  • Added test data models and stores for testing the frontend
  • Added forecast, users, blogs, comments, feedback, FAQ, and settings pages for administration
  • Added Privacy Policy and Terms and Conditions pages for the frontend
  • Implemented markdown support components
  • Created coming-soon, contact-form, forbidden, sign-up, signin-form, and signup-form components as shared components

New Libraries and Packages

  • Angular Material
  • markdown-it: v14.0.0
  • ngx-markdown: v15.1.2

Administration Panel Tree

  • Forecast
    • Forecast New
    • Forecast Edit
  • Users
    • Users Requests
    • Users List
    • Admins
    • New Admin
  • Blogs
    • Blogs New
    • Blogs List
    • Blogs Edit
  • Comments
    • Coming soon
  • Faq
    • Faq New
    • Faq List
    • Faq Edit
  • Feedback
  • Settings
    • User Privacy Policy
    • User Terms and Conditions
    • Admin Privacy Policy
    • Change Privacy

What's Changed

Full Changelog: 1.3.1...1.4.0

version 1.3.0

08 Jan 17:13
Pre-release

What's Changed

Full Changelog: 1.2.0...1.3.1

version 1.2.0

07 Jan 15:18

Release Notes v1.2.0

Overview

Version 1.2.0 marks a significant milestone as we introduce AI prediction capabilities using pre-trained Python models.
Leveraging the power of Py4J gateway, our application seamlessly communicates between Python and Java to provide
accurate predictions for various factors. This release also includes the implementation of Linear Regression for
training models, extensive documentation, and the introduction of essential dependencies.

New Features

AI Prediction Methods

  • Pre-trained Python Models: Integrated AI prediction methods using pre-trained Python models for each factor.
  • Py4J Gateway Communication: Established communication between Python and Java using Py4J gateway.
  • Linear Regression Models: Implemented Linear Regression to train models for accurate predictions.
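
To sketch the prediction side, which the training script below does not show, here is a minimal self-contained example of pickling a scikit-learn model and loading it back to serve a prediction. The feature values and file name are made up; the real per-factor files follow the `{feature_name}_model.pkl` naming used in the training script.

```python
import pickle
import numpy as np
from sklearn.linear_model import LinearRegression

# Toy stand-in for one per-factor model; real files follow the
# '{feature_name}_model.pkl' naming used by the training script.
X = np.array([[0.0], [1.0], [2.0], [3.0]])
y = 2 * X.ravel() + 1  # exactly linear, so the fit recovers y = 2x + 1

model = LinearRegression().fit(X, y)
with open('demo_model.pkl', 'wb') as f:
    pickle.dump(model, f)

# At prediction time (e.g. behind the Py4J gateway) the model is loaded back:
with open('demo_model.pkl', 'rb') as f:
    loaded = pickle.load(f)

print(loaded.predict([[4.0]])[0])  # ≈ 9.0 on this exactly linear data
```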

Create and Train Models

import csv
import random
from datetime import datetime, timedelta
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error
import pickle


# Function to generate improved dummy air quality data
def generate_improved_air_quality_data(locations, start_time, end_time, frequency):
  fieldnames = ['timestamp', 'location', 'pm25', 'pm10', 'co2', 'ozone', 'no2', 'airTemperature', 'airHumidity',
                'airWind_speed']
  with open('improved_air_quality_data.csv', 'w', newline='') as csvfile:
    writer = csv.DictWriter(csvfile, fieldnames=fieldnames)
    writer.writeheader()

    current_time = start_time
    while current_time <= end_time:
      for location in locations:
        temperature = random.uniform(20, 30)
        humidity = random.uniform(15, 20)

        # Simulate rush hour increase in pollution
        if 8 <= current_time.hour <= 10 or 17 <= current_time.hour <= 19:
          pollution_increase = random.uniform(5, 15)
        else:
          pollution_increase = 0

        writer.writerow({
          'timestamp': current_time.strftime('%Y-%m-%d %H:%M:%S'),
          'location': location,
          'pm25': temperature + humidity + pollution_increase + random.uniform(-2, 2),
          'pm10': temperature + humidity + pollution_increase + random.uniform(-2, 2),
          'co2': temperature + humidity + pollution_increase + random.uniform(-3, 3),
          'ozone': temperature + humidity + pollution_increase + random.uniform(-2, 2),
          'no2': temperature + humidity + pollution_increase + random.uniform(-1, 1),
          'airTemperature': temperature,
          'airHumidity': humidity,
          'airWind_speed': random.uniform(3, 8)
        })

      # Optionally pause before generating data for the next interval
      # time.sleep(30)
      current_time += frequency


# Function to generate improved dummy meteorological data
def generate_improved_meteorological_data(locations, start_time, end_time, frequency):
  with open('improved_meteorological_data.csv', 'w', newline='') as csvfile:
    fieldnames = ['timestamp', 'location', 'temperature', 'humidity', 'wind_speed', 'precipitation']
    writer = csv.DictWriter(csvfile, fieldnames=fieldnames)
    writer.writeheader()

    current_time = start_time
    while current_time <= end_time:
      for location in locations:
        temperature = random.uniform(20, 30)
        humidity = random.uniform(15, 20)

        # Simulate higher precipitation during colder months
        precipitation = random.uniform(0, 0.5) + max(0, (25 - temperature) / 25)

        writer.writerow({
          'timestamp': current_time.strftime('%Y-%m-%d %H:%M:%S'),
          'location': location,
          'temperature': temperature,
          'humidity': humidity,
          'wind_speed': random.uniform(3, 8),
          'precipitation': precipitation
        })
      # time.sleep(30)
      current_time += frequency


# Function to generate improved dummy land use information
def generate_improved_land_use_data(locations):
  with open('improved_land_use_data.csv', 'w', newline='') as csvfile:
    fieldnames = ['location', 'land_type']
    writer = csv.DictWriter(csvfile, fieldnames=fieldnames)
    writer.writeheader()

    for location in locations:
      # Simulate variations in land use types
      land_type = random.choice(['Residential', 'Commercial', 'Industrial', 'Park'])
      writer.writerow({
        'location': location,
        'land_type': land_type
      })


# Train and save a model for a given feature
def train_and_save_model(df, feature_name):
  X = df.drop(['timestamp', 'location', feature_name], axis=1)
  y = df[feature_name]

  # Split the dataset into training and testing sets
  X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

  # Train a linear regression model
  model = LinearRegression()
  model.fit(X_train, y_train)

  # Make predictions
  predictions = model.predict(X_test)

  # Evaluate the model
  mse = mean_squared_error(y_test, predictions)
  print(f'Mean Squared Error for {feature_name}: {mse}')

  # Save the trained model
  model_filename = f'{feature_name}_model.pkl'
  with open(model_filename, 'wb') as f:
    pickle.dump(model, f)

  print(f'Model saved as {model_filename}')


if __name__ == "__main__":
  # Define parameters
  locations = ['CentralPark', 'Downtown', 'SuburbA', 'SuburbB']
  start_time = datetime.now()
  end_time = start_time + timedelta(days=1)  # Run for 1 day
  frequency = timedelta(minutes=15)

  # Generate dummy datasets continuously
  generate_improved_air_quality_data(locations, start_time, end_time, frequency)
  generate_improved_meteorological_data(locations, start_time, end_time, frequency)
  generate_improved_land_use_data(locations)

  # Load datasets for each factor
  df_air_quality = pd.read_csv('improved_air_quality_data.csv')
  df_meteorological = pd.read_csv('improved_meteorological_data.csv')

  # Train and save models for each factor
  for feature in ['pm25', 'pm10', 'co2', 'ozone', 'no2', 'temperature', 'humidity', 'airHumidity', 'wind_speed',
                  'precipitation', 'airWind_speed', 'airTemperature']:
    if feature in df_meteorological.columns:
      train_and_save_model(df_meteorological, feature)
    else:
      print(f"Warning: {feature} not found in meteorological dataset. Skipping...")

    if feature in df_air_quality.columns:
      train_and_save_model(df_air_quality, feature)
    else:
      print(f"Warning: {feature} not found in air quality dataset. Skipping...")

Documentation

  • Code Documentation: Extensively documented AI prediction methods, Py4J gateway usage, and model training.
  • Statistical API Tests: Rigorously tested all statistical APIs and documented test cases.
  • API Documentation: Comprehensive documentation for all APIs, providing usage guidelines and examples.

New Dependencies

  • py4j: Version 0.10.9.7 for seamless Python-Java integration.
  • junit: Used for testing purposes.
  • pmml-sklearn: Version 1.7.45 for working with PMML models.
  • slf4j-jdk14: Version 2.0.9, the SLF4J binding for java.util.logging.

Next Steps

In upcoming releases, we plan to enhance the AI prediction capabilities, introduce more advanced machine learning models, and further refine the statistical analysis methods. Your feedback is invaluable as we continue to improve and expand the features of our Air Quality Monitoring Application.

Thank you for your continued support.

- Kavindu Kokila (Fullstack developer)

version 1.1.0

07 Jan 15:17
Pre-release

Release Notes v1.1.0

Overview

We are thrilled to introduce version 1.1.0 of our Air Quality Monitoring Application. This release focuses on implementing essential statistical analysis methods and enhancing data management capabilities.

New Features

Statistical Analysis

  • Get All Data by Date Range: Introduces a method to retrieve data within a specified date range.
  • Get Means for Each Factor by Date Range: Adds a method to calculate means for each factor within a given date range.
  • Get Median for Each Factor: Implements a method to calculate the median for each factor.
  • Get Mode for Each Factor: Introduces a method to find the mode for each factor.
  • Find Correlations Between Factors: Adds a method to determine correlations between each pair of factors.
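
The backend implements these statistics in Java (with commons-math3), but the underlying definitions can be sketched in a few lines of Python. The sample readings below are illustrative, not real sensor data.

```python
from statistics import mean, median, mode

# Illustrative samples for two factors (not real sensor data)
pm25 = [12.0, 14.0, 14.0, 18.0, 22.0]
temperature = [20.0, 21.0, 21.5, 24.0, 27.0]

def pearson(xs, ys):
    """Pearson correlation coefficient, as used for factor-pair correlations."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

print(mean(pm25), median(pm25), mode(pm25))  # 16.0 14.0 14.0
print(round(pearson(pm25, temperature), 3))  # close to 1: the factors co-vary
```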

Bulk Data Import

  • Bulk Import Data from CSV File: Implements a method to bulk import data from a CSV file into the database.
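
On the Java side the import is handled with opencsv; purely to illustrate the row-to-document mapping, a Python equivalent might look like this (the column names and values are made up):

```python
import csv
import io

# A two-row stand-in for an uploaded CSV file (columns are illustrative).
uploaded = (
    "timestamp,location,pm25\n"
    "2024-01-07 10:00:00,Downtown,18.2\n"
    "2024-01-07 10:00:00,SuburbA,12.6\n"
)

# Each parsed row becomes a dict, ready to persist as one database document.
rows = list(csv.DictReader(io.StringIO(uploaded)))
print(len(rows), rows[0]["location"], float(rows[0]["pm25"]))
```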

New Dependencies

  • opencsv: Upgraded to version 5.7.1 for enhanced CSV processing.
  • commons-math3: Introduced version 3.6.1 to leverage advanced mathematical functions.

DTO Enhancements

  • CorrelationDTO: A new DTO (Data Transfer Object) specifically for converting correlation values to double values.

Next Steps

This release lays the groundwork for advanced data analysis and management. In future versions, we plan to build upon these features, introducing more sophisticated statistical methods and expanding the application's capabilities.

Thank you for your ongoing support, and we look forward to delivering more powerful functionalities in the next release.

- Kavindu Kokila (Fullstack developer)

version 1.0.0

07 Jan 15:16
Pre-release

Release Notes v1.0.0

Overview

We are excited to announce the initial release of our Air Quality Monitoring Application, version 1.0.0. This release lays the foundation for our platform, focusing on Spring Boot backend functionality and basic CRUD operations for the following entities:

  • Air Quality
  • Land Use
  • Meteorological

Key Features

Backend Structure

The backend is built on the Spring Boot framework (version 3.2.0), utilizing Java 17. The MongoDB database is employed to store and manage data efficiently.

File Structure

The application follows a modular file structure for enhanced organization:

com.api.air_quality
    - controller
        - airQualityController
        - landUseController
        - metrologicalController
    - model
        - airQualityModel
        - landUseModel
        - metrologicalModel
    - repository
        - airQualityRepository
        - landUseRepository
        - metrologicalRepository
    - service
        - airQualityService
        - landUseService
        - metrologicalService
    - dto
        - ApiResponse
    - AirQualityApplication
    - CorsConfig
    - MongoConfig

CRUD Operations

In this version, basic CRUD operations are implemented for the Air Quality, Land Use, and Meteorological entities. Repository methods handle the operations; the service layer is not yet implemented.

Technology Stack

The application is built using the following technologies:

  • Spring Boot: version 3.2.0
  • Java: Version 17
  • Database: MongoDB

Configuration

The application has disabled Spring Security, and route configurations are managed using the CorsConfig file. Responses are formatted into JSON using the ApiResponse class.

Next Steps

While v1.0.0 provides a solid foundation for our platform, upcoming releases will focus on implementing service logic, adding authentication, and expanding CRUD functionalities. We appreciate your support and look forward to delivering more features in future releases.

Thank you for being part of our journey!

- Kavindu Kokila (Fullstack developer)