Release v0.4.3 (#101)

* Bump actions/checkout from 4.1.2 to 4.1.3
([#97](https://github.com/databrickslabs/lsql/issues/97)). The
`actions/checkout` dependency has been updated from version 4.1.2 to
4.1.3 in the `update-main-version.yml` workflow. The new version adds a
check that verifies the git version before attempting to disable
`sparse-checkout`, and introduces an SSH user parameter, improving
functionality and compatibility. The upstream release notes and
CHANGELOG.md describe the changes in detail, and the pull request links
the corresponding issues and commits for transparency.
* Maintain PySpark compatibility for databricks.labs.lsql.core.Row
([#99](https://github.com/databrickslabs/lsql/issues/99)). This release
adds an `asDict` method to the `Row` class in the
`databricks.labs.lsql.core` module to keep it compatible with PySpark.
The method returns a dictionary representation of the `Row` object,
mapping each column name to the value in that column; it simply
delegates to the existing `as_dict` method, so the two behave
identically. For signature compatibility, `asDict` also accepts an
optional `recursive` argument, but recursive conversion of nested `Row`
objects into nested dictionaries is not currently implemented and the
argument defaults to `False`. Additionally, the `fetch` function in
`backends.py` now returns `Row` objects of `pyspark.sql` when using
`self._spark.sql(sql).collect()`; this is a temporary measure, marked
with a `TODO` comment, and error handling has been added so the
function behaves as expected. A sketch of the `asDict` shim follows
this list.
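
As a rough illustration, here is a minimal sketch of the `asDict` shim
described above. It is not the actual implementation: the internal
storage of `Row` (a tuple subclass carrying a `__columns__` attribute)
and the guard on the unimplemented `recursive` flag are assumptions
made to keep the snippet self-contained; the trailing assertion doubles
as a usage example.

```python
from typing import Any


class Row(tuple):
    """Minimal sketch of databricks.labs.lsql.core.Row (internals assumed)."""

    def __new__(cls, columns: list[str], values: list[Any]) -> "Row":
        row = super().__new__(cls, values)
        row.__columns__ = columns  # assumed storage for column names
        return row

    def as_dict(self) -> dict[str, Any]:
        # Map each column name to the value in the matching position.
        return dict(zip(self.__columns__, self))

    def asDict(self, recursive: bool = False) -> dict[str, Any]:
        # PySpark-compatible alias that delegates to as_dict(); recursive
        # conversion of nested Row objects is not implemented.
        if recursive:
            raise NotImplementedError("recursive conversion is not implemented")
        return self.as_dict()


row = Row(["name", "id"], ["alice", 1])
assert row.asDict() == {"name": "alice", "id": 1}
```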

Dependency updates:

* Bump actions/checkout from 4.1.2 to 4.1.3
([#97](https://github.com/databrickslabs/lsql/pull/97)).

nfx authored May 8, 2024
1 parent 2245043 commit 9032c9d
Showing 2 changed files with 10 additions and 1 deletion.
9 changes: 9 additions & 0 deletions CHANGELOG.md
@@ -1,5 +1,14 @@
# Version changelog

## 0.4.3

* Bump actions/checkout from 4.1.2 to 4.1.3 ([#97](https://github.com/databrickslabs/lsql/issues/97)). The `actions/checkout` dependency has been updated from version 4.1.2 to 4.1.3 in the `update-main-version.yml` workflow. The new version adds a check that verifies the git version before attempting to disable `sparse-checkout`, and introduces an SSH user parameter, improving functionality and compatibility. The upstream release notes and CHANGELOG.md describe the changes in detail, and the pull request links the corresponding issues and commits for transparency.
* Maintain PySpark compatibility for databricks.labs.lsql.core.Row ([#99](https://github.com/databrickslabs/lsql/issues/99)). This release adds an `asDict` method to the `Row` class in the `databricks.labs.lsql.core` module to keep it compatible with PySpark. The method returns a dictionary representation of the `Row` object, mapping each column name to the value in that column; it simply delegates to the existing `as_dict` method, so the two behave identically. For signature compatibility, `asDict` also accepts an optional `recursive` argument, but recursive conversion of nested `Row` objects into nested dictionaries is not currently implemented and the argument defaults to `False`. Additionally, the `fetch` function in `backends.py` now returns `Row` objects of `pyspark.sql` when using `self._spark.sql(sql).collect()`; this is a temporary measure, marked with a `TODO` comment, and error handling has been added so the function behaves as expected. A sketch of the `fetch` behavior follows this list.
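
For illustration, here is a minimal sketch of the `fetch` shape described in this entry. The backend class name and the error translation are assumptions, since the notes only state that PySpark rows are returned as-is and that error handling was added.

```python
from collections.abc import Iterator


class RuntimeBackend:  # class name assumed for this sketch
    def __init__(self, spark) -> None:
        self._spark = spark  # an active pyspark.sql.SparkSession

    def fetch(self, sql: str) -> Iterator:
        # TODO: convert pyspark.sql.Row results into databricks.labs.lsql.core.Row.
        # Returning the PySpark rows as-is is tolerable for now because both
        # Row flavors expose asDict(). The except clause stands in for the
        # added error handling; its exact mapping is not described above.
        try:
            return iter(self._spark.sql(sql).collect())
        except Exception as err:
            raise RuntimeError(f"failed to fetch rows for: {sql}") from err
```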

Dependency updates:

* Bump actions/checkout from 4.1.2 to 4.1.3 ([#97](https://github.com/databrickslabs/lsql/pull/97)).

## 0.4.2

* Added more `NotFound` error type ([#94](https://github.com/databrickslabs/lsql/issues/94)). The `_raise_if_needed` function in `core.py` of the `databricks/labs/lsql` package has been updated to raise a `NotFound` error when the error message includes the phrase `does not exist`, so such SQL query failures are categorized as `NotFound` and the overall error handling and reporting improve. The change was a collaborative effort, as the co-authored-by statement in the commit indicates. A sketch of the check follows.
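
For illustration, a minimal sketch of the `does not exist` check described above. The simplified signature and the generic fallback error are assumptions; `NotFound` itself comes from `databricks.sdk.errors`.

```python
from databricks.sdk.errors import NotFound


def _raise_if_needed(message: str | None) -> None:  # signature simplified
    # No error message means nothing to raise.
    if message is None:
        return
    # Classify failures that mention a missing object as NotFound so callers
    # can catch them specifically; the generic fallback type is assumed.
    if "does not exist" in message:
        raise NotFound(message)
    raise RuntimeError(message)
```
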
2 changes: 1 addition & 1 deletion src/databricks/labs/lsql/__about__.py
@@ -1 +1 @@
__version__ = "0.4.2"
__version__ = "0.4.3"
