Cljdbc

This is alpha software! Query & DML statements work. DDL statements are still missing. Expect some breaking API changes before a first release.

An opinionated JDBC wrapper for Clojure, inspired by clojure.java.jdbc and funcool's clojure.jdbc. It uses PreparedStatements for all queries and adds support for named parameters, as yesql does. Named & positional parameters are declared using Spring notation (a leading colon).
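
For a quick taste, a named-parameter statement looks roughly like this minimal sketch (full examples follow in the Quick Start and Query sections; conn and the planet table are placeholders introduced there):

;; :mass is a named parameter in Spring colon notation
(def by-mass
  (jdbc/parse-statement "select name from planet where mass > :mass"))

;; executed later against a connection, e.g.
;; (jdbc/query conn by-mass {:mass 100})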

Contributions & bugfixes are much appreciated!


Features & WIP

  • Named & positional parameters as well as the usual ? supported
  • java.sql.PreparedStatement everywhere (an approach proven by other implementations built on JDBC, e.g. Hibernate)
  • Full transaction support based on savepoints (thus only compatible with drivers supporting savepoints) [add a non-savepoint-based transaction manager?]
  • Connection pools for Hikari, Tomcat JDBC [wip: C3P0]
  • Unified retrieval of autogenerated PKs (currently MySQL, Postgres, Oracle with ojdbc 7)
  • [tbd] DBUnit integration
  • [tbd] Provide clojure.spec support to spec query result and do type coercion
  • [tbd] Auto-select queries based on vendor (to support multi-vendor code)
  • [tbd] Support Spring's transaction manager & JDBC Template bindings
  • [tbd] DDL statements

Vendor support

  • MySQL -> ok
  • Oracle -> ok
  • Postgres -> ok
  • H2 -> ok
  • (SQLite, DB2, MSSQL, ...) -> undecided

Rationale

Why yet another clojure jdbc wrapper?

TBD

Compares to and builds on:

  • clojure.java.jdbc
  • funcool java.jdbc
  • hibernate

Features missing in (some of) the above:

  • simple & standardized API
  • named params
  • connection pool
  • savepoints
  • unified vendor support (e.g. autogenerated pks) when working with code targeting multiple vendors

Quick Start

To get started, create a datasource. make-datasource accepts one of:

  • a JDBC URL
  • a pre-configured java.sql.DataSource
  • a string pointing to a JNDI datasource (prefix the location with jndi:)

Unlike clojure.java.jdbc, cljdbc does not support other forms of creating a connection/datasource, e.g. through a map. All connection details must thus be provided in the JDBC URL or come from a pre-configured java.sql.DataSource.

(require '[instilled.cljdbc :as jdbc])

;; The preferred way of connecting to the database is to use a connection pool.
;; It is expected that the connection pool dependency (hikari, tomcat, ...)
;; be on the classpath. Cljdbc does not depend on any of the connection
;; pool implementations.
(def ds
  (jdbc/make-datasource "jdbc:mysql://localhost:3306/cljdbc?user=cljdbc&password=cljdbc"
    {:hikari {}}
    ;; {:tomcat {}}
    ;; {:c3p0 {}} ;; WIP, not yet supported
    ))

;; From JNDI (any options will be ignored)
(def ds-from-jndi
  (jdbc/make-datasource "jndi:java:/comp/env/jdbc/MyDatasource"))

;; This is an unpooled connection. It uses java.sql.DriverManager/getConnection
;; each time a connection is requested.
(def ds-unpooled
  (jdbc/make-datasource "jdbc:mysql://localhost:3306/cljdbc?user=cljdbc&password=cljdbc"))

See instilled.cljdbc.cp.{hikari,tomcat} for options supported by a connection pool implementation.
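
For instance, a pooled datasource with tuned settings might look roughly like the sketch below. The option keys :maximum-pool-size and :minimum-idle are illustrative assumptions in Hikari style; check instilled.cljdbc.cp.hikari for the keys actually supported.

;; sketch only -- option keys are assumptions, see instilled.cljdbc.cp.hikari
(def pooled-ds
  (jdbc/make-datasource
    "jdbc:mysql://localhost:3306/cljdbc?user=cljdbc&password=cljdbc"
    {:hikari {:maximum-pool-size 10
              :minimum-idle      2}}))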

A connection can be obtained with the with-connection macro. The body is always executed in a transactional context and will roll back in case of an exception.

(jdbc/with-connection [conn ds] <optional-options> ;; ds may also be an existing conn
  (let [result (jdbc/query conn my-query {:mass 100})]
    ;; do something with `result`
  ))

The <optional-options> are passed on to the transaction context (see Transactions below for the supported keys).

with-connection-binding may also be used to establish a connection. It binds the connection to a dynamic var, which may be useful if passing the connection around as a parameter is not desired.

(def ^:dynamic *my-global-var*)

(jdbc/with-connection-binding [*my-global-var* ds]
  (let [result (jdbc/query *my-global-var* my-query {:mass 100})]
    ;; do something with `result`
    ))

Query

;; with named params
(def my-query (jdbc/parse-statement "select mass from planet where mass > :mass"))
(jdbc/with-connection [conn ds]
  (let [result (jdbc/query conn my-query {:mass 100})]
    ;; do something with `result`
  ))

;; with positional params
(def my-query (jdbc/parse-statement "select mass from planet where mass > :? and system = :?"))
(jdbc/with-connection [conn ds]
  (let [result (jdbc/query conn my-query {:? [100 "Solar System"]})]
    ;; do something with `result`
    ))

;; or classic positional
(def my-query (jdbc/parse-statement "select mass from planet where mass > ? and system = ?"))
(jdbc/with-connection [conn ds]
  (let [result (jdbc/query conn my-query [100 "Solar System"])]
    ;; do something with `result`
    ))

DML (insert, update, delete)

DML statements work similarly to queries. Multiple statements can be executed in batch mode (if requested) to increase performance. Insert statements may optionally return autogenerated keys. cljdbc attempts to abstract the key retrieval so that the same code can target multiple vendors.

To retrieve autogenerated keys, pass {:returning [:id]} as options to the parse-statement function, where :id is the name of the column containing the autogenerated key. Some drivers, e.g. Oracle & Postgres, have full support for the returning clause. With these drivers the returning clause defines which columns to return, not only the autogenerated keys. For maximum compatibility it is suggested that the autogenerated key column come first. (TBD: should the returning clause be generalized to work the same across all vendors, e.g. have MySQL return more columns?)

(def my-insert (jdbc/parse-statement "insert into planet (system, name, mass) values (:system,:name,:mass)"
                 {:returning [:id]}))
(jdbc/with-connection [conn ds]
  (let [result (jdbc/insert! conn my-insert [{:system "Solar System" :name "Earth" :mass 3.47} {:system "Solar System" :name "Pluto" :mass 1.303}])]
    ;; result is a list of maps each with the key :id
    ))

The above statement can be executed in batch mode by passing {:batched? true} to insert!. Note that all options can also be provided to the parse-statement function.

(def my-insert (jdbc/parse-statement "insert into planet (system, name, mass) values (:system,:name,:mass)"
                 {:returning [:id]
                  ;; :batched? true ;; optionally here
                  }))
(jdbc/with-connection [conn ds]
  (let [result (jdbc/insert! conn my-insert [{:system "Solar System" :name "Earth" :mass 3.47} {:system "Solar System" :name "Pluto" :mass 1.303}]
                  {:batched? true})]
    ;; result is a list of maps each with the key :id as in :returning
    ))

Some other examples:

This will execute a single statement (not passing the map in a vector):

(def my-delete (jdbc/parse-statement "delete from planet where mass < :mass"))
(jdbc/with-connection [conn ds]
  (let [rows-affected (jdbc/delete! conn my-delete {:mass 3.47})]
    ;; do something useful here
    ))

All DML statements support batching:

(def my-update (jdbc/parse-statement "update planet set system = :system-new where system = :system-old"))
(jdbc/with-connection [conn ds]
  (let [rows-affected (jdbc/update! conn my-update [{:system-old "Solar System" :system-new "Solar System (updated)"} {:system-old "???" :system-new "xxx"}]
                        {:batched? true})]
    ;; do something useful
    ))

Transactions

The default transaction strategy in cljdbc builds on the database's savepoint feature. A new transaction context is automatically started when a connection is obtained with the with-connection macro. Isolation levels and read-only settings should thus be passed to with-connection.

(jdbc/with-connection [conn ds] {:read-only? true :isolation :serializable}
  ;; body
  )

The with-transaction macro will spawn a new nested transaction that can be rolled back individually.

(jdbc/with-transaction [conn conn]
   ;; the application may invoke further statements here
   ...)
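
As a sketch of how nesting might be used, assuming (as described above) that an exception in the nested body rolls back only to its savepoint while the surrounding with-connection transaction continues:

(def my-insert
  (jdbc/parse-statement "insert into planet (system, name, mass) values (:system,:name,:mass)"))

(jdbc/with-connection [conn ds]
  ;; outer transaction: committed when the with-connection body completes normally
  (jdbc/insert! conn my-insert [{:system "Solar System" :name "Earth" :mass 3.47}])
  (try
    ;; nested transaction backed by a savepoint
    (jdbc/with-transaction [conn conn]
      (jdbc/insert! conn my-insert [{:system "Solar System" :name "Pluto" :mass 1.303}])
      (throw (ex-info "force rollback of the nested transaction" {})))
    (catch Exception _
      ;; only the nested insert is rolled back; the outer insert remains
      nil)))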

DDL

TBD

Hacking

TBD

Running the repl will add src/test/{clojure,resources} to the classpath. If you use CIDER you may want to add the CIDER deps to the classpath as well. Start the repl with

boot [cider] repl

where [cider] assumes you've got a boot cider task defined somewhere, e.g. in ~/.boot/profile.boot.

Running tests

Tests can be run with

boot test

For integration tests, make sure the docker containers are running (see Docker Database Setup below for details on how to spawn the docker boxes), then run

boot {oracle,mysql,postgres} test

Docker Database Setup

It is assumed that the docker daemon runs natively on the machine, i.e. not through anything like boot2docker.

Oracle

(cd src/docker/docker-oracle && docker build -t cljdbc-oracle .)
docker run -d -p 49161:1521 cljdbc-oracle

You'll now be able to connect to it with

user: cljdbc
password: cljdbc
host: localhost:49161
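
A datasource for this container could then be created roughly as sketched below; the thin-driver URL format and the xe SID are assumptions about the Oracle image and may need adjusting.

;; sketch only -- the xe SID is an assumption about the Oracle image in use
(def oracle-ds
  (jdbc/make-datasource "jdbc:oracle:thin:cljdbc/cljdbc@localhost:49161:xe"))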

MySQL

(cd src/docker/docker-mysql && docker build -t cljdbc-mysql .)
docker run -e MYSQL_ROOT_PASSWORD=cljdbc-root -d -p 3306:3306 cljdbc-mysql

You can connect to it with

user: cljdbc
password: cljdbc
host: localhost:3306

Postgres

(cd src/docker/docker-postgres && docker build -t cljdbc-postgres .)
docker run -e POSTGRES_PASSWORD=cljdbc-root -d -p 5432:5432 cljdbc-postgres

You can connect to it with

user: cljdbc
password: cljdbc
host: localhost:5432

License

Copyright (C) 2016 Fabio Bernasconi

Distributed under the Eclipse Public License, the same as Clojure.