Releases: google/TensorNetwork
Removal of the TensorNetwork class
The `TensorNetwork` class has been REMOVED and will raise an error if you try to use it. To upgrade your code, please see our simple upgrade guide.
- Added the `tn.FiniteMPS` class for your standard MPS applications.
- Added the `tn.reduce_density` method for creating reduced density matrices.
- Added an `ignore_edge_order` argument to the contractor methods.
- Added the `tn.split_edge` method.
- Added support for contracting subgraphs of a network. You can now use the `tn.contractor` methods to contract a subset of your nodes.
- Added support for copying a subgraph of the network. When an edge is non-dangling but only one of its nodes is in the given subgraph, the "copied" edge becomes a dangling edge. This is useful for calculating things like environments and normalizations.
- Added a `tn.NodeCollection` context manager, which makes it easy to collect all of your created nodes into a single list or set (see the sketch after this list).
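As a rough illustration of the class-free workflow (the tensors, shapes, and names below are invented for the example), the following sketch collects nodes with `tn.NodeCollection` and contracts the resulting subgraph:

```python
import numpy as np
import tensornetwork as tn

# Collect every node created inside the context manager into one list.
nodes = []
with tn.NodeCollection(nodes):
    a = tn.Node(np.random.rand(2, 3), name="a")
    b = tn.Node(np.random.rand(3, 4), name="b")
    a[1] ^ b[0]  # connect the shared bond

# Contract the collected subgraph. With more than one dangling edge left,
# an explicit output_edge_order fixes the axis order of the result.
result = tn.contractors.greedy(nodes, output_edge_order=[a[0], b[1]])
print(result.tensor.shape)  # (2, 4)
```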
Free Node Paradigm
We have switched to using a "Free Node" paradigm. Examples of what this looks like can be found on our latest README.
Users have told us that the `TensorNetwork` object is more cumbersome than helpful, so we will be removing it entirely in the next release. This release is here to help users upgrade their code. This is a BREAKING CHANGE. We have made a simple tutorial for upgrading your code.
- The new preferred way to import `tensornetwork` is `import tensornetwork as tn`.
- Deprecated the `TensorNetwork` object. Prefer creating your nodes directly with `tn.Node`. All operations such as `net.split_node` can now be called as `tn.split_node`.
- Added the `tn.reachable` method to get the set of all nodes reachable from a given node or set of nodes. This is usually a drop-in replacement for the places where you originally had to pass a `TensorNetwork` object, such as the `contractors` (see the sketch below).
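For orientation, here is a minimal sketch of the free-node style (the node shapes are invented for the example): nodes are created directly, and `tn.reachable` stands in for the old `TensorNetwork` object when calling the contractors.

```python
import numpy as np
import tensornetwork as tn

# Create nodes directly; no TensorNetwork object is needed.
a = tn.Node(np.random.rand(2, 3))
b = tn.Node(np.random.rand(3, 2))
tn.connect(a[1], b[0])  # module-level replacement for net.connect

# tn.reachable collects every node connected to `a`; the resulting set can
# be handed to a contractor in place of the old TensorNetwork object.
nodes = tn.reachable(a)
result = tn.contractors.greedy(nodes, output_edge_order=[a[0], b[1]])
print(result.tensor.shape)  # (2, 2)
```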
Change Default Backend to Numpy
We have changed the default backend to NumPy. This is a BREAKING CHANGE. To fix your code, you can simply call `tensornetwork.set_default_backend("tensorflow")` at the top of your main file.
- Slice notation has been added for accessing a node's edges. Simply do `node[:3]` to get the edges for the first 3 axes of a node.
- The `optimal`, `branch`, and `greedy` contraction algorithms now require an `output_edge_order` when there is more than one dangling edge in the network. This prevents a user from accidentally depending on a non-deterministic edge order after the network contraction.
- `CopyNode`s can now be created outside of a `TensorNetwork`, so `net.add_node(tensornetwork.CopyNode(...))` is now the preferred way to add a copy node to a network. We will be removing `net.add_copy_node` in the future.
- Added the `net.split_node_qr` and `net.split_node_rq` methods, which perform QR and RQ decompositions of a `Node` (see the sketch after this list).
- Added a `net.copy` operation that copies a `TensorNetwork`. The copied network shares the same tensor objects between corresponding nodes. This makes it much easier to take gradients of nodes relative to the final contracted value.
- Added a `conj` option, `net.copy(conj=True)`, which copies the `TensorNetwork` and conjugates all of the tensors in the network. This is useful for calculating things like reduced density matrices.
- Added `net.save(...)` and `tensornetwork.load(...)` methods for saving and loading a `TensorNetwork` object.
- Removed support for Python 3.5. If this is a blocker for you, please raise a GitHub issue to re-add support.
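A rough sketch of a couple of these features in the network-object API of this release (the shapes and the `backend` argument are illustrative assumptions):

```python
import numpy as np
import tensornetwork

net = tensornetwork.TensorNetwork(backend="numpy")
a = net.add_node(np.random.rand(2, 3, 4))

# Slice notation: the edges for the first two axes of the node.
left_edges = a[:2]
right_edges = [a[2]]

# QR-decompose the node into two new nodes, q and r.
q, r = net.split_node_qr(a, left_edges, right_edges)
print(q.shape, r.shape)
```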
Opt_einsum contraction algorithms.
Added integration with `opt_einsum`'s deterministic contraction algorithms. See https://optimized-einsum.readthedocs.io/en/latest/path_finding.html for definitions of how these algorithms work.
- Added `contractors.optimal`, which finds the optimal contraction order based on the required flops.
- Added `contractors.branch`, which uses branching heuristics to determine which paths to explore.
- Added `contractors.greedy`, which uses a fast greedy heuristic.
- Added `contractors.auto`, which chooses which of the above algorithms to use based on network size (see the sketch after this list).
- Added `contractors.custom`, which allows users to develop their own contraction algorithms.
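A brief usage sketch of the contractors, written against the later free-node API for brevity (the tensors and shapes are placeholders):

```python
import numpy as np
import tensornetwork as tn
from tensornetwork import contractors

# A small three-node chain with two dangling edges.
a = tn.Node(np.random.rand(2, 5))
b = tn.Node(np.random.rand(5, 5))
c = tn.Node(np.random.rand(5, 2))
a[1] ^ b[0]
b[1] ^ c[0]

# `auto` dispatches to optimal/branch/greedy depending on network size;
# the named algorithms can be called the same way.
result = contractors.auto([a, b, c], output_edge_order=[a[0], c[1]])
print(result.tensor.shape)  # (2, 2)
```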
PyTorch Support!
- Added PyTorch support. Simply do `tensornetwork.set_default_backend("pytorch")` or `TensorNetwork(backend="pytorch")` to enable it (see the sketch below).
- Added `edge1 ^ edge2` as an alias for `net.connect(edge1, edge2)`.
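A minimal sketch of what this looks like with the network-object API current as of this release (the tensors are placeholders):

```python
import torch
import tensornetwork

# Use the PyTorch backend for all new networks...
tensornetwork.set_default_backend("pytorch")
net = tensornetwork.TensorNetwork()  # ...or TensorNetwork(backend="pytorch")

a = net.add_node(torch.rand(2, 3))
b = net.add_node(torch.rand(3, 2))
edge = a[1] ^ b[0]   # the new alias for net.connect(a[1], b[0])
result = net.contract(edge)
print(result.shape)  # torch.Size([2, 2])
```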
0.0.5
- Added the `net.switch_backend` method.
- Fixed an SVD bug when the backend is either `numpy` or `jax`.
TensorNetwork 0.0.4 release
- Added the `greedy` contraction algorithm, which greedily contracts the lowest-cost node pair first.
- Added the `bucket` contraction algorithm, which is optimized for tensor networks with a lot of copy tensors.
- Upgraded the `naive` contraction algorithm. It now works even after some edges have been contracted.
- Added the `@` operator. Doing `node1 @ node2` is equivalent to running `net.contract_between(node1, node2)` (see the sketch after this list).
- Added `graphviz` visualization integration. Simply do `tensornetwork.to_graphviz(net)` to get a graphviz object that is isomorphic to your network.
- Added the `net.remove_node(node)` method.
- Added the `node.shape` and `edge.dimension` properties.
- Improved TF 2.0 beta compatibility.
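To illustrate a couple of these additions, a small sketch using the network API of the time (the tensors and the NumPy backend choice are only examples):

```python
import numpy as np
import tensornetwork

net = tensornetwork.TensorNetwork(backend="numpy")
a = net.add_node(np.random.rand(2, 3))
b = net.add_node(np.random.rand(3, 2))
net.connect(a[1], b[0])

# node1 @ node2 is shorthand for net.contract_between(node1, node2).
c = a @ b
print(c.shape)  # the new node.shape property

# Render the network structure (requires the graphviz package).
print(tensornetwork.to_graphviz(net))
```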
Multi backend support
- Added support for JAX and NumPy backends (see the sketch below).
- Added a new `tensordot2` that compiles ~20% faster than `tf.tensordot`.
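A tiny sketch of selecting one of the new backends, assuming the `backend` constructor argument shown in the later releases above:

```python
import numpy as np
import tensornetwork

# Build a small network on the JAX backend instead of the TensorFlow default.
net = tensornetwork.TensorNetwork(backend="jax")
a = net.add_node(np.ones((2, 3)))
b = net.add_node(np.ones((3, 2)))
edge = net.connect(a[1], b[0])
result = net.contract(edge)
```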