Fix non-convergence in example of usage (#13)
Hi,

The current usage example does not converge within the specified number of iterations:

```python
import netket as nk
import netket_fidelity as nkf

# Create the Hilbert space and the variational states |ψ⟩ and |ϕ⟩
hi = nk.hilbert.Spin(0.5, 4)
sampler = nk.sampler.MetropolisLocal(hilbert=hi, n_chains_per_rank=16)
model = nk.models.RBM(alpha=1, param_dtype=complex, use_visible_bias=False)
phi = nk.vqs.MCState(sampler=sampler, model=model, n_samples=100)
psi = nk.vqs.MCState(sampler=sampler, model=model, n_samples=100)

# Transformation U
U = nkf.operator.Hadamard(hi, 0)

# Create the driver
optimizer = nk.optimizer.Sgd(learning_rate=0.01)
te = nkf.driver.InfidelityOptimizer(phi, optimizer, U=U, U_dagger=U, variational_state=psi, is_unitary=True, cv_coeff=-1/2)

# Run the driver
te.run(n_iter=100)
```

This keeps yielding an infidelity of around 0.5. Either adding more iterations or switching to the Adam optimizer fixes this. With Adam and the same number of iterations, the infidelity is consistently below 0.001.
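
For reference, here is a minimal sketch of the full example with the fix applied, i.e. the optimizer swap this commit makes in README.md. The `nk.logging.RuntimeLog` at the end is only an assumed way of keeping the loss history for inspection; the fix itself is just the `Adam` line.

```python
import netket as nk
import netket_fidelity as nkf

# Same setup as above: Hilbert space, sampler, and the two variational states
hi = nk.hilbert.Spin(0.5, 4)
sampler = nk.sampler.MetropolisLocal(hilbert=hi, n_chains_per_rank=16)
model = nk.models.RBM(alpha=1, param_dtype=complex, use_visible_bias=False)
phi = nk.vqs.MCState(sampler=sampler, model=model, n_samples=100)
psi = nk.vqs.MCState(sampler=sampler, model=model, n_samples=100)

# Transformation U
U = nkf.operator.Hadamard(hi, 0)

# The one-line fix: Adam instead of plain SGD
optimizer = nk.optimizer.Adam(learning_rate=0.01)
te = nkf.driver.InfidelityOptimizer(phi, optimizer, U=U, U_dagger=U, variational_state=psi, is_unitary=True, cv_coeff=-1/2)

# Run the driver; the RuntimeLog keeps the loss history so the final
# infidelity can be checked afterwards (assumed logger usage)
log = nk.logging.RuntimeLog()
te.run(n_iter=100, out=log)
```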

Co-authored-by: Mohammed Boky <mohammedboky@gmail.com>
Mboky and Mohammed Boky authored Dec 5, 2023
1 parent 57fbddb commit 5f1bad5
2 changes: 1 addition & 1 deletion README.md
```diff
@@ -61,7 +61,7 @@ psi = nk.vqs.MCState(sampler=sampler, model=model, n_samples=100)
 U = nkf.operator.Hadamard(hi, 0)
 
 # Create the driver
-optimizer = nk.optimizer.Sgd(learning_rate=0.01)
+optimizer = nk.optimizer.Adam(learning_rate=0.01)
 te = nkf.driver.InfidelityOptimizer(phi, optimizer, U=U, U_dagger=U, variational_state=psi, is_unitary=True, cv_coeff=-1/2)
 
 # Run the driver
```
