Hello all, and thanks for such a great package. Straight to the question: I have a problem related to loss calculation during training. I am trying to solve the Nagumo equation with an initial condition that has a trainable parameter. I would like to optimize this parameter, knowing that the solution of this PDE becomes stationary after a certain time, so the PDE solution with the trained initial-condition parameter should theoretically match the time-independent solution of the same equation (which is an ODE). I don't know exactly how to define the loss. I combine the residual of the original equation with the residual of the time-independent version of the same equation, but this doesn't seem quite right, since training does not produce the optimal initial-condition parameter I expect. Judging from the code below, could you kindly suggest how to modify or correct it so that I get the expected outcome? Thanks in advance.
# Boundary condition (homogeneous Dirichlet)
bc = dde.DirichletBC(geomtime, lambda x: 0, lambda _, on_boundary: on_boundary)
# Initial condition: myinit scaled by the trainable amplitude US_trainable
# (US_trainable is defined further below; the lambda resolves the name at call time)
ic = dde.IC(geomtime, lambda x: US_trainable * myinit(x[:, 0:1]), lambda _, on_initial: on_initial)
# PDE residual for the Nagumo equation: u_t = u_xx + u(1 - u)(u - 0.05)
def pde(x, u):
    du_t = dde.grad.jacobian(u, x, j=1)       # du/dt (x[:, 1] is time)
    du_xx = dde.grad.hessian(u, x, i=0, j=0)  # d2u/dx2
    rhs = du_xx + u * (1 - u) * (u - 0.05)
    pde_res = rhs - du_t  # residual of the time-dependent PDE
    ode_res = rhs         # residual of the steady-state (time-independent) equation
    pde_coef = 1
    ode_coef = 8
    return [pde_coef * pde_res, ode_coef * ode_res]
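One possible issue with the residual above: the steady-state residual `ode_res` is enforced at *all* collocation points, including the early transient where u_t is genuinely nonzero, which fights the time-dependent residual. A common workaround (a sketch, not something from the original post) is to weight the steady-state term by a smooth ramp in t so it only activates at late times; `t_on` and `width` are hypothetical tuning parameters.

```python
import numpy as np

def late_time_mask(t, t_on=5.0, width=1.0):
    """Smooth ramp: ~0 before t_on, ~1 well after t_on.

    Multiplying the steady-state residual by this mask restricts
    that loss term to the late-time (stationary) regime.
    t_on and width are tuning knobs, not values from the post.
    """
    return 1.0 / (1.0 + np.exp(-(t - t_on) / width))

# Inside pde(x, u) one could then return
#   [pde_coef * pde_res, ode_coef * late_time_mask(x[:, 1:2]) * ode_res]
# (using the backend's exp instead of np.exp for tensor inputs).
```

The mask keeps both loss terms differentiable, so the trainable initial-condition parameter still receives gradients from the late-time stationarity constraint.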
final_T = T
# Data for matching the expected stationary solution at time steps 1..final_T
all_x_values = []
all_t_values = []
all_noisy_values = []
for t in range(1, final_T + 1):
    x_values = np.linspace(-L, L, num=5000)
    t_values = t * np.ones(5000)
    observe_t = np.vstack((x_values, t_values)).T
    # mycritiisonnew returns the expected (analytical, possibly noisy) solution
    noisy_values = mycritiisonnew(x_values).reshape(-1, 1)
    all_x_values.append(x_values)
    all_t_values.append(t_values)
    all_noisy_values.append(noisy_values)
# Concatenate the per-time-step arrays
all_x_values = np.concatenate(all_x_values)
all_t_values = np.concatenate(all_t_values)
all_noisy_values = np.concatenate(all_noisy_values)
# Combine x and t into (x, t) observation points
all_observe_t = np.vstack((all_x_values, all_t_values)).T
# Observed data as a point-set condition on component 0
observe_y = dde.PointSetBC(all_observe_t, all_noisy_values, component=0)
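As an aside, the loop that builds `all_observe_t` can be condensed into a single broadcasting step with `np.tile`/`np.repeat`; a minimal equivalent sketch (the values of `L`, `T`, and the grid size below are placeholders, use your own):

```python
import numpy as np

L, final_T, num = 50.0, 10, 5000  # placeholder values, not from the post

x_values = np.linspace(-L, L, num=num)           # spatial grid, reused at every t
t_grid = np.arange(1, final_T + 1)               # t = 1 .. final_T
# Repeat x for each time step and spread t across x,
# matching the ordering produced by the original loop
all_x = np.tile(x_values, final_T)               # shape (final_T * num,)
all_t = np.repeat(t_grid, num).astype(float)     # shape (final_T * num,)
all_observe_t = np.column_stack((all_x, all_t))  # shape (final_T * num, 2)
```

This produces the same (x, t) point ordering as the loop, with the noisy targets built analogously by tiling `mycritiisonnew(x_values)`.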
# Define the network and the training data
net = dde.maps.FNN([2] + [50] * 6 + [1], "tanh", "Glorot uniform")
data = dde.data.TimePDE(
    geomtime,
    pde,
    [bc, ic, observe_y],
    num_domain=20000,
    num_boundary=5000,
    num_initial=10000,
    anchors=all_observe_t,  # use the full observation set, not just the last time step
    num_test=45000,  # adjust as needed
)
# US_trainable is the trainable initial-condition amplitude
US_trainable = dde.Variable(10.0)
# Callback for logging US_trainable during training
variable_callback = dde.callbacks.VariableValue([US_trainable], period=100, filename="variable_main_US.dat")
# Define the model and compile it with the appropriate settings
model = dde.Model(data, net)
model.compile("adam", lr=0.001, external_trainable_variables=[US_trainable])
# Start training with the correct callback
loss_history, train_state = model.train(iterations=5000, callbacks=[variable_callback], display_every=1000)
# Switch to L-BFGS for fine-tuning
model.compile("L-BFGS", external_trainable_variables=[US_trainable])
early_stopping = dde.callbacks.EarlyStopping(min_delta=1e-17, patience=1000)
loss_history, train_state = model.train(iterations=100000, callbacks=[variable_callback, early_stopping], display_every=1000)
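To check whether `US_trainable` actually converged rather than drifting, the log written by the `VariableValue` callback can be parsed and plotted. A minimal parser, assuming each line has the `iteration [value]` layout that the callback writes (verify against your own `variable_main_US.dat` and adjust the regex if it differs):

```python
import re

def read_variable_history(text):
    """Parse lines like '100 [9.50e+00]' into (iteration, value) pairs.

    Assumes the 'iteration [value]' layout of
    dde.callbacks.VariableValue output; this is an assumption,
    adjust the pattern if your file differs.
    """
    history = []
    for line in text.splitlines():
        m = re.match(r"\s*(\d+)\s+\[([-+0-9.eE]+)\]", line)
        if m:
            history.append((int(m.group(1)), float(m.group(2))))
    return history

# Usage:
# with open("variable_main_US.dat") as f:
#     hist = read_variable_history(f.read())
```

Plotting the value column against the iteration column makes it obvious whether the parameter has plateaued or is still moving when training stops.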