Calling gradient() twice on a single GradientTape raises an error.

This can be fixed by nesting two tapes:

import tensorflow as tf

x = tf.constant(5.0)
with tf.GradientTape() as g:
  g.watch(x)  # constants are not tracked automatically, so watch x explicitly
  with tf.GradientTape() as gg:
    gg.watch(x)
    y = x * x
  dy_dx = gg.gradient(y, x)       # inner tape: dy_dx = 2 * x
d2y_dx2 = g.gradient(dy_dx, x)    # outer tape: d2y_dx2 = 2
print(dy_dx)     # tf.Tensor(10.0, shape=(), dtype=float32)
print(d2y_dx2)   # tf.Tensor(2.0, shape=(), dtype=float32)
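
An alternative (not used above, but standard TensorFlow) is a single tape created with persistent=True, which allows gradient() to be called more than once; a minimal sketch:

import tensorflow as tf

x = tf.constant(5.0)
with tf.GradientTape(persistent=True) as g:
  g.watch(x)
  y = x * x
  dy_dx = g.gradient(y, x)  # allowed mid-context because the tape is persistent
d2y_dx2 = g.gradient(dy_dx, x)
del g  # persistent tapes hold resources until explicitly dropped
print(dy_dx, d2y_dx2)  # 10.0 2.0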

y1_EC = y1 + y1_NN(x)*x         # plain + triggered the warning below
y1_EC = tf.add(y1, y1_NN(x)*x)  # replacing + with tf.add fixed it

Error message

Gradients do not exist for variables ['dense_109/bias:0'] when minimizing the loss. If you're using model.compile(), did you forget to provide a lossargument?

Fix: replace the plain + with tf.add and run again.
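
A minimal sketch of the working form, with hypothetical stand-ins (y1_NN as a one-layer Keras model, y1 and x as fixed tensors; none of these shapes come from the original note):

import tensorflow as tf

# Hypothetical stand-ins, assumed for illustration only
y1_NN = tf.keras.Sequential([tf.keras.layers.Dense(1)])
y1 = tf.constant([[1.0]])
x = tf.constant([[2.0]])

with tf.GradientTape() as tape:
  y1_EC = tf.add(y1, y1_NN(x) * x)        # the tf.add form from the fix above
  loss = tf.reduce_mean(tf.square(y1_EC))

grads = tape.gradient(loss, y1_NN.trainable_variables)
print([g is not None for g in grads])  # gradients exist for kernel and bias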


import numpy as np
import tensorflow as tf

test_x = np.linspace(0, 2*np.pi, 100, dtype='float32')  # float64 -> float32
cos, sin = tf.cos(test_x), tf.sin(test_x)

Error message

cannot compute AddV2 as input #1(zero-based) was expected to be a double tensor but is a float tensor [Op:AddV2]

Fix: make both operands the same dtype.
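
A minimal sketch of the mismatch and two ways to fix it (the constants here are illustrative):

import numpy as np
import tensorflow as tf

a = tf.constant(np.linspace(0, 2*np.pi, 3))  # float64: numpy defaults to double
b = tf.constant(1.0)                         # float32
# a + b  -> InvalidArgumentError: cannot compute AddV2 ... double ... float

a = tf.cast(a, tf.float32)  # or create the array with dtype='float32' up front
print((a + b).dtype)        # float32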