[MXNet Journey] Hands-On Part 1: Fitting a Curve with MXNet (Comparing the MXNet, PyTorch, and TensorFlow Implementations)
In earlier posts we fitted this curve with TensorFlow and with PyTorch, both with good results.
Now we will fit the same curve with MXNet and compare how it differs from the TensorFlow and PyTorch implementations.
The process of building and training the network is essentially the same, so let's get started with MXNet.
- Problem Description
Fit the curve y = x² − 2x + 3 + 0.1ε, where ε is a random value in (−1, 1), over the x range (0, 3).

- Problem Analysis
In the line-fitting post, we successfully fitted a straight line with the simplest possible model, y = wx + b. Now we go a step further and fit a curve. The plain y = wx + b model is no longer enough; we need more neurons, i.e. a hidden layer, to solve the problem.

- Generating the Data

Note that this first snippet still uses PyTorch tensors (it is the data-generation code from the earlier post); the full MXNet version further down regenerates the same data with NumPy and converts it with nd.array.
```python
import numpy as np
import matplotlib.pyplot as plt
import torch as t
from torch.autograd import Variable as var

def get_data(x, w, b, d):
    # Quadratic target y = w*x^2 + b*x + d plus uniform noise in [-0.1, 0.1)
    c, r = x.shape
    y = (w * x * x + b * x + d) + (0.1 * (2 * np.random.rand(c, r) - 1))
    return y

xs = np.arange(0, 3, 0.01).reshape(-1, 1)
ys = get_data(xs, 1, -2, 3)
xs = var(t.Tensor(xs))
ys = var(t.Tensor(ys))
```
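The snippet above imports matplotlib but never actually draws the figure shown below. A minimal sketch that reproduces it (assuming the xs and ys defined above; the marker size and title are my own choices):

```python
import matplotlib.pyplot as plt

# Scatter plot of the noisy quadratic samples (xs, ys from the snippet above)
plt.title("data")
plt.scatter(xs.data.numpy(), ys.data.numpy(), s=2)
plt.show()
```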
The generated data looks like this:
![Plot of the generated noisy quadratic data](https://www.6hu.cc/wp-content/uploads/2022/08/e4a367ac5dd3fcc0e5a75872860d7371.png)
- Building the Network
```python
from mxnet.gluon import loss, nn, data
from mxnet import autograd, nd, gluon, init
import numpy as np
import matplotlib.pyplot as plt

def get_data(x, w, b, d):
    # Quadratic target y = w*x^2 + b*x + d plus uniform noise in [-0.1, 0.1)
    c, r = x.shape
    y = (w * x * x + b * x + d) + (0.1 * (2 * np.random.rand(c, r) - 1))
    return y

xs = np.arange(0, 3, 0.01).reshape(-1, 1)
ys = get_data(xs, 1, -2, 3)
xs, ys = nd.array(xs), nd.array(ys)

batch_size = 100
# Combine the training features and labels.
dataset = data.ArrayDataset(xs, ys)
# Read random mini-batches.
data_iter = data.DataLoader(dataset, batch_size, shuffle=True)

# One hidden layer with 16 ReLU units, then a linear output layer.
model = nn.Sequential()
model.add(nn.Dense(16, activation='relu'))
model.add(nn.Dense(1))
model.initialize(init.Normal(sigma=0.01))
print(model)

loss_f = loss.L2Loss()
trainer = gluon.Trainer(model.collect_params(), 'Adam', {'learning_rate': 0.1})

num_epochs = 1000
for epoch in range(1, num_epochs + 1):
    for X, y in data_iter:
        with autograd.record():
            l = loss_f(model(X), y)
        l.backward()
        # step(batch_size) normalizes the summed gradient by the batch size.
        trainer.step(batch_size)
    l = loss_f(model(xs), ys)
    if epoch % 100 == 0:
        print('epoch %d, loss: %f' % (epoch, l.mean().asnumpy()))

# Plot the ground truth against the fitted curve.
ys_pre = model(xs)
plt.title("curve")
plt.plot(xs.asnumpy(), ys.asnumpy())
plt.plot(xs.asnumpy(), ys_pre.asnumpy())
plt.show()
```
- Results
```
Sequential(
  (0): Dense(None -> 16, Activation(relu))
  (1): Dense(None -> 1, linear)
)
epoch 100, loss: 0.229648
epoch 200, loss: 0.233721
epoch 300, loss: 0.233185
epoch 400, loss: 0.178324
epoch 500, loss: 0.018889
epoch 600, loss: 0.009249
epoch 700, loss: 0.007344
epoch 800, loss: 0.003552
epoch 900, loss: 0.003080
epoch 1000, loss: 0.002648
```
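Notice how the loss sits near 0.23 for the first few hundred epochs before dropping sharply and converging to about 0.0026. To make the comparison promised in the title concrete, here is a minimal PyTorch sketch of the same experiment (same 1 → 16 → 1 architecture, Adam with learning rate 0.1, and 1000 epochs; for brevity it uses full-batch updates instead of a DataLoader, and the variable names are my own). Keep in mind that Gluon's L2Loss carries a factor of 1/2, so its printed losses are roughly half of what MSELoss reports.

```python
import numpy as np
import torch as t
from torch import nn, optim

def get_data(x, w, b, d):
    # Same noisy quadratic target as the MXNet version above
    c, r = x.shape
    return (w * x * x + b * x + d) + (0.1 * (2 * np.random.rand(c, r) - 1))

xs = t.Tensor(np.arange(0, 3, 0.01).reshape(-1, 1))
ys = t.Tensor(get_data(xs.numpy(), 1, -2, 3))

# Same architecture: 1 -> 16 (ReLU) -> 1
model = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1))
loss_f = nn.MSELoss()
optimizer = optim.Adam(model.parameters(), lr=0.1)

for epoch in range(1, 1001):
    pred = model(xs)
    l = loss_f(pred, ys)
    optimizer.zero_grad()  # PyTorch accumulates gradients; clear them first
    l.backward()
    optimizer.step()
    if epoch % 100 == 0:
        print('epoch %d, loss: %f' % (epoch, l.item()))
```

The surface differences are small: Gluon's Dense infers its input width at first use, while torch.nn.Linear needs it spelled out as Linear(1, 16); MXNet records gradients only inside autograd.record() and normalizes them in trainer.step(batch_size), whereas PyTorch tracks gradients by default and instead requires an explicit optimizer.zero_grad(). A TensorFlow/Keras version would typically express the same loop declaratively through model.compile and model.fit.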