1) Implementing the step function - simple version
def step_func(x):
    if x > 0:
        print("1")
        return 1
    elif x <= 0:
        print("0")
        return 0

step_func(3)
step_func(-1)
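Note that this scalar version breaks on NumPy arrays, which is what motivates the version in 2): `if x > 0` is ambiguous for a multi-element array. A quick check, reusing step_func from above:

import numpy as np

# Raises "ValueError: The truth value of an array with more than one
# element is ambiguous" because `if x > 0` needs a single boolean.
step_func(np.array([-1, 3]))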
2) Implementing the step function - array-compatible version: NumPy is required here
import numpy as np

def step_func1(x):
    y = x > 0                     # boolean array
    print(y.astype(int))          # np.int was removed in NumPy 1.24; use plain int
    return y.astype(int)

def step_func2(x):
    y = x > 0
    print(y.astype(int))
    return y.astype(int)

step_func2(np.array([-1, 3]))     # [0 1]
step_func1(np.array([4]))         # [1]
3) Plotting the step function - using matplotlib
** In the function implementation, there is no real need to wrap the result in np.array(~); the comparison already comes out in array form.
import numpy as np
import matplotlib.pylab as plt

def step_func(x):
    return np.array(x > 0, dtype=int)   # int instead of the removed np.int

x = np.arange(-5.0, 5.0, 0.1)
y = step_func(x)
plt.plot(x, y)
plt.ylim(-0.1, 1.1)
plt.show()
4) Implementing the sigmoid function
import numpy as np

def sigmoid(x):
    print(1 / (1 + np.exp(-x)))
    return 1 / (1 + np.exp(-x))

sigmoid(np.array([-1, 1, 2]))
5) Plotting the sigmoid function with matplotlib - even implemented this way, the output values come out as an array
import numpy as np
import matplotlib.pylab as plt

def sigmoid(x):
    return 1 / (1 + np.exp(-x))

x = np.arange(-5, 5, 0.1)
y = sigmoid(x)
plt.plot(x, y)
plt.ylim(-0.1, 1.1)
plt.show()
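To see that the sigmoid is a smooth version of the step function, a minimal sketch in the spirit of the book's comparison figure, overlaying the two on one plot:

import numpy as np
import matplotlib.pylab as plt

def step_func(x):
    return np.array(x > 0, dtype=int)

def sigmoid(x):
    return 1 / (1 + np.exp(-x))

x = np.arange(-5.0, 5.0, 0.1)
plt.plot(x, sigmoid(x))            # smooth sigmoid curve
plt.plot(x, step_func(x), 'k--')   # step function, dashed
plt.ylim(-0.1, 1.1)
plt.show()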
6) Computing a layer's weighted sum with np.dot - basic implementation (a ReLU sketch follows below)
import numpy as np

def neural(x, w):
    b = -2                        # scalar bias, broadcast over every output
    print(np.dot(x, w) + b)
    return np.dot(x, w) + b

x = np.array([1, 2])                  # input: 2 values
w = np.array([[1, 3, 5], [2, 4, 6]])  # weights: 2 inputs -> 3 outputs
neural(x, w)                          # [ 3  9 15]
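For completeness, since this chapter also covers ReLU, a minimal ReLU sketch using np.maximum, as in the book:

import numpy as np

def relu(x):
    # Element-wise max: negative inputs become 0, positives pass through.
    return np.maximum(0, x)

print(relu(np.array([-2.0, 0.5, 3.0])))   # [0.  0.5 3. ]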
9) Implementing a 3-layer neural network
import numpy as np

def sigmoid(x):
    return 1 / (1 + np.exp(-x))

def identity_func(x):
    return x

def init_network():
    network = {}
    network['w1'] = np.array([[0.1, 0.2, 0.5], [0.2, 0.4, 0.6]])
    network['b1'] = np.array([0.1, 0.2, 0.3])
    network['w2'] = np.array([[0.1, 0.4], [0.2, 0.5], [0.3, 0.6]])
    network['b2'] = np.array([0.1, 0.2])   # must be 1-D; a (2, 1) bias would broadcast a2 into a 2x2 matrix
    network['w3'] = np.array([[0.1, 0.3], [0.2, 0.4]])
    network['b3'] = np.array([0.1, 0.2])
    return network

def forward(network, x):
    w1, w2, w3 = network['w1'], network['w2'], network['w3']
    b1, b2, b3 = network['b1'], network['b2'], network['b3']

    a1 = np.dot(x, w1) + b1
    z1 = sigmoid(a1)
    a2 = np.dot(z1, w2) + b2
    z2 = sigmoid(a2)
    a3 = np.dot(z2, w3) + b3
    y = identity_func(a3)

    return y

network = init_network()
x = np.array([1.0, 0.5])
y = forward(network, x)
print(y)
10) Implementing the softmax function - basic version
import numpy as np

def softmax(a):
    exp_a = np.exp(a)
    sum_exp_a = np.sum(exp_a)
    y = exp_a / sum_exp_a        # divide, not y = exp_a = sum_exp_a
    return y
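Why the revision in 11) is needed: np.exp overflows for large inputs, so this basic version breaks down. A quick check with the book's example values:

import numpy as np

a = np.array([1010, 1000, 990])
print(np.exp(a) / np.sum(np.exp(a)))   # RuntimeWarning: overflow -> [nan nan nan]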
11) Implementing the softmax function - revised (subtracting the maximum to avoid overflow)
import numpy as np

def softmax(a):
    c = np.max(a)
    exp_a = np.exp(a - c)        # exp(a - c) gives the same ratios without overflow
    sum_exp_a = np.sum(exp_a)
    y = exp_a / sum_exp_a
    return y
12) A property of softmax - the outputs sum to 1
import numpy as np

def softmax(a):
    c = np.max(a)
    exp_a = np.exp(a - c)
    sum_exp_a = np.sum(exp_a)
    y = exp_a / sum_exp_a
    return y

a = np.array([0.3, 2.9, 4.0])
part = softmax(a)
print(np.sum(part))              # 1.0
13) Using the MNIST dataset
** How to set up dataset.mnist: create a separate folder named dataset and put the mnist.py file inside it.
Place that folder in the same directory as your current .py file (not in the Scripts or include directories).
import sys, os
sys.path.append(os.pardir)       # make the parent directory (which holds dataset/) importable
from dataset.mnist import load_mnist

(x_train, t_train), (x_test, t_test) = load_mnist(flatten=True, normalize=False)

print(x_train.shape)             # (60000, 784)
print(t_train.shape)             # (60000,)
print(x_test.shape)              # (10000, 784)
print(t_test.shape)              # (10000,)
14) Displaying an image from the MNIST dataset
import sys, os
sys.path.append(os.pardir)
from dataset.mnist import load_mnist
import numpy as np
from PIL import Image

def img_show(img):
    pil_img = Image.fromarray(np.uint8(img))
    pil_img.show()

(x_train, t_train), (x_test, t_test) = load_mnist(flatten=True, normalize=False)

img = x_train[0]
label = t_train[0]
print(label)                     # 5
print(img.shape)                 # (784,)

img = img.reshape(28, 28)        # restore the flattened image to 28 x 28
print(img.shape)                 # (28, 28)

img_show(img)
15) Inference with the neural network
import sys, os
sys.path.append(os.pardir)       # needed so dataset.mnist can be imported
import pickle
from dataset.mnist import load_mnist
import numpy as np

def get_data():
    (x_train, t_train), (x_test, t_test) = load_mnist(normalize=True, flatten=True, one_hot_label=False)
    return x_test, t_test

def init_network():
    with open("sample_weight.pkl", 'rb') as f:
        network = pickle.load(f)
    return network

def sigmoid(x):
    return 1 / (1 + np.exp(-x))

def softmax(x):
    c = np.max(x)
    exp_x = np.exp(x - c)
    sum_exp_x = np.sum(exp_x)
    return exp_x / sum_exp_x

def predict(network, x):
    # sample_weight.pkl stores the weight keys in uppercase: 'W1', 'W2', 'W3'
    w1, w2, w3 = network['W1'], network['W2'], network['W3']
    b1, b2, b3 = network['b1'], network['b2'], network['b3']

    a1 = np.dot(x, w1) + b1
    z1 = sigmoid(a1)
    a2 = np.dot(z1, w2) + b2
    z2 = sigmoid(a2)
    a3 = np.dot(z2, w3) + b3
    y = softmax(a3)

    return y
16) Neural network inference - measuring accuracy
import sys, os
sys.path.append(os.pardir)
import pickle
from dataset.mnist import load_mnist
import numpy as np

def get_data():
    (x_train, t_train), (x_test, t_test) = load_mnist(normalize=True, flatten=True, one_hot_label=False)
    return x_test, t_test

def init_network():
    with open("sample_weight.pkl", 'rb') as f:
        network = pickle.load(f)
    return network

def sigmoid(x):
    return 1 / (1 + np.exp(-x))

def softmax(x):
    c = np.max(x)
    exp_x = np.exp(x - c)
    sum_exp_x = np.sum(exp_x)
    return exp_x / sum_exp_x

def predict(network, x):
    w1, w2, w3 = network['W1'], network['W2'], network['W3']
    b1, b2, b3 = network['b1'], network['b2'], network['b3']

    a1 = np.dot(x, w1) + b1
    z1 = sigmoid(a1)
    a2 = np.dot(z1, w2) + b2
    z2 = sigmoid(a2)
    a3 = np.dot(z2, w3) + b3
    y = softmax(a3)

    return y

x, t = get_data()
network = init_network()
print(network)                   # dumps the loaded weight arrays (debug output)

accuracy_cnt = 0
for i in range(len(x)):
    y = predict(network, x[i])
    p = np.argmax(y)             # index of the highest probability
    if p == t[i]:
        accuracy_cnt += 1

print("Accuracy:" + str(float(accuracy_cnt) / len(x)))
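For reference, a sketch of the batch version the book introduces right after this, reusing get_data, init_network, and predict from above. Images are processed 100 at a time and argmax is taken along axis 1; note that predict's softmax then normalizes over the whole batch at once, but dividing every entry by the same scalar does not change each row's argmax, so the accuracy is unaffected.

x, t = get_data()
network = init_network()

batch_size = 100                              # number of images per forward pass
accuracy_cnt = 0

for i in range(0, len(x), batch_size):
    x_batch = x[i:i + batch_size]             # shape (100, 784)
    y_batch = predict(network, x_batch)       # shape (100, 10)
    p = np.argmax(y_batch, axis=1)            # best class for each row
    accuracy_cnt += np.sum(p == t[i:i + batch_size])

print("Accuracy:" + str(float(accuracy_cnt) / len(x)))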