{"id":8498,"date":"2019-11-12T23:50:25","date_gmt":"2019-11-12T14:50:25","guid":{"rendered":"http:\/\/www.gisdeveloper.co.kr\/?p=8498"},"modified":"2020-05-28T09:55:31","modified_gmt":"2020-05-28T00:55:31","slug":"%ec%9d%b4%eb%af%b8%ec%a7%80-%eb%b6%84%eb%a5%98-%eb%aa%a8%eb%8d%b8%ec%9d%98-%ea%b5%ac%ec%84%b1-%eb%a0%88%ec%9d%b4%ec%96%b4%ec%97%90-%eb%8c%80%ed%95%9c-%ea%b2%b0%ea%b3%bc%ea%b0%92-%ec%8b%9c%ea%b0%81","status":"publish","type":"post","link":"http:\/\/www.gisdeveloper.co.kr\/?p=8498","title":{"rendered":"\uc774\ubbf8\uc9c0 \ubd84\ub958 \ubaa8\ub378\uc758 \uad6c\uc131 \ub808\uc774\uc5b4\uc5d0 \ub300\ud55c \uacb0\uacfc\uac12 \uc2dc\uac01\ud654"},"content":{"rendered":"<p>\uc774\ubbf8\uc9c0\uc5d0 \ub300\ud55c Classification \ubc0f Detection, Segmentation\uc5d0 \ub300\ud55c \uc2e0\uacbd\ub9dd \ubaa8\ub378\uc744 \uad6c\uc131\ud558\ub294 \ub808\uc774\uc5b4 \uc911 Convolution \uad00\ub828 \ub808\uc774\uc5b4\uc758 \uacb0\uacfc\uac12\uc5d0 \ub300\ud55c \uc2dc\uac01\ud654\uc5d0 \ub300\ud55c \ub0b4\uc6a9\uc785\ub2c8\ub2e4. 
The examples are written with PyTorch, and they use VGG, one of the easiest CNN models to understand, as the target.

First, load the required packages and the pretrained VGG model, and print its layer composition:

```python
import matplotlib.pyplot as plt
from torchvision import transforms
from torchvision import models
from PIL import Image

vgg = models.vgg16(pretrained=True).cuda()
print(vgg)
```

The result is as follows:

```
VGG(
  (features): Sequential(
    (0): Conv2d(3, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (1): ReLU(inplace=True)
    (2): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (3): ReLU(inplace=True)
    (4): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
    (5): Conv2d(64, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (6): ReLU(inplace=True)
    (7): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (8): ReLU(inplace=True)
    (9): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
    (10): Conv2d(128, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (11): ReLU(inplace=True)
    (12): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (13): ReLU(inplace=True)
    (14): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (15): ReLU(inplace=True)
    .
    .
    (omitted)
```

Now let's visualize the output of layer 0 among the feature-extraction layers above. PyTorch can call a registered function with a specific layer's input data and the result of its computation as soon as that computation finishes (a forward hook). Below is a class for this:

```python
class LayerResult:
    def __init__(self, layers, layer_index):
        # Register a forward hook on the layer we want to inspect.
        self.hook = layers[layer_index].register_forward_hook(self.hook_fn)

    def hook_fn(self, module, input, output):
        # Called after the layer's forward pass; keep the output
        # as a NumPy array for visualization.
        self.features = output.cpu().data.numpy()

    def unregister_forward_hook(self):
        self.hook.remove()
```

LayerResult takes, as constructor arguments, the module container and the index that identify the layer whose computation result will be inspected. It calls that layer's register_forward_hook function to register a callback that receives the result. The registered callback converts the result into a data structure suitable for visualization.
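The hook mechanism above can be exercised without a GPU or a pretrained download — a minimal sketch on a small stand-in model (the two-layer network and tensor shapes below are illustrative, not from the post):

```python
import torch
import torch.nn as nn

class LayerResult:
    def __init__(self, layers, layer_index):
        self.hook = layers[layer_index].register_forward_hook(self.hook_fn)

    def hook_fn(self, module, input, output):
        self.features = output.cpu().data.numpy()

    def unregister_forward_hook(self):
        self.hook.remove()

# Hypothetical stand-in for vgg.features: two small conv layers.
features = nn.Sequential(
    nn.Conv2d(3, 8, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.Conv2d(8, 16, kernel_size=3, padding=1),
)

result = LayerResult(features, 0)   # hook the first conv layer
x = torch.randn(1, 3, 32, 32)       # fake 1x3x32x32 RGB batch
features(x)                         # the forward pass fires the hook

print(result.features.shape)        # (1, 8, 32, 32): 8 output channels
result.unregister_forward_hook()    # stop capturing once done
```

Removing the hook afterwards matters in real use: a forgotten hook keeps capturing (and holding) activations on every forward pass.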
The code that uses this class is as follows:

```python
result = LayerResult(vgg.features, 0)

img = Image.open('./images/cat.jpg')
img = transforms.ToTensor()(img).unsqueeze(0)
vgg(img.cuda())

activations = result.features
```

After the last line above, activations holds the output of the chosen layer. The code to display this result is as follows:

```python
fig, axes = plt.subplots(8, 8)
for row in range(8):
    for column in range(8):
        axis = axes[row][column]
        axis.get_xaxis().set_ticks([])
        axis.get_yaxis().set_ticks([])
        axis.imshow(activations[0][row*8 + column])

plt.show()
```

There are 64 result images in total; looking back at VGG's layer composition, this is because the first layer has 64 output channels. The result looks like this:

[Image: http://www.gisdeveloper.co.kr/wp-content/uploads/2019/11/VGG_Layer_Result.jpg — activation maps of the first convolution layer]

In addition, the weight values of a specific layer can be visualized as well.
The code below is an example:

```python
import matplotlib.pyplot as plt
from torchvision import transforms
from torchvision import models
from PIL import Image

vgg = models.vgg16(pretrained=True).cuda()

print(vgg.state_dict().keys())
weights = vgg.state_dict()['features.0.weight'].cpu()

fig, axes = plt.subplots(8, 8)
for row in range(8):
    for column in range(8):
        axis = axes[row][column]
        axis.get_xaxis().set_ticks([])
        axis.get_yaxis().set_ticks([])
        axis.imshow(weights[row*8 + column])

plt.show()
```

The print(vgg.state_dict().keys()) call prints the IDs of the layers that carry weights; its output is as follows:

```
odict_keys(['features.0.weight', 'features.0.bias', 'features.2.weight', 'features.2.bias', 'features.5.weight', 'features.5.bias', 'features.7.weight', 'features.7.bias', 'features.10.weight', 'features.10.bias', 'features.12.weight', 'features.12.bias', 'features.14.weight', 'features.14.bias', 'features.17.weight', 'features.17.bias', 'features.19.weight', 'features.19.bias', 'features.21.weight', 'features.21.bias', 'features.24.weight', 'features.24.bias', 'features.26.weight', 'features.26.bias', 'features.28.weight', 'features.28.bias', 'classifier.0.weight', 'classifier.0.bias', 'classifier.3.weight',
'classifier.3.bias', 'classifier.6.weight', 'classifier.6.bias'])
```

With these layer IDs you can pinpoint the layer whose weight values to fetch. Finally, the code above visualizes the weights as follows:

[Image: http://www.gisdeveloper.co.kr/wp-content/uploads/2019/11/VGG_Layer_Weight.jpg — kernels of the first convolution layer]
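The gaps in the ID sequence above (no features.1, features.3, features.4, …) fall out of how state_dict works: only parameterized modules such as Conv2d and Linear contribute entries, while ReLU and MaxPool2d have none. A minimal CPU-only sketch on a hypothetical small Sequential (the layer sizes are illustrative):

```python
import torch.nn as nn

# Hypothetical stand-in for vgg.features: conv / relu / conv.
features = nn.Sequential(
    nn.Conv2d(3, 8, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.Conv2d(8, 16, kernel_size=3, padding=1),
)

# ReLU (index 1) has no parameters, so its index is skipped in the
# state_dict keys, just as in the VGG listing above.
print(list(features.state_dict().keys()))
# -> ['0.weight', '0.bias', '2.weight', '2.bias']

# A conv weight tensor is (out_channels, in_channels, kH, kW).
w = features.state_dict()['0.weight']
print(tuple(w.shape))   # (8, 3, 3, 3)
```

The (out_channels, in_channels, kH, kW) layout is also why `weights[row*8 + column]` in the VGG example indexes one 3-channel 3x3 kernel per subplot.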