python - How to create variable names in loop for layers in pytorch neural network

I am implementing a straightforward feedforward neural network in PyTorch. However, I am wondering if there is a nicer way to add a flexible number of layers to the network? Maybe by naming them during a loop, but I heard that's impossible?

Currently I am doing it like this:

import torch
import torch.nn as nn
import torch.nn.functional as F

class Net(nn.Module):

    def __init__(self, input_dim, output_dim, hidden_dim):
        super(Net, self).__init__()
        self.input_dim = input_dim
        self.output_dim = output_dim
        self.hidden_dim = hidden_dim
        self.layer_dim = len(hidden_dim)
        self.fc1 = nn.Linear(self.input_dim, self.hidden_dim[0])
        # one hard-coded if-block per possible additional hidden layer
        i = 1
        if self.layer_dim > i:
            self.fc2 = nn.Linear(self.hidden_dim[i-1], self.hidden_dim[i])
            i += 1
        if self.layer_dim > i:
            self.fc3 = nn.Linear(self.hidden_dim[i-1], self.hidden_dim[i])
            i += 1
        if self.layer_dim > i:
            self.fc4 = nn.Linear(self.hidden_dim[i-1], self.hidden_dim[i])
            i += 1
        if self.layer_dim > i:
            self.fc5 = nn.Linear(self.hidden_dim[i-1], self.hidden_dim[i])
            i += 1
        if self.layer_dim > i:
            self.fc6 = nn.Linear(self.hidden_dim[i-1], self.hidden_dim[i])
            i += 1
        if self.layer_dim > i:
            self.fc7 = nn.Linear(self.hidden_dim[i-1], self.hidden_dim[i])
            i += 1
        if self.layer_dim > i:
            self.fc8 = nn.Linear(self.hidden_dim[i-1], self.hidden_dim[i])
            i += 1
        self.fcn = nn.Linear(self.hidden_dim[-1], self.output_dim)

    def forward(self, x):
        # pass the input through the first hidden layer
        x = F.relu(self.fc1(x))
        i = 1
        if self.layer_dim > i:
            x = F.relu(self.fc2(x))
            i += 1
        if self.layer_dim > i:
            x = F.relu(self.fc3(x))
            i += 1
        if self.layer_dim > i:
            x = F.relu(self.fc4(x))
            i += 1
        if self.layer_dim > i:
            x = F.relu(self.fc5(x))
            i += 1
        if self.layer_dim > i:
            x = F.relu(self.fc6(x))
            i += 1
        if self.layer_dim > i:
            x = F.relu(self.fc7(x))
            i += 1
        if self.layer_dim > i:
            x = F.relu(self.fc8(x))
            i += 1
        # softmax over the output dimension
        x = F.softmax(self.fcn(x), dim=-1)
        return x

1 Reply

You can put your layers in a ModuleList container:

import torch
import torch.nn as nn
import torch.nn.functional as F

class Net(nn.Module):

    def __init__(self, input_dim, output_dim, hidden_dim):
        super(Net, self).__init__()
        self.input_dim = input_dim
        self.output_dim = output_dim
        self.hidden_dim = hidden_dim
        current_dim = input_dim
        # build the hidden layers in a loop and register them in a ModuleList
        self.layers = nn.ModuleList()
        for hdim in hidden_dim:
            self.layers.append(nn.Linear(current_dim, hdim))
            current_dim = hdim
        # final layer maps to the output dimension
        self.layers.append(nn.Linear(current_dim, output_dim))

    def forward(self, x):
        # apply ReLU after every layer except the last one
        for layer in self.layers[:-1]:
            x = F.relu(layer(x))
        # softmax over the output dimension
        out = F.softmax(self.layers[-1](x), dim=-1)
        return out
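
A quick usage sketch for the class above (the dimensions here are made up purely for illustration):

net = Net(input_dim=10, output_dim=3, hidden_dim=[64, 32])
x = torch.randn(5, 10)   # batch of 5 samples, 10 features each
y = net(x)               # shape (5, 3); each row sums to 1 after softmax
print(y.shape)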

It is very important to use PyTorch containers (such as nn.ModuleList) for the layers, and not plain Python lists. Please see this answer to understand why.
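
As a minimal sketch of the difference (this example is not from the original answer, and the class names are made up): layers kept in a plain Python list are not registered as submodules, so model.parameters() never sees them and an optimizer would not train them.

import torch.nn as nn

class PlainListNet(nn.Module):
    # hypothetical example: layers stored in a plain Python list are NOT registered
    def __init__(self):
        super(PlainListNet, self).__init__()
        self.layers = [nn.Linear(4, 4)]

class ModuleListNet(nn.Module):
    # hypothetical example: nn.ModuleList registers its entries as submodules
    def __init__(self):
        super(ModuleListNet, self).__init__()
        self.layers = nn.ModuleList([nn.Linear(4, 4)])

print(len(list(PlainListNet().parameters())))   # 0 -- the optimizer would see nothing
print(len(list(ModuleListNet().parameters())))  # 2 -- weight and bias of the Linear layer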

