
sockets - Java, Netty, TCP and UDP connection integration: No buffer space available for UDP connection

I have an application which uses both TCP and UDP protocols. The main assumption is that the client connects to the server via TCP, and once the connection is established, UDP datagrams are sent. I have to support two scenarios of connecting to the server:

- the client connects while the server is running
- the client connects while the server is down and retries the connection until the server starts again

In the first scenario everything works fine: both connections are established. The problem is with the second scenario. When the client retries the TCP connection a few times and finally connects, the function that opens the UDP connection throws an exception:

java.net.SocketException: No buffer space available (maximum connections reached?): bind
at sun.nio.ch.Net.bind0(Native Method)
at sun.nio.ch.Net.bind(Net.java:344)
at sun.nio.ch.DatagramChannelImpl.bind(DatagramChannelImpl.java:684)
at sun.nio.ch.DatagramSocketAdaptor.bind(DatagramSocketAdaptor.java:91)
at io.netty.channel.socket.nio.NioDatagramChannel.doBind(NioDatagramChannel.java:192)
at io.netty.channel.AbstractChannel$AbstractUnsafe.bind(AbstractChannel.java:484)
at io.netty.channel.DefaultChannelPipeline$HeadContext.bind(DefaultChannelPipeline.java:1080)
at io.netty.channel.AbstractChannelHandlerContext.invokeBind(AbstractChannelHandlerContext.java:430)
at io.netty.channel.AbstractChannelHandlerContext.bind(AbstractChannelHandlerContext.java:415)
at io.netty.channel.DefaultChannelPipeline.bind(DefaultChannelPipeline.java:903)
at io.netty.channel.AbstractChannel.bind(AbstractChannel.java:197)
at io.netty.bootstrap.AbstractBootstrap$2.run(AbstractBootstrap.java:350)
at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:380)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:357)
at io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:116)
at io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:137)
at java.lang.Thread.run(Thread.java:722)

When I restart the client application without doing anything on the server side, the client connects without any problems.

What could be causing this problem?

Below I attach the source code of the classes. All of it comes from the examples on the official Netty project page. The only thing I have modified is replacing the static variables and functions with non-static ones, because in the future I will need many TCP/UDP connections to multiple servers.

public final class UptimeClient {
    static final String HOST = System.getProperty("host", "192.168.2.193");
    static final int PORT = Integer.parseInt(System.getProperty("port", "2011"));
    static final int RECONNECT_DELAY = Integer.parseInt(System.getProperty("reconnectDelay", "5"));
    static final int READ_TIMEOUT = Integer.parseInt(System.getProperty("readTimeout", "10"));

    private static UptimeClientHandler handler;

    public void runClient() throws Exception {
        configureBootstrap(new Bootstrap()).connect();
    }

    private Bootstrap configureBootstrap(Bootstrap b) {
        return configureBootstrap(b, new NioEventLoopGroup());
    }

    @Override
    protected Object clone() throws CloneNotSupportedException {
        return super.clone();
    }

    Bootstrap configureBootstrap(Bootstrap b, EventLoopGroup g) {
        if (handler == null) {
            handler = new UptimeClientHandler(this);
        }
        b.group(g)
         .channel(NioSocketChannel.class)
         .remoteAddress(HOST, PORT)
         .handler(new ChannelInitializer<SocketChannel>() {
            @Override
            public void initChannel(SocketChannel ch) throws Exception {
                ch.pipeline().addLast(new IdleStateHandler(READ_TIMEOUT, 0, 0), handler);
            }
         });

        return b;
    }

    void connect(Bootstrap b) {
        b.connect().addListener(new ChannelFutureListener() {
            @Override
            public void operationComplete(ChannelFuture future) throws Exception {
                if (future.cause() != null) {
                    handler.startTime = -1;
                    handler.println("Failed to connect: " + future.cause());
                }
            }
        });
    }
}


@Sharable
public class UptimeClientHandler extends SimpleChannelInboundHandler<Object> {
    UptimeClient client;
    long startTime = -1;

    public UptimeClientHandler(UptimeClient client) {
        this.client = client;
    }

    @Override
    public void channelActive(ChannelHandlerContext ctx) {
        try {
            if (startTime < 0) {
                startTime = System.currentTimeMillis();
            }
            println("Connected to: " + ctx.channel().remoteAddress());
            new QuoteOfTheMomentClient(null).run();
        } catch (Exception ex) {
            Logger.getLogger(UptimeClientHandler.class.getName()).log(Level.SEVERE, null, ex);
        }
    }

    @Override
    public void channelRead0(ChannelHandlerContext ctx, Object msg) throws Exception {
    }

    @Override
    public void userEventTriggered(ChannelHandlerContext ctx, Object evt) {
        if (!(evt instanceof IdleStateEvent)) {
            return;
        }

        IdleStateEvent e = (IdleStateEvent) evt;
        if (e.state() == IdleState.READER_IDLE) {
            // The connection was OK but there was no traffic for the last period.
            println("Disconnecting due to no inbound traffic");
            ctx.close();
        }
    }

    @Override
    public void channelInactive(final ChannelHandlerContext ctx) {
        println("Disconnected from: " + ctx.channel().remoteAddress());
    }

    @Override
    public void channelUnregistered(final ChannelHandlerContext ctx) throws Exception {
        println("Sleeping for: " + UptimeClient.RECONNECT_DELAY + 's');

        final EventLoop loop = ctx.channel().eventLoop();
        loop.schedule(new Runnable() {
            @Override
            public void run() {
                println("Reconnecting to: " + UptimeClient.HOST + ':' + UptimeClient.PORT);
                client.connect(client.configureBootstrap(new Bootstrap(), loop));
            }
        }, UptimeClient.RECONNECT_DELAY, TimeUnit.SECONDS);
    }

    @Override
    public void exceptionCaught(ChannelHandlerContext ctx, Throwable cause) {
        cause.printStackTrace();
        ctx.close();
    }

    void println(String msg) {
        if (startTime < 0) {
            System.err.format("[SERVER IS DOWN] %s%n", msg);
        } else {
            System.err.format("[UPTIME: %5ds] %s%n", (System.currentTimeMillis() - startTime) / 1000, msg);
        }
    }
}

public final class QuoteOfTheMomentClient {

    private ServerData config;

    public QuoteOfTheMomentClient(ServerData config) {
        this.config = config;
    }

    public void run() throws Exception {
        EventLoopGroup group = new NioEventLoopGroup();
        try {
            Bootstrap b = new Bootstrap();
            b.group(group)
             .channel(NioDatagramChannel.class)
             .option(ChannelOption.SO_BROADCAST, true)
             .handler(new QuoteOfTheMomentClientHandler());

            Channel ch = b.bind(0).sync().channel();

            ch.writeAndFlush(new DatagramPacket(
                    Unpooled.copiedBuffer("QOTM?", CharsetUtil.UTF_8),
                    new InetSocketAddress("192.168.2.193", 8193))).sync();

            if (!ch.closeFuture().await(5000)) {
                System.err.println("QOTM request timed out.");
            }
        } catch (Exception ex) {
            ex.printStackTrace();
        } finally {
            group.shutdownGracefully();
        }
    }
}

public class QuoteOfTheMomentClientHandler extends SimpleChannelInboundHandler<DatagramPacket> {

    @Override
    public void channelRead0(ChannelHandlerContext ctx, DatagramPacket msg) throws Exception {
        String response = msg.content().toString(CharsetUtil.UTF_8);
        if (response.startsWith("QOTM: ")) {
            System.out.println("Quote of the Moment: " + response.substring(6));
            ctx.close();
        }
    }

    @Override
    public void exceptionCaught(ChannelHandlerContext ctx, Throwable cause) {
        cause.printStackTrace();
        ctx.close();
    }
}


1 Reply


If your server is Windows Server 2008 (R2 or R2 SP1), this problem is most likely the one described and solved by this Stack Overflow answer, which refers to Microsoft KB article #2577795:

This issue occurs because of a race condition in the Ancillary Function Driver for WinSock (Afd.sys) that causes sockets to be leaked. With time, the issue that is described in the "Symptoms" section occurs if all available socket resources are exhausted.
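
To make the failure mode concrete, here is a minimal, hypothetical sketch (plain Java NIO, not part of the poster's code, with an illustrative class name) that leaks UDP sockets on purpose. Once the machine's socket resources or ephemeral ports are exhausted, the bind fails with the same SocketException seen in the question:

import java.io.IOException;
import java.net.InetSocketAddress;
import java.nio.channels.DatagramChannel;
import java.util.ArrayList;
import java.util.List;

public class UdpPortExhaustionDemo {
    public static void main(String[] args) {
        List<DatagramChannel> leaked = new ArrayList<DatagramChannel>();
        try {
            while (true) {
                DatagramChannel ch = DatagramChannel.open();
                // Bind to port 0: the OS assigns a fresh ephemeral port.
                // The channel is never closed, so the port is never returned.
                ch.bind(new InetSocketAddress(0));
                leaked.add(ch);
            }
        } catch (IOException e) {
            // On an exhausted system this is where
            // "java.net.SocketException: No buffer space available" shows up.
            System.err.println("bind failed after " + leaked.size() + " sockets: " + e);
        }
    }
}

The KB article's point is that on an affected Windows build the kernel leaks these resources on its own, so the application sees the error even without a leak of its own.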


If your server is Windows Server 2003, this problem is most likely the one described and solved by this Stack Overflow answer, which refers to Microsoft KB article #196271:

The default maximum number of ephemeral TCP ports is 5000 in the products that are included in the "Applies to" section. A new parameter has been added in these products. To increase the maximum number of ephemeral ports, follow these steps...

...which basically means that the machine has run out of ephemeral ports: the UDP channel's bind to port 0 cannot be assigned a free local port and fails. The KB article's fix is to raise that 5000-port cap via the MaxUserPort registry value under HKLM\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters.
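
On the application side, one way to consume fewer ephemeral ports is to bind the UDP channel once and reuse it for every request, instead of creating a new NioEventLoopGroup and a new socket each time, as the posted QuoteOfTheMomentClient.run() does. Here is a minimal sketch under that assumption (ReusableQotmClient is a hypothetical name; it reuses the handler class from the question):

import java.net.InetSocketAddress;
import io.netty.bootstrap.Bootstrap;
import io.netty.buffer.Unpooled;
import io.netty.channel.Channel;
import io.netty.channel.ChannelOption;
import io.netty.channel.EventLoopGroup;
import io.netty.channel.nio.NioEventLoopGroup;
import io.netty.channel.socket.DatagramPacket;
import io.netty.channel.socket.nio.NioDatagramChannel;
import io.netty.util.CharsetUtil;

public final class ReusableQotmClient {
    private final EventLoopGroup group = new NioEventLoopGroup();
    private volatile Channel udpChannel; // bound once, then reused

    // Lazily bind a single UDP channel; later calls reuse the same local port.
    private synchronized Channel udpChannel() throws InterruptedException {
        if (udpChannel == null || !udpChannel.isActive()) {
            Bootstrap b = new Bootstrap()
                    .group(group)
                    .channel(NioDatagramChannel.class)
                    .option(ChannelOption.SO_BROADCAST, true)
                    // Note: a reused channel's handler must not call ctx.close()
                    // after each response, unlike QuoteOfTheMomentClientHandler.
                    .handler(new QuoteOfTheMomentClientHandler());
            udpChannel = b.bind(0).sync().channel();
        }
        return udpChannel;
    }

    public void requestQuote(InetSocketAddress server) throws InterruptedException {
        udpChannel().writeAndFlush(new DatagramPacket(
                Unpooled.copiedBuffer("QOTM?", CharsetUtil.UTF_8), server));
    }

    public void shutdown() {
        group.shutdownGracefully(); // the single local port is released only here
    }
}

This does not fix the underlying Windows issue, but it keeps the client at a single local UDP port no matter how many times the TCP connection is retried.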

