How can I detect when a client has disconnected from a socket? On Windows a SocketClose event is raised, but with standard BSD sockets, how can I do that?
With standard Berkeley sockets, when the peer disconnects, select() will report the socket as readable, and a subsequent read() will return 0 bytes.
But if you are writing to a socket that has been closed at the other end, you will get a SIGPIPE signal on the second write. The first write succeeds locally but prompts the peer to reply with an RST, since its socket is already closed; the second write then generates SIGPIPE.
And of course, a read() on a socket closed by the peer returns 0.