
Multiple networks on a single Linux Server (routing)

Status
Not open for further replies.

forrie

MIS
Mar 6, 2009
91
US
I have a Linux server that I want to give access to two different routed networks; one of the interfaces will have dhcpd bound to it. We'll call them:

eth0: 192.168.1.0/24

eth1: 10.1.1.0/24

For whatever reason, when I configure the interface for the second network (both are internal RFC 1918 networks), I cannot SSH to the regular host IP, which is on the other network. My Mac is on the 10.1.1.0/24 network, but the two networks can route to each other.

The system's default router is 192.168.1.1. I can otherwise ping and nmap the host from 10.1.1.0/24, but I cannot connect via a protocol like SSH or telnet.

I'm utterly baffled, and I'm sure I've overlooked something simple and stupid.

Could someone give me some clarification?


Thanks.
 
There are a couple of things that could be at issue here. I have been running into the same, or at least a similar, issue on my own setup; see my other thread for more information.

First, make sure that the application (SSH, telnet, etc.) is listening on the desired network. You can use netstat -nta | grep port, where port is the application's port (for example, 22 for SSH). You may find that it is only listening on the other network: applications can often be bound to a single IP address for security, and that may be enabled here.

What I have been finding in my own setup is that the server seems to 'listen' only on its default-gateway segment. I suspect it has something to do with the routing table; you can get that information with the 'route' command, and in combination with traceroute it should show you how traffic is being directed. If the devices are on the same network, it should be reached in one hop.
 
Well, for SSH, look at your config file, probably located at /etc/ssh/sshd_config, for a line like this:
Code:
ListenAddress 0.0.0.0
It may be commented out, but if it looks like the one above, sshd is listening on all addresses bound to the machine. If not, it will listen only on the address specified.
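For completeness, here is what a restricted binding would look like. The addresses below are illustrative (I'm assuming 192.168.1.9 and 10.1.1.5 for the two interfaces, which are not values confirmed in this thread), and sshd accepts multiple ListenAddress lines:

```
# /etc/ssh/sshd_config (fragment)
# Listen only on the 192.168.1.0/24 interface:
ListenAddress 192.168.1.9
# Add a second line to listen on the other network too:
ListenAddress 10.1.1.5
```

With no ListenAddress at all (or 0.0.0.0), sshd binds to every address on the machine.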

It could also be a firewall preventing access.

Using traceroute may shed some light on any routing issues.

 
There are no packet filters, and there are no directives in /etc/ssh/sshd_config that would restrict which addresses the sshd process listens on.

It occurred to me that I might need to configure the system with static routes for each network, and remove the "default gateway" in /etc/sysconfig/network-scripts/ifcfg-eth0 (GATEWAY=) ... though I'm not certain where this needs to happen in/under the /etc/sysconfig directory.
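For reference, a minimal sketch of what the two interface files under /etc/sysconfig/network-scripts might look like on a Red Hat system. The addresses are illustrative, and GATEWAY= would normally appear on only one interface (or globally in /etc/sysconfig/network):

```
# /etc/sysconfig/network-scripts/ifcfg-eth0
DEVICE=eth0
BOOTPROTO=static
IPADDR=192.168.1.9
NETMASK=255.255.255.0
GATEWAY=192.168.1.1
ONBOOT=yes

# /etc/sysconfig/network-scripts/ifcfg-eth1
DEVICE=eth1
BOOTPROTO=static
IPADDR=10.1.1.5
NETMASK=255.255.255.0
# No GATEWAY here; the default route stays on eth0.
ONBOOT=yes
```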

Thoughts?
 
I read some docs and found that I can place static routes in /etc/sysconfig/network-scripts/route-ethX.
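For reference, the route-ethX format is one route per line. A hypothetical route-eth1 that sends a remote client subnet through a router on the 10.1.1.0/24 side might look like this (10.1.1.1 and 10.111.0.0/24 are stand-ins, not values confirmed in this thread):

```
# /etc/sysconfig/network-scripts/route-eth1
# <network>/<prefix> via <next-hop> [dev <interface>]
10.111.0.0/24 via 10.1.1.1 dev eth1
```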

I tried this for both networks, both keeping and removing the GATEWAY= directive on eth0. What I found is that I can get out from the server, but not in from anywhere else: I can ping it from elsewhere, but I cannot route any TCP to the system from the outside, even on the same LAN segment.

I'm puzzled!

When I remove the connection entry for the secondary network, it all starts working again.

There are no iptables involved, it's wide open.


Thanks...
 
Forrie, out of curiosity, what type of system / distribution are you running?

It sounds like you are running into the same thing I was. I could get traffic out, but nothing could get in. I am not certain whether it is a case of things not listening, or whether the reply tries to route out the wrong interface and gets lost.

What I ultimately did was configure the two (dual-NIC) servers with their gateway interface on the public IP addresses. The second NIC is configured for the LAN only: no gateway, just an address, netmask, and broadcast. The LAN then routes to a separate gateway with a third public IP.
 
@Noway2, thanks for your reply.

This is just Red Hat 5.5. The system has the onboard NICs (already in use) and a 4-port Intel NIC.

I actually hadn't considered that (no route for the second network). These are all internal RFC 1918 networks; nothing is on the public net.

I'll give this a try. But I think I still had a problem with it previously...

 
I just tried the above. Additionally, I set net.ipv4.ip_forward to 1, just for giggles. No luck.
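As an aside, ip_forward only matters if the box is supposed to route packets between its networks; simply accepting connections on multiple interfaces does not require it. A quick way to check the setting, with the runtime and persistent forms shown as comments:

```shell
# Show the current IPv4 forwarding setting (0 = off, 1 = on).
cat /proc/sys/net/ipv4/ip_forward

# Enable it at runtime (root required):
#   echo 1 > /proc/sys/net/ipv4/ip_forward
# Persist it across reboots in /etc/sysctl.conf:
#   net.ipv4.ip_forward = 1
```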

I can nmap and ping the host from the other network, but I cannot actually connect to any TCP service. This is really bizarre; I don't think I've ever seen this before.

There is a third network on there that is point-to-point (no route) that works fine.

I tried the routed network with no default route, then with a static route: no dice.

I'm really puzzled.


 
I've been searching the Internet and found a page whose first part is what I'm curious about, but it still doesn't explain the problem: I can route to/from the point-to-point LAN with no trouble at all, so what's so special about this network?

I have tried changing the IP, just in case, and that had no effect.
 
It doesn't explain the problem, but it is an affirmation that this may be a routing problem: the kernel gets confused and tries to route reply traffic out the default interface, which is the wrong one. It sounds like there is a lack of mapping from the incoming connection to the outgoing path; rather than not 'listening', the server has effectively been muted.

The fact that you are running Red Hat and I am seeing this on an Ubuntu server says it isn't a bug specific to one distribution. Rather, it sounds like we are either doing something wrong, or there is a more fundamental problem in one of the components both distributions share.

The article you found has a good suggestion, and it looks like it may work. If you decide to try it, let us know how it goes. I may try it this weekend, but it sometimes takes a few days to know whether it is fully working, as the symptoms can be vague (such as a slave DNS not updating).
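For what it's worth, the usual cure for this pattern (replies to connections that arrived on the second interface leaving via the default gateway) is source-based policy routing. On Red Hat systems the initscripts support per-interface rule-ethX and route-ethX files for this. The sketch below is illustrative only: 10.1.1.5 stands in for eth1's address, 10.1.1.1 for its router, and table 100 is an arbitrary table number, none of them values confirmed in this thread:

```
# /etc/sysconfig/network-scripts/route-eth1
10.1.1.0/24 dev eth1 src 10.1.1.5 table 100
default via 10.1.1.1 table 100

# /etc/sysconfig/network-scripts/rule-eth1
from 10.1.1.5 table 100
```

The effect is that any packet whose source address belongs to eth1 consults table 100 and leaves through eth1's gateway, instead of following the main table's default route out eth0. The same thing can be tried live (as root) with 'ip route add ... table 100' and 'ip rule add from 10.1.1.5 table 100'.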
 
The routing table is correct as far as I can tell. This should technically be a no-brainer, considering I have one private routed network and another private point-to-point network configured and they both work fine.

My next test will be to move the non-routed network onto an interface on the Intel 4-port NIC and see what happens. If that fails, I will try a different internal routed IP; and if that fails, then there is something wrong with either the card or the driver, as absurd as that may sound.

 
I ran the above-mentioned tests with the point-to-point network, with no trouble.

I discovered there was a zeroconf address also bound to the same interface; it has since been disabled. However, I'm still having trouble even after rebooting the system. Here's what I'm seeing...

My Mac (client) is on the same /24 LAN as eth5 on my other system, so I can ssh to it with no trouble. However, when I try to ssh to the other eth0 address, which is on another LAN, by DNS or by IP, SSH hangs and I see this in netstat on the server:

tcp        0      0 192.168.1.9:22        10.111.0.43:62280        SYN_RECV
tcp        0      0 192.168.1.9:22        10.111.0.43:62287        SYN_RECV
tcp        0      0 192.168.1.9:22        10.111.0.43:62279        SYN_RECV


10.111.0.43 is my client address; 192.168.1.9 is the server.

I'm still working on this... it's a mystery. But the other non-routed network, 10.222.0.0/24 works fine.
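A note on the netstat output above: SYN_RECV means the server received the client's SYN and sent its SYN-ACK, but never saw the final ACK of the handshake. With a routing problem of this kind, the SYN-ACK typically leaves via the wrong interface and is dropped somewhere on the way back. One way to confirm, assuming tcpdump is available (run as root; the interface names here are illustrative):

```
# Watch the interface the SYN arrives on (the eth0 side here):
tcpdump -ni eth0 'tcp port 22'
# In a second terminal, check whether the SYN-ACK sneaks out the
# interface that is directly on the client's LAN instead:
tcpdump -ni eth5 'tcp port 22'
```

If the SYN-ACKs show up on a different interface than the SYNs came in on, it is an asymmetric-routing problem, not a listening problem.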


Thanks...
 

Hi

I am facing a similar issue.

I have a CentOS 5.3 server with two Internet connections, each on a different NIC.

Playing with ip route, I can force traffic to go out one or the other (to browse web sites). However, I can only get in (SSH, web) through ONE address. I see all the symptoms described here.

Did you eventually manage to resolve this?

Many thanks

J.
 