Recently I was testing AWS Storage Gateway. AWS Storage Gateway is a product combining an AWS console frontend with an ESXi-based Linux VM (we call it the storage gateway VM). The AWS console is responsible for handling user instructions for the storage gateway and passing them on to the storage VM (e.g. create an iSCSI target on the storage VM, take or restore a snapshot, etc.), while the ESXi-based storage VM is the actual host that carries out the instructions and does the storage work.
From observation, there are two TCP ports listening on the storage gateway VM: 80 and 3260. Port 80 is actually a Java instance responsible for serving API calls (the user submits a request via the AWS console or AWS API, and AWS passes the request to the API handler on port 80 of the storage VM). Port 3260 is the iSCSI target, which handles all iSCSI requests.
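A quick way to confirm which of these ports a gateway VM exposes is a plain TCP connect test. Here is a minimal Python sketch; the demo below runs against a throwaway local listener, and in real use you would pass the storage gateway VM's IP with ports 80 and 3260:

```python
import socket

def is_tcp_open(host, port, timeout=2.0):
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Demo against a throwaway local listener; substitute the gateway VM's
# address and ports 80 / 3260 when testing a real deployment.
listener = socket.socket()
listener.bind(("127.0.0.1", 0))
listener.listen(1)
demo_port = listener.getsockname()[1]

print(is_tcp_open("127.0.0.1", demo_port))  # True: port is listening
listener.close()
print(is_tcp_open("127.0.0.1", demo_port))  # False: connection refused
```

In the NAT scenarios below, running this check from the public side against ports 80 and 3260 is a fast way to tell whether the port mappings are in place at all.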
So, I was asked whether it is possible to run this storage gateway VM behind NAT, i.e. with the VM sitting in a private network. With this objective, there are two possible scenarios:
- one is placing the VM behind NAT without any port mapping on ports 80 and 3260;
- the other is placing the VM behind NAT with port mapping enabled (i.e. exposing ports 80 and 3260 on the WAN side and mapping them to ports 80 and 3260 on the VM's private IP address).
Unfortunately, neither scenario works. In the first scenario, although the storage VM can be activated without problem, AWS is simply unable to communicate with the storage gateway VM, so no user instructions reach it at all. No iSCSI target can be created, and no snapshot can be created or restored, as all instructions stay pending and time out. In the second scenario, we can activate the storage gateway VM, create volumes, and create and restore snapshots, but the iSCSI target does not work at all due to a restriction in the iSCSI implementation: the iSCSI initiator (the iSCSI guest) can discover the iSCSI target via port 3260, but it cannot log in to the resources.
To explain why it won't work behind NAT, we have to walk through the process of connecting to (mapping) an iSCSI target.
iSCSI connection establishment is actually a two-step process. In the first step, the iSCSI initiator scans for and discovers remote iSCSI resources on the target. During the test we could perform this step successfully, as the iSCSI target did show up during discovery. The issue appears in the second step, when we try to log in to the iSCSI resource. An iSCSI resource is presented as a combination of the on-host IP and the IQN, e.g. "10.1.1.1, iqn-name". The IP address here is the address on the host, which is the NATted private address. When the iSCSI initiator (the guest VM) tries to map the remote resource, the implementation dictates that it connect to the IP address presented, i.e. the private address. Since that address is private, an initiator on the public network cannot reach it, and connecting to the target leads to a request timeout.
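The failure mode above can be illustrated with a short Python sketch. The discovery record below is a made-up example in the same "ip:port,tag iqn" style that an initiator sees during discovery; the check simply parses out the advertised portal address and asks whether it is an RFC 1918 private address, which a public-network initiator cannot reach:

```python
import ipaddress

def reachable_from_public(record):
    """Given a discovery record like 'ip:port,tag iqn', return True if the
    advertised portal IP is routable from the public network."""
    addr_part, _iqn = record.split(" ", 1)
    ip = addr_part.split(":", 1)[0]
    # Private (RFC 1918 etc.) addresses are unreachable from outside the NAT,
    # so the login step will time out even though discovery succeeded.
    return not ipaddress.ip_address(ip).is_private

# Hypothetical record: the target advertises its on-host (private) address.
record = "10.1.1.1:3260,1 iqn.example:sgw-volume"
print(reachable_from_public(record))  # False: login from outside will time out
```

This is exactly why discovery works while login fails: discovery goes to the NATted public address we dialed, but login follows the address embedded in the target's own response.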
The only workaround we can think of right now is to create a VPN tunnel between the initiator and the storage gateway VM behind NAT. This way the AWS storage VM's iSCSI targets can be reached as if they were on the same LAN segment. However, this approach definitely adds extra overhead to the iSCSI I/O performance.
BTW, this storage appliance is designed for access from on-premise devices (i.e. natively they should be on the same network segment), which means the iSCSI traffic should not normally need to traverse the public network. In that kind of setup, the appliance should work just fine.