I have a class that inherits from Process and uses an SSH connection to port-forward to a remote machine and send a certain number of bytes.
import socket
from multiprocessing import Process, Pipe

class SshConnThread(Process):
    def __init__(self, address, message, pipe):
        self.result = None
        self.address = address  # remote host and port, e.g. ('127.0.0.1', 5555)
        self.match = b"[||]"  # delimiter marking the end of the socket read
        self.message = message + self.match  # byte string terminated by [||]
        self.pipe = pipe
        Process.__init__(self)

    def run(self):
        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        s.connect(self.address)
        s.sendall(self.message)
        data = b""
        while not data.endswith(self.match):
            chunk = s.recv(1040)
            if not chunk:  # connection closed before the delimiter arrived
                break
            data += chunk
        s.close()
        self.result = data.split(self.match)
        self.pipe.send(self.result[0])
Well, in the main part of my project, I create two SSH tunnels to two remote machines with Paramiko. Then I read a certain number of bytes from a file, split that data into two byte strings, and send each one through one of the two tunnels (using the SshConnThread class shown above).
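For reference, the splitting step might look roughly like this. This is only a minimal sketch of how I divide the data into roughly equal byte chunks before handing each one to a worker process; the helper name split_payload is hypothetical, and the Paramiko tunnel setup and worker launching are omitted:

```python
def split_payload(payload, n):
    """Split a byte string into n roughly equal chunks.

    The first n-1 chunks get len(payload) // n bytes each;
    the last chunk takes whatever remains.
    """
    size = len(payload) // n
    chunks = [payload[i * size:(i + 1) * size] for i in range(n - 1)]
    chunks.append(payload[(n - 1) * size:])  # last chunk takes the remainder
    return chunks
```

Each resulting chunk is then passed as the message argument of one SshConnThread, together with the local address of its forwarded port.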
The process works perfectly until I read more than 23 MB from a file. Beyond that point, even though the split itself is done correctly, the first subprocess sends the expected amount to its remote machine, while the second subprocess sends considerably fewer bytes. If I repeat the process with 3 remote machines, dividing the data by 3, the correct amount again arrives at the first machine, a smaller amount arrives at the second, and the correct amount also arrives at the third...
It is not a problem with the virtual machines, because I have swapped them around and the second transfer always fails, and only when I read more than 23 MB. I do not know if it has something to do with the amount of memory a subprocess can reserve or something like that, but I do not understand why only the second transfer fails...