Bash script exits without completing execution


I have a script that has worked well up to now. It runs several simple queries against MySQL and then operates on a table of several million records, deleting the rows that match another table.

The problem is that the last job was much heavier than the previous ones (I estimated about 18 hours), and the script exited with no further output at the end of, or during, the MySQL delete query. Apparently the deletion did go through all the records, since I have found no duplicates.
It is also possible that all the duplicates had already been deleted by the time the script died, even though there were still records left to review.

Here is the code where the anomaly happened:

/bin/echo "Eliminando registros que ya tengo en DB generica....."
/usr/bin/mysql $SQL_ARGS_LOCAL "delete FROM tabla1 where bbdd_fid=$BBDD_FID_ADD and dato1 in (select dato1 from tabla2)"
if [[ $? -ne 0 ]]; then         # If it returns an error
    /bin/echo "Ha ocurrido un error o no se ha completado la eliminacion de registros en la DB de tabla1 bbdd_fid $BBDD_FID_ADD ($NOMBRE_DB_ADD)"
    # Count the records again after the deletion
    /bin/echo -n "Registros en $NOMBRE_DB_ADD bbdd_fid $BBDD_FID_ADD despues de la eliminacion: "

The last line of the log shows:


Deleting records that I already have in generic DB ...

So it was executing /usr/bin/mysql $SQL_ARGS_LOCAL "delete FROM tabla1 where ....... when the script exited.

Is it possible that, at the end of the deletion query, the exit code could not be retrieved and the script exited abnormally?

I have not been able to determine the end time either, nor have I found any errors in the system logs.

asked by kk003 on 21.08.2016 at 14:01

2 answers


For long processes, or executions that will take hours, I advise creating temporary output files. You can do this with tee, redirecting the output to a file:

/usr/bin/mysql $SQL_ARGS_LOCAL "delete FROM tabla1 where bbdd_fid=$BBDD_FID_ADD and dato1 in (select dato1 from tabla2)" | tee Output1.txt

That way, if the execution fails, you will have a record of how far the process got.
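One caveat: when you pipe to tee, $? reports tee's exit status, not the status of the command before the pipe. A minimal sketch, using false as a stand-in for the long mysql call, that reads the real status from bash's PIPESTATUS array:

```shell
#!/bin/bash
# 'false' stands in for the long-running mysql command.
false | tee Output1.txt

# $? now holds tee's exit status (almost always 0), so check
# PIPESTATUS[0], the status of the first command in the pipeline.
if [[ ${PIPESTATUS[0]} -ne 0 ]]; then
    echo "the first pipeline command failed"
fi
```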

To know how long it takes to run a script, you can do it in two ways:

  • Run your script prefixed with time ./,
    followed by the path to your program.
  • Or put these lines at the beginning and at the end inside your script:

    res1=$(date +%s.%N)   # at the start, right after #!/bin/bash
    res2=$(date +%s.%N)   # at the end

    and finally print the elapsed time in seconds:

    printf "Mi programa tardo :    %.3F\n"  $(echo "$res2 - $res1"|bc )
answered on 01.09.2016 at 05:06

    I have the feeling that your problem is related to connection timeouts. If you are running the program from an SSH or Telnet connection, it will time out after several hours, or even less in some cases. The cause can be the firewall, the server, the connection itself, or the client. There are different ways to avoid it. With SSH, you would edit the file /etc/ssh/sshd_config and add these lines:


        ClientAliveInterval 120
        ClientAliveCountMax 720

    This will allow you to extend your SSH connection to 24 hours
    (120 seconds x 720 times = 86400 seconds).

    However, in your case, I think the real solution is to prevent the process from being interrupted in the first place.

    When the connection is closed, all the subprocesses associated with it are sent the hangup signal (SIGHUP) and killed.

    The solution: use the command nohup, short for "no hang up". It tells the command to ignore the SIGHUP signal. Do the following:

        nohup <command>

    It will leave all the screen output in a file called nohup.out, in the directory from which you ran the command.
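A concrete sketch of this (sleep 60 stands in for the real script, and salida.log is an arbitrary file name I chose): redirecting output explicitly avoids relying on the default nohup.out location.

```shell
#!/bin/bash
# 'sleep 60' stands in for the real script (e.g. ./mi_script.sh).
# nohup makes it ignore SIGHUP when the SSH session closes,
# '&' runs it in the background, and the redirection collects
# both stdout and stderr in an explicit log file instead of nohup.out.
nohup sleep 60 > salida.log 2>&1 &
echo "started with PID $!"

# Check on its progress later with:
#   tail -f salida.log
```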

    answered on 03.04.2017 at 18:45