error log in bash


I would like to know how I can create a log, or otherwise keep track of which records get updated and which do not, in this bash script. It reads a CSV and sends the data with curl, and the .sh works correctly; now I would like to generate an error log, or a message that tells me which fields were updated.

while IFS=, read col1 col2 col3 col4 col5 col6 col7 col8
do
        echo "_________________________________"
        echo "Nombre del Sitio de Proveedor ->[${col1}]"
        echo "NIT del proveedor -> [${col2}]"
        echo "Email de notificación -> [${col3}]"
        echo "Grupo de contenido -> [${col4}]"
        echo "CODE -> [${col5}]"
        echo "COD S1ESA -> [${col6}]"
        echo "CODE NUEVO -> [${col7}]"
        echo "ACTIVO -> [${col8}]"

        # Look up the supplier id by NIT.
        url="https://example.com/api/suppliers?number=${col2}"
        echo "Consultando $url"
        curl -g -s -H "Accept: application/xml" -H "X-COUPA-API-KEY:3bce24adc7ef16199c10c6dec2d1980a612f4bb3" -H "X-HTTP-Method-Override: GET" "$url" > nit.xml
        Supplier_id=$(xmlstarlet sel -t -m "//supplier" -v "id" nit.xml)
        echo "$Supplier_id"

        # Look up the supplier-site id by code.
        url="https://example.com/api/suppliers/${Supplier_id}/supplier_sites?code=${col5}"
        echo "$url"
        curl -g -s -H "Accept: application/xml" -H "X-API-KEY:3bce24adc7ef16199c10c6dec2d1980a60000012" -H "X-HTTP-Method-Override: GET" "$url" > nit1.xml
        supplier_site_id=$(xmlstarlet sel -t -m "//supplier-site" -v "id" nit1.xml)
        echo "id_site $supplier_site_id"

        # Build one <content-group> element per hyphen-separated value in col4.
        IFS='-' read -ra contentG <<< "${col4}"
        groups_xml=""
        for i in "${contentG[@]}"; do
                echo "$i"
                groups_xml+="<content-group><name>${i}</name></content-group>"
        done

        # Update the supplier site with the rebuilt payload.
        url_put="https://example.com/api/suppliers/${Supplier_id}/supplier_sites/${supplier_site_id}"
        payload="<supplier-site><content-groups>${groups_xml}</content-groups><name>${col1}</name><po-email>${col3}</po-email><active>${col8}</active></supplier-site>"
        echo "$payload"
        curl -g -s -X PUT -d "$payload" -H "Content-Type: application/xml" -H "X-API-KEY:3bce24adc7ef16199c10c6dec2d1980a60000012" "$url_put"
done < campos.csv
    
asked by JHABIER on 15.11.2018 at 22:04

1 answer


You can filter on several things: for example, on some value or message that comes back in the response body, or by using curl's -i parameter, which includes the HTTP header information at the beginning of the response.

If the service fails, the result of that query could look something like this:

$ curl -i --url $url -H etc -s
HTTP/1.1 404 Not Found
Content-Type: text/plain; charset=UTF-8
...

And you could filter on the status code, on some header, or on a message in the body. You could also use curl's verbose mode:

$ curl -v etc etc
...
< HTTP/1.1 404 Not Found
< Content-Type: text/plain; charset=UTF-8
...

and then follow a similar filtering approach on the response.
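For example, a small helper along these lines can pull the status code out of a curl -i response. The helper name and the hard-coded sample response are illustrative, not part of the question's script:

```shell
#!/bin/bash
# Illustrative helper: extract the numeric status code from the first
# header line of a `curl -i` response.
get_status() {
    awk 'NR==1 && $1 ~ /^HTTP\// {print $2; exit}' <<< "$1"
}

# Sample response standing in for: response="$(curl -i -s --url "$url" ...)"
response='HTTP/1.1 404 Not Found
Content-Type: text/plain; charset=UTF-8'

get_status "$response"   # prints 404
```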

You can write a function that filters on one of those values and, depending on the result, appends the record to different files.

Something like this:

create_log() {
    declare id_supplier_code="$1"  # Or whatever parameter identifies the record.
    declare response="$2"
    ERROR_LOG_FILE="error.log"
    SUCCESS_LOG_FILE="success.log"

    # Keep only the status portion of the first header line, e.g. " 200 OK".
    status_code="$(awk '$1=="HTTP/1.1"{$1="";print $0}' <<< "$response")"

    if grep -q "200 OK" <<< "$status_code"
    then
        echo "[SUCCESS]:$id_supplier_code" >> "$SUCCESS_LOG_FILE"
    else
        echo "[ERROR]:$id_supplier_code:$status_code" >> "$ERROR_LOG_FILE"
    fi
}

You would use this function by passing it an identifier for the record and the response of the curl request you made beforehand (run curl with -i so the headers are included in that response).
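As a sketch of how that wiring could look in the loop from the question: this variant compares the numeric status code instead of grepping for "200 OK", and the curl line is commented out because it depends on the real API. The function is repeated here so the snippet stands alone:

```shell
#!/bin/bash
# Sketch: log each PUT as success or error, keyed by the supplier NIT.
ERROR_LOG_FILE="error.log"
SUCCESS_LOG_FILE="success.log"

create_log() {
    declare id_supplier_code="$1"
    declare response="$2"
    # Numeric status code from the first header line of a `curl -i` response.
    declare status
    status="$(awk 'NR==1 && $1 ~ /^HTTP\// {print $2; exit}' <<< "$response")"
    if [ "$status" = "200" ]; then
        echo "[SUCCESS]:$id_supplier_code" >> "$SUCCESS_LOG_FILE"
    else
        echo "[ERROR]:$id_supplier_code:$status" >> "$ERROR_LOG_FILE"
    fi
}

# In the question's loop, after building url_put and the XML payload:
# response="$(curl -i -s -X PUT -d "$payload" -H "Accept: application/xml" "$url_put")"
# create_log "${col2}" "$response"
```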

answered on 17.11.2018 at 11:50