Save several DataFrames with a for loop in different sheets of the same excel file


I would appreciate your help, I have the following code:

for embalse in ["PIL", "PUL", "VDV"]:
    vo = pd.read_csv('...\Desktop\pucv.xlsx', encoding='utf-8')
    vo = vo[vo["Embalse"] == embalse]
    writer = ExcelWriter('...\Desktop\embalses.xlsx')
    vo.to_excel(writer, embalse, index=False)
    writer.save()

What I'm doing is extracting the rows whose "Embalse" column matches the current value of the for variable. On each iteration the result is saved to an Excel file, and on the next pass the previously created file should be opened again so a new sheet, named after the reservoir, can be added.

The problem is that it runs, but the file is overwritten on every iteration, so the final Excel file has only one sheet, named after the last value, "VDV".

How can I prevent the file from being overwritten and instead add a new sheet each time, so that I end up with an Excel file containing as many sheets as the for loop has iterations?

    
asked by Luis Baeza 11.09.2018 at 22:26

1 answer


The problem is that you open the destination file and call save on every iteration. Let's go step by step:

  • You should not read the source file on each iteration; that is inefficient and unnecessary. Create the DataFrame once, outside the loop, and on each iteration simply filter by the column and write the resulting view to its own sheet with its to_excel method.

  • Likewise, open the destination file only once, outside the for loop.

  • Finally, call the save method only once, when the Excel content is complete, that is, after the loop ends. Better yet, use a with block, which automatically saves and closes the file when it finishes.

The code would look something like this:

import pandas as pd


# Read the source file once, outside the loop
vo = pd.read_csv('.../Desktop/pucv.csv', encoding='utf-8')

# The with block saves and closes the file when it ends
with pd.ExcelWriter('.../Desktop/embalses.xlsx') as writer:
    for embalse in ["PIL", "PUL", "VDV"]:
        # Filter the rows for this reservoir and write them to their own sheet
        emb_rows = vo[vo.Embalse == embalse]
        emb_rows.to_excel(writer, sheet_name=embalse, index=False)
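As a side note: if you prefer not to hard-code the list of reservoirs, groupby yields one sub-DataFrame per distinct value of "Embalse", so the sheet names come straight from the data. A minimal sketch with illustrative data (the "Volumen" column, the sample values, and the output file name are assumptions, not taken from the question):

```python
import pandas as pd

# Hypothetical sample standing in for pucv.csv
vo = pd.DataFrame({
    "Embalse": ["PIL", "PUL", "PIL", "VDV"],
    "Volumen": [10, 20, 30, 40],
})

# groupby produces one (name, sub-DataFrame) pair per reservoir;
# each group is written to a sheet named after the reservoir
with pd.ExcelWriter("embalses.xlsx") as writer:
    for embalse, emb_rows in vo.groupby("Embalse"):
        emb_rows.to_excel(writer, sheet_name=embalse, index=False)
```

The with block still guarantees the file is saved and closed once, after all sheets have been added.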
    
answered by 11.09.2018 at 23:36