Good morning
For my scraping projects I have been using a combination of Selenium and BeautifulSoup. Once I had the HTML as plain text, I used .find()
and .find_all()
to navigate to the information I wanted.
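To give an idea, my current setup looks more or less like this (the URL and class names are just placeholders):
from selenium import webdriver
from bs4 import BeautifulSoup

driver = webdriver.Chrome()
driver.get("https://example.com")  # placeholder URL
# hand the rendered HTML to BeautifulSoup and navigate it from there
soup = BeautifulSoup(driver.page_source, "html.parser")
title = soup.find("h1")                         # first match
prices = soup.find_all("span", class_="price")  # every match
driver.quit()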
The thing is that now I've become more familiar with Selenium's own locators, of the type:
driver.find_element(By.ID, "id")
driver.find_element(By.CSS_SELECTOR, "#id")
driver.find_element(By.XPATH, "//*[@id='id']")
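For context, this is roughly how I use them now, Selenium only, reading the data straight from the elements (again, the URL and locator values are placeholders):
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()
driver.get("https://example.com")  # placeholder URL
# read text and attributes straight from the WebElement, no separate parsing step
heading = driver.find_element(By.ID, "id")
print(heading.text)
for link in driver.find_elements(By.CSS_SELECTOR, "a"):
    print(link.get_attribute("href"))
driver.quit()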
My question is: what is the difference? Why use BeautifulSoup if you can locate elements with selectors directly? There must be some substantial difference in how they parse the page. And in terms of coding, which approach is faster and more robust?