Web scraping using Scrapy

What is the use of (1) the callback parameter and (2) the self.log function when we are creating the spider class?

Hey @amanbh_123,

  1. Callback
    When our spider sends a request to a URL, it gets back some kind of response, usually HTML but possibly something else. We then need to process that response to do our task, so callback is the parameter that tells Scrapy which function to call to handle that response. (There is a short sketch after this list.)

  2. self.log function
    If you notice, our class extends scrapy.Spider, so it inherits some functions already implemented in that class to perform particular tasks. self.log is one of them: it writes a message through the spider's built-in logger, which is handy for recording things like which file you just saved. You could set up logging manually on your own, but Scrapy provides this built-in shortcut.
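For reference, here is a minimal sketch along the lines of the quotes spider from the official Scrapy tutorial (the URL and file names are just the tutorial's examples): parse() is the callback that handles each response, and self.log() records what happened.

```python
import scrapy

class QuotesSpider(scrapy.Spider):
    name = "quotes"
    start_urls = ["https://quotes.toscrape.com/page/1/"]

    # parse() is the default callback for start_urls; for explicit requests
    # you name the callback yourself, e.g. scrapy.Request(url, callback=self.parse)
    def parse(self, response):
        page = response.url.split("/")[-2]
        filename = f"quotes-{page}.html"
        with open(filename, "wb") as f:
            f.write(response.body)
        # self.log() is inherited from scrapy.Spider; it sends a message
        # through the spider's logger (the file above is saved by open(),
        # not by log itself)
        self.log(f"Saved file {filename}")
```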

I hope this helped you understand.
Thank you, and happy learning :slightly_smiling_face:.

Hey @amanbh_123,
I see that you have withdrawn your doubt.

Are your questions resolved, or do you still want some clarification on them?

Thank you.

  1. Why does this spider class called QoutesSpider run even though we have not created an object of it?
  2. And if the self.log function is implemented in scrapy.Spider, why don't we write it as super().log("___")?

QoutesSpider is just the name the project template gave to your class; when you create a project, Scrapy by default follows some naming conventions, and this class name is one of them.
As for why it runs without you creating an object: Scrapy is a framework. When you run scrapy crawl quotes, Scrapy looks up the spider class by its name attribute and instantiates it for you, so you never write QoutesSpider() yourself. (See the sketch below.)
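As a rough, simplified sketch of what happens under the hood, here is the documented way to run a spider from a script using Scrapy's CrawlerProcess; the spider body is just a stand-in. Note that the framework, not your code, creates the spider object:

```python
import scrapy
from scrapy.crawler import CrawlerProcess

class QuotesSpider(scrapy.Spider):
    name = "quotes"  # Scrapy finds the spider by this name
    start_urls = ["https://quotes.toscrape.com/"]

    def parse(self, response):
        self.log(f"Got response from {response.url}")

process = CrawlerProcess()
process.crawl(QuotesSpider)  # Scrapy instantiates QuotesSpider itself
process.start()              # blocks until crawling is finished
```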

Hey, there is no issue using super().log() either; it will work.
Since your subclass does not override log(), self.log and super().log resolve to the same inherited method; self.log is simply the shorter, conventional way to write it. (A small sketch follows.)
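A minimal sketch illustrating that both spellings reach the same inherited method (the message strings are just placeholders):

```python
import scrapy

class QuotesSpider(scrapy.Spider):
    name = "quotes"
    start_urls = ["https://quotes.toscrape.com/"]

    def parse(self, response):
        # Both calls resolve to the same inherited scrapy.Spider.log
        # method, because this subclass does not override log():
        self.log("logged via self.log")
        super().log("logged via super().log")
```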

I hope I’ve cleared your doubt. I ask you to please rate your experience here.
Your feedback is very important. It helps us improve our platform and hence provide you with
the learning experience you deserve.

On the off chance that you still have some questions or do not find the answers satisfactory, you may
reopen the doubt.