Profiling in an Azure Web App?


We have developed a Web API platform hosted on Azure, using MongoDB as the database.

Before release we ran several load tests, and the platform supports up to 11,000 requests per second. Right now our average load is around 200 requests per second, and everything runs smoothly.

The problem is that, randomly and with no apparent pattern, spikes appear in the HTTP request queue and in the connections to MongoDB, which causes timeouts on the client.

After exchanging several emails with Azure support and checking the logs of both MongoDB and Azure, we have no idea what might be happening.

My questions are:

Has anyone experienced the same problem? I have seen similar cases, but they are not exactly the same.

Is there a tool, library, or framework to monitor each request and the time each function takes, so I can hunt down what is causing these spikes?

Greetings

    
asked by dank0ne 02.08.2016 at 15:31

1 answer


Since you do not specify the language the Web App is written in, the options most likely to help you are:

  • New Relic: there is a bonus if you sign up from Azure, and it has a free tier that retains the information for 7 days.
  • Azure Application Insights: it also has a free tier, and its advantage is that you can view and manage it directly from Azure.

Both will let you identify actions or internal processes that may be generating timeouts or resource deadlocks.
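As an illustration, Application Insights can be attached to an existing Web App from the Azure CLI. This is a minimal sketch, not a definitive setup; the resource names (`my-rg`, `my-webapp`, `my-appinsights`) and the region are placeholders you would replace with your own:

```shell
# One-time: install the Application Insights CLI extension
az extension add --name application-insights

# Create an Application Insights component (names are placeholders)
az monitor app-insights component create \
    --app my-appinsights \
    --location westeurope \
    --resource-group my-rg

# Wire the instrumentation key into the Web App's settings so
# per-request telemetry starts being collected
KEY=$(az monitor app-insights component show \
    --app my-appinsights --resource-group my-rg \
    --query instrumentationKey -o tsv)
az webapp config appsettings set \
    --name my-webapp --resource-group my-rg \
    --settings APPINSIGHTS_INSTRUMENTATIONKEY=$KEY
```

Once the key is set, the Application Insights blade in the Azure portal shows per-request durations and dependency calls (including outbound MongoDB connections), which is exactly the kind of data needed to correlate the queue spikes with specific operations.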

    However, if some external agent is literally flooding your Web App with malicious requests, that will be difficult to detect.

    Alternatively, you can try defining autoscaling for your App Service based on the size of the HTTP request queue (the HTTP Request Queue Length metric). That way you can react by adding instances to absorb the increase in traffic, and remove instances when it subsides.
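A queue-based autoscale rule along those lines can be sketched with the Azure CLI. The resource group, plan name, thresholds, and time windows below are assumptions for illustration, to be tuned against your own traffic:

```shell
# Create an autoscale profile for the App Service plan
# (resource-group and plan names are placeholders)
az monitor autoscale create \
    --resource-group my-rg \
    --resource my-plan \
    --resource-type Microsoft.Web/serverfarms \
    --name queue-autoscale \
    --min-count 2 --max-count 6 --count 2

# Scale out by one instance when the HTTP queue backs up
az monitor autoscale rule create \
    --resource-group my-rg \
    --autoscale-name queue-autoscale \
    --condition "HttpQueueLength > 10 avg 5m" \
    --scale out 1

# Scale back in once the queue has drained for a while
az monitor autoscale rule create \
    --resource-group my-rg \
    --autoscale-name queue-autoscale \
    --condition "HttpQueueLength <= 2 avg 10m" \
    --scale in 1
```

Note that the scale-in window is deliberately longer than the scale-out window, so brief dips in traffic do not cause instances to be removed while a spike is still being absorbed.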

        
    answered by 22.08.2016 at 13:35