Most Beanstalk environments see a fairly consistent pattern of requests. When Blue Matador detects a recent spike or drop in the ApplicationRequestsTotal metric that is inconsistent with your environment's history, an anomaly is created. Possible causes of changes in request count include:
While Beanstalk exposes multiple CloudWatch metrics to track latency, Blue Matador detects anomalies in the ApplicationLatencyP99 metric. This ensures that most requests are considered in the latency calculation while still leaving room for the occasional slow endpoint. An increase in latency can indicate a performance issue with your application. If traffic patterns for your application have not changed significantly, check whether a downstream service such as a database or SQS is experiencing high latency and propagating that time to your web server. If you have seen an increase in traffic, it is possible that your instances are overloaded, and adding capacity to the application may help.
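To see why the 99th percentile captures most requests while tolerating outliers: P99 is the latency at or below which 99% of requests complete, so a single slow endpoint barely moves it. A minimal nearest-rank sketch (not Blue Matador's or CloudWatch's actual implementation) of computing it from a latency sample:

```python
def percentile(samples, pct):
    """Nearest-rank percentile: the smallest value covering pct% of the sample."""
    ordered = sorted(samples)
    # ceil(pct/100 * n) gives the 1-based nearest rank
    rank = max(1, -(-len(ordered) * pct // 100))
    return ordered[int(rank) - 1]

# 100 requests: 99 fast ones plus a single slow outlier
latencies_ms = [20] * 99 + [900]
print(percentile(latencies_ms, 99))   # 20  -- the one outlier is excluded
print(percentile(latencies_ms, 100))  # 900 -- the max would be dominated by it
```

This is why a P99 alert fires on a genuine latency shift across your traffic rather than on every stray slow request.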
Blue Matador detects anomalies in the LoadAverage1min metric for your EC2 instances launched with Beanstalk. This metric is not normalized, so a high load average may actually be appropriate depending on the size of the EC2 instance. An increase in load can introduce delays in application processing if the underlying EC2 instance does not have enough CPU cores to handle the load. Adding capacity to your EC2 instances or Load Balancers may help alleviate load issues in your Beanstalk environment.
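Because LoadAverage1min is not normalized by core count, dividing it by the instance's number of CPU cores is a quick way to judge whether a reading is actually high. A small illustrative helper (the function name and thresholds are our own, not part of Beanstalk or Blue Matador):

```python
import os

def normalized_load(load_1min: float, cores: int) -> float:
    """Load average per core; values near or above 1.0 suggest CPU saturation."""
    return load_1min / cores

# On a Linux Beanstalk instance you could read the live values with:
#   load_1min = os.getloadavg()[0]
#   cores = os.cpu_count()
print(normalized_load(8.0, 4))   # 2.0 -- overloaded on a 4-core instance
print(normalized_load(8.0, 16))  # 0.5 -- comfortable on a 16-core instance
```

The same load average of 8.0 is a problem on a small instance type but routine on a large one, which is why the raw metric needs this context before alerting on it.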
The RootFilesystemUtil metric on Beanstalk EC2 instances exposes the amount of used disk space on the root filesystem. Running out of disk on the root filesystem can negatively impact any application that relies on it. A common cause of disk space issues in Beanstalk is application logs, so managing log files using logrotate can help. For more information on troubleshooting disk space issues, see the Disk Space document for the Blue Matador agent.
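As a sketch, a logrotate rule for application logs might look like the following; the log path and retention values here are illustrative, not Beanstalk defaults:

```
/var/app/current/logs/*.log {
    daily
    rotate 7
    compress
    missingok
    notifempty
    copytruncate
}
```

This keeps a week of compressed history and truncates the live file in place (copytruncate), so the application can keep writing without being restarted.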