Our current centralized logging solution is Logz.io (Cloud Observability for Engineers), and most of our application logs are sent there from the k8s cluster via Filebeat:

```text
filebeat.service - Filebeat sends log files to Logstash or directly to Elasticsearch.
   Loaded: loaded (/usr/lib/systemd/system/filebeat.service; enabled; vendor preset: disabled)
   Active: active (running) since Fri 17:37:07 BST; 3 days ago
     Docs: Filebeat: Lightweight Log Analysis & Elasticsearch | Elastic
   └─7771 /usr/share/filebeat/bin/filebeat -environment systemd -c /etc/filebeat/filebeat.yml -path.home /usr/share/filebeat -path.config /etc/filebeat -path.data /var/lib/filebeat -path.logs /var/log/filebeat

Aug 31 13:07:07 collector filebeat: ...T13:07:07.940+0100 INFO log/log...
```

Most of the logging systems have very convenient methods of sending logs, including Filebeat/Metricbeat, agents, APIs, etc. In addition, we use the logging system's alert mechanism to trigger and send alerts to various sources, including email, Slack channels, etc.

When we have several dozen Lambda functions, we need to solve the problem in a generic way, to reduce the amount of work and migration required by the developers who work with FaaS technology on a daily basis.

Code-wise solution - We can create a wrapper/library that any FaaS dev can use. This would require going over all the different existing Lambdas, changing the code, redeploying, and testing. Also, since we are not language-agnostic, for each runtime we would need to write code to process the logs and send them to our centralized logging system.

Use logging-system agents - Some of the systems have their own agents, which take control of the process's stdout/stderr and ship them to the logging system. But here we encounter another problem: being tightly coupled to our logging system, meaning that our code actually knows which specific logging system we chose. When something changes (not necessarily the logging system itself - perhaps the vendor upgraded the agent or changed the way it works), we would need to iterate over all our Lambda functions again and update them.

Use AWS CloudWatch triggers - We can have each Lambda write its logs to CloudWatch and collect them from there using a trigger to a Lambda. At least here we are not tightly coupled to the logging system, but we do incur a redundant cost, because we store both the CloudWatch logs and the logging-system logs. Also, if for some reason we deploy a service that is not AWS-specific, it would be harder to create CloudWatch logs just for the sake of having them processed into the logging system.

For the final solution, we chose a segregation between our code and the logging-system specifics. We set up a Kinesis Stream (deployed per AWS account, per region) - a generic endpoint that anyone with the AWS SDK and the appropriate AuthZ can use to send logs in a specific format. Each application/Lambda sends its logs to the Kinesis Stream, and a Lambda triggered by the stream forwards the logs to the logging system. This is how we avoid being tightly coupled to the logging system: only that one Lambda needs to be changed and redeployed.
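The Kinesis-based flow described above can be sketched roughly as follows. This is a minimal illustration, assuming a JSON log format and a Python runtime; the helper names (`build_log_record`, `parse_kinesis_event`, `send_to_logging_system`) and the stream name are hypothetical, not taken from our actual code:

```python
import base64
import json

def build_log_record(service: str, level: str, message: str) -> dict:
    # The "specific format" each producer agrees to send to the Kinesis
    # Stream; the exact fields here are an assumption for illustration.
    return {"service": service, "level": level, "message": message}

# A producer would ship a record with the AWS SDK, e.g. (not executed here):
#   boto3.client("kinesis").put_record(
#       StreamName="central-logs",            # hypothetical stream name
#       Data=json.dumps(record),
#       PartitionKey=record["service"],
#   )

def parse_kinesis_event(event: dict) -> list:
    # Kinesis delivers record payloads to the triggered Lambda
    # base64-encoded under Records[*].kinesis.data.
    return [
        json.loads(base64.b64decode(record["kinesis"]["data"]))
        for record in event.get("Records", [])
    ]

def handler(event, context):
    # Lambda entry point: decode the batch, then forward it to the
    # logging system. The forwarding call is logging-system specific
    # (for Logz.io, e.g. an HTTPS POST to its listener) and is the only
    # piece that changes if the logging system ever changes.
    logs = parse_kinesis_event(event)
    # send_to_logging_system(logs)  # hypothetical forwarding helper
    return {"forwarded": len(logs)}
```

Swapping the logging system then means redeploying only this consumer Lambda; the producers keep writing the same record format to the same stream.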