https://www.reddit.com/r/docker/comments/176zhbi/installing_docker_on_a_secondary_partition/
#datadog however suggests otherwise
https://docs.datadoghq.com/security/default_rules/3wk-jj4-zxc/
Measuring SLIs with Datadog RUM in a Flutter app
https://developers.cyberagent.co.jp/blog/archives/61359/
#developers #エンジニア #Dart #Datadog #Flutter #Sentry #WINTICKET
Is #datadog so desperate that they are going through five-year-old Reddit posts spamming crap comments?
Exhibit A:
https://www.reddit.com/r/devsecops/comments/iq7cf8/comment/nunqawu/
Exhibit B:
https://www.reddit.com/r/netsec/comments/lmug59/shielding_kubernetes_with_image_scanning_on/nt85415/
Given the ludicrous costs of observability vendors and complexity of the OTEL SDK, I'm convinced the best ROI in observability is https://github.com/go-graphite with https://github.com/statsite/statsite and a self-hosted ElasticSearch cluster for logs.
I know it's blasphemous, but I'm not sure tracing and metrics/logs with attributes really give anyone a competitive advantage over what you can achieve leveraging statsd and a Perl script to parse logs to an OpenSearch or ElasticSearch cluster. What I witnessed at Booking.com with just statsd and web access logs parsed to self-hosted ElasticSearch was staggering compared to what I've seen from OTEL and observability vendors.
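To make that concrete, here is a minimal sketch of the kind of pipeline I mean, in Python rather than Perl, and with hypothetical hostnames, index name, and log format: count each request via statsd and index the parsed access-log line into Elasticsearch.

```python
# Minimal sketch of the "boring" pipeline: one statsd counter per request plus
# the parsed access-log line indexed into Elasticsearch. Hostnames, ports, the
# index name, and the log format are illustrative assumptions.
import json
import re
import socket
import sys
import urllib.request

STATSD_ADDR = ("127.0.0.1", 8125)  # statsd/statsite UDP endpoint (assumed)
ES_DOC_URL = "http://localhost:9200/access-logs/_doc"  # ES index API (assumed index name)

# Common Log Format: host ident user [time] "method path proto" status bytes
LOG_RE = re.compile(
    r'(?P<client>\S+) \S+ \S+ \[(?P<time>[^\]]+)\] '
    r'"(?P<method>\S+) (?P<path>\S+) \S+" (?P<status>\d{3}) (?P<bytes>\d+|-)'
)

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

def handle_line(line: str) -> None:
    m = LOG_RE.match(line)
    if not m:
        return
    doc = m.groupdict()
    # statsd wire format: "<name>:<value>|c" for a counter, fire-and-forget UDP
    sock.sendto(f"web.requests.{doc['status']}:1|c".encode(), STATSD_ADDR)
    # Elasticsearch index API: POST /<index>/_doc with a JSON document
    req = urllib.request.Request(
        ES_DOC_URL,
        data=json.dumps(doc).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    urllib.request.urlopen(req, timeout=5).close()

if __name__ == "__main__":
    for line in sys.stdin:  # e.g. `tail -F access.log | python pipeline.py`
        handle_line(line)
```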
I think there's a general trend in the industry to migrate to advanced, complex solutions without fully utilizing existing simple, boring solutions.
#observability #blasphemy #otel #datadog #perl #elasticsearch
We deployed Vector, Datadog's Rust-based observability data pipeline, to production
https://developers.cyberagent.co.jp/blog/archives/60707/
👍 Powerful new AWS AI service: #AWS DevOps Agent is your always-on, autonomous on-call engineer. When issues arise, it automatically correlates data across your operational toolchain, from metrics and logs to recent code deployments in #GitHub or #GitLab. It identifies probable root causes and recommends targeted mitigations, helping reduce mean time to resolution. The agent also manages incident coordination, using #Slack channels for stakeholder updates and maintaining detailed investigation timelines.
To get started, you connect AWS DevOps Agent to your existing tools through the AWS Management Console. The agent works with popular services such as Amazon CloudWatch, #Datadog, #Dynatrace, #NewRelic, and #Splunk for observability data, while integrating with GitHub Actions and GitLab CI/CD to track deployments and their impact on your cloud resources. Through the bring your own (BYO) Model Context Protocol (MCP) server capability, you can also bring additional tools into your investigations, such as your organization’s custom tooling, specialized platforms, or open source observability solutions like #Grafana and #Prometheus.
The agent acts as a virtual team member and can be configured to automatically respond to incidents from your ticketing systems. It includes built-in support for ServiceNow and, through configurable webhooks, can respond to events from other incident management tools like PagerDuty. As investigations progress, the agent updates tickets and relevant Slack channels with its findings. All of this is powered by an intelligent application topology the agent builds: a comprehensive map of your system components and their interactions, including deployment history that helps identify potential deployment-related causes during investigations.
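As a rough illustration of that BYO MCP server capability (a sketch, not AWS's actual interface), here is a minimal custom MCP server that exposes an instant Prometheus query as a tool. It assumes the official MCP Python SDK (`pip install mcp`); the server name, tool name, and Prometheus address are made up for the example.

```python
# Hypothetical BYO MCP server: exposes a single tool that runs an instant
# PromQL query against a self-hosted Prometheus, so an investigating agent
# can pull metrics on demand. Assumes the official MCP Python SDK.
import urllib.parse
import urllib.request

from mcp.server.fastmcp import FastMCP

mcp = FastMCP("prometheus-tools")  # server name is arbitrary

@mcp.tool()
def query_prometheus(promql: str) -> str:
    """Run an instant PromQL query and return the raw JSON response."""
    # Prometheus HTTP API: GET /api/v1/query?query=<promql>
    url = "http://localhost:9090/api/v1/query?" + urllib.parse.urlencode(
        {"query": promql}
    )
    with urllib.request.urlopen(url, timeout=10) as resp:
        return resp.read().decode("utf-8")

if __name__ == "__main__":
    mcp.run()  # serves over stdio by default
```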

The new service acts as an always-on DevOps engineer, helping you respond to incidents, identify root causes, and prevent future issues through systematic analysis of incidents and operational patterns.