- 19. Sep
Stackdriver Trace is a distributed tracing system that collects latency data from your applications and displays it in the Google Cloud Platform Console.
You can track how requests propagate through your application and receive detailed, near real-time performance insights. Stackdriver Trace automatically analyzes all of your application's traces to generate in-depth latency reports that surface performance degradations, and it can capture traces from all of your VMs, containers, or Google App Engine projects.
Stackdriver Monitoring provides visibility into the performance, uptime, and overall health of cloud-powered applications. Stackdriver collects metrics, events, and metadata from Google Cloud Platform, Amazon Web Services, hosted uptime probes, application instrumentation, and a variety of common application components including Cassandra, Nginx, Apache Web Server, Elasticsearch, and many others.
Stackdriver ingests that data and generates insights via dashboards, charts, and alerts. Stackdriver alerting helps you collaborate by integrating with Slack, PagerDuty, HipChat, Campfire, and more.
OpenCensus Go has support for this exporter available through its Stackdriver package. There is also a Prometheus exporter for Stackdriver, which exposes Google Cloud metrics. You must have appropriate IAM permissions for this exporter to work; if you are passing in an IAM key, its service account must hold the required permissions.
This chart creates a Stackdriver-Exporter deployment on a Kubernetes cluster using the Helm package manager.
- Kubernetes 1.8+ with Beta APIs enabled
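With those prerequisites in place, the chart can be installed in the usual Helm way. The repository URL, chart name, release name, and value key below are assumptions based on the community-maintained stackdriver-exporter chart; check the chart's own README for the exact names:

```shell
# Add the chart repository (repo name and URL are assumptions; verify
# against the chart's README) and refresh the local index.
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm repo update

# Install the exporter, pointing it at a hypothetical GCP project ID.
helm install my-stackdriver-exporter prometheus-community/stackdriver-exporter \
  --set stackdriver.projectId=my-gcp-project
```

The deployment then scrapes Stackdriver metrics and exposes them in Prometheus format inside the cluster.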
Overview of Logs Exports
You can export copies of some or all of your logs outside of Stackdriver Logging. You might want to export logs for the following reasons:
- To store logs for extended periods. Logging typically holds logs for weeks, not years. For more information, see the Quota Policy.
- To use big-data analysis tools on your logs.
- To stream your logs to other applications, other repositories, or third parties.
Overview of exports
Exporting involves writing a filter that selects the log entries you want to export, and choosing a destination in Cloud Storage, BigQuery, or Cloud Pub/Sub. The filter and destination are held in an object called a sink. Sinks can be created in projects, organizations, folders, and billing accounts.
There are no costs or limitations in Logging for exporting logs, but the export destinations charge for storing or transmitting the log data.
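Creating a sink is a single CLI call: you name the sink, pick a destination, and attach the filter. The sink name, bucket, and filter below are hypothetical examples, not values from this document:

```shell
# Create a sink in the current project that exports ERROR-and-above
# log entries to a Cloud Storage bucket (names are hypothetical).
gcloud logging sinks create my-vm-error-sink \
  storage.googleapis.com/my-export-bucket \
  --log-filter='severity>=ERROR'
```

BigQuery and Cloud Pub/Sub destinations use the same command with a `bigquery.googleapis.com/...` or `pubsub.googleapis.com/...` destination path instead.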
Sink properties and terminology
Sinks have the following properties:
- Sink identifier: A name for the sink. For example, "my-vm-error-sink".
- Parent resource: The resource in which you create the sink. The parent is most often a project, but it can also be an organization, a folder, or a billing account.
The sink can only export logs that belong to its parent resource. For the one exception to this rule, see the following Aggregated Exports property.
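That exception is an aggregated export: a sink created at the organization or folder level with the include-children option also exports matching entries from every child project and folder. A sketch, with a hypothetical organization ID, dataset, and filter:

```shell
# An aggregated sink at the organization level; --include-children makes
# the sink match log entries from all child projects and folders.
# Organization ID, dataset, and filter are hypothetical.
gcloud logging sinks create org-audit-sink \
  bigquery.googleapis.com/projects/my-project/datasets/org_logs \
  --organization=123456789 \
  --include-children \
  --log-filter='logName:"cloudaudit.googleapis.com"'
```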
How sinks work
Every time a log entry arrives in a project, folder, billing account, or organization resource, Logging compares the log entry to the sinks in that resource. Each sink whose filter matches the log entry writes a copy of the log entry to the sink's export destination.
Since exporting happens for new log entries only, you cannot export log entries that Logging received before your sink was created.
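Because a sink only ever sees new entries, it is worth previewing what a filter matches before creating the sink. `gcloud logging read` accepts the same filter syntax; the filter below is an illustrative example:

```shell
# Preview which existing entries a filter would match; a sink with the
# same filter will export future entries that match it.
gcloud logging read 'resource.type="gce_instance" AND severity>=ERROR' \
  --limit=5 --format=json
```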
To create or modify a sink, you must have the IAM roles Owner or Logging/Logs Configuration Writer in the sink's parent resource. To view existing sinks, you must have the IAM roles Viewer or Logging/Logs Viewer in the sink's parent resource. For more information, see Access Control.
To export logs to a destination, the sink's writer service account must be permitted to write to the destination. For more information about writer identities, see the preceding section, Sink properties and terminology.
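In practice this means looking up the sink's writer identity and granting it write access on the destination. A sketch for a Cloud Storage destination, with hypothetical sink and bucket names:

```shell
# Read the sink's writer identity; the output is already in the
# "serviceAccount:..." member format that IAM bindings expect.
WRITER=$(gcloud logging sinks describe my-vm-error-sink \
  --format='value(writerIdentity)')

# Grant that service account object-creation rights on the destination
# bucket (bucket name is hypothetical).
gsutil iam ch "${WRITER}:roles/storage.objectCreator" gs://my-export-bucket
```

For BigQuery or Pub/Sub destinations, the equivalent step grants the writer identity the dataset or topic writer role instead.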
To secure exported logs from unauthorized access, you must use the access control features of your export destination. Sinks can export any log entries, including private Data Access audit logs.