Use the logging query language within Logs Explorer to construct search queries for retrieving, processing, and analyzing your log data. Observability Platform provides a lightweight, flexible syntax for querying log data in Logs Explorer. This syntax implements predefined keys that accept a comparison operator, such as an equals sign =, and a value.
KEY =|!=|=~|!~|: VALUE AND|OR (KEY =|!=|=~|!~|: VALUE) AND|OR "full-text search value"
AND|OR KEY EXISTS
The query language supports full-text search using double quotes ("") to find logs that contain the filter expression anywhere in the log. Longer, more specific full-text searches are better optimized and return results faster.
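For example, the following full-text query (a sketch that assumes your logs contain connection errors) returns any log whose content includes the quoted phrase anywhere in the log:
"connection refused by upstream"
A longer, more specific phrase like this narrows the results more than a single word such as "connection" and typically returns results faster.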

Features

The query syntax for Logs Explorer supports the following features:
  • Autocomplete: Start typing a key to get autocomplete values. Press Control+Space to display suggestions.
  • Run shortcut: Run a query by pressing Control+Enter (Command+Return on macOS).
  • Nested queries: Use parentheses () to establish the order of operations for complex queries.
  • Field search: Query on specific fields by entering a key/value pair such as key = "value".
  • String identification: Use either single '' or double "" quotes to identify strings.
  • Full-text search: Express a single value filter expression, such as "query user token", to find logs that contain the filter expression anywhere in the log. Full-text searches must be surrounded by double quotes. A combined example follows this list.
  • Exists: Find logs with a specific key by entering key.label EXISTS to return any logs containing that key and label combination. For example, kubernetes.namespace_name EXISTS returns any logs containing kubernetes.namespace_name. See logical operators for more information.
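The following query (a sketch using hypothetical service, region, and trace_id fields) combines several of these features: field search with both quote styles, the EXISTS operator, and a trailing full-text search:
service = "checkout" AND region = 'us-east-1' AND trace_id EXISTS AND "payment declined"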

Keys

Logs Explorer supports querying all user-defined keys in your data, in addition to the following derived keys:
  • service: Services that Observability Platform discovers in your log data.
  • severity: Severity of issues, sanitized to the following values: DEBUG, INFO, WARN, ERROR, FATAL.
  • message: Human-readable description of the log, derived from your log data.
  • Custom labels you created for your logs.
If some of your data uses a key with a colon, surround the key with double quotes and brackets in your query. For example, if your data contains a key named "error:code value", a valid query is structured like this example:
["error:code value"] = critical
Chronosphere uses service as a primary key by default. You can change this primary key, but only one primary key is supported. To map your primary key, contact Chronosphere Support.
Your primary key can’t be changed after it’s been mapped by Chronosphere Support.

Operators

The querying syntax for Logs Explorer supports the following operators:

Logical operators

The querying syntax for Logs Explorer supports the following logical operators:
Operator | Description
= | Equals
!= | Does not equal
: | Contains substring
AND | Requires both conditions to match
OR | Matches either condition
NOT | Returns results that don't match the value or contain the key
EXISTS | Returns results containing the key
The AND operator takes precedence over OR. If your query doesn't use parentheses, Observability Platform evaluates AND expressions before evaluating any OR expressions. To use AND and OR operators in the same query, separate them with parentheses. For example, the following query matches any service named gateway where kubernetes.namespace_name is test-logging or kubernetes.cluster_name is test, and the log message contains "insert success":
service = "gateway" AND (kubernetes.namespace_name = "test-logging" OR
kubernetes.cluster_name = "test") AND "insert success"
The AND and OR operators aren't case sensitive, so you can use AND, and, OR, and or interchangeably. The equal sign (=) and colon (:) operators can't be used interchangeably: the equal sign matches an exact value, while the colon operator matches a substring, indicating that a field contains a value. For example, the following query matches any logs where the kubernetes.cluster_name field contains production-us-east:
kubernetes.cluster_name: "production-us-east"
The colon operator also matches partial values within a field. For example, the following query matches logs where the httpRequest.requestUrl key contains the specified value:
service = "nginx" and httpRequest.requestUrl: "example.com/data/api/query"

Mathematical operators

You can also use mathematical operators for addition (+), subtraction (-), multiplication (*), and division (/) to complete calculations within a query. The querying syntax supports the following mathematical operators:
Operator | Description
+ | Addition
- | Subtraction
* | Multiplication
/ | Division
This capability is especially helpful when creating monitors to generate alerts and receive notifications. For example, the following query includes a calculation for an alert that returns the ratio of logs without errors to total logs in the nginx service:
service = "nginx"
| summarize 1 - countif(severity="ERROR")/countif(severity EXISTS)
Use parentheses to nest calculations and separate them from the remainder of the search query. Additionally, you can complete mathematical calculations on expressions. For example, the following filter evaluates an expression that calculates the number of bytes received for the emissary-ingress service where the duration is greater than 10:
service = "emissary-ingress" bytes_received / duration > 10
You can then expand the filter to display the results as a table, with a column named bytes_received / duration:
service = "emissary-ingress" bytes_received / duration > 10 | project bytes_received / duration

Transformation operators

Transformation queries include operators that modify input records by adding, removing, or updating fields in the results set. Observability Platform supports the following transformation operators you can include in log queries.
All transformation operators require a pipe (|) character in a query.
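You can chain several transformation operators by piping the output of one into the next. The following sketch (assuming an nginx service) counts logs by severity, sorts the counts in descending order, and limits the output to ten rows:
service = "nginx"
| summarize count() by severity
| sort by count_ desc
| limit 10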

extend

The extend operator takes the results of a calculation and adds those columns to a result set. Use this operator with the project operator to specify the columns that display in the results. You can also use this operator with the make-series, summarize, and top-nested operators.
Syntax
QUERY | extend COLUMN = EXPRESSION, COLUMN = EXPRESSION
Arguments
Argument | Type | Description | Default
QUERY | string | The input query to return data for. | none
COLUMN | string | Specifies which column to add computed results to. | none
EXPRESSION | string | The filter expression to apply before running the aggregation calculation. | none
Example
The following example returns logs for the logging_service, and specifies max_threads greater than zero to exclude rows with zero or negative values for that field. The first calculation creates a new column named ratio, which always returns a value of 1 for non-zero values because it divides max_threads by itself. The second calculation creates a new column named v2, which takes the result from the ratio column (which is 1) and adds it to the value of max_threads. The project operator creates three columns in the result set, named service, ratio, and v2, and outputs the results of the calculations to those columns.
service = "logging_service" max_threads > 0
| extend ratio = max_threads / max_threads, v2 = ratio + max_threads
| project service, ratio, v2
The table output looks similar to this example:
service | ratio | v2
logging_service | 1 | 4
logging_service | 1 | 22
logging_service | 1 | 12
logging_service | 1 | 7

extract

The extract operator uses a regular expression to extract a specific capture group from a string, so you can match on a field or expression.
Syntax
QUERY | OPERATOR extract(REGULAR_EXPRESSION,CAPTURE_GROUP,SOURCE_STRING)
Arguments
Argument | Type | Description | Default
QUERY | string | The input query to return data for. | none
OPERATOR | string | Specifies which transformation operator to apply to records matching the QUERY. | none
REGULAR_EXPRESSION | string | The regular expression to apply on the provided SOURCE_STRING. | none
CAPTURE_GROUP | integer | The capture group to extract. A value of 0 returns the source expression unchanged. A value of 1 indicates the first capture group, and subsequent numbers indicate the numeric capture group. | none
SOURCE_STRING | string | The source string to apply the regular expression against. | none
Examples
The following example matches a string that includes "error INTEGER", where INTEGER is a numeric error code such as 503. For example, the string "error 503" returns only the numeric error code 503 in a column named error_code.
service = "nginx"
| extend error_code = extract("error (\\d+)", 1, message)
| project message, error_code

join

The join operator merges the rows of two tables into a new table by matching values from specific columns in each table. The operator accepts a left query with a right sub-query. The join type can be one of the following types, and both options join all columns from both tables, including the matching keys, but differ on the included rows:
  • inner: Includes only matching rows from both tables.
  • leftouter: Includes records from the left table, and only matching rows from the right table.
In your filter, specify either a subquery that joins on specific columns, or explicitly map the left and right columns using an equality operator (==).
Syntax
QUERY | join [kind=inner|leftouter] [(SUBQUERY) on COLUMNS | LEFTCOLUMN==RIGHTCOLUMN]
Arguments
Argument | Type | Description | Default
QUERY | string | The input query to return data for. | none
KIND | string | The kind of join to use in the filter. Must be either inner (inner join) or leftouter (left outer join). | inner
SUBQUERY | string | The sub-query or right table to merge rows from. | none
COLUMNS | string | The columns to join on. | none
LEFTCOLUMN | string | The left column to join on, which you can specify when explicitly mapping results of the left table to the right table. | none
RIGHTCOLUMN | string | The right column to join on, which you can specify when explicitly mapping results of the left table to the right table. | none
Examples
The following example completes an inner join on logs from the nginx service, joining logs where severity="ERROR" on the requestID column:
service = "nginx"
| join kind=inner (severity="ERROR") on requestID
| project severity, requestID
| limit 20
The following example searches for messages that include "request finished", and joins the output.deployment_id field with response status codes to create a table with a column named response_class that includes the response code type:
[output.message] = "request finished"
| join kind=leftouter (
    | filter [output.message] = "request finished"
    | project [output.deployment_id], [output.response_status_code]
  ) on [output.deployment_id]
| extend response_class = case(
    [output.response_status_code] < 200, "1xx",
    [output.response_status_code] < 300, "2xx",
    [output.response_status_code] < 400, "3xx",
    [output.response_status_code] < 500, "4xx",
    [output.response_status_code] < 600, "5xx",
    "n/a"
  )
| summarize count() by response_class
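The join syntax also supports explicitly mapping the left and right columns with the equality operator (==). The following sketch (using hypothetical request_id and trace_request_id fields) joins nginx logs with checkout-service logs where those two columns match:
service = "nginx"
| join kind=inner (service = "checkout-service") on request_id == trace_request_id
| project request_id, severity
| limit 20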

limit

The limit operator returns the most recent log events.
Syntax
QUERY | limit NUMBER
Arguments
Argument | Type | Description | Default
QUERY | string | The input query to return data for. | none
NUMBER | int | Optional. Use with limit to specify the number of logs to produce. Maximum value is 1000. Minimum value is 1. | 500
Examples
severity = "WARNING"
| limit 500

lookup

The lookup operator reads log data from lookup tables, and performs joins to return data in log filters. You can combine this operator with other transformation operators to visualize external data.
Syntax
QUERY | lookup kind=inner|leftouter TABLE_NAME on LOG_FIELD == LOOKUP_COLUMN
Arguments
Argument | Type | Description | Default
QUERY | string | The input query to return data for. | none
KIND | string | The kind of join to use in the filter. Must be one of inner (inner join) or leftouter (left outer join). | inner
TABLE_NAME | string | Name of the lookup table to reference. | none
LOG_FIELD | string | Field name from your log data. | none
LOOKUP_COLUMN | string | Column name in the lookup table to reference. | none
Examples
The following example maps severity levels to descriptions and priority. The filter completes a left outer join on the severity_definitions lookup table, using the severity column as reference. It then uses the project operator to visualize several fields in the generated results table.
lookup kind=leftouter severity_definitions on severity
| project timestamp, severity, description, action_required, body
The following example returns service metadata, including owners and teams. The filter completes an inner join on the kubernetes_services lookup table, and references the logging.apis.com/labels.service field against the service column in the table. It then uses the project operator to visualize several fields in the generated results table.
lookup kind=inner kubernetes_services on ['logging.apis.com/labels'].service==service
| project timestamp, service, owner, team, criticality, body
The following example maps IP addresses for Kubernetes Pods to the zone and network tier they operate in. The filter completes an inner join on the pod_ips lookup table, and references the kubernetes.pod_ip field against the pod_ip column in the lookup table. It then uses the project operator to visualize several fields in the generated results table.
lookup kind=inner pod_ips on ['kubernetes.pod_ip']==pod_ip
| project timestamp, ['kubernetes.pod_name'], pod_name, node_name, zone, network_tier
You can also use the lookup operator with the project-rename function, which renames columns in the lookup table before completing the join operation. The project-rename function only works with the lookup operator. This usage helps to avoid column name conflicts between log fields and values in the lookup table, and lets you control which columns are exposed in the result set. For example, the following filter uses the project-rename function to rename company_name to lookup_company_name to avoid naming conflicts between the log fields and values in the lookup table.
service="ingest-service" AND kubernetes.namespace="production" AND
| lookup kind=leftouter (company-names | project-rename lookup_company_name = company_name) on company.company_id==company_id

make-series

The make-series operator shapes logs to include in a time chart with an X-axis. Use this operator to visualize log queries containing time series data. The operator sorts multiple time series in descending order by the last data point value.
Syntax
QUERY | make-series AGGREGATION step TIME by EXPRESSION
Arguments
Argument | Type | Description | Default
QUERY | string | The input query to return data for. | none
AGGREGATION | string | Optional. Specifies which aggregation function to use. | count()
TIME | timespan | Optional. Specifies the difference between specified array elements, such as 2d for two days or 15m for 15 minutes. See the timespan data type in the Microsoft Kusto documentation for more information. | none
EXPRESSION | string-array | Optional. Specifies which fields to group. Use with step by to define the time step for each bucket in Prometheus time duration format. | none
Examples
severity = "WARNING"
| make-series avg(latencyInSeconds) step 15m by severity, service

parse | parse-where

The parse operator accepts a column name, strings (with optional regular expressions), and variable names to extract information out of your existing data. Use this operator to extract parts of a field at query time that you couldn't otherwise use, such as a response code that's part of a message field. The parse operator only extracts data, while the parse-where operator filters the result set to log data that successfully matches the parsing clause. After extracting data, you can specify a name for the new field and also complete aggregation calculations using any of the supported aggregation functions. Use this operator with supported transformation operators, such as make-series, project, summarize, and top-nested, to create a visualization from your results.
Syntax
QUERY | parse [kind = regex] FIELD with REGEX_STRING NEXT_FIELD [NEXT_STRING NEXT_FIELD]
QUERY | parse-where [kind = regex] FIELD with REGEX_STRING NEXT_FIELD [NEXT_STRING NEXT_FIELD]
Arguments
Argument | Type | Description | Default
QUERY | string | The input query to return data for. | none
FIELD | string | The field to parse. | none
REGEX_STRING | string-array | Specifies the regular expression string to evaluate in the query. | none
NEXT_STRING | string-array | Specifies an additional regular expression string to evaluate in the query. | none
NEXT_FIELD | string | Specifies the name of the generated output field. | none
Examples
Consider log data that contains an improperly extracted message field. This field might include information like severity, environment, response time, and other useful data, but it's coded as a single string. You could use the substring function to extract data, but that function relies on data being in a position that doesn't change. The parse and parse-where operators let you specify an expression without relying on position. These operators use regular expression matching by default, which you can explicitly specify by including kind = regex in a query. This capability means you can analyze your log data, define a query to extract data based on the structure (rather than position), and then complete aggregations and transformations.
Assuming the overloaded message field contains a response time that's coded as "responseTime": VALUE, the log data looks similar to this:
message: {"severity": "WARNING", "responseTime": 22.034573056640625, "environment":
"production-alerts", "status": "complete"}
You want to calculate the average response time across these logs, but the values are embedded in a string. The severity could be several different values (such as INFO, WARN, or CRIT), so you can’t guarantee the position of the response time value, meaning you can’t use the substring() function. Instead, use the parse operator to extract that value from your log data and then compute the average:
"responseTime"
| parse message with '"responseTime":' time ","
| summarize avg(time)
The previous query:
  • Matches on logs that contain "responseTime". Use single quotes ('') to escape double quotation marks.
  • Specifies a regular expression to match any data starting with "responseTime": and ending with a comma (,).
  • Names the computed data time.
  • Takes the average of time and returns the data in a table.
To complete the same calculation but display the results as a time series visualization, use the make-series operator:
"responseTime"
| parse message with "\"responseTime\":"  time ","
| make-series avg(time)
If you want results that only include "responseTime", use the parse-where operator instead of parse:
"responseTime"
| parse-where message with "\"responseTime\":"  time ","
| make-series avg(time)
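Regular expression matching is the default for both operators, and you can state it explicitly with kind = regex. The following sketch repeats the response-time calculation with the explicit form:
"responseTime"
| parse kind = regex message with '"responseTime":' time ","
| summarize avg(time)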
Multiple operators
Consider Zookeeper logs that contain a message field with many kinds of information:
message: 2025-07-17 18:59:47,999 [myid:] - INFO  [NIOWorkerThread-5:o.a.z.s.NIOServerCnxn@525] -
Processing mntr command from /127.0.0.1:47242
You want to extract the command name and port number used for each command from a single field. To accomplish that task, you can include multiple parse clauses in a single query:
service = "zookeeper"
| parse message with "Processing " command " command"
| parse message with "127.0.0.1:" port
| project command, port
The previous query:
  • Matches all logs for the zookeeper service.
  • Extracts part of a string starting with Processing and ending with command, and names the new field command.
  • Extracts part of a string after 127.0.0.1:, and names the new field port.
  • Projects the output of the command and port fields in a table.
The table output looks similar to this example:
command | port
mntr | 46420
ruok | 46418
ruok | 60410
mntr | 36660

project

The project operator selects columns to include or compute in a logging query. Use this operator to return data in a table with the specified fields as table columns. If you want to return data in a tabular format and also complete aggregations and sorting, use the summarize operator instead. You can also use the project() operator with the substring function to extract a substring from the source string based on a starting index character position. For example,
severity = "ERROR"
| project substring(resource.type, 0, 13)
Use the project() operator to select fields and customize columns and rows before downloading log data.
Syntax
QUERY | project FIELD [, FIELD]
Arguments
Argument | Type | Description | Default
QUERY | string | The input query to return data for. | none
FIELD | string | The field to include in the generated table. You can specify multiple fields separated by commas. | none
Examples
severity = "ERROR" and TraceId EXISTS
| project service, TraceId, request.latency

sort

The sort operator sorts your results by a specified column, in either ascending or descending order. The operator supports sorting by multiple columns, such as by both service and severity. When sorting by multiple columns, you can specify the sort order for each column. When sorting results that include an aggregation query, use an alias to sort by instead of the aggregation field name. Sorting by the alias provides better control and predictability in your queries. For example, the following query retrieves all logs that have a severity of WARNING, aggregates the results in a table, and returns the sum of the httpRequest.responseSize field, ordered by service:
severity = "WARNING"
| summarize sum(httpRequest.responseSize) by service
To sort the results, use an alias. The following example expands on the existing query and uses an alias named total, which simplifies the subsequent sort by clause:
severity = "WARNING"
| summarize total = sum(httpRequest.responseSize) by service
| sort by total
Syntax
QUERY | sort by COLUMN asc|desc [, COLUMN ...]
Arguments
Argument | Type | Description | Default
QUERY | string | The input query to return data for. | none
COLUMN | string | Specifies which column to sort results by. Accepted values: numeric, date, time, string. | none
asc or desc | string | Optional. Specifies whether to sort results in ascending or descending order. | asc
, COLUMN | string | Optional. Specifies additional columns to sort by. | none
Examples
severity = "WARNING"
| summarize count() by service, severity
| sort by count_ desc, service asc
| limit 10

summarize

The summarize operator generates a table of data that aggregates the content of the table from the input query, grouped in descending order. By default, this operator uses the count() function to count all permutations of the specified field if no AGGREGATION is specified. Input rows are arranged into groups that have the same expression. Use the by keyword to specify the columns you want to group your data by before applying any aggregations. Your query can include only one summarize operator. To group data and refine the results with subsequent clauses, use the top-nested operator.
Syntax
QUERY | summarize AGGREGATION by FIELD EXPRESSION
Arguments
Argument | Type | Description | Default
QUERY | string | The input query to return data for. | none
AGGREGATION | string | Optional. Specifies which aggregation function to use. | count()
FIELD | string | The field to compute the average value for. | none
EXPRESSION | string-array | Optional. Specifies which fields to group. | none
Examples
severity = "ERROR"
| summarize sum(production) by severity, service
| limit 100

top-nested

The top-nested operator completes aggregation and value selection in a hierarchical manner. The operator partitions data based on criteria from the first top-nested clause, and then selects the top records in each partition using an aggregation, if specified. Unlike the summarize operator, you can include multiple top-nested clauses in a single query. Using a single pipe (|) separator, specify a "root" top-nested clause, and then nest additional top-nested clauses separated by a comma. Each of these clauses refines the partitions from the previous clause, creating a hierarchy of more precise groupings. For example, using the summarize operator, the following query returns the count of logs from each endpoint from the nginx-requests service in descending order:
service = "nginx-requests"
| summarize by http.response.endpoints
The summarize operator can use additional aggregation functions, but on its own returns a two-column table containing a count of the specified field. The top-nested operator is similar, but provides additional options to partition and display results. The top-nested operator creates a table with two columns for each clause. One column contains unique values from the filter EXPRESSION, and the other column shows the results obtained from the AGGREGATION calculation. Using the top-nested operator, the following query refines the results from the nginx-requests service to the top 80 endpoints, and further partitions the data by the top nine HTTP response status codes:
service = "nginx-requests"
| top-nested 80 of http.response.endpoints,
top-nested 9 of http.response.status_code
This query returns a four-column table with a row for each entry that includes:
  • The name of the endpoint from http.response.endpoints.
  • The aggregated value for the endpoint.
  • The HTTP status code from http.response.status_code.
  • The aggregated status code for the endpoint.
The number of rows returned by this query is calculated by multiplying the number of values returned for http.response.endpoints times the number of values returned for http.response.status_code.
Syntax
QUERY | top-nested NUMBER of EXPRESSION by AGGREGATION asc|desc [, top-nested ...]
Arguments
Argument | Type | Description | Default
QUERY | string | The input query to return data for. | none
NUMBER | int | Optional. Specifies the number of top, distinct values to return for this hierarchy level. If omitted, returns all distinct values. | none
EXPRESSION | string | A filter expression that operates on the input record to specify which value to return. Typically refers to a column from a query, or includes a calculation for a column. | none
AGGREGATION | string | Optional. Specifies which aggregation function to apply to records matching the filter EXPRESSION. The result determines which top records to display. | none
asc or desc | string | Optional. Specifies whether to sort results in ascending or descending order. | desc
, top-nested | string | Optional. Specifies additional top-nested clauses to refine the returned data. | none
Examples
severity = "ERROR" AND cluster_name = "production"
| top-nested 10 of service by avg(duration)
severity = "ERROR" AND cluster_name = "production"
| top-nested 5 of service, top-nested of kubernetes_namespace
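The following sketch (reusing the nginx-requests fields from the earlier example) adds an explicit aggregation and sort order to each clause, returning the top endpoints by log count and, within each endpoint, the least frequent status codes:
service = "nginx-requests"
| top-nested 80 of http.response.endpoints by count(),
  top-nested 9 of http.response.status_code by count() asc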

Query refinements

Logs Explorer provides query refinements such as aliasing field names, matching with regular expressions, and querying array values.

Alias field names

When running a query, you can add an alias for a field name that appears in generated visualizations, such as in dashboards and in signal groups for alerts. This capability is similar to creating an alias for function names. The make-series, project, and summarize operators support field aliasing. For example, the following query creates a time chart that uses k8 as an alias for the k8s.namespace field. The generated chart uses k8 instead of the full field name.
severity = "ERROR"
| make-series by k8 = k8s.namespace
You can alias an aggregation function and a field in the same query. For example, the following query uses an alias named total for the count() function, and svc for the service field. The generated chart includes results such as svc:nginx total. Without the aliases, the same result would be service:nginx count_.
severity = "ERROR"
| make-series total = count() by svc = service
When creating a monitor or including a log query in a dashboard panel, use an alias in the monitor query for any field names that contain a period, such as k8s.namespace. Fields containing periods are converted to underscores in monitors that use signals, so k8s.namespace displays as k8s_namespace in the monitor query visualization.
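For example, the following sketch aliases the dotted k8s.namespace field to ns, so the monitor query and its visualization refer to the same short name:
severity = "ERROR"
| summarize total = count() by ns = k8s.namespace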

Regular expressions

Logs Explorer uses re2 regular expression syntax. The following regular expression operators are supported, and can be used between fields and values:
Operator | Description
=~ | Matches regular expression
!~ | Doesn't match regular expression
Queries that use regular expression operators match only on the first 1,024 characters of a string. These operators won’t match any characters in a string that exceed 1,024 characters. To change this match limit, contact Chronosphere Support.
In the following examples, consider that a field named kubernetes.cluster_name contains the following values: production-1, production-2, production-3, productionNEW, old_production. The following query matches the word production anywhere in the kubernetes.cluster_name field, so it matches all values in the field:
kubernetes.cluster_name =~ "production"
The following regular expression query includes a period and wildcard (.*), which matches the word production, followed by zero or more characters anywhere in the phrase. This query also matches all values in the field:
kubernetes.cluster_name =~ "production.*"
Using a dash and wildcard (-*) matches the word production, followed by zero or more dash characters anywhere in the phrase. This query also matches all values in the field, including productionNEW and old_production, which might not be immediately apparent:
kubernetes.cluster_name =~ "production-*"
To match only the fields containing production-, combine the dash and period characters with a wildcard and add a dollar sign ($). The following query matches only production-1, production-2, production-3, but doesn’t match either productionNEW or old_production:
kubernetes.cluster_name =~ "production-.*$"

Arrays

You can query array values up to the first level. During ingestion, Observability Platform flattens multi-level arrays to one level to preserve order. If a key precedes the array index, the index must always be at the end. For example, consider the following array of objects belonging to the nginx service as they display in Logs Explorer:
tags:
    0:
        application: a
        events: b
        ingestion: c
    1:
        application: d
        events: e
        ingestion: f
To return results in the second array position (1) that match events = e, use the following query:
service = "nginx"
tags.events[1] = "e"

Query examples

This syntax supports nesting using parentheses so you can create complex queries. For example, the following query matches on two severity values, or where the message contains “database connection” and the mysql.table_name label starts with production:
service = "gateway" AND severity =~ "WARN|ERROR" OR (message : "database connection"
AND mysql.table_name =~ "^production.*")
The following query matches logs from a service named gateway where kubernetes.pod_id equals 3bf26945-b817-4e72-b22c-662c318af2f1:
service = "gateway" AND kubernetes.pod_id = "3bf26945-b817-4e72-b22c-662c318af2f1"
The following query returns all logs for a service named gateway that have a logger other than deleter, or have no logger set:
SERVICE = "gateway" AND NOT logger = "deleter"

Aggregation functions

Aggregation queries include functions that can combine their input into a new structure, or emit new events into the output stream. You can use all aggregation functions with summarize and make-series operators, with the exception of arg_max(), which can be used only with the summarize operator. Observability Platform supports the following aggregation functions you can include in log queries.

Group results

To group results in queries that include an aggregation function, use the by operator. This operator lets you group results by a specified field, which is different than the sort operator, which lets you sort results by a specified column. For example, the following query includes the summarize transformation operator to return the average for the httpRequest.responseSize field, and then groups the results by service:
service = "nginx"
| summarize avg(httpRequest.responseSize) by service

Alias function names

When running a query containing a transformation operator and an aggregation function, you can add an alias for the function name that displays in the generated visualization. This capability applies to all transformation operators and aggregation functions. For example, the following query creates a table of data that aggregates the content of the table from the input query, and displays the count by service. The column header that contains the total count is named count_.
severity = "ERROR"
| summarize count() by service
The following query returns the same data, but uses an alias named total for the count() function. The column header containing the total count is named total.
severity = "ERROR"
| summarize total = count() by service
Similarly, you can create an alias using the make-series operator:
severity = "WARN"
| make-series total = count() by service

arg_max()

The arg_max function returns the value of an expression for the row with the largest _timestamp. The expression determines which column values to return from that row. By default, the function creates a column header named arg_max_EXPRESSION, where EXPRESSION is the expression included as the argument. For example, if your function call is arg_max(service), the resulting column name is arg_max_service. To specify a more human-readable and intuitive column name, use an alias.
Syntax
QUERY | OPERATOR arg_max(EXPRESSION)
Arguments
Argument | Type | Description | Default
QUERY | string | Optional. The input query to return data for. | none
OPERATOR | string | Specifies which transformation operator to apply to records matching the filter EXPRESSION. This function accepts the summarize operator only. | summarize
EXPRESSION | string | The filter expression used for the aggregation calculation. | none
Examples
The following example returns the value of service from the row with the largest _timestamp, grouped by severity:
cluster =~ "production-*"
| summarize largest_service = arg_max(service) by severity
The following example doesn’t include a query, and instead uses the summarize operator with the arg_max function. The example also uses an alias named service_with_largest_message, which becomes the column header in the generated table.
| summarize service_with_largest_message = arg_max(message, service) by severity

array_length()

The array_length function returns the length of an array field. The function takes a field name as input and returns a numeric value representing the number of elements in the array. This function works only with fields containing array values.
Syntax
QUERY | OPERATOR array_length(FIELD)
Arguments
Argument | Type | Description | Default
QUERY | string | The input query to return data for. | none
OPERATOR | string | Specifies which transformation operator to apply to records matching the filter. | none
FIELD | string | The field to return the length of an array for. | none
Examples
The following example filters logs to the nginx service and completes the following transformations:
  • Adds an upstream_length column with the array length of the upstreamAddr field.
  • Selects only the service, upstreamAddr, and upstream_length columns.
  • Sorts the results by the upstream_length column in descending order.
service = "nginx"
| extend upstream_length = array_length(upstreamAddr)
| project service, upstreamAddr, upstream_length
| sort by upstream_length desc

avg()

The avg function computes the average of values of a filter EXPRESSION over events passing through the function. This function only works with fields containing numeric values.
Syntax
QUERY | OPERATOR avg(EXPRESSION)
Arguments
Argument | Type | Description | Default
QUERY | string | The input query to return data for. | none
OPERATOR | string | Specifies which transformation operator to apply to records matching the filter EXPRESSION. | none
EXPRESSION | string | The filter expression used for the aggregation calculation. | none
Examples
service = "nginx"
| summarize avg(httpRequest.responseSize) by service

avgif()

The avgif function computes the average of values of a field where the specified filter EXPRESSION evaluates to true. This function works only with fields containing numeric values.
Syntax
QUERY | OPERATOR avgif(FIELD, EXPRESSION)
Arguments
Argument | Type | Description | Default
QUERY | string | The input query to return data for. | none
OPERATOR | string | Specifies which transformation operator to apply to records matching the filter EXPRESSION. | none
FIELD | string | The field to compute the average value for. | none
EXPRESSION | string | The filter expression to apply before running the aggregation calculation. | none
Examples
service = "nginx"
| summarize avgif(httpRequest.responseSize, httpRequest.responseSize > 2000) by service

case()

The case function accepts a conditional expression that evaluates conditions in order, and returns the value of the first matching condition. If none of the conditions evaluate to true, the result of the else expression is returned. Specify one or more condition-value pairs as condition, value in the filter. Use this function in conjunction with the transformation operators to specify conditional expressions.
Syntax
QUERY | OPERATOR COLUMN = case(EXPRESSION..., elseValue)
Arguments
Argument | Type | Description | Default
QUERY | string | The input query to return data for. | none
OPERATOR | string | Specifies which transformation operator to apply to records matching the filter EXPRESSION. | none
COLUMN | string | Specifies which column to add computed results to. | none
EXPRESSION | string | The filter expression to apply before running the aggregation calculation. Specify condition-value pairs as condition, value in the filter. | none
Examples
This example filters logs for the nginx service in all environments starting with prod, where the number of bytes sent is less than or equal to 50000. The filter uses the case function to categorize bytes sent based on these conditions: small (≤1000), medium (≤5000), and large (>5000). The project operator creates three columns named buckets, kubernetes.pod_name, and environment, and the sort operator then sorts the results by kubernetes.pod_name.
service = "nginx" and bytesSent <= 50000 and environment =~ "prod-*"
| extend buckets = case(bytesSent <= 1000, "Small", bytesSent <= 5000, "Medium", "Large")
| project buckets, kubernetes.pod_name, environment
| sort by kubernetes.pod_name

count()

The count function returns a count of the number of events passing through the function.
Syntax
QUERY | OPERATOR count()
Arguments
Argument | Type | Description | Default
QUERY | string | The input query to return data for. | none
OPERATOR | string | Specifies which transformation operator to apply to records matching the filter EXPRESSION. | none
Examples
service = "checkout-service"
| make-series count() by severity
| limit 100

countif()

The countif function returns a count of rows where the specified filter EXPRESSION evaluates to true. This function ignores null values.
Syntax
QUERY | OPERATOR countif(EXPRESSION)
Arguments
Argument | Type | Description | Default
QUERY | string | The input query to return data for. | none
OPERATOR | string | Specifies which transformation operator to apply to records matching the filter EXPRESSION. | none
EXPRESSION | string | The filter expression to apply before running the aggregation calculation. | none
Examples
service = "nginx"
| summarize countif(kubernetes.pod_name =~ "nginx-*") by severity

count_distinct()

The count_distinct function returns the exact count of distinct values for a field. Use this function when you need exact results. To return approximate results, use the dcount function, which is more performant and provides suitable accuracy for most use cases.
Syntax
QUERY | OPERATOR count_distinct(FIELD)
Arguments
Argument | Type | Description | Default
QUERY | string | The input query to return data for. | none
OPERATOR | string | Specifies which transformation operator to apply to records matching the filter EXPRESSION. | none
FIELD | string | The field to count the values for. | none
Examples
service = "nginx"
| summarize count_distinct(kubernetes.pod_name) by severity
| limit 100

count_distinctif()

The count_distinctif function combines the count_distinct function with a filter to return the exact count of distinct values for a field, where a specified condition is true. To return approximate results where a condition is true, use the dcountif function, which is more performant and provides suitable accuracy for most use cases.
Syntax
QUERY | OPERATOR count_distinctif(FIELD, CONDITION)
Arguments
Argument | Type | Description | Default
QUERY | string | The input query to return data for. | none
OPERATOR | string | Specifies which transformation operator to apply to records matching the filter EXPRESSION. | none
FIELD | string | The field to count the values for. | none
CONDITION | string | A filter expression that must evaluate to true. | none
Examples
service = "nginx"
| summarize count_distinctif(kubernetes.pod_name, severity="ERROR") by cluster
| limit 100

dcount()

The dcount function calculates an estimated number of distinct values of a FIELD in events passing through the function. Chronosphere recommends using this function in almost all scenarios. If you need exact results, use the count_distinct function.
Syntax
QUERY | OPERATOR dcount(FIELD)
Arguments
Argument | Type | Description | Default
QUERY | string | The input query to return data for. | none
OPERATOR | string | Specifies which transformation operator to apply to records matching the filter EXPRESSION. | none
FIELD | string | The field to count the values for. | none
Examples
service = "nginx"
| make-series dcount(hostname) by severity

dcountif()

The dcountif function calculates an estimated number of distinct values where the specified EXPRESSION filter evaluates to true. Use this function to estimate the cardinality of large data sets.
The dcountif function trades accuracy for performance, and might return a result that varies between executions.
Syntax
QUERY | OPERATOR dcountif(FIELD, EXPRESSION)
Arguments
Argument | Type | Description | Default
QUERY | string | The input query to return data for. | none
OPERATOR | string | Specifies which transformation operator to apply to records matching the filter EXPRESSION. | none
FIELD | string | The field to count the distinct values for. | none
EXPRESSION | string | The filter expression to apply before running the aggregation calculation. | none
Examples
service = "nginx"
| summarize dcountif(kubernetes.pod_name, kubernetes.pod_name =~ "nginx-*") by "severity"

floor()

The floor function rounds a numeric value down to the nearest integer, and returns a dynamic array of the calculation.
Syntax
QUERY | OPERATOR floor(EXPRESSION)
Arguments
Argument | Type | Description | Default
QUERY | string | The input query to return data for. | none
OPERATOR | string | Specifies which transformation operator to apply to records matching the filter EXPRESSION. | none
EXPRESSION | string | The filter expression to evaluate, which can be a field name, numeric value, or math expression. | none
Examples
The following query converts response sizes from bytes to kilobytes, and then returns the floor of the result using the floor() function. The query includes the extend() function to create a new column named rounded_response_size for the results of the floor calculation.
service = "nginx"
| extend rounded_response_size = floor(httpRequest.responseSize / 1000)
| project service, httpRequest.responseSize, rounded_response_size | limit 10

hash()

The hash function takes an input string, uses the xxHash 64 algorithm to hash it, and returns the hash as a 64-bit integer. Use this function in conjunction with the replace fields control rule to hash values, such as shortening long, repetitive strings in your data. Hashing values at ingest time avoids storing the entire field, which decreases storage costs. You can then use the hash function to query the hashed fields.
Syntax
QUERY | OPERATOR hash(STRING)
Arguments
Argument | Type | Description | Default
QUERY | string | The input query to return data for. | none
OPERATOR | string | Specifies which transformation operator to apply to records matching the filter QUERY. | none
STRING | string | The string to apply the hash function to. | none
Examples
Use the hash function to query hashed values, such as ones that replace long error messages or stack traces. For example, the following query returns all error messages, hashes the value of the message field, and sorts by count_ in descending order.
severity = "ERROR"
| extend message_hash = hash(message)
| summarize count() by message_hash, service
| sort by count_ desc
| limit 20
You can also use this function with transformation operators like extend and project. For example, use a hashed value in place of the HTTP request URL from a service to shorten the value and improve query performance.
service = "api-gateway"
| extend url_hash = hash(httpRequest.requestUrl)
| project timestamp, service, url_hash, httpRequest.requestUrl, message
| limit 100

min()

The min function returns the minimum value of the specified field.
Syntax
QUERY | OPERATOR min(FIELD)
Arguments
Argument | Type | Description | Default
QUERY | string | The input query to return data for. | none
OPERATOR | string | Specifies which transformation operator to apply to records matching the filter QUERY. | none
FIELD | string | The field to return the minimum value for. | none
Examples
service = "nginx"
| summarize min(httpRequest.responseSize) by kubernetes.cluster_name

minif()

The minif function returns an estimate of the minimum value of the specified field where the EXPRESSION filter evaluates to true.
Syntax
QUERY | OPERATOR minif(FIELD, EXPRESSION)
Arguments
Argument | Type | Description | Default
QUERY | string | The input query to return data for. | none
OPERATOR | string | Specifies which transformation operator to apply to records matching the filter EXPRESSION. | none
FIELD | string | The field to return the minimum value for. | none
EXPRESSION | string | The filter expression to apply before running the aggregation calculation. | none
Examples
service = "nginx"
| summarize minif(httpRequest.responseSize, httpRequest.responseSize > 100) by kubernetes.cluster_name

max()

The max function returns the maximum value of the specified field.
Syntax
QUERY | OPERATOR max(FIELD)
Arguments
Argument | Type | Description | Default
QUERY | string | The input query to return data for. | none
OPERATOR | string | Specifies which transformation operator to apply to records matching the filter EXPRESSION. | none
FIELD | string | The field to return the maximum value for. | none
Examples
service = "nginx"
| summarize max(httpRequest.responseSize) by "kubernetes.cluster_name"

maxif()

The maxif function returns an estimate of the maximum value of the specified field where the EXPRESSION filter evaluates to true.
Syntax
QUERY | OPERATOR maxif(FIELD, EXPRESSION)
Arguments
Argument | Type | Description | Default
QUERY | string | The input query to return data for. | none
OPERATOR | string | Specifies which transformation operator to apply to records matching the filter EXPRESSION. | none
FIELD | string | The field to return the maximum value for. | none
EXPRESSION | string | The filter expression to apply before running the aggregation calculation. | none
Examples
service = "nginx"
| summarize maxif(httpRequest.responseSize, httpRequest.responseSize > 100) by kubernetes.cluster_name

moving_average()

The moving_average function accepts an AGGREGATION containing a dynamic numeric array as input, and applies a simple moving average (SMA) filter. This function requires an aggregation with an expression that operates on a specified time interval to calculate a moving average for. The combination of the AGGREGATION and EXPRESSION defines the calculation that this function operates on.
Syntax
QUERY | moving_average(AGGREGATION(EXPRESSION), NUMBER) step TIME by FIELD
Arguments
Argument | Type | Description | Default
QUERY | string | The input query to return data for. | none
AGGREGATION | string | Specifies which aggregation function to use. | none
EXPRESSION | string | The filter expression used for the aggregation calculation. | none
NUMBER | int | Specifies the size of the moving average window. Determines how many past data points are considered when calculating each average. | none
TIME | timespan | Optional. Specifies the difference between specified array elements, such as 2d for two days or 15m for 15 minutes. See the timespan data type in the Microsoft Kusto documentation for more information. | none
FIELD | string | Optional. Specifies which field to group returned results by. Use with step by and the TIME argument. | none
Examples
The following example calculates a series of summations for request_size over the last minute (1m), takes a moving average of the last five (5) summations, and then divides by 1000 to convert the results to megabytes. The results display in a time chart with an X-axis, grouped by severity.
| make-series moving_average(sum(request_size), 5) / 1000 step 1m by severity
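The following sketch (assuming the nginx service and httpRequest.responseSize field used elsewhere on this page) averages response sizes in five-minute buckets and smooths each service's series with a ten-point moving average:
service = "nginx"
| make-series moving_average(avg(httpRequest.responseSize), 10) step 5m by service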

percentile()

The percentile function returns the specified percentile value of a filter EXPRESSION. This function only works with fields containing numeric values.
Syntax
QUERY | OPERATOR percentile(EXPRESSION, NUMBER)
Arguments
Argument | Type | Description | Default
QUERY | string | The input query to return data for. | none
OPERATOR | string | Specifies which transformation operator to apply to records matching the filter EXPRESSION. | none
EXPRESSION | string | The filter expression used for the aggregation calculation. | none
NUMBER | int | Specifies the percentile to return. | none
Examples
key1 = 'value1'
| make-series percentile(duration, 95) by severity

replace_regex()

The replace_regex function replaces strings based on the results of a regular expression. The function finds all matches on an input string using a regular expression pattern, and replaces them with a specified string. Use the replace_regex function to clean, normalize, and transform string fields in log queries. Replacements can use fields with a simple regular expression, or use patterns to match on data.
Syntax
QUERY | OPERATOR replace_regex(EXPRESSION, REGEX, REPLACEMENT)
Arguments
Argument | Type | Description | Default
QUERY | string | The input query to return data for. | none
OPERATOR | string | Specifies which transformation operator to apply to records matching the filter EXPRESSION. | none
EXPRESSION | string | The source expression used for the REGEX to operate on. | none
REGEX | string | Specifies the regular expression pattern to match for the EXPRESSION. | none
REPLACEMENT | string | Specifies the string to use as a replacement for any parts of the EXPRESSION that match the REGEX. | none
Examples
The following query uses a simple replacement that replaces all instances of [warn] with error in the message field for the nginx service:
service = "nginx"
| extend cleaned_message = replace_regex(message, '\\[warn\\]', 'error')
| project service, message, cleaned_message | limit 25
This alternate query uses a more complex regular expression to match on logs for the nginx service with a severity of "ERROR", where the message field contains a string like:
host: "COMPANY.example.com"
The query uses the replace_regex function with a capture group ([^.\s]+) to match the hostname that precedes example.com, replaces the entire message with the contents of that capture group, and projects the original message and the extracted hostname.
service = "nginx"
AND severity = "ERROR"
AND message: "host"
AND message: ".example.com"
| extend hostname = replace_regex(message, '.*host.*?([^.\s]+)\\.example\\.com.*', '\\1')
| project message, hostname | limit 10

round()

The round function rounds a numeric value to the nearest integer, with an option to specify decimal precision. If no decimal precision is specified, this function rounds to the nearest integer. Although the floor function always rounds down to the nearest integer, the round function can round values up or down, depending on the nearest integer value.
Syntax
QUERY | OPERATOR round(EXPRESSION, PRECISION)
Arguments
Argument | Type | Description | Default
QUERY | string | The input query to return data for. | none
OPERATOR | string | Specifies which transformation operator to apply to records matching the filter EXPRESSION. | none
EXPRESSION | string | The filter expression to evaluate, which can be a field name, numeric value, or math expression. | none
PRECISION | int | Optional. Specifies the number of decimal places to round the value to. | none
Examples
Use this function with a field or a math expression. For example, round(duration) rounds the value of the duration field to the nearest integer. If you specify a math expression such as round(1 + 3, 1), the function evaluates the math expression first, and then rounds the result to one decimal place. The following filter uses the round function to convert response sizes from bytes to kilobytes (dividing by 1,000), and then rounds to two decimal places. The extend function creates a new column named rounded_response_size, and the project function selects the original response size and the rounded_response_size column for comparison.
service = "nginx"
| extend rounded_response_size = round(httpRequest.responseSize / 1000, 2)
| project service, httpRequest.responseSize, rounded_response_size | limit 10

string_size()

The string_size function returns the length of a string expression as a numeric value, which represents the number of characters in the string.
Syntax
QUERY | OPERATOR string_size(EXPRESSION)
Arguments
Argument | Type | Description | Default
QUERY | string | The input query to return data for. | none
OPERATOR | string | Specifies which transformation operator to apply to records matching the filter EXPRESSION. | none
EXPRESSION | string | The string expression, which can be a field name, string literal, or computed value. | none
Examples
The following example uses the extend operator to create a column named size, and uses string_size to calculate the character count of the message field.
service = "nginx"
| extend size = string_size(message) | project message, size
The following example uses the extend operator to create a column named msg_size, and uses string_size to calculate the character count of the message field. The filter also uses the summarize operator to compute the average message size per service.
service = "nginx"
| extend msg_size = string_size(message)
| summarize avg(msg_size) by service

substring()

The substring function extracts a substring from the source string based on a starting index character position.
Syntax
QUERY | OPERATOR by substring(FIELD, START_INDEX, LENGTH)
Arguments
Argument | Type | Description | Default
QUERY | string | The input query to return data for. | none
OPERATOR | string | Specifies which transformation operator to apply to records matching the filter EXPRESSION. | none
FIELD | string | The field to extract the substring from. | none
START_INDEX | int | Indicates the index starting character position of the requested substring. If the value is a negative number, the substring is retrieved from the end of the source FIELD. | none
LENGTH | int | Optional. The number of characters to return from the substring. | none
Examples
The following query includes a colon (:) to find all logs that have a resource.type field containing a k8s substring. Then, the query groups the logs by the first 13 characters of the resource.type field and returns those groups.
resource.type: "k8s"
| summarize by substring(resource.type, 0, 13)
In the sidebar, expanding resource.type shows that this query matches values such as k8s_container, k8s_resources, and k8s_cluster. The following query uses the substring() function as a filter without any aggregation. This kind of query is helpful when you want to ensure that a particular substring matches a value. In this case, the query matches any logs where the six characters of kubernetes.pod_name starting at index 3 equal writer:
substring(kubernetes.pod_name, 3, 6) = "writer"

sum()

The sum function computes the sum of values of a filter EXPRESSION over events passing through the function.
Syntax
QUERY | OPERATOR sum(EXPRESSION)
Arguments
Argument | Type | Description | Default
QUERY | string | The input query to return data for. | none
OPERATOR | string | Specifies which transformation operator to apply to records matching the filter EXPRESSION. | none
EXPRESSION | string | The filter expression used for the aggregation calculation. | none
Examples
kubernetes.cluster =~ "^production-*"
| summarize sum(destination.bytes) by location-country

sumif()

The sumif function computes the sum of values for the specified field where the EXPRESSION filter evaluates to true.
Syntax
QUERY | OPERATOR sumif(FIELD, EXPRESSION)
Arguments
Argument | Type | Description | Default
QUERY | string | The input query to return data for. | none
OPERATOR | string | Specifies which transformation operator to apply to records matching the filter EXPRESSION. | none
FIELD | string | The field to return the sum calculation for. | none
EXPRESSION | string | The filter expression to apply before running the aggregation calculation. | none
Examples
kubernetes.cluster =~ "^production-*"
| summarize sumif(destination.bytes, destination.bytes > 1000) by location-country

sum(_payloadSize)

The sum(_payloadSize) function calculates the size of the log payload in bytes.
Syntax
QUERY | OPERATOR sum(_payloadSize) by FIELD
Arguments
Argument | Type | Description | Default
QUERY | string | The input query to return data for. | none
OPERATOR | string | The transformation operator to apply to records matching the function. | none
FIELD | string | The field to group results by. | none
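Examples
The following sketch (assuming the nginx service used elsewhere on this page) sums the log payload size in bytes for each severity level:
service = "nginx"
| summarize sum(_payloadSize) by severity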