diff --git a/modules/cli/pages/cbepctl/set-checkpoint_param.adoc b/modules/cli/pages/cbepctl/set-checkpoint_param.adoc index 7532a39322..8a42c063c3 100644 --- a/modules/cli/pages/cbepctl/set-checkpoint_param.adoc +++ b/modules/cli/pages/cbepctl/set-checkpoint_param.adoc @@ -15,7 +15,7 @@ cbepctl [host]:11210 -b [bucket-name] set checkpoint_param [parameter] [value] == Description -This command configures a checkpoint. +This command changes checkpoint configuration parameters. == Options @@ -26,22 +26,27 @@ The following are the command options: |=== | Options | Description -| `chk_max_items` -| Max number of items allowed in a checkpoint. +| `max_checkpoints` +| The expected maximum number of checkpoints in each vBucket on a balanced system. -| `chk_period` -| Time bound (in sec.) on a checkpoint. +NOTE: This value does not act as a hard limit for a single vBucket. +The system uses it along with `checkpoint_memory_ratio` to compute `checkpoint_max_size`, which triggers checkpoint creation. -| `item_num_based_new_chk` -| True if a new checkpoint can be created based on. -The number of items in the open checkpoint. +| `checkpoint_memory_ratio` +| Maximum portion of the bucket quota that the system can allocate to checkpoints. -| `keep_closed_chks` -| True if we want to keep closed checkpoints in memory, as long as the current memory usage is below high water mark. +| `checkpoint_memory_recovery_upper_mark` +| Fraction of the checkpoint quota computed by `checkpoint_memory_ratio` that triggers an attempt to release memory from checkpoints. -| `max_checkpoints` -| Max number of checkpoints allowed per vBucket. +| `checkpoint_memory_recovery_lower_mark` +| Fraction of the checkpoint quota computed by `checkpoint_memory_ratio` that represents the target for checkpoint memory recovery. +Memory recovery stops when this target is reached. + +| `checkpoint_max_size` +| Maximum size in bytes of a single checkpoint. +Use `0` to allow ep-engine to configure this value automatically. + +| `checkpoint_destruction_tasks` +| Number of background tasks that destroy closed and unreferenced checkpoints to free memory. -| `enable_chk_merge` -| True, if merging closed checkpoints is enabled. |=== diff --git a/modules/cli/pages/cbstats/cbstats-warmup.adoc b/modules/cli/pages/cbstats/cbstats-warmup.adoc index 229f9a664b..3260deb04f 100644 --- a/modules/cli/pages/cbstats/cbstats-warmup.adoc +++ b/modules/cli/pages/cbstats/cbstats-warmup.adoc @@ -10,7 +10,7 @@ The basic syntax is: ---- -cbstats [host]:[dataport] -b [bucket_name] -p [bucket_password] raw warmup +cbstats [host]:[dataport] -b [bucket_name] -p [bucket_password] warmup ---- == Description @@ -47,21 +47,21 @@ Look for values: loading keys, loading access log, and done. | Number of failures due to duplicate keys. | Integer +| ep_warmup_estimate_time +| The time taken, measured in milliseconds, to discover the estimated number of keys that may be warmed up. +| Integer. + | ep_warmup_estimated_key_count | The estimated number of keys in database. | Integer. Default: unknown -| ep_warmup_estimate_time -| Thne estimated time in microseconds to do warmup. -| Integer. - | ep_warmup_estimated_value_count | The estimated number of key data to read based on the access log. | Integer. Default: unknown -| ep_warmup_keys_count +| ep_warmup_key_count | Number of keys warmed up. | Integer @@ -69,14 +69,6 @@ Default: unknown | Total time (in microseconds) spent by loading persisted keys. 
| Integer -| ep_warmup_min_items_threshold -| Enable data traffic after loading this percentage of key data. -| Integer - -| ep_warmup_min_memory_threshold -| Enable data traffic after filling this % of memory. -| Integer (%) - | ep_warmup_oom | Number of out of memory failures during warmup. | Integer @@ -119,7 +111,10 @@ The following are the command options: *Request* ---- -cbstats 10.5.2.117:11210 warmup +cbstats localhost:11210 warmup \ +-u Administrator \ +-p password \ +-b travel-sample ---- *Response* @@ -127,18 +122,16 @@ cbstats 10.5.2.117:11210 warmup Example response: ---- - ep_warmup: enabled - ep_warmup_dups: 0 - ep_warmup_estimate_time: 57546 - ep_warmup_estimated_key_count: 0 - ep_warmup_estimated_value_count: unknown - ep_warmup_key_count: 0 - ep_warmup_keys_time: 529022 - ep_warmup_min_items_threshold: 100 - ep_warmup_min_memory_threshold: 100 - ep_warmup_oom: 0 - ep_warmup_state: done - ep_warmup_thread: complete - ep_warmup_time: 529192 - ep_warmup_value_count: 0 +ep_warmup: enabled +ep_warmup_dups: 0 +ep_warmup_estimate_time: 8159 +ep_warmup_estimated_key_count: 63325 +ep_warmup_estimated_value_count: unknown +ep_warmup_key_count: 63325 +ep_warmup_keys_time: 199582 +ep_warmup_oom: 0 +ep_warmup_state: done +ep_warmup_thread: complete +ep_warmup_time: 199586 +ep_warmup_value_count: 63325 ---- diff --git a/modules/cli/pages/mcstat.adoc b/modules/cli/pages/mcstat.adoc index 3a77ecccd7..829c04781b 100644 --- a/modules/cli/pages/mcstat.adoc +++ b/modules/cli/pages/mcstat.adoc @@ -1,15 +1,11 @@ = mcstat -:description: pass:q[The `mcstat` tool provides memory-related information for a specified bucket, or for all buckets on a cluster.] +:description: pass:q[The mcstat tool provides detailed information for a node, specified bucket, or for all buckets on a cluster.] :page-topic-type: reference :page-aliases: cli:cbstats/cbstats-allocator [abstract] {description} -== Description - -The `mcstat` tool provides memory-related information for a specified bucket, or for all buckets on a cluster. - The tool is located as follows: [cols="2,3"] @@ -40,9 +36,7 @@ The `options` are as follows: | Options | Description | `-h[=statkey]`or `--help[=statkey]` -| Show the help message and exit. -If `=statkey` is not specified, the common options for the command are listed. -If `=statkey` _is_ specified, the available _statkeys_ for the command are listed instead. +| Outputs possible `statkey` values with descriptions and indications of the stat's scope and required privileges. | `-h` or `--hostname`, with the parameter `` (for IPv4), or `[address]:port` (for IPv6) | The name of the host (and optionally, the port number) to connect to. diff --git a/modules/fts/pages/fts-search-facets.adoc b/modules/fts/pages/fts-search-facets.adoc deleted file mode 100644 index 6bf4ee904e..0000000000 --- a/modules/fts/pages/fts-search-facets.adoc +++ /dev/null @@ -1,192 +0,0 @@ -= Search Facets - -Facets are aggregate information collected on a particular result set. -For any search, the user can collect additional facet information along with it. - -All the facet examples below, are for the query "[.code]``water``" on the beer-sample dataset. -FTS supports the following types of facets: - -[#term-facet] -== Term Facet - -A term facet counts how many matching documents have a particular term for a specific field. - -NOTE: When building a term facet, use the keyword analyzer. Otherwise, multi-term values get tokenized, and the user gets unexpected results. 
- -=== Example - -* Term Facet - computes facet on the type field which has two values: `beer` and `brewery`. -+ ----- -curl -X POST -H "Content-Type: application/json" \ -http://localhost:8094/api/index/bix/query -d \ -'{ - "size": 10, - "query": { - "boost": 1, - "query": "water" - }, - "facets": { - "type": { - "size": 5, - "field": "type" - } - } -}' ----- -+ -The result snippet below only shows the facet section for clarity. -Run the curl command to see the HTTP response containing the full results. -+ -[source,json] ----- -"facets": { - "type": { - "field": "type", - "total": 91, - "missing": 0, - "other": 0, - "terms": [ - { - "term": "beer", - "count": 70 - }, - { - "term": "brewery", - "count": 21 - } - ] - } -} ----- - -[#numeric-range-facet] -== Numeric Range Facet - -A numeric range facet works by the users defining their own buckets (numeric ranges). - -The facet then counts how many of the matching documents fall into a particular bucket for a particular field. - -=== Example - -* Numeric Range Facet - computes facet on the `abv` field with 2 buckets describing `high` (greater than 7) and `low` (less than 7). -+ ----- -curl -X POST -H "Content-Type: application/json" \ -http://localhost:8094/api/index/bix/query -d \ -'{ - "size": 10, - "query": { - "boost": 1, - "query": "water" - }, - "facets": { - "abv": { - "size": 5, - "field": "abv", - "numeric_ranges": [ - { - "name": "high", - "min": 7 - }, - { - "name": "low", - "max": 7 - } - ] - } - } -}' ----- -+ -Results: -+ -[source,json] ----- -facets": { - "abv": { - "field": "abv", - "total": 70, - "missing": 21, - "other": 0, - "numeric_ranges": [ - { - "name": "high", - "min": 7, - "count": 13 - }, - { - "name": "low", - "max": 7, - "count": 57 - } - ] - } -} ----- - -[#date-range-facet] -== Date Range Facet - -The Date Range facet is same as numeric facet, but on dates instead of numbers. - -Full text search and Bleve expect dates to be in the format specified by https://www.ietf.org/rfc/rfc3339.txt[RFC-3339^], which is a specific profile of ISO-8601 that is more restrictive. - -== Example - -* Date Range Facet - computes facet on the ‘updated’ field that has 2 values old and new. -+ - ----- -curl -XPOST -H "Content-Type: application/json" -uAdministrator:asdasd http://:8094/api/index/bix/query -d '{ -"ctl": {"timeout": 0}, -"from": 0, -"size": 0, -"query": { - "field": "country", - "term": "united" -}, - "facets": { - "types": { - "size": 10, - "field": "updated", - "date_ranges": [ - { - "name": "old", - "end": "2010-08-01" - }, - { - "name": "new", - "start": "2010-08-01" - } -] -} -} -}' ----- -+ -Results -+ -[source,json] ----- - "facets": { - "types": { - "field": "updated", - "total": 954, - "missing": 0, - "other": 0, - "date_ranges": [ - { - "name": "old", - "end": "2010-08-01T00:00:00Z", - "count": 934 - }, - { - "name": "new", - "start": "2010-08-01T00:00:00Z", - "count": 20 - } - ] - } - } ----- \ No newline at end of file diff --git a/modules/fts/pages/fts-search-response-facets.adoc b/modules/fts/pages/fts-search-response-facets.adoc deleted file mode 100644 index 63383b7ff3..0000000000 --- a/modules/fts/pages/fts-search-response-facets.adoc +++ /dev/null @@ -1,249 +0,0 @@ -[#Facets] -= Facets - -[abstract] -Facets are aggregate information collected on a particular result set. - -In Facets, you already have a search in mind, and you want to collect additional facet information along with it. - -Facet-query results may not equal the total number of documents across all buckets if: - -1. 
There is more than one pindex. -2. Facets_size is less than the possible values for the field. - -== Facets Results - -For each facet that you build, a `FacetResult` is returned containing the following: - -* *Field*: The name of the field the facet was built on. - -* *Total*: The total number of values encountered (if each document had one term, this should match the total number of documents in the search result) - -* *Missing*: The number of documents which do not have any value for this field - -* *Other*: The number of documents for which a value exists, but it was not in the top N number of facet buckets requested - -* *Array of Facets*: Each Facet contains the count indicating the number of items in this facet range/bucket: - -** *Term*: Terms Facets include the name of the term. - -** *Numeric Range*: Numeric Range Facets include the range for this bucket. - -** *DateTime Range*: DateTime Range Facets include the datetime range for this bucket. - -All of the facet examples given in this topic are for the query "[.code]``water``" on the beer-sample dataset. - -FTS supports the following types of facets: - -* *Term Facet* - A term facet counts up how many of the matching documents have a particular term in a particular field. -+ -Most of the time, this only makes sense for relatively low cardinality fields, like a type or tags. -+ -It would not make sense to use it on a unique field like an ID. - -* *Field*: The field over which you want to gather the facet information. - -* *Size*: The number of top categories per partition to be considered for the facet results. -+ -For example, size - 3 => facets results returns the top 3 categories across all partitions and merges them as the final result. -+ -Varying size value varies the count value of each facet and the “others” value as well. This is due to the fact that when the size is varied, some of the categories fall out of the top “n” and into the “others” category. -+ -NOTE: It is recommended to keep the size reasonably large, close to the number of unique terms to get consistent results. - -* *Numeric Range Facet*: A numeric range facet works by the user defining their own buckets (numeric ranges). -+ -The facet then counts how many of the matching documents fall into a particular bucket for a particular field. -+ -Along with the two fields from term facet, “numeric_ranges” field has to include all the numeric ranges for the faceted field. -+ -“Numeric_ranges” could possibly be an array of ranges and each entry of it must specify either min, max or both for the range. - -** *Name*: Name for the facet. - -** *Min*: The lower bound value of this range. - -** *Max*: The upper bound value of this range. - -* *Date Range Facet*: The Date Range facet is same as numeric facet, but on dates instead of numbers. -Full text search and Bleve expect dates to be in the format specified by https://www.ietf.org/rfc/rfc3339.txt[RFC-3339^], which is a specific profile of ISO-8601 that is more restrictive. -+ -Along with the two fields from term facet, “date_ranges” field has to include all the numeric ranges for the faceted field. -+ -The facet ranges go under a field named “date_ranges”. -+ -“date_ranges” could possibly be an array of ranges and each entry of it must specify either start, end or both for the range. - -** *Name*: Name for the facet. - -** *Start*: Start date for this range. - -** *End*: End date for this range. - -NOTE: Most of the time, when building a term facet, you must use the keyword analyzer. 
Otherwise, multi-term values are tokenized, which might cause unexpected results. - -== Example - -=== Term Facet -Computes facet on the type field which has 2 values: `beer` and `brewery`. - -[source, console] ----- -curl -X POST -H "Content-Type: application/json" \ -http://localhost:8094/api/index/bix/query -d \ -'{ - "size": 10, - "query": { - "boost": 1, - "query": "water" - }, - "facets": { - "type": { - "size": 5, - "field": "type" - } - } -}' ----- - -Result: - -The result snippet below, only shows the facet section for clarity. -Run the curl command to see the HTTP response containing the full results. - -[source,json] ----- -"facets": { - "type": { - "field": "type", - "total": 91, - "missing": 0, - "other": 0, - "terms": [ - { - "term": "beer", - "count": 70 - }, - { - "term": "brewery", - "count": 21 - } - ] - } -} ----- -=== Numeric Range Facet -Computes facet on the `abv` field with two buckets describing `high` (greater than 7) and `low` (less than 7). - -[source, console] ----- -curl -X POST -H "Content-Type: application/json" \ -http://localhost:8094/api/index/bix/query -d \ -'{ - "size": 10, - "query": { - "boost": 1, - "query": "water" - }, - "facets": { - "abv": { - "size": 5, - "field": "abv", - "numeric_ranges": [ - { - "name": "high", - "min": 7 - }, - { - "name": "low", - "max": 7 - } - ] - } - } -}' ----- - -Results: - -[source,json] ----- -facets": { - "abv": { - "field": "abv", - "total": 70, - "missing": 21, - "other": 0, - "numeric_ranges": [ - { - "name": "high", - "min": 7, - "count": 13 - }, - { - "name": "low", - "max": 7, - "count": 57 - } - ] - } -} ----- - -=== Date Range Facet -Computes facet on the ‘updated’ field that has 2 values old and new - -[source, consle] ----- -curl -XPOST -H "Content-Type: application/json" -u username:password http://:8094/api/index/bix/query -d '{ - "ctl": {"timeout": 0}, - "from": 0, - "size": 0, - "query": { - "field": "country", - "term": "united" - }, - "facets": { - "types": { - "size": 10, - "field": "updated", - "date_ranges": [ - { - "name": "old", - "end": "2010-08-01" - }, - { - "name": "new", - "start": "2010-08-01" - } - ] - } - } -}' ----- - -Results: - -[source,json] ----- -"facets": { - "types": { - "field": "updated", - "total": 954, - "missing": 0, - "other": 0, - "date_ranges": [ - { - "name": "old", - "end": "2010-08-01T00:00:00Z", - "count": 934 - }, - { - "name": "new", - "start": "2010-08-01T00:00:00Z", - "count": 20 - } - ] - } -} ----- \ No newline at end of file diff --git a/modules/install/pages/install-ports.adoc b/modules/install/pages/install-ports.adoc index d4b1deebc5..b2654b3778 100644 --- a/modules/install/pages/install-ports.adoc +++ b/modules/install/pages/install-ports.adoc @@ -77,7 +77,7 @@ The following table lists all port numbers, grouped by category of communication | _Node-to-node_ | *Unencrypted*: 4369, 8091-8094, 9100-9105, 9110-9118, 9120-9122, 9130, 9999, 11209-11210, 21100 -*Encrypted*: 9999, 11206, 11207, 18091-18094, 19102, 19130, 21150 +*Encrypted*: 9999, 9124, 11206, 11207, 18091-18094, 19102, 19130, 21150 | _Client-to-node_ | *Unencrypted*: 8091-8097, 9123, 9140 {fn-eventing-debug-port}, 11210, 11280 @@ -86,8 +86,8 @@ The following table lists all port numbers, grouped by category of communication | _XDCR (cluster-to-cluster)_ a| * Version 2 (XMEM) -** *Unencrypted*: 8091, 8092, 11210 -** *Encrypted*: 11207, 18091, 18092 +** *Unencrypted*: 8091, 11210 +** *Encrypted*: 11207, 18091 NOTE: If enforcing TLS encryption, these ports may be blocked outside of a Couchbase Server 
cluster but need to remain open between nodes.

@@ -151,10 +151,10 @@ The following table contains a detailed description of each port used by Couchba

| `capi_port` / `ssl_capi_port`
| 8092 / 18092
-| Views and XDCR access
+| Views access
| Yes
| Yes
-| Version 2
+| No

| `query_port` / `ssl_query_port`
| 8093 / 18093
diff --git a/modules/install/pages/upgrade.adoc b/modules/install/pages/upgrade.adoc
index 59115edf0f..169437a0dc 100644
--- a/modules/install/pages/upgrade.adoc
+++ b/modules/install/pages/upgrade.adoc
@@ -1,4 +1,5 @@
 = Upgrade
+:page-aliases: n1ql:n1ql_application_continuity
 :description: To upgrade a Couchbase-Server cluster means to upgrade the version of Couchbase Server that's running on every node.
 :erlang-upgrade-note: The upgrade to Erlang support in Couchbase Server 8.0 requires that you first upgrade Couchbase to version 7.2 before upgrading to version 8.0
diff --git a/modules/learn/pages/services-and-indexes/services/query-service.adoc b/modules/learn/pages/services-and-indexes/services/query-service.adoc
index 8c2be40750..28787effb3 100644
--- a/modules/learn/pages/services-and-indexes/services/query-service.adoc
+++ b/modules/learn/pages/services-and-indexes/services/query-service.adoc
@@ -11,7 +11,8 @@ The Query Service depends on both the _Index Service_ and the _Data Service_.
The architecture of the _Query Service_ is shown by the following illustration:

[#query_service_architecture]
-image::services-and-indexes/services/queryServiceArchitecture.png[,700,align=left]
+.Query Service Architecture
+image::services-and-indexes/services/queryServiceArchitecture.png[,700,align=center]

The principal components are:

@@ -26,17 +27,28 @@ Other data stores are also included, such as the store for the local filesystem.

== Query Execution

-The sequence whereby queries are executed is shown below:
+The following diagram shows the sequence of operations during query execution.

[#query_sequence]
-image::services-and-indexes/services/querySequence.png[,820,align=left]
-
-The client's {sqlpp} query is shown entering the Query Service at the left-hand side.
-The Query Processor performs its *Parse* routine, to validate the submitted statement, then creates the execution *Plan*.
-*Scan* operations are then performed on the relevant index, by accessing the *Index Service* or the *Search Service*.
-Next, *Fetch* operations are performed by accessing the *Data Service*, and the data duly returned is used in *Join* operations.
-The Query Service continues by performing additional processing, which includes *Filter*, *Aggregate*, and *Sort* operations.
-Note the degree of parallelism with which operations are frequently performed, represented by the vertically aligned groups of right-pointing arrows.
+.Query Execution Sequence
+[plantuml,query-execution,svg]
+....
+include::indexes:partial$diagrams/query-service.puml[]
+....
+
+When the Query Service receives the client's {sqlpp} query, it immediately passes it to the Query Processor.
+The Query Processor first parses the query to validate the statement and then creates an execution plan for the request.
+
+The Execution Engine then begins executing the plan.
+It performs Scan operations on the relevant index, using either the Index Service or the Search Service.
+Next, it performs Fetch operations to get the actual data from the Data Service, and then uses this data for Join operations.
+
+The Query Service processes the data further by applying Filter, Aggregate, Project, and Sort operations.
+These operations can run in parallel.
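+
+As a rough illustration of these phases, the following sketch (assuming the `travel-sample` sample bucket is installed and a Query node is listening at `localhost:8093`) runs an aggregation query through the `cbq` shell; the engine scans an index to satisfy the filter, fetches the matching documents, and then aggregates, sorts, and limits the results:
+
+[source,console]
+----
+# Hypothetical host and credentials; adjust for your cluster.
+cbq -e http://localhost:8093 -u Administrator -p password \
+    -s 'SELECT t.city, COUNT(*) AS hotels
+        FROM `travel-sample` AS t
+        WHERE t.type = "hotel"
+        GROUP BY t.city
+        ORDER BY hotels DESC
+        LIMIT 5;'
+----
+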
+In the diagram, the number of boxes within an operation block represents the degree of parallelism for that operation. + +Finally, the service executes Offset and Limit operations to set the result size and starting point, and then streams the results back to the client. +For more information about each of these operations, see xref:n1ql:n1ql-language-reference/selectintro.adoc#query-execution-phases[Query Phases]. == Using {sqlpp} diff --git a/modules/manage/assets/images/manage-logging/collectInfo.png b/modules/manage/assets/images/manage-logging/collectInfo.png deleted file mode 100644 index 7a90e58b90..0000000000 Binary files a/modules/manage/assets/images/manage-logging/collectInfo.png and /dev/null differ diff --git a/modules/manage/assets/images/manage-logging/collectInformationComplete.png b/modules/manage/assets/images/manage-logging/collectInformationComplete.png index 2e73e45507..e118e35e2a 100644 Binary files a/modules/manage/assets/images/manage-logging/collectInformationComplete.png and b/modules/manage/assets/images/manage-logging/collectInformationComplete.png differ diff --git a/modules/manage/assets/images/manage-logging/collectInformationScreen.png b/modules/manage/assets/images/manage-logging/collectInformationScreen.png index b72beb7b1f..bfd9be42fd 100644 Binary files a/modules/manage/assets/images/manage-logging/collectInformationScreen.png and b/modules/manage/assets/images/manage-logging/collectInformationScreen.png differ diff --git a/modules/manage/assets/images/manage-logging/getClusterSummaryLink.png b/modules/manage/assets/images/manage-logging/getClusterSummaryLink.png deleted file mode 100644 index 0d76f94570..0000000000 Binary files a/modules/manage/assets/images/manage-logging/getClusterSummaryLink.png and /dev/null differ diff --git a/modules/manage/assets/images/manage-logging/partialRedactionSelection.png b/modules/manage/assets/images/manage-logging/partialRedactionSelection.png deleted file mode 100644 index 92489e6ad9..0000000000 Binary files a/modules/manage/assets/images/manage-logging/partialRedactionSelection.png and /dev/null differ diff --git a/modules/manage/assets/images/manage-logging/uploadToCouchbaseCheckbox.png b/modules/manage/assets/images/manage-logging/uploadToCouchbaseCheckbox.png deleted file mode 100644 index 406682e314..0000000000 Binary files a/modules/manage/assets/images/manage-logging/uploadToCouchbaseCheckbox.png and /dev/null differ diff --git a/modules/manage/pages/manage-logging/manage-logging.adoc b/modules/manage/pages/manage-logging/manage-logging.adoc index 4470afc29f..c8113c31bf 100644 --- a/modules/manage/pages/manage-logging/manage-logging.adoc +++ b/modules/manage/pages/manage-logging/manage-logging.adoc @@ -1,6 +1,7 @@ = Manage Logging -:description: pass:q[The _Logging_ facility allows a record to be maintained of important events that occur on Couchbase Server.] +:description: pass:q[The Logging facility allows a record to be maintained of important events that occur on Couchbase Server.] :page-aliases: clustersetup:logging,security:security-access-logs,clustersetup:ui-logs +:page-toclevels: 3 [abstract] {description} @@ -8,489 +9,651 @@ [#logging_overview] == Logging Overview -The Couchbase-Server _Logging_ facility records important events, and saves the details to log files, on disk. -Additionally, events of cluster-wide significance are displayed on the *Logs* screen, in Couchbase Web Console. 
-This may appear as follows: +The Couchbase Server records important events and saves the details to log files on disk. +You can directly view the log files on each node in the cluster. +Each operating system has its own default location for log files: + +[#log-file-locations] +[cols="1,1"] +|=== +| Operating System | Log Path + +| Linux +| `/opt/couchbase/var/lib/couchbase/logs` + +| MacOS +| `/Users//Library/Application Support/Couchbase/var/lib/couchbase/logs` + +| Windows +| `C:\Program Files\Couchbase\Server\var\lib\couchbase\logs` +(Assuming you installed Couchbase Server in the default location.) +|=== + +You can also view a summary of log events in Couchbase Server Web Console by clicking *Logs*. [#welcome] image::manage-logging/loggingScreenBasic.png[,720,align=left] -By default, on Linux systems, log files are saved to `/opt/couchbase/var/lib/couchbase/logs`; on MacOS, to `/Users/username/Library/Application Support/Couchbase/var/lib/couchbase/logs`; and on Windows, to `C:\Program Files\Couchbase\Server\var\lib\couchbase\logs`. -[#collecting_information] -== Collecting Information +[#log-file-listing] +=== Log Files -On each node within a Couchbase Server-cluster, logging is performed continuously. -_A subset_ of the results can be reviewed in the Couchbase Web Console *Logs* screen; while _all_ details are saved to the `logs` directory, as described above. -(Note that the `logs` directory may include `audit.log`. -This is a special log file, used to manage cluster-security, and is handled separately from the other log files. -The information provided throughout the remainder of this page — on collecting, uploading, redacting, and more — _does not_ apply to `audit.log`. -For information on `audit.log`, see xref:learn:security/auditing.adoc[Auditing].) +Each node in the cluster saves several different log files to the log directory. +The following table lists these files. +Unless otherwise specified, each file has a `.log` extension. -Additionally, _explicit logging_ can be performed by the user. -This allows comprehensive and fully updated information to be generated as required. -The output includes everything currently on disk, together with additional data that is gathered in real time. -Explicit logging can either be performed for all nodes in the cluster, or for one or more individual nodes. -The results are saved as zip files: each zip file contains the log-data generated for an individual node. +[cols="7,10"] +|=== +| File | Log Contents -Explicit logging can be performed by means of the Couchbase CLI utility `cbcollect_info`. -The documentation for this utility, provided -xref:cli:cbcollect-info-tool.adoc[here], includes a complete list of the log files that can be created, and a description of the contents of each. +| `audit` +| Security audit log for administrators. -Administrators with either the *Full Admin* or *Cluster Admin* role can perform explicit logging by means of Couchbase Web Console: on the *Logs* page, left-click on the [.ui]*Collect Information* tab, located near the top. -(Note that for administrators without either of these roles, this tab does not appear.) +| `analytics_access` +| Information about access attempts made to the REST/HTTP port of the Analytics Service. -[#collect_info] -image::manage-logging/collectInfo.png[,248,align=left] +| `analytics_cbas_debug` +| Debugging information about the Analytics Service. 
-This brings up the *Collect Information* screen: +| `analytics_dcpdebug` +| DCP-specific debugging information about the Analytics Service. -[#collect_info_screen] -image::manage-logging/collectInformationScreen.png[,720,align=left] +| `analytics_dcp_failed_ingestion` +| Information about documents that failed to be imported/ingested from the Data Service into the Analytics Service. -This allows logs and diagnostic information to be collected either from all or from selected nodes within the cluster. -It also allows, in the *Redact Logs* panel, a log redaction-level to be specified (this is described in -xref:manage:manage-logging/manage-logging.adoc#applying_redaction[Applying Redaction], below). -The *Specify custom temp directory* checkbox can be checked to specify the absolute pathname of a directory into which data is temporarily saved, during the collection process. -The *Specify custom destination directory* can be checked to specify the absolute pathname of a directory into which the completed zip files are saved. +| `analytics_debug` +| Events logged by the Analytics Service at the `debug` logging level. -The *Upload to Couchbase* checkbox is described in -xref:manage:manage-logging/manage-logging.adoc#uploading_log_files[Uploading Log Files], below. +| `analytics_error` +| Events logged by the Analytics Service at the `error` logging level. -To start the collection-process, left-click on the [.ui]*Start Collecting* button. -A notification is displayed, indicating that the collection-process is running. -A button is provided to allow the collection-process to be stopped, if this should be appropriate. -Whenever the collection-process completes for one of the nodes, a notification is displayed, and the collection-process continues, if necessary, for remaining nodes. -When the process has completed for all nodes, information is displayed as follows: +| `analytics_info` +| Events logged by the Analytics Service at the `info` logging level. -[#collect_info_complete] -image::manage-logging/collectInformationComplete.png[,720,align=left] +| `analytics_opt` +| Logs optimization information for the Analytics Service. -As this indicates, a set of log files has been created for each node in the cluster. -Each file is saved as a zip file in the stated temporary location. +| `analytics_periodic_dump` +| Periodic dumps of internal Java state for the Analytics Service. -[#uploading_log_files] -== Uploading Log Files +| `analytics_shutdown` +| Logs Analytics Service shutdown events. -Log files can be uploaded to Couchbase, for inspection by Couchbase Support. +| `analytics_trace.json` +| Highly-detailed events logged by the Analytics Service at the `trace` logging level. -For information on performing upload at the command-prompt, see xref:cli:cbcollect-info-tool.adoc[cbcollect_info]. -To upload by means of Couchbase Web Console, before starting the collection-process, check the [.ui]*Upload to Couchbase* checkbox: +| `analytics_warn` +| Events logged by the Analytics Service at the `warn` logging level. -[#upload_to_couchbase_checkbox] -image::manage-logging/uploadToCouchbaseCheckbox.png[,150,align=left] +| `babysitter` +| Troubleshooting log for the babysitter process that spawns and respawns all Couchbase Server processes. -The display changes to the following: +| `backup_service` +a| Log for Backup Service events, including `debug`, `info`, `warn`, and `error` levels. 
-[#upload_to_couchbase_dialog_basic] -image::manage-logging/uploadToCouchbaseDialogBasic.png[,520,align=left] +// Note: commenting out for now. +// | `cont_backup` +// | Logs events from the continuous backup service -The dialog now features an *Upload to Host* field, which contains the server-location to which the customer-data is uploaded. -Fields are also provided for *Customer Name* (required) and *Ticket Number* (optional). -The *Upload Proxy* field optionally takes the hostname of a remote system, which contains the directory specified by the pathname. -If the *Bypass Reachability Checks* checkbox is left unchecked (which is the default), an attempt is made to gather and upload the collected information without the upload specifications (that is, the upload host, customer name, and optionally, upload proxy) being pre-verified. -Otherwise, if the checkbox _is_ checked, the upload specifications are submitted for verification _before_ information is collected and attemptedly uploaded: in which case, if the upload specifications cannot be verified, the collection-operation does not proceed, and an error is flagged on the console. +| `couchdb` +| Troubleshooting log for the `couchdb` subsystem, which underlies map-reduce. -When all required information has been entered, to start information-collection, left-click on the *Start Collecting* button. -When collection and upload have been successfully completed, the URL of the uploaded zip file is displayed. +| `debug` +| Debug-level troubleshooting for the Cluster Manager. -[#receiving-upload-receipts-from-couchbase-customer-support] -=== Receiving Upload Receipts from Couchbase Customer Support -[.status]#Couchbase Server Enterprise# +| `error` +| Error-level troubleshooting log for the Cluster Manager. +| `eventing` +| Troubleshooting log for the Eventing Service. -Couchbase Customer Support offers the facility to send you an automatic notification on receipt of any log file uploaded for a support case. -This function is offered on an opt-in basis. -Contact your account manager or Couchbase Support for more information. +| `fts` +| Troubleshooting log for the Search Service. -When a Couchbase Support Engineer requests logs from a customer for which this feature is enabled, -the request will include a unique upload URL and UUID for the individual support case: +| `goxdcr` +| Troubleshooting log for XDCR source activity. -image::manage-logging/supportResponse.png[,500,align=left] +| `http_access` +| The HTTP access log records server requests (including administrator logins) to the REST API or Couchbase Web Console. +It uses common log format and contains important fields such as remote client IP, timestamp, GET/POST request and resource requested, HTTP status code, and more. -You can then use `CURL` to upload the log file using the provided URL and UUID: +| `http_access_internal` +| The internal HTTP access log records internal server requests (including administrator logins) to the REST API or Couchbase Web Console. +It uses common log format and contains important fields such as remote client IP, timestamp, GET/POST request and resource requested, HTTP status code, and more. -[source,shell] ----- -curl --upload-file [filename] https://uploads.couchbase.com/bigstuff-fle11fdb-4b1c-48e4-88fe-7fe2fb0f2019/ ----- +| `indexer` +| Troubleshooting log for the Index Service. -IMPORTANT: Remember to include the final forward slash (`/`) character at the end of the command. 
+| `indexer_stats` +| Log containing statistics related to the Index Service. -You can also send the file using the Couchbase Server web console: +| `info` +| Info-level troubleshooting log for the Cluster Manager. +Clients can also log informational messages to this file using the xref:rest-api:rest-client-logs.adoc[client-side error logging API]. -image::manage-logging/logUploadForAlert.png[, 500,align=left] +| `json_rpc` +| Log used by the Cluster Manager. -After you have uploaded the log files, -you will receive an acknowledgement attached to your support ticket: +| `mapreduce_errors` +| Contains JavaScript and other view-processing errors. -image::manage-logging/uploadAcknowledgement.png[,500,align=left] +| `memcached` +| Contains information about the core Memcached component, including DCP stream requests and slow operations. +You can adjust the logging for slow operations. +See <> for details. +| `metakv` +| Troubleshooting log for the `metakv` store, a cluster-wide metadata store. +| `ns_couchdb` +| Contains information about starting up the `couchdb` subsystem. -[#getting-a-cluster-summary] -== Getting a Cluster Summary +| `projector` +| Troubleshooting log for the projector process, which sends appropriate mutations from Data Service nodes to Index Service nodes. -A summary of the cluster's status can be acquired by means of the link at the lower right of the *Collect Information* panel: +| `projector_stats` +| Log containing statistics related to the projector process. -image::manage-logging/getClusterSummaryLink.png[,260,align=left] +| `prometheus` +| Log for the instance of https://prometheus.io[Prometheus^] that runs on the current node, supporting the gathering and management of Couchbase Server metrics. +See the xref:metrics-reference:metrics-reference.adoc[Metrics Reference] for more information about metrics. -This brings up the *Cluster Summary Info* dialog: +| `query` +| Troubleshooting log for the Query Service. -image::manage-logging/clusterSummaryInfoDialog.png[,420,align=left] +| `rebalance` +| A directory that contains reports about rebalances that have occurred. +This directory retains reports on up to the last 5 rebalances. +Each report's filename contains the date and time it ran. +For example, `rebalance_report_20251113T211150.json`. +See xref:rebalance-reference:rebalance-reference.adoc[Rebalance Reference] for detailed information about rebalances. -This displayed JSON document, which contains detailed status on the current configuration and status of the entire cluster, can be copied to the clipboard, by left-clicking on the *Copy to Clipboard* button, at the lower left. -This information can then be manually shared with Couchbase Support; either in addition to, or as an alternative to log-collection. +| `reports` +| Contains events and crash reports for the Erlang processes. +Erlang processes crash and restart upon an error. -[#understanding_redaction] -== Understanding Redaction +| `ssl_proxy` +| Troubleshooting log for the SSL proxy spawned by the Cluster Manager. -Optionally, log files can be _redacted_. -This means that user-data, considered to be private, is removed. -Such data includes: +| `stats` +| Contains periodic statistic dumps from the Cluster Manager. -* Key/value pairs in JSON documents -* Usernames -* Query-fields that reference key/value pairs and/or usernames -* Names and email addresses retrieved during product registration -* Extended attributes +| `trace` +| Highly-detailed troubleshooting log for the Cluster Manager. 
-This redaction of user-data is referred to as _partial_ redaction. -(_Full_ redaction, which will be available in a forthcoming version of Couchbase Server, additionally redacts _meta-data_.) +| `views` +| Troubleshooting log for the views engine, mainly logging the changing of partition states. -In each modified log file, hashed text (achieved with SHA1) is substituted for redacted text. -For example, the following log file fragment displays private data — a Couchbase username: +| `xdcr_target` +| Troubleshooting log for data received from XDCR sources. -[source,bash] ----- -0ms [I0] {2506} [INFO] (instance - L:421) Effective connection string: -couchbase://127.0.0.1?username=Administrator&console_log_level=5&;. -Bucket=default ----- +|=== -The redacted version of the log file might appear as follows: +NOTE: Additional log files may exist in the log directory. +These files are often empty unless you have enabled specific debugging options. +You usually only enable these settings at the request of Couchbase Support. +Some logs in preceding table do not appear in the log directory of a node where you have not enabled the associated feature or service associated with it. -[source,bash] + +[#changing-log-file-locations] +=== Changing Log File Locations + +It's possible to change the location where Couchbase Server saves log files. +However, Couchbase only supports the default log location. +Changing the log location requires manually editing a configuration file named `static_config`. +This file only controls the log location on the node where you make the change. +Therefore, to make the change in your entire cluster, you edit `static_config` on every node. +If you add new nodes to the cluster, you must remember to make the changes to their copies of the `static_config` file. + +The Couchbase Server upgrade process can overwrite the `static_config` file, losing any of your modifications. +If you change the log location, you may need to reapply your changes after an upgrade. + +If your goal is to store logs on a different filesystem, consider symbolically linking the log directory to a directory in the target filesystem instead. +Another option is to directly mount the filesystem under the default log location. +In either case, make sure the user who owns the Couchbase Server files has read and write permissions to the target filesystem. +These options are more durable than changing the log location in `static_config`. + +If neither of these options meet your needs, you can change the log location by following these steps: + +. Log into a node as `root` or the user who owns the Couchbase Server files. +You can also use `sudo` to gain the necessary permissions. +. Edit the `static_config` with your preferred text editor. +This file is located in the `etc/couchbase` directory under the Couchbase installation directory. +For example: `/opt/couchbase/etc/couchbase/static_config` on Linux systems. +. Locate the `error_logger_mf_dir` variable, and change the value to the path where you want Couchbase Server to save log files. +For example, to change the log path to `/var/logs/couchbase`, modify the entry as follows: + ++ +[source,erlang] ---- -0ms [I0] {2506} [INFO] (instance - L:421) Effective connection string: -e07a9ca6d84189c1d91dfefacb832a6491431e95. -Bucket=e16d86f91f9fd0b110be28ad00e348664b435e9e +{error_logger_mf_dir, "/var/logs/couchbase"}. ---- -Note that redaction may eliminate some parameters containing non-private data, as well as all parameters containing private. +. 
+. Stop and restart Couchbase Server.
+See xref:install:startup-shutdown.adoc[] for the steps to restart Couchbase Server.
+. Repeat steps 1 through 4 on each node where you want to change the log path.
+
+[#log-file-rotation]
+=== Log File Rotation
+
+Couchbase Server rotates log files to prevent them from consuming too much disk space.
+It keeps a limited number of past log files in compressed format for reference.
+Once it reaches the limit on the number of log files to keep, Couchbase Server deletes the oldest log file.
+
+By default, Couchbase Server rotates the `memcached` log file when it reaches 10{nbsp}MB in size.
+In addition to the current uncompressed log file, it keeps 19 past logs.
+
+Couchbase Server rotates other log files automatically when they reach 40{nbsp}MB.
+It keeps the current version of the log, plus up to 9 compressed past logs.
+
+==== Changing Log Rotation Settings
+
+You can change log rotation settings by editing the `static_config` configuration file.
+This file only controls the log rotation settings on the node where you make the change.
+It does not propagate to other nodes in the cluster.
+
+NOTE: Couchbase Server upgrades can overwrite `static_config`, losing any of your modifications.
+
+To change log rotation settings, follow these steps:
+
+. Log into a node as `root` or the user who owns the Couchbase Server files.
+You can also use `sudo` to gain the necessary permissions.
+
+. Edit the `static_config` file with your preferred text editor.
+This file is in the `etc/couchbase` subdirectory of the Couchbase Server installation directory.
+For example: `/opt/couchbase/etc/couchbase/static_config` on Linux systems.
+
+. Add or edit an entry for the log file whose rotation settings you want to change.
+The format for the entry is:
++
+[source,erlang]
+----
+{disk_sink_opts_disk_<log_name>,
+  [{rotation, [{size, <size>},
+               {num_files, <num_files>}]}]}.
+----
+
++
+Replace the following placeholders with the values for the log you want to rotate:
++
+--
+* `<log_name>`: The name of the log file without the `.log` extension, as shown in the table in <<log-file-listing>>.
+For example, use `debug` for the `debug.log` file.
+* `<size>`: The size in bytes at which to rotate the log file.
+* `<num_files>`: The number of log files to keep, including the current log file.
+--
+
+For example, to change the rotation settings for the `debug.log` file to rotate at 10{nbsp}MB and keep 10 copies of the log, add the following entry:
+
++
+[source,erlang]
+----
+{disk_sink_opts_disk_debug,
+  [{rotation, [{size, 10485760},
+               {num_files, 10}]}]}.
+----
+
+. Stop and restart Couchbase Server.
+See xref:install:startup-shutdown.adoc[] for the steps to restart Couchbase Server.
+
+. Repeat steps 1 through 4 on each node where you want to change the log rotation settings.
+
+[#changing-log-file-levels]
+=== Logging Levels
+
+Logging levels control the level of detail that Couchbase Server writes to its log files.
+The higher the logging level, the more detailed the log entries.
+However, higher logging levels also consume more disk space.
+You can change logging levels for some Couchbase Server components and services.
+
+==== Changing Logging Levels Through REST APIs
+
+Some components and services in Couchbase Server let you adjust logging levels and other log settings using REST APIs.
+The following table lists these services and links to the relevant REST API settings:
+
+|===
+| Service or Component | REST API settings link | Description
+
+| Full-Text Search Service
+| xref:fts:fts-advanced-settings-enableVerboseLogging.adoc[`enableVerboseLogging`]
+| Enables or disables verbose logging for the Full-Text Search Service.
+
+| Full-Text Search Service
+| xref:fts:fts-advanced-settings-slowQueryLogTimeout.adoc[`setSlowQueryLogTimeout`]
+| Enables the Search Service to log when a query exceeds a time threshold.
+
+| Index Service
+| xref:index-rest-settings:index.adoc#:~:text=Indexer%20logging%20level.[`indexer.settings.log_level`]
+| Sets the logging level for the Index Service.
-| Mac OS X -| [.path]_/Users//Library/Application Support/Couchbase/var/lib/couchbase/logs_ -|=== +| Index Service +| xref:index-rest-settings:index.adoc:~:text=Statistics%20log%20dump%20interval.[`indexer.settings.statsLogDumpInterval`] +| Sets how often the Index Service writes its statistics to the `indexer-stats.log` file. -[#log-file-listing] -== Log File Listing +| Index Service +| xref:index-rest-settings:index.adoc:~:text=Projector%20logging%20level.[`projector.settings.log_level`] +| Sets the logging level for the Index Service's projector component that provides data to the Index Service. -The following table lists the log files to be found on Couchbase Server. -Unless otherwise specified, each file is named with the `.log` extension. +| Query Service +| xref:n1ql:n1ql-manage/query-settings.adoc#loglevel[`loglevel`] +| Sets the logging level for the Query Service. -[cols="7,10"] |=== -| File | Log Contents -| `audit` -| Security audit log for administrators. +NOTE: Couchbase Server auditing has its own log settings. +See xref:https:manage:manage-security/manage-auditing.adoc#manage-audit-logs-and-events[Manage Audit Logs and Events] for more information. -| `babysitter` -| Troubleshooting log for the babysitter process which is responsible for spawning all Couchbase Server processes and respawning them where necessary. +[#persistent-changes] +==== Change Logging Levels via a Configuration File -| `backup_service` -| Log for Backup Service; containing entries at `debug`, `info`, `warn`, and `error` levels. +It's possible to change the logging level of Couchbase Server components that do not have REST API settings for logging levels. +However, you should only do this under the direction of Couchbase Support. -| `couchdb` -| Troubleshooting log for the `couchdb` subsystem which underlies map-reduce. +The logging level for Couchbase Server's internal components is set in the `static_config` configuration file. +This file contains a series of `loglevel_` entries that set the log level of internal Couchbase Server components. +This file only controls the log levels on the node where you make the change. +If you need to make the change on multiple nodes, you must edit the file on each node. -| `debug` -| Debug-level troubleshooting for the Cluster Manager. +NOTE: Couchbase Server upgrades can overwrite `static_config`, losing any modifications you have made. -| `error` -| Error-level troubleshooting log for the Cluster Manager. +To change the logging level for a component, follow these steps: -| `eventing` -| Troubleshooting log for the Eventing Service. +. Log in as `root` or `sudo` to a user that has write permissions for the Couchbase Server installation directories. +. Edit the `static_config` file with your preferred text editor. +This file is in the `etc/couchbase` subdirectory of the Couchbase Server installation directory. +For example: `/opt/couchbase/etc/couchbase/static_config` on Linux systems. +. Edit the `loglevel_` entry to set the logging level requested by Couchbase Support. +. Save the file and exit the text editor. +. Stop and restart Couchbase Server. +See xref:install:startup-shutdown.adoc[]. +. Repeat steps 1 through 5 on each node where you need to change the logging level. -| `fts` -| Troubleshooting log for the Search Service. -| `goxdcr` -| Troubleshooting log for XDCR source activity. 
+[#collecting_information] +== Collect Logs -| `http_access` -| The admin access log records server requests (including administrator logins) to the REST API or Couchbase Web Console. -It is output in common log format and contains several important fields such as remote client IP, timestamp, GET/POST request and resource requested, HTTP status code, and so on. +You can collect logs to share with Couchbase Support using the command line, REST-API, or the Couchbase Server Web Console. +This process gathers the most up-to-date log information from the nodes in the cluster. +It includes additional diagnostic information not included in the log files stored in the logs directory. +Couchbase Server saves the collected log files as zip files in a temporary location on each node. -| `http_access_internal` -| The admin access log records internal server requests (including administrator logins) to the REST API or Couchbase Web Console. -It is output in common log format and contains several important fields such as remote client IP, timestamp, GET/POST request and resource requested, HTTP status code, and so on. +[#collecting-logs-using-cli] +=== Collecting Logs Using the Command Line -| `indexer` -| Troubleshooting log for the Index Service. +To collect logs, use the xref:cli:cbcollect-info-tool.adoc[`cbcollect_info`] command. -| `indexer_stats` -| Log containing statistics related to the Index Service. +To start and stop log collection and to collect log status, use: -| `info` -| Info-level troubleshooting log for the Cluster Manager. +* xref:cli:cbcli/couchbase-cli-collect-logs-start.adoc[collect-logs-start] +* xref:cli:cbcli/couchbase-cli-collect-logs-stop.adoc[collect-logs-stop] +* xref:cli:cbcli/couchbase-cli-collect-logs-status.adoc[collect-logs-status] -| `json_rpc` -| Log used by the cluster manager. +[#collecting-logs-using-rest] +=== Collecting Logs Using the REST API -| `mapreduce_errors` -| JavaScript and other view-processing errors are reported in this file. +The Logging REST API provides endpoints for retrieving log and diagnostic information. -| `memcached` -| Contains information relating to the core memcached component, including DCP stream requests and slow operations. + -It is possible to adjust the logging for slow operations. -See <> for details. +To retrieve log information, use the `/diag` and `/sasl_logs` +xref:rest-api:logs-rest-api.adoc[REST endpoints]. -| `metakv` -| Troubleshooting log for the `metakv` store, a cluster-wide metadata store. +=== Collecting Logs Using Couchbase Web Console -| `ns_couchdb` -| Contains information related to starting up the `couchdb` subsystem. +Administrators with either the *Full Admin* or *Cluster Admin* roles can collect logs using Couchbase Server Web Console. +On the *Logs* page, click the *Collect Information* tab. +Administrators without either of these roles do not see this tab. -| `projector` -| Troubleshooting log for the projector process which is responsible for sending appropriate mutations from Data Service nodes to Index Service nodes. +[#collect_info] +The *Collect Information* page lets you choose which nodes should perform explicit logging. -| `prometheus` -| Log for the instance of https://prometheus.io[Prometheus^] that runs on the current node, supporting the gathering and management of Couchbase-Server _metrics_ . -(See the xref:metrics-reference:metrics-reference.adoc[Metrics Reference], for more information.) 
+[#collect_info_screen] +image::manage-logging/collectInformationScreen.png[,720,align=left] -| `query` -| Troubleshooting log for the Query Service. +If there is an existing collection of log files, you can view them by clicking *Show Current Collection*. +The file names of the existing log file collection contains the date and time they were created and whether the logs are redacted. +See <> for information about redaction. -| `rebalance` -| Contains reports on rebalances that have occurred. -Up to the last _five_ reports are maintained. -Each report is named in accordance with the time it was run: for example, `rebalance_report_2020-03-17T11:10:17Z.json`. -See the xref:rebalance-reference:rebalance-reference.adoc[Rebalance Reference], for detailed information. +To collect logs: -| `reports` -| Contains progress and crash reports for the Erlang processes. -Due to the nature of Erlang, processes crash and restart upon an error. +. Select the nodes from which you want to collect logs. -| `ssl_proxy` -| Troubleshooting log for the ssl proxy spawned by the Cluster Manager. +. Decide whether you want to redact the logs. +Redacting hides sensitive information within the logs. +See <> for more information. -| `stats` -| Contains periodic statistic dumps from the Cluster Manager. +. Choose whether you want to use a custom temporary directory for Couchbase Server to use when assembling the log collection. +You may want to set a custom directory to prevent the collection from using too much disk space on the default temporary directory. +If you enable this option, enter an absolute path for the temporary directory. +The temporary directory must exist on all nodes from which you're collecting logs. -| `views` -| Troubleshooting log for the views engine, predominantly focusing on the changing of partition states. +. Decide whether you want to use a custom destination directory for the completed zip files. +As with the temporary directory, you may want to set a custom directory to prevent the collection from using too much disk space on the default destination directory. +The destination directory must exist on all nodes from which you're collecting logs. -| `xdcr_target` -| Troubleshooting log for data received from XDCR sources. +. Choose whether you want to encrypt the collected log files. +This option is selected by default. +When you enable this option, Couchbase Server encrypts the collected log files using AES encryption. +You must supply and confirm a password to encrypt the files. +If you do not want to encrypt the zip files, clear *Encrypt Collected Files*. -| `analytics_access` -| Information on access attempts made to the REST/HTTP port of the Analytics Service. +. Choose whether you want to upload the collected logs to Couchbase Support. +See <> for more information about uploading log files to Couchbase Support. -| `analytics_cbas_debug` -| Debugging information, related to the Analytics Service. +. Click *Start Collecting* to start the collection process. -| `analytics_dcpdebug` -| DCP-specific debugging information related to the Analytics Service. +Once you start the collection process, the page clears and shows you a status message letting you know that the collection process is running. +A button allows you to stop the collection process. +When the collection process completes for a node, the page updates and shows the path for the node's zip file. 
+When the process completes for all nodes, the page displays a message similar to the following:

-| `analytics_dcp_failed_ingestion`
-| Information on documents that have failed to be imported/ingested from the Data Service into the Analytics Service.
+[#collect_info_complete]
+image::manage-logging/collectInformationComplete.png[]

-| `analytics_debug`
-| Events logged by the Analytics Service at the DEBUG logging level.
+After the process completes, the log files are available in the specified destination directory on each node.
+If you enabled redaction, Couchbase Server creates 2 zip files for each node.
+One contains the redacted data, and the other contains unredacted data.

-| `analytics_error`
-| Events logged by the Analytics Service at the ERROR logging level.
+NOTE: Couchbase Server adds the prefix `ns_server.` to the log files from the log directory.
+The additional logs it gathers do not have this prefix.

-| `analytics_info`
-| Events logged by the Analytics Service at the INFO logging level.
+[#getting-a-cluster-summary]
+=== Getting a Cluster Summary

-| `analytics_shutdown`
-| Information concerning the shutting down of the Analytics Service.
+On the *Collect Logs & Diagnostic Information* tab, you can get a summary of the cluster's status by clicking *Get Cluster Summary*.
+Clicking this link opens the *Cluster Summary Info* dialog:

-| `analytics_warn`
-| Events logged by the Analytics Service at the WARN logging level.
+image::manage-logging/clusterSummaryInfoDialog.png[,420,align=left]

-|===
+The JSON document in the dialog contains a detailed status report of your cluster's configuration and status.
+You can copy this information by clicking *Copy to Clipboard*.
+You can then manually share it with Couchbase Support, either in addition to or as an alternative to log collection.

-[#log-file-rotation]
-== Log File Rotation
-The `memcached` log file is rotated when it has reached 10MB in size; twenty rotations being maintained — the current file, plus nineteen compressed rotations.
-Other logs are automatically rotated after they have reached 40MB in size; ten rotations being maintained — the current file, plus nine compressed rotations.
+[#understanding_redaction]
+=== Log Redaction
+
+During the collection process, Couchbase Server scans the log files for private data.
+In some log files, it marks sensitive data with the tags `<ud>` and `</ud>`.
+When you select the *Partial Redaction* option on the *Collect Information* tab or use the `--log-redaction-level=partial` argument with the `cbcollect_info` command, Couchbase Server redacts the marked data in the collected log files.

-To provide custom rotation-settings for each component, add the following to the `static_config` file:
+Sensitive log data includes:
+
+* Key/value pairs in JSON documents
+* Usernames
+* Query fields that reference key/value pairs or usernames
+* Names and email addresses retrieved during product registration
+* Extended attributes
+
+When redacting, Couchbase Server replaces the content of the `<ud>` and `</ud>` tags with a SHA-1 hashed version of the data.
+
+For example, the following fragment from the `debug.log` displays 2 pieces of private data: a Couchbase username and a document ID:
+
+[source,log]
----
-{disk_sink_opts_disk_debug,
- [{rotation, [{size, 10485760},
- {num_files, 10}]}]}.
+[ns_server:debug,2025-11-26T18:47:35.841Z,ns_1@node1.example.com.:ns_audit<0.728.0>:ns_audit
+ :handle_call:178]Audit read_doc: [{local,{[{ip,<<"172.18.0.2">>},{port,8091}]}},
+ {remote,{[{ip,<<"192.168.65.1">>},{port,26569}]}},
+ {sessionid,<<"0591ad71ffeb715b208cf7547aa5c97b47a927bb">>},
+ {real_userid,{[{domain,builtin},
+ {user,<<"Administrator">>}]}},
+ {timestamp,<<"2025-11-26T18:47:35.841Z">>},
+ {doc_id,<<"hotel_16530">>},
+ {bucket_name,<<"travel-sample">>}]
----

-This rotates the `debug.log` at 10MB, and keeps ten copies of the log: the current log and nine compressed logs.
+Couchbase Server redacts the private data to the following:

-Log rotation settings can be changed.
-Note, however, that this is not advised; and that only the default log rotation settings are supported by Couchbase.
+[source,log]
+----
+[ns_server:debug,2025-11-26T18:47:35.841Z,ns_1@node1.example.com.:ns_audit<0.728.0>:ns_audit:handle_call:178]Audit read_doc: [{local,{[{ip,<<"172.18.0.2">>},{port,8091}]}},
+ {remote,{[{ip,<<"192.168.65.1">>},{port,26569}]}},
+ {sessionid,<<"0591ad71ffeb715b208cf7547aa5c97b47a927bb">>},
+ {real_userid,{[{domain,builtin},
+ {user,<<"74e98f5bafb73c078b6fde92f6d34497b2b87b54">>}]}},
+ {timestamp,<<"2025-11-26T18:47:35.841Z">>},
+ {doc_id,<<"845186cedd5008b3789f344b4fb8430b0015a307">>},
+ {bucket_name,<<"travel-sample">>}]
+----

-[#changing-log-file-locations]
-== Changing Log File Locations
+Redaction may eliminate some parameters containing non-private data as well as all parameters containing private data.

-The default log location on Linux systems is [.path]_/opt/couchbase/var/lib/couchbase/logs_.
-The location can be changed.
-Note, however, that this is not advised; and that only the default log location is supported by Couchbase.
+Some log files do not use the `<ud>` and `</ud>` tags to mark sensitive data.
+Couchbase Server determines what information is sensitive based on the log file's content.
+Couchbase Server redacts sensitive information from these files without relying on tags.

-To change the location, proceed as follows:
+For example, the `http_access.log` (stored in the zip with the filename `ns_server.http_access.log`) contains HTTP requests that include the username making a request as well as sensitive data in URL parameters.
+The following log file fragment displays private data: the Administrator's username and the username of a user account that the Administrator is creating:

-. Log in as `root` or `sudo` and navigate to the directory where Couchbase Server is installed.
-For example: `/opt/couchbase/etc/couchbase/static_config`.
-. Edit the [.path]_static_config_ file: change the `error_logger_mf_dir` variable, specifying a different directory.
-For example: `{error_logger_mf_dir, "/home/user/cb/opt/couchbase/var/lib/couchbase/logs"}`
-. Stop and restart Couchbase Server. See xref:install:startup-shutdown.adoc[Startup and Shutdown].
+[source,log]
+----
+172.18.0.2 - Administrator [26/Nov/2025:16:18:00 +0000] "PUT /settings/rbac/users/local/
+ query_manage_global_functions HTTP/1.1" 200 2 - "python-requests/2.31.0" 15
+----

-[#changing-log-file-levels]
-== Changing Log File Levels
+Couchbase Server redacts the usernames:

-The default logging level for all log files is _debug_, except for `couchdb`, which is set to _info_.
-Logging levels can be changed.
-Note, however, that this is not advised; and that only the default logging levels are supported by Couchbase.
+[source,log]
+----
+172.18.0.2 - 929456b78c13ccbb87832adfef1cb1b86cbfaa8a [26/Nov/2025:16:18:00 +0000] "PUT
+ /settings/rbac/users/local/8dc9486da4800ed710af4fcb86190e414593a583 HTTP/1.1" 200 2 -
+ "python-requests/2.31.0" 15
+----

-Either _persistent_ or _dynamic_ changes can be made to logging levels.
+Redacting log files may have these consequences:

-[#persistent-changes]
-=== Persistent Changes
+* Diagnosing logged issues becomes harder for both the user and Couchbase Support.
+* Collecting log data takes longer because Couchbase Server must redact the logs during collection.

-_Persistent_ means that changes continue to be implemented, should a Couchbase Server reboot occur.
-To make a persistent change on Linux systems, proceed as follows:
+[#applying_redaction]
+=== Applying Redaction

-. Log in as `root` or `sudo`, and navigate to the directory where you installed Couchbase.
-For example: `/opt/couchbase/etc/couchbase/static_config`.
-. Edit the [.path]_static_config_ file and change the desired log component.
-(Parameters with the `loglevel_` prefix establish logging levels.)
-. Stop and restart Couchbase Server. See xref:install:startup-shutdown.adoc[Startup and Shutdown].
+You can apply redaction to collected logs using either the `cbcollect_info` command line tool or the Couchbase Server Web Console.

-[#dynamic-changes]
-=== Dynamic Changes
+For information about redacting logs you collect using the command-line tool, see
+xref:cli:cbcollect-info-tool.adoc[].

-_Dynamic_ means that if a Couchbase Server reboot occurs, the changed logging levels revert to the default.
-To make a dynamic change, execute a [.cmd]`curl POST` command, using the following syntax:
+To redact logs when collecting information using the Couchbase Web Console, select *Partial Redaction* under the *Redact Logs* section.
+Selecting this option displays some additional information about redaction.

+When the information collection process completes, the tab shows you the location of the zip file containing the redacted log files.
+When you enable redaction, Couchbase Server creates 2 zip files for each node.
+One contains the redacted data, and the other contains unredacted data.
+If you chose to upload the logs to Couchbase Support, Couchbase Server uploads only the zip file containing the redacted log files.

-----
-curl -X POST -u adminName:adminPassword HOST:PORT/diag/eval \
- -d 'ale:set_loglevel(<log_component>,<logging_level>).'
----

+[#redacting-log-files-outside-the-cluster]
+=== Redact Log Files Created Outside the Cluster

-* `log_component`: The default log level (except `couchdb`) is `debug`; for example `ns_server`.
-The available loggers are `ns_server`, `couchdb`, `user`, `Menelaus`, `ns_doctor`, `stats`, `rebalance`, `cluster`, views, `mapreduce_errors` , xdcr and `error_logger`.
-* `logging_level`: The available log levels are `debug`, `info`, `warn`, and `error`.
-+
+Some Couchbase features, such as `cbbackupmgr`, the SDK, connectors, and Mobile, create log files saved outside the Couchbase cluster.
+You can redact these files using the `cblogredaction` command-line tool.
+You can supply multiple log files for it to redact in a single command line.
+Each log file must be in plain text.
+You can optionally have `cblogredaction` generate the salt it uses to create the SHA-1 hashes of sensitive information.
+
+For example:
+
+[source,bash]
----
-curl -X POST -u Administrator:password http://127.0.0.1:8091/diag/eval \
- -d 'ale:set_loglevel(ns_server,error).'
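+# The flags used in this example (as shown by the output below):
+# -g auto-generates the redaction salt, -o sets the output directory,
+# and -vv increases logging verbosity.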
+$ cblogredaction /Users/username/testlog.log -g -o /Users/username -vv
+2018/07/17T11:27:06 WARNING: Automatically generating salt. This will make it difficult to cross reference logs
+2018/07/17T11:27:07 DEBUG: /Users/username/testlog.log - Starting redaction file size is 19034284 bytes
+2018/07/17T11:27:07 DEBUG: /Users/username/testlog.log - Log redacted using salt: COeAtexHB69hGEf3
+2018/07/17T11:27:07 INFO: /Users/username/testlog.log - Finished redacting, 50373 lines processed, 740 tags redacted, 0 lines with unmatched tags
----

-[#collecting-logs-using-cli]
-== Collecting Logs Using the CLI
+For more information about this tool, see xref:cli:cbcli/cblogredaction.adoc[].

-To collect logs, use the CLI command
-xref:cli:cbcollect-info-tool.adoc[cbcollect_info].
+[#uploading_log_files]
+=== Uploading Log Files

-To start and stop log-collection, and to collect log-status, use:
+You can upload collected log files to Couchbase for inspection by Couchbase Support.
+To learn how to upload using the command line, see xref:cli:cbcollect-info-tool.adoc[].

-* xref:cli:cbcli/couchbase-cli-collect-logs-start.adoc[collect-logs-start]
-* xref:cli:cbcli/couchbase-cli-collect-logs-stop.adoc[collect-logs-stop]
-* xref:cli:cbcli/couchbase-cli-collect-logs-status.adoc[collect-logs-status]
+To upload using Couchbase Web Console, click *Upload to Couchbase* before starting the collection process.
+When you do, several new fields appear in the dialog:

-[#collecting-logs-using-rest]
-== Collecting Logs Using the REST API
+[#upload_to_couchbase_dialog_basic]
+image::manage-logging/uploadToCouchbaseDialogBasic.png[,520,align=left]

-The Logging REST API provides the endpoints for retrieving log and diagnostic information.
-To retrieve log information use the `/diag` and `/sasl_logs`
-xref:rest-api:logs-rest-api.adoc[REST endpoints].
+*Upload to Host* sets the host to which Couchbase Server uploads your log files.
+You must provide your *Customer Name* and optionally a *Ticket Number*.
+Use the optional *Upload Proxy* field to set the proxy host and port number if your cluster cannot directly connect to the upload host.
+Its format is the same as the `curl` command's https://curl.se/docs/manpage.html#-x[`-x` option^].
+You usually supply a URL such as `http://proxy.example.com:8080`, although you can provide just a hostname or IP address with a port number.
+
+If you select *Bypass Reachability Checks*, Couchbase Server does not verify that it can connect to the upload host and, if provided, the proxy before collecting and uploading logs.
+This option is cleared by default, so Couchbase Server verifies that it can reach the upload destination before collecting logs.
+
+After entering all required information, click *Start Collecting* to start information collection.
+If the collection and upload succeed, the page displays the URL of the uploaded zip file.
+
+[#receiving-upload-receipts-from-couchbase-customer-support]
+=== Receiving Upload Receipts from Couchbase Customer Support
+[.status]#Couchbase Server Enterprise#
+
+Couchbase Customer Support can send you an automatic notification when they receive any log file uploaded for a support case.
+This function is opt-in.
+Contact your account manager or Couchbase Support for more information.
+ +When a Couchbase Support Engineer requests logs from a customer with this feature enabled, +the request includes a unique upload URL and UUID for the support case: + +image::manage-logging/supportResponse.png[,500,align=left] + +You can use `curl` to upload the log file using the provided URL and UUID: + +[source,shell] +---- +curl --upload-file [filename] https://uploads.couchbase.com/bigstuff-fle11fdb-4b1c-48e4-88fe-7fe2fb0f2019/ +---- + +IMPORTANT: Include the final forward slash (`/`) character at the end of the command. + +You can also send the file using the Couchbase Server web console: + +image::manage-logging/logUploadForAlert.png[, 500,align=left] + +After you upload the log files, +you receive an acknowledgement attached to your support ticket: + +image::manage-logging/uploadAcknowledgement.png[,500,align=left] [#adjust-threshold-slow-op-logging] == Adjusting Threshold for Logging Slow Operations -It is possible to examine and/or alter the logging threshold for slow-running operations. -This is done using the `mcctl` command that comes packaged with the Couchbase server installation. -The command only gets or sets information for the node it's run on. +You can examine and alter the logging threshold for slow-running operations. +Use the `mcctl` command packaged with the Couchbase Server installation. +The command only gets or sets information for the node where you run it. === Getting Threshold Details -The current settings are retrieved by using the `mcctl` cli to execute the `get sla` command: + +Retrieve the current settings using the `mcctl` CLI to execute the `get sla` command: [IMPORTANT] ==== -These settings only apply to the nodes _where the changes are made._ +These settings only apply to the nodes where you make the changes. -You must implement the changes on each node to ensure they are applied across the cluster. +You must implement the changes on each node to apply them across the cluster. You must also configure the node to run the `data service`. ==== @@ -514,11 +677,11 @@ get sla } ---- -The JSON message returned gives details of the operation being logged and the threshold time that will cause a timing message to be logged. +The JSON message gives details about the operation being logged and the threshold time that causes a timing message to be logged. === Setting the Threshold -The `mcctl` command line interface is also used to set the thresholds for the memcahe operations: +Use the `mcctl` command line interface to set thresholds for Memcached operations: .Set logging threshold example [source, bash] @@ -527,11 +690,12 @@ The `mcctl` command line interface is also used to set the thresholds for the me set sla '{"version":1, "DELETE_BUCKET":{"slow":"100 ms"}}' ---- -In this example, the threshold for the `DELETE_BUCKET` operation is being set to 100ms. If a bucket deletion operation takes longer than this, then an message will be logged. +In this example, the threshold for the `DELETE_BUCKET` operation is set to 100{nbsp}ms. +If a bucket deletion operation takes longer than this, a message is logged. 
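+
+To confirm that the new threshold is active, you can re-run the `get sla` subcommand from the earlier example and check the opcode's `slow` value:
+
+[source, bash]
+----
+# Re-read the current SLA settings; the DELETE_BUCKET entry
+# should now report the 100 ms threshold set above.
+get sla
+----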
[TIP]
====
-As an added minor convenience, the time interval can also be specified without a space:
+You can also specify the time interval without a space:

[source, bash, subs=quotes]
----
@@ -540,7 +704,7 @@ set sla '{"version":1, "DELETE_BUCKET":{"slow":"#100ms#"}}'
----
====

-It is also possible to set the threshold for all the op-codes in a single command by using the `default` code:
+You can set the threshold for all op-codes in a single command using the `default` code:

.Set all thresholds to 100 ms.
[source, bash]
@@ -552,7 +716,7 @@ set sla '{"version":1, "default":{"slow":"100 ms"}}'
----

[sidebar]
.Time units in threshold settings
****
-A number of different time units can be used when setting the thresholds:
+You can use different time units when setting thresholds:

[horizontal]
*ns*:: nanoseconds
*us*:: microseconds
*ms*:: milliseconds
*s*:: seconds
*m*:: minutes
*h*:: hours

[source, bash]
----
set sla '{"version":1, "DELETE_BUCKET":{"slow":"1 m"}}'
----
****

-Setting the threshold values is non-persistent: when the node is restarted, the thresholds are reset to their default values.
+Setting threshold values is non-persistent.
+When the node restarts, thresholds reset to their default values.

=== Setting Threshold Defaults

-The default values are loaded from the file: `/opt/couchbase/etc/couchbase/kv/opcode-attributes.json` when the node is started.
+The default values load from the file `/opt/couchbase/etc/couchbase/kv/opcode-attributes.json` when the node starts.

[source,json]
----
@@ -602,8 +767,8 @@ The default values are loaded from the file: `/opt/couchbase/etc/couchbase/kv/op
}
----

-These values can be overriden by creating another file in the `/opt/couchbase/etc/couchbase/kv/opcode-attributes.d` directory.
-The easiest way to do this is to copy the existing settings file into the directory, making sure that there isn't an existing file in the directory:
+You can override these values by creating another file in the `/opt/couchbase/etc/couchbase/kv/opcode-attributes.d` directory.
+Copy the existing settings file into the directory, making sure there is no existing file in the directory:

[source, bash]
----
@@ -612,8 +777,7 @@
cd /opt/couchbase/etc/couchbase/kv/
cp opcode-attributes.json opcode-attributes.d
----

- Edit `/opt/couchbase/etc/couchbase/kv/opcode-attributes.d/opcode-attributes.json` with the new settings.
+Then edit `/opt/couchbase/etc/couchbase/kv/opcode-attributes.d/opcode-attributes.json` with the new settings.

-NOTE: These settings only apply to the node where the changes are made.
-To change the threshold across the cluster, then all the configurations must be applied to each node.
+NOTE: These settings only apply to the node where you make the changes.
+To change the threshold across the cluster, apply all configurations to each node.
+Removing a service instance reduces the cluster's capacity for that service.
+For certain services, such as the Index Service, removing the service may result in loss of replicas or entire indexes if no replicas exist,
+which can cause queries to fail.
+
+Removing all instances of a service deletes all data and metadata associated with that service, effectively removing the service from the cluster.
+For example, removing the last Index Service node deletes all indexes.
+For the Backup Service, physical backup repositories outside the cluster remain, but the Backup Service metadata about those repositories is deleted.
+====
+
== Prerequisites

Before modifying non-data services on nodes, make sure you have the following:
@@ -153,4 +165,4 @@ From the previous example, if `node3` was not running the index service, the fol
ERROR: Node ns_1@node3 does not provide the index service
----

-For more information about the `couchbase-cli rebalance` command, see xref:cli:cbcli/couchbase-cli-rebalance.adoc[rebalance].
\ No newline at end of file
+For more information about the `couchbase-cli rebalance` command, see xref:cli:cbcli/couchbase-cli-rebalance.adoc[rebalance].
diff --git a/modules/manage/pages/manage-security/configure-client-certificates.adoc b/modules/manage/pages/manage-security/configure-client-certificates.adoc
index 0d5d80114b..1966c16a0c 100644
--- a/modules/manage/pages/manage-security/configure-client-certificates.adoc
+++ b/modules/manage/pages/manage-security/configure-client-certificates.adoc
@@ -354,6 +354,18 @@ Note that in this example, although only the `Common Name` is being used to esta
For example, by adding `-ext "san=email:john.smith@mail.com"` to the certificate signing-request used in the current step, the email-address `john.smith@mail.com` could be established as the basis for an alternative username to be submitted for authentication.
See xref:learn:security/certificates.adoc#identity-encoding-in-client-certificates[Specifying Usernames for Client-Certificate Authentication], for more information.
+
+. Create an extensions file.
+(`extendedKeyUsage = clientAuth` means that the certificate will be used for client authentication):
++
+[source,bash]
+----
+cat > client.ext <<EOF
+extendedKeyUsage = clientAuth
+EOF
+----

curl -X POST http://<ip-address-or-domain-name>:8091/settings/passwordPolicy -u <administrator>:<password>
- -d minlength=<minimum-length>
+ -d minLength=<minimum-length>
-d enforceUppercase=[ true | false ]
-d enforceLowercase=[ true | false ]
-d enforceDigits=[ true | false ]
diff --git a/modules/rest-api/pages/rest-set-up-services-existing-nodes.adoc b/modules/rest-api/pages/rest-set-up-services-existing-nodes.adoc
index bbf5e342e2..abf8b51043 100644
--- a/modules/rest-api/pages/rest-set-up-services-existing-nodes.adoc
+++ b/modules/rest-api/pages/rest-set-up-services-existing-nodes.adoc
@@ -26,6 +26,18 @@ Then trigger a rebalance operation.
* For information about using the `POST /controller/rebalance` REST API to rebalance after node additions and removals, after a graceful failover and recovery, and after a hard failover, see xref:rest-api:rest-cluster-rebalance.adoc[Rebalancing the Cluster].
====

+[WARNING]
+====
+When you modify (add or remove) services on existing nodes in a cluster, rebalance is triggered immediately to apply the changes.
+Removing a service instance reduces the cluster's capacity for that service.
+For certain services, such as the Index Service, removing the service may result in loss of replicas or entire indexes if no replicas exist,
+which can cause queries to fail.
+
+Removing all instances of a service deletes all data and metadata associated with that service, effectively removing the service from the cluster.
+For example, removing the last Index Service node deletes all indexes.
+For the Backup Service, physical backup repositories outside the cluster remain, but the Backup Service metadata about those repositories is deleted.
+====
+
=== curl Syntax

You must specify all known nodes in the cluster and the necessary cluster services topology.
diff --git a/modules/rest-api/pages/rest-set-up-services.adoc b/modules/rest-api/pages/rest-set-up-services.adoc
index 96cafd4ed5..60855fd693 100644
--- a/modules/rest-api/pages/rest-set-up-services.adoc
+++ b/modules/rest-api/pages/rest-set-up-services.adoc
@@ -50,7 +50,15 @@ Note that during the process of provisioning a single-node cluster, `username` a
== Examples

The following example establishes data paths for the Data, Index, and Eventing Services.
-Commas in the list of service-names have been encoded.
+
+----
+curl -X POST -H "Content-Type: application/json" http://10.144.220.101:8091/node/controller/setupServices \
+-d '{"services":"kv,n1ql,index,eventing"}' \
+-u Administrator:password
+----
+
+Alternatively, you can URI-encode the parameters to ensure that transport layers do not malform the string.
+The example below applies URI encoding to the commas.

----
curl -X POST http://10.144.220.101:8091/node/controller/setupServices \
diff --git a/modules/rest-api/pages/rest-statistics-multiple.adoc b/modules/rest-api/pages/rest-statistics-multiple.adoc
index 3b6495f132..40cc4e785f 100644
--- a/modules/rest-api/pages/rest-statistics-multiple.adoc
+++ b/modules/rest-api/pages/rest-statistics-multiple.adoc
@@ -56,13 +56,13 @@ Each object takes the following form:
{
"label": <label_name>,
"value": <label_val>,
- "operator": "=" | "!=" | "=~" | "~="
+ "operator": "=" | "!=" | "=~" | "!~" | "any" | "not_any"
}
----

The value of the key `label`, `label_name`, must be a string that specifies how the metric is identified: for example, `name`, or `proc`.
The value of the key `value`, `label_val`, must be a string that is the actual name used to identify the metric: for example, `sys_cpu_utilization_rate`.
-The value of the key `"operator"` must be `=`, `!=`, `=~`, or `~=`.
+The value of the key `"operator"` must be `=` | `!=` | `=~` | `!~` | `any` | `not_any`.

* `applyFunctions.` Can be any of the functions described in the section xref:rest-api:rest-statistics-single.adoc#function[function], on the page xref:rest-api:rest-statistics-single.adoc[Getting a Single Statistic].
diff --git a/modules/sdk/pages/sdk-doctor.adoc b/modules/sdk/pages/sdk-doctor.adoc
index 1b88f9e8f0..d7282077d4 100644
--- a/modules/sdk/pages/sdk-doctor.adoc
+++ b/modules/sdk/pages/sdk-doctor.adoc
@@ -69,7 +69,7 @@ Run SDK doctor with your credentials, and the cluster and bucket:
$ ./sdk-doctor-macos diagnose couchbases://e9718149-af24-4bc4-b496-53149cdb7966.dp.cloud.couchbase.com/travel-sample -u username -p password
----

-You should see results like there (truncated here):
+You should see results like these (truncated here):

[source,console]
----
$ ./sdk-doctor-macos diagnose couchbases://e9718149-af24-4bc4-b496-53149cdb7966.dp.cloud.couchbase.com/travel-sample -u username -p 2KZZb3pap89£$$%\*
@@ -122,7 +122,7 @@ Note: Diagnostics can only provide accurate results when your cluster
Found multiple issues, see listing above.
----

-A full example can be found on our xref:3.5@java-sdk:howtos:troubleshooting-cloud-connections.adoc#validating-connectivity-with-sdk-doctor[Troubleshooting Cloud Connections] page.
+A full example can be found on the xref:java-sdk:howtos:troubleshooting-cloud-connections.adoc#validating-connectivity-with-sdk-doctor[Troubleshooting Cloud Connections] page; each SDK has an equivalent page.

== Limitations