feat: decommission feature for pools (#14012)

```
λ mc admin decommission start alias/ http://minio{1...2}/data{1...4}
```

```
λ mc admin decommission status alias/
┌─────┬─────────────────────────────────┬──────────────────────────────────┬────────┐
│ ID  │ Pools                           │ Capacity                         │ Status │
│ 1st │ http://minio{1...2}/data{1...4} │ 439 GiB (used) / 561 GiB (total) │ Active │
│ 2nd │ http://minio{3...4}/data{1...4} │ 329 GiB (used) / 421 GiB (total) │ Active │
└─────┴─────────────────────────────────┴──────────────────────────────────┴────────┘
```

```
λ mc admin decommission status alias/ http://minio{1...2}/data{1...4}
Progress: ===================> [1GiB/sec] [15%] [4TiB/50TiB]
Time Remaining: 4 hours (started 3 hours ago)
```

```
λ mc admin decommission status alias/ http://minio{1...2}/data{1...4}
ERROR: This pool is not scheduled for decommissioning currently.
```

```
λ mc admin decommission cancel alias/
┌─────┬─────────────────────────────────┬──────────────────────────────────┬──────────┐
│ ID  │ Pools                           │ Capacity                         │ Status   │
│ 1st │ http://minio{1...2}/data{1...4} │ 439 GiB (used) / 561 GiB (total) │ Draining │
└─────┴─────────────────────────────────┴──────────────────────────────────┴──────────┘
```

> NOTE: A canceled decommission will not make the pool active again, since the other
> pools may already hold partially duplicated content. To avoid this scenario, be
> very sure to start decommissioning only as a planned activity.

```
λ mc admin decommission cancel alias/ http://minio{1...2}/data{1...4}
┌─────┬─────────────────────────────────┬──────────────────────────────────┬────────────────────┐
│ ID  │ Pools                           │ Capacity                         │ Status             │
│ 1st │ http://minio{1...2}/data{1...4} │ 439 GiB (used) / 561 GiB (total) │ Draining(Canceled) │
└─────┴─────────────────────────────────┴──────────────────────────────────┴────────────────────┘
```
4 years ago
// Copyright (c) 2015-2021 MinIO, Inc.
//
// This file is part of MinIO Object Storage stack
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU Affero General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// This program is distributed in the hope that it will be useful
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Affero General Public License for more details.
//
// You should have received a copy of the GNU Affero General Public License
// along with this program. If not, see <http://www.gnu.org/licenses/>.

package cmd

import (
	"bytes"
	"context"
	"encoding/gob"
	"encoding/hex"
	"encoding/json"
	"errors"
	"fmt"
	"io"
	"net/http"
	"net/url"
	"strconv"
	"strings"
	"sync"
	"sync/atomic"
	"time"

	"github.com/dustin/go-humanize"
	"github.com/klauspost/compress/zstd"
	"github.com/minio/madmin-go/v3"
	"github.com/minio/madmin-go/v3/logger/log"
	"github.com/minio/minio/internal/bucket/bandwidth"
	"github.com/minio/minio/internal/event"
	"github.com/minio/minio/internal/grid"
	xioutil "github.com/minio/minio/internal/ioutil"
	"github.com/minio/minio/internal/logger"
	"github.com/minio/minio/internal/pubsub"
	"github.com/minio/mux"
)
// To abstract a node over network.
type peerRESTServer struct{}

var (
	// Types & Wrappers
	aoBucketInfo          = grid.NewArrayOf[*BucketInfo](func() *BucketInfo { return &BucketInfo{} })
	aoMetricsGroup        = grid.NewArrayOf[*MetricV2](func() *MetricV2 { return &MetricV2{} })
	madminBgHealState     = grid.NewJSONPool[madmin.BgHealState]()
	madminHealResultItem  = grid.NewJSONPool[madmin.HealResultItem]()
	madminCPUs            = grid.NewJSONPool[madmin.CPUs]()
	madminMemInfo         = grid.NewJSONPool[madmin.MemInfo]()
	madminNetInfo         = grid.NewJSONPool[madmin.NetInfo]()
	madminOSInfo          = grid.NewJSONPool[madmin.OSInfo]()
	madminPartitions      = grid.NewJSONPool[madmin.Partitions]()
	madminProcInfo        = grid.NewJSONPool[madmin.ProcInfo]()
	madminRealtimeMetrics = grid.NewJSONPool[madmin.RealtimeMetrics]()
	madminServerProperties = grid.NewJSONPool[madmin.ServerProperties]()
	madminStorageInfo     = grid.NewJSONPool[madmin.StorageInfo]()
	madminSysConfig       = grid.NewJSONPool[madmin.SysConfig]()
	madminSysErrors       = grid.NewJSONPool[madmin.SysErrors]()
	madminSysServices     = grid.NewJSONPool[madmin.SysServices]()

	// Request -> Response RPC calls
	deleteBucketMetadataRPC        = grid.NewSingleHandler[*grid.MSS, grid.NoPayload](grid.HandlerDeleteBucketMetadata, grid.NewMSS, grid.NewNoPayload).IgnoreNilConn()
	deleteBucketRPC                = grid.NewSingleHandler[*grid.MSS, grid.NoPayload](grid.HandlerDeleteBucket, grid.NewMSS, grid.NewNoPayload)
	deletePolicyRPC                = grid.NewSingleHandler[*grid.MSS, grid.NoPayload](grid.HandlerDeletePolicy, grid.NewMSS, grid.NewNoPayload).IgnoreNilConn()
	deleteSvcActRPC                = grid.NewSingleHandler[*grid.MSS, grid.NoPayload](grid.HandlerDeleteServiceAccount, grid.NewMSS, grid.NewNoPayload).IgnoreNilConn()
	deleteUserRPC                  = grid.NewSingleHandler[*grid.MSS, grid.NoPayload](grid.HandlerDeleteUser, grid.NewMSS, grid.NewNoPayload).IgnoreNilConn()
	getAllBucketStatsRPC           = grid.NewSingleHandler[*grid.MSS, *BucketStatsMap](grid.HandlerGetAllBucketStats, grid.NewMSS, func() *BucketStatsMap { return &BucketStatsMap{} })
	getBackgroundHealStatusRPC     = grid.NewSingleHandler[*grid.MSS, *grid.JSON[madmin.BgHealState]](grid.HandlerBackgroundHealStatus, grid.NewMSS, madminBgHealState.NewJSON)
	getBandwidthRPC                = grid.NewSingleHandler[*grid.URLValues, *bandwidth.BucketBandwidthReport](grid.HandlerGetBandwidth, grid.NewURLValues, func() *bandwidth.BucketBandwidthReport { return &bandwidth.BucketBandwidthReport{} })
	getBucketStatsRPC              = grid.NewSingleHandler[*grid.MSS, *BucketStats](grid.HandlerGetBucketStats, grid.NewMSS, func() *BucketStats { return &BucketStats{} })
	getCPUsHandler                 = grid.NewSingleHandler[*grid.MSS, *grid.JSON[madmin.CPUs]](grid.HandlerGetCPUs, grid.NewMSS, madminCPUs.NewJSON)
	getLastDayTierStatsRPC         = grid.NewSingleHandler[*grid.MSS, *DailyAllTierStats](grid.HandlerGetLastDayTierStats, grid.NewMSS, func() *DailyAllTierStats { return &DailyAllTierStats{} })
	getLocksRPC                    = grid.NewSingleHandler[*grid.MSS, *localLockMap](grid.HandlerGetLocks, grid.NewMSS, func() *localLockMap { return &localLockMap{} })
	getMemInfoRPC                  = grid.NewSingleHandler[*grid.MSS, *grid.JSON[madmin.MemInfo]](grid.HandlerGetMemInfo, grid.NewMSS, madminMemInfo.NewJSON)
	getMetacacheListingRPC         = grid.NewSingleHandler[*listPathOptions, *metacache](grid.HandlerGetMetacacheListing, func() *listPathOptions { return &listPathOptions{} }, func() *metacache { return &metacache{} })
	getMetricsRPC                  = grid.NewSingleHandler[*grid.URLValues, *grid.JSON[madmin.RealtimeMetrics]](grid.HandlerGetMetrics, grid.NewURLValues, madminRealtimeMetrics.NewJSON)
	getNetInfoRPC                  = grid.NewSingleHandler[*grid.MSS, *grid.JSON[madmin.NetInfo]](grid.HandlerGetNetInfo, grid.NewMSS, madminNetInfo.NewJSON)
	getOSInfoRPC                   = grid.NewSingleHandler[*grid.MSS, *grid.JSON[madmin.OSInfo]](grid.HandlerGetOSInfo, grid.NewMSS, madminOSInfo.NewJSON)
	getPartitionsRPC               = grid.NewSingleHandler[*grid.MSS, *grid.JSON[madmin.Partitions]](grid.HandlerGetPartitions, grid.NewMSS, madminPartitions.NewJSON)
	getPeerBucketMetricsRPC        = grid.NewSingleHandler[*grid.MSS, *grid.Array[*MetricV2]](grid.HandlerGetPeerBucketMetrics, grid.NewMSS, aoMetricsGroup.New)
	getPeerMetricsRPC              = grid.NewSingleHandler[*grid.MSS, *grid.Array[*MetricV2]](grid.HandlerGetPeerMetrics, grid.NewMSS, aoMetricsGroup.New)
	getResourceMetricsRPC          = grid.NewSingleHandler[*grid.MSS, *grid.Array[*MetricV2]](grid.HandlerGetResourceMetrics, grid.NewMSS, aoMetricsGroup.New)
	getProcInfoRPC                 = grid.NewSingleHandler[*grid.MSS, *grid.JSON[madmin.ProcInfo]](grid.HandlerGetProcInfo, grid.NewMSS, madminProcInfo.NewJSON)
	getSRMetricsRPC                = grid.NewSingleHandler[*grid.MSS, *SRMetricsSummary](grid.HandlerGetSRMetrics, grid.NewMSS, func() *SRMetricsSummary { return &SRMetricsSummary{} })
	getSysConfigRPC                = grid.NewSingleHandler[*grid.MSS, *grid.JSON[madmin.SysConfig]](grid.HandlerGetSysConfig, grid.NewMSS, madminSysConfig.NewJSON)
	getSysErrorsRPC                = grid.NewSingleHandler[*grid.MSS, *grid.JSON[madmin.SysErrors]](grid.HandlerGetSysErrors, grid.NewMSS, madminSysErrors.NewJSON)
	getSysServicesRPC              = grid.NewSingleHandler[*grid.MSS, *grid.JSON[madmin.SysServices]](grid.HandlerGetSysServices, grid.NewMSS, madminSysServices.NewJSON)
	headBucketRPC                  = grid.NewSingleHandler[*grid.MSS, *VolInfo](grid.HandlerHeadBucket, grid.NewMSS, func() *VolInfo { return &VolInfo{} })
	healBucketRPC                  = grid.NewSingleHandler[*grid.MSS, *grid.JSON[madmin.HealResultItem]](grid.HandlerHealBucket, grid.NewMSS, madminHealResultItem.NewJSON)
	listBucketsRPC                 = grid.NewSingleHandler[*BucketOptions, *grid.Array[*BucketInfo]](grid.HandlerListBuckets, func() *BucketOptions { return &BucketOptions{} }, aoBucketInfo.New)
	loadBucketMetadataRPC          = grid.NewSingleHandler[*grid.MSS, grid.NoPayload](grid.HandlerLoadBucketMetadata, grid.NewMSS, grid.NewNoPayload).IgnoreNilConn()
	loadGroupRPC                   = grid.NewSingleHandler[*grid.MSS, grid.NoPayload](grid.HandlerLoadGroup, grid.NewMSS, grid.NewNoPayload)
	loadPolicyMappingRPC           = grid.NewSingleHandler[*grid.MSS, grid.NoPayload](grid.HandlerLoadPolicyMapping, grid.NewMSS, grid.NewNoPayload).IgnoreNilConn()
	loadPolicyRPC                  = grid.NewSingleHandler[*grid.MSS, grid.NoPayload](grid.HandlerLoadPolicy, grid.NewMSS, grid.NewNoPayload).IgnoreNilConn()
	loadRebalanceMetaRPC           = grid.NewSingleHandler[*grid.MSS, grid.NoPayload](grid.HandlerLoadRebalanceMeta, grid.NewMSS, grid.NewNoPayload)
	loadSvcActRPC                  = grid.NewSingleHandler[*grid.MSS, grid.NoPayload](grid.HandlerLoadServiceAccount, grid.NewMSS, grid.NewNoPayload).IgnoreNilConn()
	loadTransitionTierConfigRPC    = grid.NewSingleHandler[*grid.MSS, grid.NoPayload](grid.HandlerLoadTransitionTierConfig, grid.NewMSS, grid.NewNoPayload)
	loadUserRPC                    = grid.NewSingleHandler[*grid.MSS, grid.NoPayload](grid.HandlerLoadUser, grid.NewMSS, grid.NewNoPayload).IgnoreNilConn()
	localStorageInfoRPC            = grid.NewSingleHandler[*grid.MSS, *grid.JSON[madmin.StorageInfo]](grid.HandlerStorageInfo, grid.NewMSS, madminStorageInfo.NewJSON)
	makeBucketRPC                  = grid.NewSingleHandler[*grid.MSS, grid.NoPayload](grid.HandlerMakeBucket, grid.NewMSS, grid.NewNoPayload)
	reloadPoolMetaRPC              = grid.NewSingleHandler[*grid.MSS, grid.NoPayload](grid.HandlerReloadPoolMeta, grid.NewMSS, grid.NewNoPayload)
	reloadSiteReplicationConfigRPC = grid.NewSingleHandler[*grid.MSS, grid.NoPayload](grid.HandlerReloadSiteReplicationConfig, grid.NewMSS, grid.NewNoPayload)
	serverInfoRPC                  = grid.NewSingleHandler[*grid.MSS, *grid.JSON[madmin.ServerProperties]](grid.HandlerServerInfo, grid.NewMSS, madminServerProperties.NewJSON)
	signalServiceRPC               = grid.NewSingleHandler[*grid.MSS, grid.NoPayload](grid.HandlerSignalService, grid.NewMSS, grid.NewNoPayload)
	stopRebalanceRPC               = grid.NewSingleHandler[*grid.MSS, grid.NoPayload](grid.HandlerStopRebalance, grid.NewMSS, grid.NewNoPayload)
	updateMetacacheListingRPC      = grid.NewSingleHandler[*metacache, *metacache](grid.HandlerUpdateMetacacheListing, func() *metacache { return &metacache{} }, func() *metacache { return &metacache{} })
	cleanupUploadIDCacheMetaRPC    = grid.NewSingleHandler[*grid.MSS, grid.NoPayload](grid.HandlerClearUploadID, grid.NewMSS, grid.NewNoPayload)

	// STREAMS
	// Set an output capacity of 100 for consoleLog and listenRPC
	// There is another buffer that will buffer events.
	consoleLogRPC = grid.NewStream[*grid.MSS, grid.NoPayload, *grid.Bytes](grid.HandlerConsoleLog, grid.NewMSS, nil, grid.NewBytes).WithOutCapacity(100)
	listenRPC     = grid.NewStream[*grid.URLValues, grid.NoPayload, *grid.Bytes](grid.HandlerListen, grid.NewURLValues, nil, grid.NewBytes).WithOutCapacity(100)
)
// GetLocksHandler - returns the list of locks held on the server.
func (s *peerRESTServer) GetLocksHandler(_ *grid.MSS) (*localLockMap, *grid.RemoteErr) {
	res := globalLockServer.DupLockMap()
	return &res, nil
}

// DeletePolicyHandler - deletes a policy on the server.
func (s *peerRESTServer) DeletePolicyHandler(mss *grid.MSS) (np grid.NoPayload, nerr *grid.RemoteErr) {
	objAPI := newObjectLayerFn()
	if objAPI == nil {
		return np, grid.NewRemoteErr(errServerNotInitialized)
	}
	policyName := mss.Get(peerRESTPolicy)
	if policyName == "" {
		return np, grid.NewRemoteErr(errors.New("policyName is missing"))
	}
	if err := globalIAMSys.DeletePolicy(context.Background(), policyName, false); err != nil {
		return np, grid.NewRemoteErr(err)
	}
	return
}

// LoadPolicyHandler - reloads a policy on the server.
func (s *peerRESTServer) LoadPolicyHandler(mss *grid.MSS) (np grid.NoPayload, nerr *grid.RemoteErr) {
	objAPI := newObjectLayerFn()
	if objAPI == nil {
		return np, grid.NewRemoteErr(errServerNotInitialized)
	}
	policyName := mss.Get(peerRESTPolicy)
	if policyName == "" {
		return np, grid.NewRemoteErr(errors.New("policyName is missing"))
	}
	if err := globalIAMSys.LoadPolicy(context.Background(), objAPI, policyName); err != nil {
		return np, grid.NewRemoteErr(err)
	}
	return
}

// LoadPolicyMappingHandler - reloads a policy mapping on the server.
func (s *peerRESTServer) LoadPolicyMappingHandler(mss *grid.MSS) (np grid.NoPayload, nerr *grid.RemoteErr) {
	objAPI := newObjectLayerFn()
	if objAPI == nil {
		return np, grid.NewRemoteErr(errServerNotInitialized)
	}
	userOrGroup := mss.Get(peerRESTUserOrGroup)
	if userOrGroup == "" {
		return np, grid.NewRemoteErr(errors.New("user-or-group is missing"))
	}
	userType, err := strconv.Atoi(mss.Get(peerRESTUserType))
	if err != nil {
		return np, grid.NewRemoteErr(fmt.Errorf("user-type `%s` is invalid: %w", mss.Get(peerRESTUserType), err))
	}
	isGroup := mss.Get(peerRESTIsGroup) == "true"
	if err := globalIAMSys.LoadPolicyMapping(context.Background(), objAPI, userOrGroup, IAMUserType(userType), isGroup); err != nil {
		return np, grid.NewRemoteErr(err)
	}
	return
}

// DeleteServiceAccountHandler - deletes a service account on the server.
func (s *peerRESTServer) DeleteServiceAccountHandler(mss *grid.MSS) (np grid.NoPayload, nerr *grid.RemoteErr) {
	objAPI := newObjectLayerFn()
	if objAPI == nil {
		return np, grid.NewRemoteErr(errServerNotInitialized)
	}
	accessKey := mss.Get(peerRESTUser)
	if accessKey == "" {
		return np, grid.NewRemoteErr(errors.New("service account name is missing"))
	}
	if err := globalIAMSys.DeleteServiceAccount(context.Background(), accessKey, false); err != nil {
		return np, grid.NewRemoteErr(err)
	}
	return
}

// LoadServiceAccountHandler - reloads a service account on the server.
func (s *peerRESTServer) LoadServiceAccountHandler(mss *grid.MSS) (np grid.NoPayload, nerr *grid.RemoteErr) {
	objAPI := newObjectLayerFn()
	if objAPI == nil {
		return np, grid.NewRemoteErr(errServerNotInitialized)
	}
	accessKey := mss.Get(peerRESTUser)
	if accessKey == "" {
		return np, grid.NewRemoteErr(errors.New("service account name is missing"))
	}
	if err := globalIAMSys.LoadServiceAccount(context.Background(), accessKey); err != nil {
		return np, grid.NewRemoteErr(err)
	}
	return
}

// DeleteUserHandler - deletes a user on the server.
func (s *peerRESTServer) DeleteUserHandler(mss *grid.MSS) (np grid.NoPayload, nerr *grid.RemoteErr) {
	objAPI := newObjectLayerFn()
	if objAPI == nil {
		return np, grid.NewRemoteErr(errServerNotInitialized)
	}
	accessKey := mss.Get(peerRESTUser)
	if accessKey == "" {
		return np, grid.NewRemoteErr(errors.New("username is missing"))
	}
	if err := globalIAMSys.DeleteUser(context.Background(), accessKey, false); err != nil {
		return np, grid.NewRemoteErr(err)
	}
	return
}

// LoadUserHandler - reloads a user on the server.
func (s *peerRESTServer) LoadUserHandler(mss *grid.MSS) (np grid.NoPayload, nerr *grid.RemoteErr) {
	objAPI := newObjectLayerFn()
	if objAPI == nil {
		return np, grid.NewRemoteErr(errServerNotInitialized)
	}
	accessKey := mss.Get(peerRESTUser)
	if accessKey == "" {
		return np, grid.NewRemoteErr(errors.New("username is missing"))
	}
	temp, err := strconv.ParseBool(mss.Get(peerRESTUserTemp))
	if err != nil {
		return np, grid.NewRemoteErr(err)
	}
	userType := regUser
	if temp {
		userType = stsUser
	}
	if err = globalIAMSys.LoadUser(context.Background(), objAPI, accessKey, userType); err != nil {
		return np, grid.NewRemoteErr(err)
	}
	return
}

// LoadGroupHandler - reloads a group along with its members list.
func (s *peerRESTServer) LoadGroupHandler(mss *grid.MSS) (np grid.NoPayload, nerr *grid.RemoteErr) {
	objAPI := newObjectLayerFn()
	if objAPI == nil {
		return np, grid.NewRemoteErr(errServerNotInitialized)
	}
	group := mss.Get(peerRESTGroup)
	if group == "" {
		return np, grid.NewRemoteErr(errors.New("group is missing"))
	}
	err := globalIAMSys.LoadGroup(context.Background(), objAPI, group)
	if err != nil {
		return np, grid.NewRemoteErr(err)
	}
	return
}
// StartProfilingHandler - Issues the start profiling command.
func (s *peerRESTServer) StartProfilingHandler(w http.ResponseWriter, r *http.Request) {
	if !s.IsValid(w, r) {
		s.writeErrorResponse(w, errors.New("Invalid request"))
		return
	}
	vars := mux.Vars(r)
	profiles := strings.Split(vars[peerRESTProfiler], ",")
	if len(profiles) == 0 {
		s.writeErrorResponse(w, errors.New("profiler name is missing"))
		return
	}
	globalProfilerMu.Lock()
	defer globalProfilerMu.Unlock()
	if globalProfiler == nil {
		globalProfiler = make(map[string]minioProfiler, 10)
	}
	// Stop profiler of all types if already running
	for k, v := range globalProfiler {
		for _, p := range profiles {
			if p == k {
				v.Stop()
				delete(globalProfiler, k)
			}
		}
	}
	for _, profiler := range profiles {
		prof, err := startProfiler(profiler)
		if err != nil {
			s.writeErrorResponse(w, err)
			return
		}
		globalProfiler[profiler] = prof
	}
}

// DownloadProfilingDataHandler - returns profiled data.
func (s *peerRESTServer) DownloadProfilingDataHandler(w http.ResponseWriter, r *http.Request) {
	if !s.IsValid(w, r) {
		s.writeErrorResponse(w, errors.New("Invalid request"))
		return
	}
	ctx := newContext(r, w, "DownloadProfiling")
	profileData, err := getProfileData()
	if err != nil {
		s.writeErrorResponse(w, err)
		return
	}
	peersLogIf(ctx, gob.NewEncoder(w).Encode(profileData))
}
  307. func (s *peerRESTServer) LocalStorageInfoHandler(mss *grid.MSS) (*grid.JSON[madmin.StorageInfo], *grid.RemoteErr) {
  308. objLayer := newObjectLayerFn()
  309. if objLayer == nil {
  310. return nil, grid.NewRemoteErr(errServerNotInitialized)
  311. }
  312. metrics, err := strconv.ParseBool(mss.Get(peerRESTMetrics))
  313. if err != nil {
  314. return nil, grid.NewRemoteErr(err)
  315. }
  316. info := objLayer.LocalStorageInfo(context.Background(), metrics)
  317. return madminStorageInfo.NewJSONWith(&info), nil
  318. }
  319. // ServerInfoHandler - returns Server Info
  320. func (s *peerRESTServer) ServerInfoHandler(params *grid.MSS) (*grid.JSON[madmin.ServerProperties], *grid.RemoteErr) {
  321. r := http.Request{Host: globalLocalNodeName}
  322. metrics, err := strconv.ParseBool(params.Get(peerRESTMetrics))
  323. if err != nil {
  324. return nil, grid.NewRemoteErr(err)
  325. }
  326. info := getLocalServerProperty(globalEndpoints, &r, metrics)
  327. return madminServerProperties.NewJSONWith(&info), nil
  328. }
  329. // GetCPUsHandler - returns CPU info.
  330. func (s *peerRESTServer) GetCPUsHandler(_ *grid.MSS) (*grid.JSON[madmin.CPUs], *grid.RemoteErr) {
  331. info := madmin.GetCPUs(context.Background(), globalLocalNodeName)
  332. return madminCPUs.NewJSONWith(&info), nil
  333. }
  334. // GetNetInfoHandler - returns network information.
  335. func (s *peerRESTServer) GetNetInfoHandler(_ *grid.MSS) (*grid.JSON[madmin.NetInfo], *grid.RemoteErr) {
  336. info := madmin.GetNetInfo(globalLocalNodeName, globalInternodeInterface)
  337. return madminNetInfo.NewJSONWith(&info), nil
  338. }
  339. // GetPartitionsHandler - returns disk partition information.
  340. func (s *peerRESTServer) GetPartitionsHandler(_ *grid.MSS) (*grid.JSON[madmin.Partitions], *grid.RemoteErr) {
  341. info := madmin.GetPartitions(context.Background(), globalLocalNodeName)
  342. return madminPartitions.NewJSONWith(&info), nil
  343. }
  344. // GetOSInfoHandler - returns operating system's information.
  345. func (s *peerRESTServer) GetOSInfoHandler(_ *grid.MSS) (*grid.JSON[madmin.OSInfo], *grid.RemoteErr) {
  346. info := madmin.GetOSInfo(context.Background(), globalLocalNodeName)
  347. return madminOSInfo.NewJSONWith(&info), nil
  348. }
  349. // GetProcInfoHandler - returns this MinIO process information.
  350. func (s *peerRESTServer) GetProcInfoHandler(_ *grid.MSS) (*grid.JSON[madmin.ProcInfo], *grid.RemoteErr) {
	info := madmin.GetProcInfo(context.Background(), globalLocalNodeName)
	return madminProcInfo.NewJSONWith(&info), nil
}

// GetMemInfoHandler - returns memory information.
func (s *peerRESTServer) GetMemInfoHandler(_ *grid.MSS) (*grid.JSON[madmin.MemInfo], *grid.RemoteErr) {
	info := madmin.GetMemInfo(context.Background(), globalLocalNodeName)
	return madminMemInfo.NewJSONWith(&info), nil
}

// GetMetricsHandler - returns server metrics.
func (s *peerRESTServer) GetMetricsHandler(v *grid.URLValues) (*grid.JSON[madmin.RealtimeMetrics], *grid.RemoteErr) {
	values := v.Values()
	var types madmin.MetricType
	if t, _ := strconv.ParseUint(values.Get(peerRESTMetricsTypes), 10, 64); t != 0 {
		types = madmin.MetricType(t)
	} else {
		types = madmin.MetricsAll
	}

	diskMap := make(map[string]struct{})
	for _, disk := range values[peerRESTDisk] {
		diskMap[disk] = struct{}{}
	}

	hostMap := make(map[string]struct{})
	for _, host := range values[peerRESTHost] {
		hostMap[host] = struct{}{}
	}

	info := collectLocalMetrics(types, collectMetricsOpts{
		disks: diskMap,
		hosts: hostMap,
		jobID: values.Get(peerRESTJobID),
		depID: values.Get(peerRESTDepID),
	})
	return madminRealtimeMetrics.NewJSONWith(&info), nil
}

// GetSysConfigHandler - returns system config information.
// (only the config that is of concern to minio)
func (s *peerRESTServer) GetSysConfigHandler(_ *grid.MSS) (*grid.JSON[madmin.SysConfig], *grid.RemoteErr) {
	info := madmin.GetSysConfig(context.Background(), globalLocalNodeName)
	return madminSysConfig.NewJSONWith(&info), nil
}

// GetSysServicesHandler - returns system services information.
// (only the services that are of concern to minio)
func (s *peerRESTServer) GetSysServicesHandler(_ *grid.MSS) (*grid.JSON[madmin.SysServices], *grid.RemoteErr) {
	info := madmin.GetSysServices(context.Background(), globalLocalNodeName)
	return madminSysServices.NewJSONWith(&info), nil
}

// GetSysErrorsHandler - returns system level errors
func (s *peerRESTServer) GetSysErrorsHandler(_ *grid.MSS) (*grid.JSON[madmin.SysErrors], *grid.RemoteErr) {
	info := madmin.GetSysErrors(context.Background(), globalLocalNodeName)
	return madminSysErrors.NewJSONWith(&info), nil
}
// DeleteBucketMetadataHandler - deletes in-memory bucket metadata
func (s *peerRESTServer) DeleteBucketMetadataHandler(mss *grid.MSS) (np grid.NoPayload, nerr *grid.RemoteErr) {
	bucketName := mss.Get(peerRESTBucket)
	if bucketName == "" {
		return np, grid.NewRemoteErr(errors.New("Bucket name is missing"))
	}

	globalReplicationStats.Load().Delete(bucketName)
	globalBucketMetadataSys.Remove(bucketName)
	globalBucketTargetSys.Delete(bucketName)
	globalEventNotifier.RemoveNotification(bucketName)
	globalBucketConnStats.delete(bucketName)
	globalBucketHTTPStats.delete(bucketName)
	if localMetacacheMgr != nil {
		localMetacacheMgr.deleteBucketCache(bucketName)
	}
	return
}

// GetAllBucketStatsHandler - fetches bucket replication stats for all buckets from this peer.
func (s *peerRESTServer) GetAllBucketStatsHandler(mss *grid.MSS) (*BucketStatsMap, *grid.RemoteErr) {
	replicationStats := globalReplicationStats.Load().GetAll()
	bucketStatsMap := make(map[string]BucketStats, len(replicationStats))
	for k, v := range replicationStats {
		bucketStatsMap[k] = BucketStats{
			ReplicationStats: v,
			ProxyStats:       globalReplicationStats.Load().getProxyStats(k),
		}
	}
	return &BucketStatsMap{Stats: bucketStatsMap, Timestamp: time.Now()}, nil
}

// GetBucketStatsHandler - fetches current in-memory bucket stats; currently
// returns only BucketStats, which includes ReplicationStats.
func (s *peerRESTServer) GetBucketStatsHandler(vars *grid.MSS) (*BucketStats, *grid.RemoteErr) {
	bucketName := vars.Get(peerRESTBucket)
	if bucketName == "" {
		return nil, grid.NewRemoteErrString("Bucket name is missing")
	}

	st := globalReplicationStats.Load()
	if st == nil {
		return &BucketStats{}, nil
	}

	bs := BucketStats{
		ReplicationStats: st.Get(bucketName),
		QueueStats:       ReplicationQueueStats{Nodes: []ReplQNodeStats{st.getNodeQueueStats(bucketName)}},
		ProxyStats:       st.getProxyStats(bucketName),
	}
	return &bs, nil
}
// GetSRMetricsHandler - fetches current in-memory replication stats at site level from this peer
func (s *peerRESTServer) GetSRMetricsHandler(mss *grid.MSS) (*SRMetricsSummary, *grid.RemoteErr) {
	objAPI := newObjectLayerFn()
	if objAPI == nil {
		return nil, grid.NewRemoteErr(errServerNotInitialized)
	}

	if st := globalReplicationStats.Load(); st != nil {
		sm := st.getSRMetricsForNode()
		return &sm, nil
	}
	return &SRMetricsSummary{}, nil
}

// LoadBucketMetadataHandler - reloads in-memory bucket metadata
func (s *peerRESTServer) LoadBucketMetadataHandler(mss *grid.MSS) (np grid.NoPayload, nerr *grid.RemoteErr) {
	bucketName := mss.Get(peerRESTBucket)
	if bucketName == "" {
		return np, grid.NewRemoteErr(errors.New("Bucket name is missing"))
	}

	objAPI := newObjectLayerFn()
	if objAPI == nil {
		return np, grid.NewRemoteErr(errServerNotInitialized)
	}

	meta, err := loadBucketMetadata(context.Background(), objAPI, bucketName)
	if err != nil {
		return np, grid.NewRemoteErr(err)
	}

	globalBucketMetadataSys.Set(bucketName, meta)

	if meta.notificationConfig != nil {
		globalEventNotifier.AddRulesMap(bucketName, meta.notificationConfig.ToRulesMap())
	}

	if meta.bucketTargetConfig != nil {
		globalBucketTargetSys.UpdateAllTargets(bucketName, meta.bucketTargetConfig)
	}
	return
}

func (s *peerRESTServer) GetMetacacheListingHandler(opts *listPathOptions) (*metacache, *grid.RemoteErr) {
	resp := localMetacacheMgr.getBucket(context.Background(), opts.Bucket).findCache(*opts)
	return &resp, nil
}

func (s *peerRESTServer) UpdateMetacacheListingHandler(req *metacache) (*metacache, *grid.RemoteErr) {
	cache, err := localMetacacheMgr.updateCacheEntry(*req)
	if err != nil {
		return nil, grid.NewRemoteErr(err)
	}
	return &cache, nil
}
// PutBucketNotificationHandler - sets bucket notification rules.
func (s *peerRESTServer) PutBucketNotificationHandler(w http.ResponseWriter, r *http.Request) {
	if !s.IsValid(w, r) {
		s.writeErrorResponse(w, errors.New("Invalid request"))
		return
	}

	vars := mux.Vars(r)
	bucketName := vars[peerRESTBucket]
	if bucketName == "" {
		s.writeErrorResponse(w, errors.New("Bucket name is missing"))
		return
	}

	var rulesMap event.RulesMap
	if r.ContentLength < 0 {
		s.writeErrorResponse(w, errInvalidArgument)
		return
	}

	err := gob.NewDecoder(r.Body).Decode(&rulesMap)
	if err != nil {
		s.writeErrorResponse(w, err)
		return
	}

	globalEventNotifier.AddRulesMap(bucketName, rulesMap)
}

// HealthHandler - returns success if the peer is reachable and the request is valid.
func (s *peerRESTServer) HealthHandler(w http.ResponseWriter, r *http.Request) {
	s.IsValid(w, r)
}
// VerifyBinaryHandler - verifies that the downloaded binary is intact.
func (s *peerRESTServer) VerifyBinaryHandler(w http.ResponseWriter, r *http.Request) {
	if !s.IsValid(w, r) {
		s.writeErrorResponse(w, errors.New("Invalid request"))
		return
	}

	if r.ContentLength < 0 {
		s.writeErrorResponse(w, errInvalidArgument)
		return
	}

	u, err := url.Parse(r.Form.Get(peerRESTURL))
	if err != nil {
		s.writeErrorResponse(w, err)
		return
	}

	sha256Sum, err := hex.DecodeString(r.Form.Get(peerRESTSha256Sum))
	if err != nil {
		s.writeErrorResponse(w, err)
		return
	}

	releaseInfo := r.Form.Get(peerRESTReleaseInfo)
	lrTime, err := releaseInfoToReleaseTime(releaseInfo)
	if err != nil {
		s.writeErrorResponse(w, err)
		return
	}

	if lrTime.Sub(currentReleaseTime) <= 0 {
		s.writeErrorResponse(w, fmt.Errorf("server is running the latest version: %s", Version))
		return
	}

	zr, err := zstd.NewReader(r.Body)
	if err != nil {
		s.writeErrorResponse(w, err)
		return
	}
	defer zr.Close()

	if err = verifyBinary(u, sha256Sum, releaseInfo, getMinioMode(), zr); err != nil {
		s.writeErrorResponse(w, err)
		return
	}
}

// CommitBinaryHandler - overwrites the current binary with the new one.
func (s *peerRESTServer) CommitBinaryHandler(w http.ResponseWriter, r *http.Request) {
	if !s.IsValid(w, r) {
		s.writeErrorResponse(w, errors.New("Invalid request"))
		return
	}

	if err := commitBinary(); err != nil {
		s.writeErrorResponse(w, err)
		return
	}
}
var errUnsupportedSignal = fmt.Errorf("unsupported signal")

func waitingDrivesNode() map[string]madmin.DiskMetrics {
	globalLocalDrivesMu.RLock()
	localDrives := cloneDrives(globalLocalDrivesMap)
	globalLocalDrivesMu.RUnlock()

	errs := make([]error, len(localDrives))
	infos := make([]DiskInfo, len(localDrives))
	for i, drive := range localDrives {
		infos[i], errs[i] = drive.DiskInfo(GlobalContext, DiskInfoOptions{})
	}

	infoMaps := make(map[string]madmin.DiskMetrics)
	for i := range infos {
		if infos[i].Metrics.TotalWaiting >= 1 && errors.Is(errs[i], errFaultyDisk) {
			infoMaps[infos[i].Endpoint] = madmin.DiskMetrics{
				TotalWaiting: infos[i].Metrics.TotalWaiting,
			}
		}
	}
	return infoMaps
}
// SignalServiceHandler - signal service handler.
func (s *peerRESTServer) SignalServiceHandler(vars *grid.MSS) (np grid.NoPayload, nerr *grid.RemoteErr) {
	signalString := vars.Get(peerRESTSignal)
	if signalString == "" {
		return np, grid.NewRemoteErrString("signal name is missing")
	}
	si, err := strconv.Atoi(signalString)
	if err != nil {
		return np, grid.NewRemoteErr(err)
	}

	// Wait until the specified time before executing the signal.
	if t := vars.Get(peerRESTExecAt); t != "" {
		execAt, err := time.Parse(time.RFC3339Nano, t)
		if err != nil {
			logger.LogIf(GlobalContext, "signalservice", err)
			execAt = time.Now().Add(restartUpdateDelay)
		}
		if d := time.Until(execAt); d > 0 {
			time.Sleep(d)
		}
	}

	signal := serviceSignal(si)
	switch signal {
	case serviceRestart, serviceStop:
		dryRun := vars.Get("dry-run") == "true" // This is only supported for `restart/stop`

		waitingDisks := waitingDrivesNode()
		if len(waitingDisks) > 0 {
			buf, err := json.Marshal(waitingDisks)
			if err != nil {
				return np, grid.NewRemoteErr(err)
			}
			return np, grid.NewRemoteErrString(string(buf))
		}
		if !dryRun {
			globalServiceSignalCh <- signal
		}
	case serviceFreeze:
		freezeServices()
	case serviceUnFreeze:
		unfreezeServices()
	case serviceReloadDynamic:
		objAPI := newObjectLayerFn()
		if objAPI == nil {
			return np, grid.NewRemoteErr(errServerNotInitialized)
		}
		srvCfg, err := getValidConfig(objAPI)
		if err != nil {
			return np, grid.NewRemoteErr(err)
		}
		subSys := vars.Get(peerRESTSubSys)
		// Apply dynamic values.
		ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
		defer cancel()
		if subSys == "" {
			err = applyDynamicConfig(ctx, objAPI, srvCfg)
		} else {
			err = applyDynamicConfigForSubSys(ctx, objAPI, srvCfg, subSys)
		}
		if err != nil {
			return np, grid.NewRemoteErr(err)
		}
	default:
		return np, grid.NewRemoteErr(errUnsupportedSignal)
	}
	return np, nil
}
// ListenHandler sends http trace messages back to peer rest client
func (s *peerRESTServer) ListenHandler(ctx context.Context, v *grid.URLValues, out chan<- *grid.Bytes) *grid.RemoteErr {
	values := v.Values()
	defer v.Recycle()

	var prefix string
	if len(values[peerRESTListenPrefix]) > 1 {
		return grid.NewRemoteErrString("invalid request (peerRESTListenPrefix)")
	}
	if len(values[peerRESTListenPrefix]) == 1 {
		if err := event.ValidateFilterRuleValue(values[peerRESTListenPrefix][0]); err != nil {
			return grid.NewRemoteErr(err)
		}
		prefix = values[peerRESTListenPrefix][0]
	}

	var suffix string
	if len(values[peerRESTListenSuffix]) > 1 {
		return grid.NewRemoteErrString("invalid request (peerRESTListenSuffix)")
	}
	if len(values[peerRESTListenSuffix]) == 1 {
		if err := event.ValidateFilterRuleValue(values[peerRESTListenSuffix][0]); err != nil {
			return grid.NewRemoteErr(err)
		}
		suffix = values[peerRESTListenSuffix][0]
	}

	pattern := event.NewPattern(prefix, suffix)

	var eventNames []event.Name
	var mask pubsub.Mask
	for _, ev := range values[peerRESTListenEvents] {
		eventName, err := event.ParseName(ev)
		if err != nil {
			return grid.NewRemoteErr(err)
		}
		mask.MergeMaskable(eventName)
		eventNames = append(eventNames, eventName)
	}

	rulesMap := event.NewRulesMap(eventNames, pattern, event.TargetID{ID: mustGetUUID()})

	// Listen Publisher uses nonblocking publish and hence does not wait for slow subscribers.
	// Use buffered channel to take care of burst sends or slow w.Write()
	ch := make(chan event.Event, globalAPIConfig.getRequestsPoolCapacity())
	err := globalHTTPListen.Subscribe(mask, ch, ctx.Done(), func(ev event.Event) bool {
		if ev.S3.Bucket.Name != "" && values.Get(peerRESTListenBucket) != "" {
			if ev.S3.Bucket.Name != values.Get(peerRESTListenBucket) {
				return false
			}
		}
		return rulesMap.MatchSimple(ev.EventName, ev.S3.Object.Key)
	})
	if err != nil {
		return grid.NewRemoteErr(err)
	}

	// Process until remote disconnects.
	// Blocks on upstream (out) congestion.
	// We have however a dynamic downstream buffer (ch).
	buf := bytes.NewBuffer(grid.GetByteBuffer())
	enc := json.NewEncoder(buf)
	tmpEvt := struct{ Records []event.Event }{[]event.Event{{}}}
	for {
		select {
		case <-ctx.Done():
			grid.PutByteBuffer(buf.Bytes())
			return nil
		case ev := <-ch:
			buf.Reset()
			tmpEvt.Records[0] = ev
			if err := enc.Encode(tmpEvt); err != nil {
				peersLogOnceIf(ctx, err, "event: Encode failed")
				continue
			}
			out <- grid.NewBytesWithCopyOf(buf.Bytes())
		}
	}
}
// TraceHandler sends http trace messages back to peer rest client
func (s *peerRESTServer) TraceHandler(ctx context.Context, payload []byte, _ <-chan []byte, out chan<- []byte) *grid.RemoteErr {
	var traceOpts madmin.ServiceTraceOpts
	err := json.Unmarshal(payload, &traceOpts)
	if err != nil {
		return grid.NewRemoteErr(err)
	}
	var wg sync.WaitGroup

	// Trace Publisher uses nonblocking publish and hence does not wait for slow subscribers.
	// Use buffered channel to take care of burst sends or slow w.Write()
	err = globalTrace.SubscribeJSON(traceOpts.TraceTypes(), out, ctx.Done(), func(entry madmin.TraceInfo) bool {
		return shouldTrace(entry, traceOpts)
	}, &wg)
	if err != nil {
		return grid.NewRemoteErr(err)
	}

	// Publish bootstrap events that have already occurred before client could subscribe.
	if traceOpts.TraceTypes().Contains(madmin.TraceBootstrap) {
		go globalBootstrapTracer.Publish(ctx, globalTrace)
	}

	// Wait for remote to cancel and SubscribeJSON to exit.
	wg.Wait()
	return nil
}

func (s *peerRESTServer) BackgroundHealStatusHandler(_ *grid.MSS) (*grid.JSON[madmin.BgHealState], *grid.RemoteErr) {
	state, ok := getLocalBackgroundHealStatus(context.Background(), newObjectLayerFn())
	if !ok {
		return nil, grid.NewRemoteErr(errServerNotInitialized)
	}
	return madminBgHealState.NewJSONWith(&state), nil
}
// ReloadSiteReplicationConfigHandler - reloads site replication configuration from the disks
func (s *peerRESTServer) ReloadSiteReplicationConfigHandler(mss *grid.MSS) (np grid.NoPayload, nerr *grid.RemoteErr) {
	objAPI := newObjectLayerFn()
	if objAPI == nil {
		return np, grid.NewRemoteErr(errServerNotInitialized)
	}

	peersLogIf(context.Background(), globalSiteReplicationSys.Init(context.Background(), objAPI))
	return
}

func (s *peerRESTServer) ReloadPoolMetaHandler(mss *grid.MSS) (np grid.NoPayload, nerr *grid.RemoteErr) {
	objAPI := newObjectLayerFn()
	if objAPI == nil {
		return np, grid.NewRemoteErr(errServerNotInitialized)
	}

	pools, ok := objAPI.(*erasureServerPools)
	if !ok {
		return
	}

	if err := pools.ReloadPoolMeta(context.Background()); err != nil {
		return np, grid.NewRemoteErr(err)
	}
	return
}

func (s *peerRESTServer) HandlerClearUploadID(mss *grid.MSS) (np grid.NoPayload, nerr *grid.RemoteErr) {
	objAPI := newObjectLayerFn()
	if objAPI == nil {
		return np, grid.NewRemoteErr(errServerNotInitialized)
	}

	pools, ok := objAPI.(*erasureServerPools)
	if !ok {
		return
	}

	// No need to return errors, this is not a highly strict operation.
	uploadID := mss.Get(peerRESTUploadID)
	if uploadID != "" {
		pools.ClearUploadID(uploadID)
	}
	return
}

func (s *peerRESTServer) StopRebalanceHandler(mss *grid.MSS) (np grid.NoPayload, nerr *grid.RemoteErr) {
	objAPI := newObjectLayerFn()
	if objAPI == nil {
		return np, grid.NewRemoteErr(errServerNotInitialized)
	}

	pools, ok := objAPI.(*erasureServerPools)
	if !ok {
		return np, grid.NewRemoteErr(errors.New("not a pooled setup"))
	}

	pools.StopRebalance()
	return
}

func (s *peerRESTServer) LoadRebalanceMetaHandler(mss *grid.MSS) (np grid.NoPayload, nerr *grid.RemoteErr) {
	objAPI := newObjectLayerFn()
	if objAPI == nil {
		return np, grid.NewRemoteErr(errServerNotInitialized)
	}

	pools, ok := objAPI.(*erasureServerPools)
	if !ok {
		return np, grid.NewRemoteErr(errors.New("not a pooled setup"))
	}

	startRebalance, err := strconv.ParseBool(mss.Get(peerRESTStartRebalance))
	if err != nil {
		return np, grid.NewRemoteErr(err)
	}

	if err := pools.loadRebalanceMeta(context.Background()); err != nil {
		return np, grid.NewRemoteErr(err)
	}

	if startRebalance {
		go pools.StartRebalance()
	}
	return
}

func (s *peerRESTServer) LoadTransitionTierConfigHandler(mss *grid.MSS) (np grid.NoPayload, nerr *grid.RemoteErr) {
	objAPI := newObjectLayerFn()
	if objAPI == nil {
		return np, grid.NewRemoteErr(errServerNotInitialized)
	}

	go func() {
		err := globalTierConfigMgr.Reload(context.Background(), newObjectLayerFn())
		if err != nil {
			peersLogIf(context.Background(), fmt.Errorf("Failed to reload remote tier config %s", err))
		}
	}()
	return
}
// ConsoleLogHandler sends console logs of this node back to peer rest client
func (s *peerRESTServer) ConsoleLogHandler(ctx context.Context, params *grid.MSS, out chan<- *grid.Bytes) *grid.RemoteErr {
	mask, err := strconv.Atoi(params.Get(peerRESTLogMask))
	if err != nil {
		mask = int(madmin.LogMaskAll)
	}
	ch := make(chan log.Info, 1000)
	err = globalConsoleSys.Subscribe(ch, ctx.Done(), "", 0, madmin.LogMask(mask), nil)
	if err != nil {
		return grid.NewRemoteErr(err)
	}
	var buf bytes.Buffer
	enc := json.NewEncoder(&buf)
	for {
		select {
		case entry, ok := <-ch:
			if !ok {
				return grid.NewRemoteErrString("console log channel closed")
			}
			if !entry.SendLog("", madmin.LogMask(mask)) {
				continue
			}
			buf.Reset()
			if err := enc.Encode(entry); err != nil {
				return grid.NewRemoteErr(err)
			}
			out <- grid.NewBytesWithCopyOf(buf.Bytes())
		case <-ctx.Done():
			return grid.NewRemoteErr(ctx.Err())
		}
	}
}

func (s *peerRESTServer) writeErrorResponse(w http.ResponseWriter, err error) {
	w.WriteHeader(http.StatusForbidden)
	w.Write([]byte(err.Error()))
}

// IsValid - To authenticate and verify the time difference.
func (s *peerRESTServer) IsValid(w http.ResponseWriter, r *http.Request) bool {
	if err := storageServerRequestValidate(r); err != nil {
		s.writeErrorResponse(w, err)
		return false
	}
	return true
}

// GetBandwidth gets the bandwidth for the buckets requested.
func (s *peerRESTServer) GetBandwidth(params *grid.URLValues) (*bandwidth.BucketBandwidthReport, *grid.RemoteErr) {
	buckets := params.Values().Get("buckets")
	selectBuckets := bandwidth.SelectBuckets(buckets)
	return globalBucketMonitor.GetReport(selectBuckets), nil
}
func (s *peerRESTServer) GetResourceMetrics(_ *grid.MSS) (*grid.Array[*MetricV2], *grid.RemoteErr) {
	res := make([]*MetricV2, 0, len(resourceMetricsGroups))
	populateAndPublish(resourceMetricsGroups, func(m MetricV2) bool {
		if m.VariableLabels == nil {
			m.VariableLabels = make(map[string]string, 1)
		}
		m.VariableLabels[serverName] = globalLocalNodeName
		res = append(res, &m)
		return true
	})
	return aoMetricsGroup.NewWith(res), nil
}

// GetPeerMetrics gets the metrics to be federated across peers.
func (s *peerRESTServer) GetPeerMetrics(_ *grid.MSS) (*grid.Array[*MetricV2], *grid.RemoteErr) {
	res := make([]*MetricV2, 0, len(peerMetricsGroups))
	populateAndPublish(peerMetricsGroups, func(m MetricV2) bool {
		if m.VariableLabels == nil {
			m.VariableLabels = make(map[string]string, 1)
		}
		m.VariableLabels[serverName] = globalLocalNodeName
		res = append(res, &m)
		return true
	})
	return aoMetricsGroup.NewWith(res), nil
}

// GetPeerBucketMetrics gets the metrics to be federated across peers.
func (s *peerRESTServer) GetPeerBucketMetrics(_ *grid.MSS) (*grid.Array[*MetricV2], *grid.RemoteErr) {
	res := make([]*MetricV2, 0, len(bucketPeerMetricsGroups))
	populateAndPublish(bucketPeerMetricsGroups, func(m MetricV2) bool {
		if m.VariableLabels == nil {
			m.VariableLabels = make(map[string]string, 1)
		}
		m.VariableLabels[serverName] = globalLocalNodeName
		res = append(res, &m)
		return true
	})
	return aoMetricsGroup.NewWith(res), nil
}
func (s *peerRESTServer) SpeedTestHandler(w http.ResponseWriter, r *http.Request) {
	if !s.IsValid(w, r) {
		s.writeErrorResponse(w, errors.New("invalid request"))
		return
	}

	objAPI := newObjectLayerFn()
	if objAPI == nil {
		s.writeErrorResponse(w, errServerNotInitialized)
		return
	}

	sizeStr := r.Form.Get(peerRESTSize)
	durationStr := r.Form.Get(peerRESTDuration)
	concurrentStr := r.Form.Get(peerRESTConcurrent)
	storageClass := r.Form.Get(peerRESTStorageClass)
	bucketName := r.Form.Get(peerRESTBucket)
	enableSha256 := r.Form.Get(peerRESTEnableSha256) == "true"
	enableMultipart := r.Form.Get(peerRESTEnableMultipart) == "true"

	u, ok := globalIAMSys.GetUser(r.Context(), r.Form.Get(peerRESTAccessKey))
	if !ok {
		s.writeErrorResponse(w, errAuthentication)
		return
	}

	size, err := strconv.Atoi(sizeStr)
	if err != nil {
		size = 64 * humanize.MiByte
	}

	concurrent, err := strconv.Atoi(concurrentStr)
	if err != nil {
		concurrent = 32
	}

	duration, err := time.ParseDuration(durationStr)
	if err != nil {
		duration = time.Second * 10
	}

	done := keepHTTPResponseAlive(w)

	result, err := selfSpeedTest(r.Context(), speedTestOpts{
		objectSize:      size,
		concurrency:     concurrent,
		duration:        duration,
		storageClass:    storageClass,
		bucketName:      bucketName,
		enableSha256:    enableSha256,
		enableMultipart: enableMultipart,
		creds:           u.Credentials,
	})
	if err != nil {
		result.Error = err.Error()
	}

	done(nil)
	peersLogIf(r.Context(), gob.NewEncoder(w).Encode(result))
}

// GetLastDayTierStatsHandler - returns per-tier stats in the last 24hrs for this server
func (s *peerRESTServer) GetLastDayTierStatsHandler(_ *grid.MSS) (*DailyAllTierStats, *grid.RemoteErr) {
	if objAPI := newObjectLayerFn(); objAPI == nil || globalTransitionState == nil {
		return nil, grid.NewRemoteErr(errServerNotInitialized)
	}

	result := globalTransitionState.getDailyAllTierStats()
	return &result, nil
}
func (s *peerRESTServer) DriveSpeedTestHandler(w http.ResponseWriter, r *http.Request) {
	if !s.IsValid(w, r) {
		s.writeErrorResponse(w, errors.New("invalid request"))
		return
	}

	objAPI := newObjectLayerFn()
	if objAPI == nil {
		s.writeErrorResponse(w, errServerNotInitialized)
		return
	}

	serial := r.Form.Get("serial") == "true"
	blockSizeStr := r.Form.Get("blocksize")
	fileSizeStr := r.Form.Get("filesize")

	blockSize, err := strconv.ParseUint(blockSizeStr, 10, 64)
	if err != nil {
		blockSize = 4 * humanize.MiByte // default value
	}

	fileSize, err := strconv.ParseUint(fileSizeStr, 10, 64)
	if err != nil {
		fileSize = 1 * humanize.GiByte // default value
	}

	opts := madmin.DriveSpeedTestOpts{
		Serial:    serial,
		BlockSize: blockSize,
		FileSize:  fileSize,
	}

	done := keepHTTPResponseAlive(w)
	result := driveSpeedTest(r.Context(), opts)
	done(nil)

	peersLogIf(r.Context(), gob.NewEncoder(w).Encode(result))
}

// GetReplicationMRFHandler - returns replication MRF for bucket
func (s *peerRESTServer) GetReplicationMRFHandler(w http.ResponseWriter, r *http.Request) {
	if !s.IsValid(w, r) {
		s.writeErrorResponse(w, errors.New("invalid request"))
		return
	}

	vars := mux.Vars(r)
	bucketName := vars[peerRESTBucket]
	ctx := newContext(r, w, "GetReplicationMRF")

	re, err := globalReplicationPool.Get().getMRF(ctx, bucketName)
	if err != nil {
		s.writeErrorResponse(w, err)
		return
	}

	enc := gob.NewEncoder(w)
	for m := range re {
		if err := enc.Encode(m); err != nil {
			s.writeErrorResponse(w, errors.New("Encoding mrf failed: "+err.Error()))
			return
		}
	}
}
// DevNull - everything goes to io.Discard
func (s *peerRESTServer) DevNull(w http.ResponseWriter, r *http.Request) {
	if !s.IsValid(w, r) {
		s.writeErrorResponse(w, errors.New("invalid request"))
		return
	}

	globalNetPerfRX.Connect()
	defer globalNetPerfRX.Disconnect()

	connectTime := time.Now()
	ctx := newContext(r, w, "DevNull")
	for {
		n, err := io.CopyN(xioutil.Discard, r.Body, 128*humanize.KiByte)
		atomic.AddUint64(&globalNetPerfRX.RX, uint64(n))
		if err != nil && err != io.EOF {
			// A disconnection before globalNetPerfMinDuration (we give a margin of error of 1 sec)
			// would mean the network is not stable. Logging here will help in debugging network issues.
			if time.Since(connectTime) < (globalNetPerfMinDuration - time.Second) {
				peersLogIf(ctx, err)
			}
		}
		if err != nil {
			break
		}
	}
}

// NetSpeedTestHandler - performs network speedtest
func (s *peerRESTServer) NetSpeedTestHandler(w http.ResponseWriter, r *http.Request) {
	if !s.IsValid(w, r) {
		s.writeErrorResponse(w, errors.New("invalid request"))
		return
	}

	durationStr := r.Form.Get(peerRESTDuration)
	duration, err := time.ParseDuration(durationStr)
	if err != nil || duration.Seconds() == 0 {
		duration = time.Second * 10
	}
	result := netperf(r.Context(), duration.Round(time.Second))
	peersLogIf(r.Context(), gob.NewEncoder(w).Encode(result))
}
func (s *peerRESTServer) HealBucketHandler(mss *grid.MSS) (np *grid.JSON[madmin.HealResultItem], nerr *grid.RemoteErr) {
	bucket := mss.Get(peerS3Bucket)
	if isMinioMetaBucket(bucket) {
		return np, grid.NewRemoteErr(errInvalidArgument)
	}

	bucketDeleted := mss.Get(peerS3BucketDeleted) == "true"
	res, err := healBucketLocal(context.Background(), bucket, madmin.HealOpts{
		Remove: bucketDeleted,
	})
	if err != nil {
		return np, grid.NewRemoteErr(err)
	}

	return madminHealResultItem.NewJSONWith(&res), nil
}

func (s *peerRESTServer) ListBucketsHandler(opts *BucketOptions) (*grid.Array[*BucketInfo], *grid.RemoteErr) {
	buckets, err := listBucketsLocal(context.Background(), *opts)
	if err != nil {
		return nil, grid.NewRemoteErr(err)
	}
	res := aoBucketInfo.New()
	for i := range buckets {
		bucket := buckets[i]
		res.Append(&bucket)
	}
	return res, nil
}

// HeadBucketHandler implements peer BucketInfo call, returns bucket create date.
func (s *peerRESTServer) HeadBucketHandler(mss *grid.MSS) (info *VolInfo, nerr *grid.RemoteErr) {
	bucket := mss.Get(peerS3Bucket)
	if isMinioMetaBucket(bucket) {
		return info, grid.NewRemoteErr(errInvalidArgument)
	}

	bucketDeleted := mss.Get(peerS3BucketDeleted) == "true"
	bucketInfo, err := getBucketInfoLocal(context.Background(), bucket, BucketOptions{
		Deleted: bucketDeleted,
	})
	if err != nil {
		return info, grid.NewRemoteErr(err)
	}

	return &VolInfo{
		Name:    bucketInfo.Name,
		Created: bucketInfo.Created,
		Deleted: bucketInfo.Deleted, // needed for site replication
	}, nil
}

// DeleteBucketHandler implements peer delete bucket call.
func (s *peerRESTServer) DeleteBucketHandler(mss *grid.MSS) (np grid.NoPayload, nerr *grid.RemoteErr) {
	bucket := mss.Get(peerS3Bucket)
	if isMinioMetaBucket(bucket) {
		return np, grid.NewRemoteErr(errInvalidArgument)
	}

	forceDelete := mss.Get(peerS3BucketForceDelete) == "true"

	err := deleteBucketLocal(context.Background(), bucket, DeleteBucketOptions{
		Force: forceDelete,
	})
	if err != nil {
		return np, grid.NewRemoteErr(err)
	}

	return np, nil
}

// MakeBucketHandler implements peer create bucket call.
func (s *peerRESTServer) MakeBucketHandler(mss *grid.MSS) (np grid.NoPayload, nerr *grid.RemoteErr) {
	bucket := mss.Get(peerS3Bucket)
	if isMinioMetaBucket(bucket) {
		return np, grid.NewRemoteErr(errInvalidArgument)
	}

	forceCreate := mss.Get(peerS3BucketForceCreate) == "true"

	err := makeBucketLocal(context.Background(), bucket, MakeBucketOptions{
		ForceCreate: forceCreate,
	})
	if err != nil {
		return np, grid.NewRemoteErr(err)
	}

	return np, nil
}
// registerPeerRESTHandlers - register peer rest router.
func registerPeerRESTHandlers(router *mux.Router, gm *grid.Manager) {
	h := func(f http.HandlerFunc) http.HandlerFunc {
		return collectInternodeStats(httpTraceHdrs(f))
	}

	server := &peerRESTServer{}
	subrouter := router.PathPrefix(peerRESTPrefix).Subrouter()
	subrouter.Methods(http.MethodPost).Path(peerRESTVersionPrefix + peerRESTMethodHealth).HandlerFunc(h(server.HealthHandler))
	subrouter.Methods(http.MethodPost).Path(peerRESTVersionPrefix + peerRESTMethodVerifyBinary).HandlerFunc(h(server.VerifyBinaryHandler)).Queries(restQueries(peerRESTURL, peerRESTSha256Sum, peerRESTReleaseInfo)...)
	subrouter.Methods(http.MethodPost).Path(peerRESTVersionPrefix + peerRESTMethodCommitBinary).HandlerFunc(h(server.CommitBinaryHandler))
	subrouter.Methods(http.MethodPost).Path(peerRESTVersionPrefix + peerRESTMethodGetReplicationMRF).HandlerFunc(httpTraceHdrs(server.GetReplicationMRFHandler)).Queries(restQueries(peerRESTBucket)...)
	subrouter.Methods(http.MethodPost).Path(peerRESTVersionPrefix + peerRESTMethodStartProfiling).HandlerFunc(h(server.StartProfilingHandler)).Queries(restQueries(peerRESTProfiler)...)
	subrouter.Methods(http.MethodPost).Path(peerRESTVersionPrefix + peerRESTMethodDownloadProfilingData).HandlerFunc(h(server.DownloadProfilingDataHandler))
	subrouter.Methods(http.MethodPost).Path(peerRESTVersionPrefix + peerRESTMethodSpeedTest).HandlerFunc(h(server.SpeedTestHandler))
	subrouter.Methods(http.MethodPost).Path(peerRESTVersionPrefix + peerRESTMethodDriveSpeedTest).HandlerFunc(h(server.DriveSpeedTestHandler))
	subrouter.Methods(http.MethodPost).Path(peerRESTVersionPrefix + peerRESTMethodNetperf).HandlerFunc(h(server.NetSpeedTestHandler))
	subrouter.Methods(http.MethodPost).Path(peerRESTVersionPrefix + peerRESTMethodDevNull).HandlerFunc(h(server.DevNull))

	logger.FatalIf(consoleLogRPC.RegisterNoInput(gm, server.ConsoleLogHandler), "unable to register handler")
	logger.FatalIf(deleteBucketMetadataRPC.Register(gm, server.DeleteBucketMetadataHandler), "unable to register handler")
	logger.FatalIf(deleteBucketRPC.Register(gm, server.DeleteBucketHandler), "unable to register handler")
	logger.FatalIf(deletePolicyRPC.Register(gm, server.DeletePolicyHandler), "unable to register handler")
	logger.FatalIf(deleteSvcActRPC.Register(gm, server.DeleteServiceAccountHandler), "unable to register handler")
	logger.FatalIf(deleteUserRPC.Register(gm, server.DeleteUserHandler), "unable to register handler")
	logger.FatalIf(getAllBucketStatsRPC.Register(gm, server.GetAllBucketStatsHandler), "unable to register handler")
	logger.FatalIf(getBackgroundHealStatusRPC.Register(gm, server.BackgroundHealStatusHandler), "unable to register handler")
	logger.FatalIf(getBandwidthRPC.Register(gm, server.GetBandwidth), "unable to register handler")
	logger.FatalIf(getBucketStatsRPC.Register(gm, server.GetBucketStatsHandler), "unable to register handler")
	logger.FatalIf(getCPUsHandler.Register(gm, server.GetCPUsHandler), "unable to register handler")
	logger.FatalIf(getLastDayTierStatsRPC.Register(gm, server.GetLastDayTierStatsHandler), "unable to register handler")
  1192. logger.FatalIf(getLocksRPC.Register(gm, server.GetLocksHandler), "unable to register handler")
  1193. logger.FatalIf(getMemInfoRPC.Register(gm, server.GetMemInfoHandler), "unable to register handler")
  1194. logger.FatalIf(getMetacacheListingRPC.Register(gm, server.GetMetacacheListingHandler), "unable to register handler")
  1195. logger.FatalIf(getMetricsRPC.Register(gm, server.GetMetricsHandler), "unable to register handler")
  1196. logger.FatalIf(getNetInfoRPC.Register(gm, server.GetNetInfoHandler), "unable to register handler")
  1197. logger.FatalIf(getOSInfoRPC.Register(gm, server.GetOSInfoHandler), "unable to register handler")
  1198. logger.FatalIf(getPartitionsRPC.Register(gm, server.GetPartitionsHandler), "unable to register handler")
  1199. logger.FatalIf(getPeerBucketMetricsRPC.Register(gm, server.GetPeerBucketMetrics), "unable to register handler")
  1200. logger.FatalIf(getPeerMetricsRPC.Register(gm, server.GetPeerMetrics), "unable to register handler")
  1201. logger.FatalIf(getProcInfoRPC.Register(gm, server.GetProcInfoHandler), "unable to register handler")
  1202. logger.FatalIf(getResourceMetricsRPC.Register(gm, server.GetResourceMetrics), "unable to register handler")
  1203. logger.FatalIf(getSRMetricsRPC.Register(gm, server.GetSRMetricsHandler), "unable to register handler")
  1204. logger.FatalIf(getSysConfigRPC.Register(gm, server.GetSysConfigHandler), "unable to register handler")
  1205. logger.FatalIf(getSysErrorsRPC.Register(gm, server.GetSysErrorsHandler), "unable to register handler")
  1206. logger.FatalIf(getSysServicesRPC.Register(gm, server.GetSysServicesHandler), "unable to register handler")
  1207. logger.FatalIf(headBucketRPC.Register(gm, server.HeadBucketHandler), "unable to register handler")
  1208. logger.FatalIf(healBucketRPC.Register(gm, server.HealBucketHandler), "unable to register handler")
  1209. logger.FatalIf(listBucketsRPC.Register(gm, server.ListBucketsHandler), "unable to register handler")
  1210. logger.FatalIf(listenRPC.RegisterNoInput(gm, server.ListenHandler), "unable to register handler")
  1211. logger.FatalIf(loadBucketMetadataRPC.Register(gm, server.LoadBucketMetadataHandler), "unable to register handler")
  1212. logger.FatalIf(loadGroupRPC.Register(gm, server.LoadGroupHandler), "unable to register handler")
  1213. logger.FatalIf(loadPolicyMappingRPC.Register(gm, server.LoadPolicyMappingHandler), "unable to register handler")
  1214. logger.FatalIf(loadPolicyRPC.Register(gm, server.LoadPolicyHandler), "unable to register handler")
  1215. logger.FatalIf(loadRebalanceMetaRPC.Register(gm, server.LoadRebalanceMetaHandler), "unable to register handler")
  1216. logger.FatalIf(loadSvcActRPC.Register(gm, server.LoadServiceAccountHandler), "unable to register handler")
  1217. logger.FatalIf(loadTransitionTierConfigRPC.Register(gm, server.LoadTransitionTierConfigHandler), "unable to register handler")
  1218. logger.FatalIf(loadUserRPC.Register(gm, server.LoadUserHandler), "unable to register handler")
  1219. logger.FatalIf(localStorageInfoRPC.Register(gm, server.LocalStorageInfoHandler), "unable to register handler")
  1220. logger.FatalIf(makeBucketRPC.Register(gm, server.MakeBucketHandler), "unable to register handler")
  1221. logger.FatalIf(reloadPoolMetaRPC.Register(gm, server.ReloadPoolMetaHandler), "unable to register handler")
  1222. logger.FatalIf(reloadSiteReplicationConfigRPC.Register(gm, server.ReloadSiteReplicationConfigHandler), "unable to register handler")
  1223. logger.FatalIf(serverInfoRPC.Register(gm, server.ServerInfoHandler), "unable to register handler")
  1224. logger.FatalIf(signalServiceRPC.Register(gm, server.SignalServiceHandler), "unable to register handler")
  1225. logger.FatalIf(stopRebalanceRPC.Register(gm, server.StopRebalanceHandler), "unable to register handler")
  1226. logger.FatalIf(updateMetacacheListingRPC.Register(gm, server.UpdateMetacacheListingHandler), "unable to register handler")
  1227. logger.FatalIf(cleanupUploadIDCacheMetaRPC.Register(gm, server.HandlerClearUploadID), "unable to register handler")
  1228. logger.FatalIf(gm.RegisterStreamingHandler(grid.HandlerTrace, grid.StreamHandler{
  1229. Handle: server.TraceHandler,
  1230. Subroute: "",
  1231. OutCapacity: 100000,
  1232. InCapacity: 0,
  1233. }), "unable to register handler")
  1234. }