diff --git a/.gitignore b/.gitignore
index 7fd11c11..0d8e7b86 100644
--- a/.gitignore
+++ b/.gitignore
@@ -63,16 +63,6 @@ schema.json
docs/logs/
logs/
dist/
-build/
-
-# --- allow SDK docs build/ dirs (override rule above) ---
-!docs/sdk/v0.53/build/
-!docs/sdk/v0.53/build/**
-!docs/sdk/v0.50/build/
-!docs/sdk/v0.50/build/**
-!docs/sdk/v0.47/build/
-!docs/sdk/v0.47/build/**
-# --------------------------------------------------------
.yarn/
# Keep most scripts ignored, but include versioning tools
diff --git a/.prettierignore b/.prettierignore
new file mode 100644
index 00000000..aa54b0bd
--- /dev/null
+++ b/.prettierignore
@@ -0,0 +1,10 @@
+# Mintlify MDX files - Prettier breaks MDX component formatting
+**/*.mdx
+
+# Node modules
+node_modules/
+
+# Build output
+.next/
+dist/
+build/
diff --git a/assets/images-for-sdk-next/build/architecture/bankv2.png b/assets/images-for-sdk-next/build/architecture/bankv2.png
new file mode 100644
index 00000000..4123dbf5
Binary files /dev/null and b/assets/images-for-sdk-next/build/architecture/bankv2.png differ
diff --git a/assets/images-for-sdk-next/build/building-modules/transaction_flow.svg b/assets/images-for-sdk-next/build/building-modules/transaction_flow.svg
new file mode 100644
index 00000000..93bb940a
--- /dev/null
+++ b/assets/images-for-sdk-next/build/building-modules/transaction_flow.svg
@@ -0,0 +1,48 @@
+
\ No newline at end of file
diff --git a/assets/images-for-sdk-next/learn/advanced/baseapp_state-begin_block.png b/assets/images-for-sdk-next/learn/advanced/baseapp_state-begin_block.png
new file mode 100644
index 00000000..745d4a5a
Binary files /dev/null and b/assets/images-for-sdk-next/learn/advanced/baseapp_state-begin_block.png differ
diff --git a/assets/images-for-sdk-next/learn/advanced/baseapp_state-checktx.png b/assets/images-for-sdk-next/learn/advanced/baseapp_state-checktx.png
new file mode 100644
index 00000000..38b217ac
Binary files /dev/null and b/assets/images-for-sdk-next/learn/advanced/baseapp_state-checktx.png differ
diff --git a/assets/images-for-sdk-next/learn/advanced/baseapp_state-commit.png b/assets/images-for-sdk-next/learn/advanced/baseapp_state-commit.png
new file mode 100644
index 00000000..b23c7312
Binary files /dev/null and b/assets/images-for-sdk-next/learn/advanced/baseapp_state-commit.png differ
diff --git a/assets/images-for-sdk-next/learn/advanced/baseapp_state-deliver_tx.png b/assets/images-for-sdk-next/learn/advanced/baseapp_state-deliver_tx.png
new file mode 100644
index 00000000..f0a54b4e
Binary files /dev/null and b/assets/images-for-sdk-next/learn/advanced/baseapp_state-deliver_tx.png differ
diff --git a/assets/images-for-sdk-next/learn/advanced/baseapp_state-initchain.png b/assets/images-for-sdk-next/learn/advanced/baseapp_state-initchain.png
new file mode 100644
index 00000000..167b4fad
Binary files /dev/null and b/assets/images-for-sdk-next/learn/advanced/baseapp_state-initchain.png differ
diff --git a/assets/images-for-sdk-next/learn/advanced/baseapp_state-prepareproposal.png b/assets/images-for-sdk-next/learn/advanced/baseapp_state-prepareproposal.png
new file mode 100644
index 00000000..146e804b
Binary files /dev/null and b/assets/images-for-sdk-next/learn/advanced/baseapp_state-prepareproposal.png differ
diff --git a/assets/images-for-sdk-next/learn/advanced/baseapp_state-processproposal.png b/assets/images-for-sdk-next/learn/advanced/baseapp_state-processproposal.png
new file mode 100644
index 00000000..fb601237
Binary files /dev/null and b/assets/images-for-sdk-next/learn/advanced/baseapp_state-processproposal.png differ
diff --git a/assets/images-for-sdk-next/learn/advanced/baseapp_state.png b/assets/images-for-sdk-next/learn/advanced/baseapp_state.png
new file mode 100644
index 00000000..5cf54fdb
Binary files /dev/null and b/assets/images-for-sdk-next/learn/advanced/baseapp_state.png differ
diff --git a/assets/images-for-sdk-next/learn/advanced/blockprocessing-1.png b/assets/images-for-sdk-next/learn/advanced/blockprocessing-1.png
new file mode 100644
index 00000000..d4167f33
Binary files /dev/null and b/assets/images-for-sdk-next/learn/advanced/blockprocessing-1.png differ
diff --git a/assets/images-for-sdk-next/learn/intro/main-components.png b/assets/images-for-sdk-next/learn/intro/main-components.png
new file mode 100644
index 00000000..fa82eb9b
Binary files /dev/null and b/assets/images-for-sdk-next/learn/intro/main-components.png differ
diff --git a/copy-of-sdk-docs/build/_category_.json b/copy-of-sdk-docs/build/_category_.json
new file mode 100644
index 00000000..9f308823
--- /dev/null
+++ b/copy-of-sdk-docs/build/_category_.json
@@ -0,0 +1,5 @@
+{
+ "label": "Build",
+ "position": 0,
+ "link": null
+}
\ No newline at end of file
diff --git a/copy-of-sdk-docs/build/abci/00-introduction.md b/copy-of-sdk-docs/build/abci/00-introduction.md
new file mode 100644
index 00000000..fa648be0
--- /dev/null
+++ b/copy-of-sdk-docs/build/abci/00-introduction.md
@@ -0,0 +1,51 @@
+# Introduction
+
+## What is ABCI?
+
+ABCI (Application Blockchain Interface) is the interface between CometBFT and the application. More information about ABCI can be found [here](https://docs.cometbft.com/v0.38/spec/abci/). CometBFT version 0.38 included a new version of ABCI (called ABCI 2.0), which added several new methods.
+
+The 5 methods introduced in ABCI 2.0 are:
+
+* `PrepareProposal`
+* `ProcessProposal`
+* `ExtendVote`
+* `VerifyVoteExtension`
+* `FinalizeBlock`
+
+
+## The Flow
+
+The sections below walk through each of these methods in the order they occur during a block's lifecycle.
+
+## PrepareProposal
+
+Based on validator voting power, CometBFT chooses a block proposer and calls `PrepareProposal` on the block proposer's application (Cosmos SDK). The selected block proposer is responsible for collecting outstanding transactions from the mempool, adhering to the application's specifications. The application can enforce custom transaction ordering and incorporate additional transactions, potentially generated from vote extensions in the previous block.
+
+To perform this manipulation on the application side, a custom handler must be implemented. By default, the Cosmos SDK provides a `PrepareProposalHandler`, used in conjunction with an application-specific mempool. A custom handler can be written by an application developer; if a no-op handler is provided, all transactions are considered valid.
+
+Please note that vote extensions are only available starting at the height after the one at which they are enabled. More information about vote extensions can be found [here](https://docs.cosmos.network/main/build/abci/vote-extensions).
+
+After creating the proposal, the proposer returns it to CometBFT.
+
+`PrepareProposal` CAN be non-deterministic.
+
+## ProcessProposal
+
+This method allows validators to perform application-specific checks on the block proposal and is called on all validators. This is an important step in the consensus process, as it ensures that the block is valid and meets the requirements of the application. For example, validators could check that the block contains all the required transactions or that the block does not create any invalid state transitions.
+
+The implementation of `ProcessProposal` MUST be deterministic.
+
+## ExtendVote and VerifyVoteExtensions
+
+These methods allow applications to extend the voting process by requiring validators to perform additional actions beyond simply validating blocks.
+
+If vote extensions are enabled, `ExtendVote` will be called on every validator, and each one will return its vote extension, which in practice is an arbitrary byte slice. As mentioned above, this data (the vote extension) can only be retrieved at the next block height, during `PrepareProposal`. The data can be arbitrary, but in the provided tutorials it serves as an oracle or a proof of transactions in the mempool. Essentially, vote extensions are processed and injected as transactions. Examples of use cases for vote extensions include prices for a price oracle or encryption shares for an encrypted transaction mempool. `ExtendVote` CAN be non-deterministic.
+
+`VerifyVoteExtensions` is called on every validator, potentially multiple times, to verify other validators' vote extensions. This check validates the integrity and validity of vote extensions, preventing malicious or invalid extensions from being accepted.
+
+Additionally, applications must keep vote extension data concise, as large vote extensions can degrade the performance of the chain; see the testing results [here](https://docs.cometbft.com/v0.38/qa/cometbft-qa-38#vote-extensions-testbed).
+
+`VerifyVoteExtensions` MUST be deterministic.
+
+
+## FinalizeBlock
+
+`FinalizeBlock` is then called and is responsible for updating the state of the blockchain and making the block available to users.
diff --git a/copy-of-sdk-docs/build/abci/01-prepare-proposal.md b/copy-of-sdk-docs/build/abci/01-prepare-proposal.md
new file mode 100644
index 00000000..b1c6eb8a
--- /dev/null
+++ b/copy-of-sdk-docs/build/abci/01-prepare-proposal.md
@@ -0,0 +1,45 @@
+# Prepare Proposal
+
+`PrepareProposal` handles construction of the block, meaning that when a proposer
+is preparing to propose a block, it requests the application to evaluate a
+`RequestPrepareProposal`, which contains a series of transactions from CometBFT's
+mempool. At this point, the application has complete control over the proposal.
+It can modify, delete, and inject transactions from its own app-side mempool into
+the proposal or even ignore all the transactions altogether. What the application
+does with the transactions provided to it by `RequestPrepareProposal` has no
+effect on CometBFT's mempool.
+
+Note that the application defines the semantics of `PrepareProposal`: it
+MAY be non-deterministic, and it is only executed by the current block proposer.
+
+Now, reading "mempool" twice in the previous paragraph can be confusing, so let's break it down.
+CometBFT has a mempool that handles gossiping transactions to other nodes
+in the network. The order of these transactions is determined by CometBFT's mempool,
+which uses FIFO as its sole ordering mechanism. (The priority mempool in
+CometBFT has been deprecated and removed.)
+However, since the application is able to fully inspect
+all transactions, it can provide greater control over transaction ordering.
+Allowing the application to handle ordering enables the application to define how
+it would like the block constructed.
+
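To make the distinction concrete, here is a self-contained sketch, with no SDK dependencies, using a hypothetical `tx` type whose `sequence` and `fee` fields stand in for real transaction metadata. It contrasts CometBFT's FIFO ordering with an application-side reordering by fee:

```go
package main

import (
	"fmt"
	"sort"
)

// tx is a hypothetical, simplified stand-in for an SDK transaction.
type tx struct {
	sequence int   // arrival order in CometBFT's FIFO mempool
	fee      int64 // fee the application may choose to order by
	payload  string
}

// fifoOrder returns transactions in CometBFT's arrival (FIFO) order.
func fifoOrder(txs []tx) []tx {
	out := append([]tx(nil), txs...)
	sort.SliceStable(out, func(i, j int) bool { return out[i].sequence < out[j].sequence })
	return out
}

// appOrder shows how an application-side mempool could reorder the same
// transactions, here by descending fee.
func appOrder(txs []tx) []tx {
	out := append([]tx(nil), txs...)
	sort.SliceStable(out, func(i, j int) bool { return out[i].fee > out[j].fee })
	return out
}

func main() {
	txs := []tx{{0, 10, "a"}, {1, 50, "b"}, {2, 25, "c"}}
	fmt.Println(fifoOrder(txs)[0].payload) // "a": FIFO keeps arrival order
	fmt.Println(appOrder(txs)[0].payload)  // "b": highest fee first
}
```

Either ordering (or any other) can be exposed to `PrepareProposal` through an app-side mempool; the point is that the ordering policy belongs to the application, not CometBFT.
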
+The Cosmos SDK defines the `DefaultProposalHandler` type, which provides applications with
+`PrepareProposal` and `ProcessProposal` handlers. If you decide to implement your
+own `PrepareProposal` handler, you must ensure that the transactions
+selected DO NOT exceed the maximum block gas (if set) and the maximum bytes provided
+by `req.MaxBytes`.
+
+```go reference
+https://github.com/cosmos/cosmos-sdk/blob/v0.53.0/baseapp/abci_utils.go
+```
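As a rough illustration of the byte-limit constraint, the sketch below (plain Go, no SDK types; `selectTxs` is a hypothetical helper standing in for a handler's selection loop) greedily adds transactions while the running total stays within a budget mirroring `req.MaxBytes`:

```go
package main

import "fmt"

// selectTxs mimics, in simplified form, the constraint a custom
// PrepareProposal handler must respect: keep adding transactions while the
// running total stays within maxBytes, skipping any transaction that would
// push the proposal over the limit.
func selectTxs(txs [][]byte, maxBytes int64) [][]byte {
	var selected [][]byte
	var total int64
	for _, tx := range txs {
		if total+int64(len(tx)) > maxBytes {
			continue // this tx would overflow the proposal; skip it
		}
		total += int64(len(tx))
		selected = append(selected, tx)
	}
	return selected
}

func main() {
	txs := [][]byte{make([]byte, 40), make([]byte, 30), make([]byte, 40)}
	fmt.Println(len(selectTxs(txs, 80))) // 2: the 40- and 30-byte txs fit
}
```

A real handler would also track cumulative gas against the maximum block gas in the same loop.
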
+
+This default implementation can be overridden by the application developer in
+favor of a custom implementation in [`app_di.go`](../building-apps/01-app-go-di.md):
+
+```go
+prepareOpt := func(app *baseapp.BaseApp) {
+ abciPropHandler := baseapp.NewDefaultProposalHandler(mempool, app)
+ app.SetPrepareProposal(abciPropHandler.PrepareProposalHandler())
+}
+
+baseAppOptions = append(baseAppOptions, prepareOpt)
+```
diff --git a/copy-of-sdk-docs/build/abci/02-process-proposal.md b/copy-of-sdk-docs/build/abci/02-process-proposal.md
new file mode 100644
index 00000000..221aa66d
--- /dev/null
+++ b/copy-of-sdk-docs/build/abci/02-process-proposal.md
@@ -0,0 +1,32 @@
+# Process Proposal
+
+`ProcessProposal` handles the validation of a proposal from `PrepareProposal`,
+which also includes a block header. After a block has been proposed,
+the other validators have the right to accept or reject that block. In the
+default implementation of `ProcessProposal`, basic validity checks are run on each
+transaction.
+
+Note, `ProcessProposal` MUST be deterministic. Non-deterministic behaviors will cause apphash mismatches.
+This means that if `ProcessProposal` panics or fails and we reject, all honest validator
+processes should reject (i.e., prevote nil). If so, CometBFT will start a new round with a new block proposal and the same cycle will happen with `PrepareProposal`
+and `ProcessProposal` for the new proposal.
+
+Here is the default implementation:
+
+```go reference
+https://github.com/cosmos/cosmos-sdk/blob/v0.53.0/baseapp/abci_utils.go#L219-L226
+```
+
+Like `PrepareProposal`, this implementation is the default and can be modified by
+the application developer in [`app_di.go`](../building-apps/01-app-go-di.md). If you decide to implement
+your own `ProcessProposal` handler, you must ensure that the transactions
+provided in the proposal DO NOT exceed the maximum block gas (if set) and the maximum bytes provided by `req.MaxBytes`.
+
+```go
+processOpt := func(app *baseapp.BaseApp) {
+ abciPropHandler := baseapp.NewDefaultProposalHandler(mempool, app)
+ app.SetProcessProposal(abciPropHandler.ProcessProposalHandler())
+}
+
+baseAppOptions = append(baseAppOptions, processOpt)
+```
diff --git a/copy-of-sdk-docs/build/abci/03-vote-extensions.md b/copy-of-sdk-docs/build/abci/03-vote-extensions.md
new file mode 100644
index 00000000..a57395e3
--- /dev/null
+++ b/copy-of-sdk-docs/build/abci/03-vote-extensions.md
@@ -0,0 +1,122 @@
+# Vote Extensions
+
+:::note Synopsis
+This section describes how the application can define and use vote extensions
+defined in ABCI++.
+:::
+
+## Extend Vote
+
+ABCI 2.0 (colloquially called ABCI++) allows an application to extend a pre-commit vote with arbitrary data. This process does NOT have to be deterministic, and the data returned can be unique to the
+validator process. The Cosmos SDK defines [`baseapp.ExtendVoteHandler`](https://github.com/cosmos/cosmos-sdk/blob/v0.53.0/types/abci.go#L32):
+
+```go
+type ExtendVoteHandler func(Context, *abci.ExtendVoteRequest) (*abci.ExtendVoteResponse, error)
+```
+
+An application can set this handler in `app.go` via the `baseapp.SetExtendVoteHandler`
+`BaseApp` option function. The `sdk.ExtendVoteHandler`, if defined, is called during
+the `ExtendVote` ABCI method. Note, if an application decides to implement
+`baseapp.ExtendVoteHandler`, it MUST return a non-nil `VoteExtension`. However, the vote
+extension can be empty. See [here](https://github.com/cometbft/cometbft/blob/v0.38.0-rc1/spec/abci/abci++_methods.md#extendvote)
+for more details.
+
+There are many decentralized, censorship-resistant use cases for vote extensions.
+For example, a validator may want to submit prices for a price oracle or encryption
+shares for an encrypted transaction mempool. Note, an application should be careful
+to consider the size of the vote extensions as they could increase latency in block
+production. See [here](https://github.com/cometbft/cometbft/blob/v0.38.0-rc1/docs/qa/CometBFT-QA-38.md#vote-extensions-testbed)
+for more details.
+
+Click [here](https://docs.cosmos.network/main/build/abci/vote-extensions) if you would like a walkthrough of how to implement vote extensions.
+
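As a minimal sketch of what an extend-vote implementation might serialize, assume a hypothetical price oracle that encodes a micro-denominated price as the opaque extension bytes (plain Go, no SDK or ABCI types; the encoding scheme is purely illustrative):

```go
package main

import (
	"encoding/binary"
	"fmt"
)

// encodePriceExtension sketches what the body of a custom ExtendVoteHandler
// could do: serialize validator-local data (here, a hypothetical oracle price
// in micro-units) into the opaque byte slice returned as the vote extension.
func encodePriceExtension(microPrice uint64) []byte {
	ext := make([]byte, 8)
	binary.BigEndian.PutUint64(ext, microPrice)
	return ext
}

// decodePriceExtension recovers the price on the consuming side (for example
// during PrepareProposal at the next height).
func decodePriceExtension(ext []byte) (uint64, bool) {
	if len(ext) != 8 {
		return 0, false // not a well-formed price extension
	}
	return binary.BigEndian.Uint64(ext), true
}

func main() {
	ext := encodePriceExtension(42_000_000)
	price, ok := decodePriceExtension(ext)
	fmt.Println(price, ok) // 42000000 true
}
```

The returned bytes would be set on the response's `VoteExtension` field; CometBFT treats them as opaque.
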
+
+## Verify Vote Extension
+
+Similar to extending a vote, an application can also verify vote extensions from
+other validators when validating their pre-commits. For a given vote extension,
+this process MUST be deterministic. The Cosmos SDK defines [`sdk.VerifyVoteExtensionHandler`](https://github.com/cosmos/cosmos-sdk/blob/v0.50.1/types/abci.go#L29-L31):
+
+```go
+type VerifyVoteExtensionHandler func(Context, *abci.VerifyVoteExtensionRequest) (*abci.VerifyVoteExtensionResponse, error)
+```
+
+An application can set this handler in `app.go` via the `baseapp.SetVerifyVoteExtensionHandler`
+`BaseApp` option function. The `sdk.VerifyVoteExtensionHandler`, if defined, is called
+during the `VerifyVoteExtension` ABCI method. If an application defines a vote
+extension handler, it should also define a verification handler. Note, not all
+validators will share the same view of what vote extensions they verify depending
+on how votes are propagated. See [here](https://github.com/cometbft/cometbft/blob/v0.38.0-rc1/spec/abci/abci++_methods.md#verifyvoteextension)
+for more details.
+
+Additionally, please keep in mind that performance can degrade if vote extensions are too big (see the [testing results](https://docs.cometbft.com/v0.38/qa/cometbft-qa-38#vote-extensions-testbed)), so we highly recommend validating the size of vote extensions in `VerifyVoteExtension`.
+
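A minimal sketch of such a size validation, assuming a hypothetical per-chain limit (the real limit and error handling are application-specific):

```go
package main

import (
	"errors"
	"fmt"
)

const maxVoteExtensionSize = 1024 // hypothetical per-chain limit, in bytes

// validateVoteExtensionSize performs the cheap size check recommended above:
// reject oversized extensions before doing any more expensive verification.
func validateVoteExtensionSize(ext []byte) error {
	if len(ext) > maxVoteExtensionSize {
		return errors.New("vote extension exceeds size limit")
	}
	return nil
}

func main() {
	fmt.Println(validateVoteExtensionSize(make([]byte, 512)))  // nil (accepted)
	fmt.Println(validateVoteExtensionSize(make([]byte, 2048))) // error (rejected)
}
```

A verify handler would run this check first and only then verify the extension's contents.
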
+
+## Vote Extension Propagation
+
+The agreed-upon vote extensions at height `H` are provided to the proposing validator
+at height `H+1` during `PrepareProposal`. As a result, the vote extensions are
+not natively provided or exposed to the remaining validators during `ProcessProposal`.
+Therefore, if an application requires that the agreed-upon vote extensions from
+height `H` are available to all validators at `H+1`, the application must propagate
+these vote extensions manually in the block proposal itself. This can be done by
+"injecting" them into the block proposal, since the `Txs` field in `PrepareProposal`
+is just a slice of byte slices.
+
+`FinalizeBlock` will ignore any byte slice that doesn't decode into an `sdk.Tx`, so
+any injected vote extensions will safely be ignored in `FinalizeBlock`. For more
+details on propagation, see the [ABCI++ 2.0 ADR](https://github.com/cosmos/cosmos-sdk/blob/main/docs/architecture/adr-064-abci-2.0.md#vote-extension-propagation--verification).
+
+### Recovery of injected Vote Extensions
+
+As stated before, vote extensions can be injected into a block proposal (along with
+other transactions in the `Txs` field). The Cosmos SDK provides a pre-FinalizeBlock
+hook to allow applications to recover vote extensions, perform any necessary
+computation on them, and then store the results in the cached store. These results
+will be available to the application during the subsequent `FinalizeBlock` call.
+
+An example of what a pre-FinalizeBlock hook could look like is shown below:
+
+```go
+app.SetPreBlocker(func(ctx sdk.Context, req *abci.FinalizeBlockRequest) error {
+ allVEs := []VE{} // store all parsed vote extensions here
+ for _, tx := range req.Txs {
+ // define a custom function that tries to parse the tx as a vote extension
+ ve, ok := parseVoteExtension(tx)
+ if !ok {
+ continue
+ }
+
+ allVEs = append(allVEs, ve)
+ }
+
+ // perform any necessary computation on the vote extensions and store the result
+ // in the cached store
+ result := compute(allVEs)
+ err := storeVEResult(ctx, result)
+ if err != nil {
+ return err
+ }
+
+ return nil
+})
+```
+
+Then, in an app's module, the application can retrieve the result of the computation
+of vote extensions from the cached store:
+
+```go
+func (k Keeper) BeginBlocker(ctx context.Context) error {
+ // retrieve the result of the computation of vote extensions from the cached store
+ result, err := k.GetVEResult(ctx)
+ if err != nil {
+ return err
+ }
+
+ // use the result of the computation of vote extensions
+ k.setSomething(result)
+
+ return nil
+}
+```
diff --git a/copy-of-sdk-docs/build/abci/04-checktx.md b/copy-of-sdk-docs/build/abci/04-checktx.md
new file mode 100644
index 00000000..081d6fd2
--- /dev/null
+++ b/copy-of-sdk-docs/build/abci/04-checktx.md
@@ -0,0 +1,50 @@
+# CheckTx
+
+`CheckTx` is called by the `BaseApp` when CometBFT receives a transaction from a client, over the p2p network or via RPC. The `CheckTx` method is responsible for validating the transaction and returning an error if the transaction is invalid.
+
+```mermaid
+graph TD
+ subgraph SDK[Cosmos SDK]
+ B[Baseapp]
+ A[AnteHandlers]
+ B <-->|Validate TX| A
+ end
+ C[CometBFT] <-->|CheckTx|SDK
+ U((User)) -->|Submit TX| C
+ N[P2P] -->|Receive TX| C
+```
+
+```go reference
+https://github.com/cosmos/cosmos-sdk/blob/31c604762a434c7b676b6a89897ecbd7c4653a23/baseapp/abci.go#L350-L390
+```
+
+## CheckTx Handler
+
+`CheckTxHandler` allows users to extend the logic of `CheckTx`. `CheckTxHandler` is called by passing context and the transaction bytes received through ABCI. It is required that the handler returns deterministic results given the same transaction bytes.
+
+:::note
+We return the decoded transaction here to avoid decoding it twice.
+:::
+
+```go
+type CheckTxHandler func(ctx sdk.Context, tx []byte) (Tx, error)
+```
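To illustrate the contract (a deterministic result for the same transaction bytes, returning the decoded transaction so it need not be decoded again), here is a self-contained sketch in which a hypothetical `decodedTx` type and size limit stand in for an application's real codec and rules:

```go
package main

import (
	"errors"
	"fmt"
)

// decodedTx is a hypothetical stand-in for an application's transaction type.
type decodedTx struct {
	body string
}

// checkTxSketch models the shape of a custom CheckTx handler: given raw
// transaction bytes, it deterministically returns either the decoded
// transaction or an error. The checks below are illustrative only.
func checkTxSketch(txBytes []byte) (decodedTx, error) {
	if len(txBytes) == 0 {
		return decodedTx{}, errors.New("empty transaction")
	}
	if len(txBytes) > 1<<20 {
		return decodedTx{}, errors.New("transaction too large")
	}
	// "decode" the tx and return it so callers don't decode it twice
	return decodedTx{body: string(txBytes)}, nil
}

func main() {
	tx, err := checkTxSketch([]byte("send 1atom"))
	fmt.Println(tx.body, err)
}
```

Because the result depends only on the input bytes, calling the handler twice with the same bytes always yields the same outcome, which is the determinism requirement stated above.
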
+
+Setting a custom `CheckTxHandler` is optional. It can be done from your app.go file:
+
+```go
+func NewSimApp(
+ logger log.Logger,
+ db corestore.KVStoreWithBatch,
+ traceStore io.Writer,
+ loadLatest bool,
+ appOpts servertypes.AppOptions,
+ baseAppOptions ...func(*baseapp.BaseApp),
+) *SimApp {
+ ...
+	// Create CheckTxHandler
+ checktxHandler := abci.NewCustomCheckTxHandler(...)
+ app.SetCheckTxHandler(checktxHandler)
+ ...
+}
+```
diff --git a/copy-of-sdk-docs/build/abci/_category_.json b/copy-of-sdk-docs/build/abci/_category_.json
new file mode 100644
index 00000000..d4ebb80c
--- /dev/null
+++ b/copy-of-sdk-docs/build/abci/_category_.json
@@ -0,0 +1,5 @@
+{
+ "label": "ABCI",
+ "position": 2,
+ "link": null
+}
\ No newline at end of file
diff --git a/copy-of-sdk-docs/build/architecture/PROCESS.md b/copy-of-sdk-docs/build/architecture/PROCESS.md
new file mode 100644
index 00000000..5ba1b86c
--- /dev/null
+++ b/copy-of-sdk-docs/build/architecture/PROCESS.md
@@ -0,0 +1,58 @@
+# ADR Creation Process
+
+1. Copy the `adr-template.md` file. Use the following filename pattern: `adr-next_number-title.md`
+2. Create a draft Pull Request if you want to get early feedback.
+3. Make sure the context and solution are clear and well documented.
+4. Add an entry to the list in the [README](./README.md) file.
+5. Create a Pull Request to propose a new ADR.
+
+## What is an ADR?
+
+An ADR is a document that records an implementation and design decision that may or may not have been discussed in an RFC. While an RFC is meant to replace synchronous communication in a distributed environment, an ADR is meant to document an already-made decision. An ADR doesn't carry much communication overhead because the discussion was already recorded, either in an RFC or in a synchronous discussion. If the consensus came from a synchronous discussion, then a short excerpt should be added to the ADR to explain the goals.
+
+## ADR life cycle
+
+ADR creation is an **iterative** process. Instead of having a high amount of communication overhead, an ADR is used when there is already a decision made and implementation details need to be added. The ADR should document what the collective consensus for the specific issue is and how to solve it.
+
+1. Every ADR should start with either an RFC or a discussion where consensus has been met.
+
+2. Once consensus is met, a GitHub Pull Request (PR) is created with a new document based on the `adr-template.md`.
+
+3. If a _proposed_ ADR is merged, then it should clearly document outstanding issues either in ADR document notes or in a GitHub Issue.
+
+4. The PR SHOULD always be merged. In the case of a faulty ADR, we still prefer to merge it with a _rejected_ status. The only time the ADR SHOULD NOT be merged is if the author abandons it.
+
+5. Merged ADRs SHOULD NOT be pruned.
+
+### ADR status
+
+Status has two components:
+
+```text
+{CONSENSUS STATUS} {IMPLEMENTATION STATUS}
+```
+
+IMPLEMENTATION STATUS is either `Implemented` or `Not Implemented`.
+
+#### Consensus Status
+
+```text
+DRAFT -> PROPOSED -> LAST CALL yyyy-mm-dd -> ACCEPTED | REJECTED -> SUPERSEDED by ADR-xxx
+ \ |
+ \ |
+ v v
+ ABANDONED
+```
+
+* `DRAFT`: [optional] an ADR which is a work in progress, not ready for a general review. This is to present an early work and get early feedback in a Draft Pull Request form.
+* `PROPOSED`: an ADR covering a full solution architecture and still in the review - project stakeholders haven't reached an agreement yet.
+* `LAST CALL yyyy-mm-dd`: [optional] Notify that we are close to accepting updates. Changing a status to `LAST CALL` means that social consensus (of Cosmos SDK maintainers) has been reached, and we still want to give the community time to react or analyze.
+* `ACCEPTED`: an ADR that represents a currently implemented or to be implemented architecture design.
+* `REJECTED`: an ADR can go from PROPOSED or ACCEPTED to REJECTED if project stakeholders decide so.
+* `SUPERSEDED by ADR-xxx`: an ADR that has been superseded by a new ADR.
+* `ABANDONED`: the ADR is no longer pursued by the original authors.
+
+## Language used in ADR
+
+* The context/background should be written in the present tense.
+* Avoid using the first person.
diff --git a/copy-of-sdk-docs/build/architecture/README.md b/copy-of-sdk-docs/build/architecture/README.md
new file mode 100644
index 00000000..e75d9e25
--- /dev/null
+++ b/copy-of-sdk-docs/build/architecture/README.md
@@ -0,0 +1,96 @@
+---
+sidebar_position: 1
+---
+
+# Architecture Decision Records (ADR)
+
+This is a location to record all high-level architecture decisions in the Cosmos-SDK.
+
+An Architectural Decision (**AD**) is a software design choice that addresses a functional or non-functional requirement that is architecturally significant.
+An Architecturally Significant Requirement (**ASR**) is a requirement that has a measurable effect on a software system’s architecture and quality.
+An Architectural Decision Record (**ADR**) captures a single AD, such as is often done when writing personal notes or meeting minutes; the collection of ADRs created and maintained in a project constitute its decision log. All these are within the topic of Architectural Knowledge Management (AKM).
+
+You can read more about the ADR concept in this [blog post](https://product.reverb.com/documenting-architecture-decisions-the-reverb-way-a3563bb24bd0#.78xhdix6t).
+
+## Rationale
+
+ADRs are intended to be the primary mechanism for proposing new feature designs and new processes, for collecting community input on an issue, and for documenting the design decisions.
+An ADR should provide:
+
+* Context on the relevant goals and the current state
+* Proposed changes to achieve the goals
+* Summary of pros and cons
+* References
+* Changelog
+
+Note the distinction between an ADR and a spec. The ADR provides the context, intuition, reasoning, and
+justification for a change in architecture, or for the architecture of something
+new. The spec is a much more compressed and streamlined summary of everything as
+it stands today.
+
+If recorded decisions turn out to be lacking, convene a discussion, record the new decisions here, and then modify the code to match.
+
+## Creating a new ADR
+
+Read about the [PROCESS](./PROCESS.md).
+
+### Use RFC 2119 Keywords
+
+When writing ADRs, follow the same best practices for writing RFCs. When writing RFCs, key words are used to signify the requirements in the specification. These words are often capitalized: "MUST," "MUST NOT," "REQUIRED," "SHALL," "SHALL NOT," "SHOULD," "SHOULD NOT," "RECOMMENDED," "MAY," and "OPTIONAL." They are to be interpreted as described in [RFC 2119](https://datatracker.ietf.org/doc/html/rfc2119).
+
+## ADR Table of Contents
+
+### Accepted
+
+* [ADR 002: SDK Documentation Structure](./adr-002-docs-structure.md)
+* [ADR 004: Split Denomination Keys](./adr-004-split-denomination-keys.md)
+* [ADR 006: Secret Store Replacement](./adr-006-secret-store-replacement.md)
+* [ADR 009: Evidence Module](./adr-009-evidence-module.md)
+* [ADR 010: Modular AnteHandler](./adr-010-modular-antehandler.md)
+* [ADR 019: Protocol Buffer State Encoding](./adr-019-protobuf-state-encoding.md)
+* [ADR 020: Protocol Buffer Transaction Encoding](./adr-020-protobuf-transaction-encoding.md)
+* [ADR 021: Protocol Buffer Query Encoding](./adr-021-protobuf-query-encoding.md)
+* [ADR 023: Protocol Buffer Naming and Versioning](./adr-023-protobuf-naming.md)
+* [ADR 029: Fee Grant Module](./adr-029-fee-grant-module.md)
+* [ADR 030: Message Authorization Module](./adr-030-authz-module.md)
+* [ADR 031: Protobuf Msg Services](./adr-031-msg-service.md)
+* [ADR 055: ORM](./adr-055-orm.md)
+* [ADR 058: Auto-Generated CLI](./adr-058-auto-generated-cli.md)
+* [ADR 060: ABCI 1.0 (Phase I)](adr-060-abci-1.0.md)
+* [ADR 061: Liquid Staking](./adr-061-liquid-staking.md)
+
+### Proposed
+
+* [ADR 003: Dynamic Capability Store](./adr-003-dynamic-capability-store.md)
+* [ADR 011: Generalize Genesis Accounts](./adr-011-generalize-genesis-accounts.md)
+* [ADR 012: State Accessors](./adr-012-state-accessors.md)
+* [ADR 013: Metrics](./adr-013-metrics.md)
+* [ADR 016: Validator Consensus Key Rotation](./adr-016-validator-consensus-key-rotation.md)
+* [ADR 017: Historical Header Module](./adr-017-historical-header-module.md)
+* [ADR 018: Extendable Voting Periods](./adr-018-extendable-voting-period.md)
+* [ADR 022: Custom baseapp panic handling](./adr-022-custom-panic-handling.md)
+* [ADR 024: Coin Metadata](./adr-024-coin-metadata.md)
+* [ADR 027: Deterministic Protobuf Serialization](./adr-027-deterministic-protobuf-serialization.md)
+* [ADR 028: Public Key Addresses](./adr-028-public-key-addresses.md)
+* [ADR 032: Typed Events](./adr-032-typed-events.md)
+* [ADR 033: Inter-module RPC](./adr-033-protobuf-inter-module-comm.md)
+* [ADR 035: Rosetta API Support](./adr-035-rosetta-api-support.md)
+* [ADR 037: Governance Split Votes](./adr-037-gov-split-vote.md)
+* [ADR 038: State Listening](./adr-038-state-listening.md)
+* [ADR 039: Epoched Staking](./adr-039-epoched-staking.md)
+* [ADR 040: Storage and SMT State Commitments](./adr-040-storage-and-smt-state-commitments.md)
+* [ADR 046: Module Params](./adr-046-module-params.md)
+* [ADR 054: Semver Compatible SDK Modules](./adr-054-semver-compatible-modules.md)
+* [ADR 057: App Wiring](./adr-057-app-wiring.md)
+* [ADR 059: Test Scopes](./adr-059-test-scopes.md)
+* [ADR 062: Collections State Layer](./adr-062-collections-state-layer.md)
+* [ADR 063: Core Module API](./adr-063-core-module-api.md)
+* [ADR 065: Store V2](./adr-065-store-v2.md)
+* [ADR 076: Transaction Malleability Risk Review and Recommendations](./adr-076-tx-malleability.md)
+
+### Draft
+
+* [ADR 044: Guidelines for Updating Protobuf Definitions](./adr-044-protobuf-updates-guidelines.md)
+* [ADR 047: Extend Upgrade Plan](./adr-047-extend-upgrade-plan.md)
+* [ADR 053: Go Module Refactoring](./adr-053-go-module-refactoring.md)
+* [ADR 068: Preblock](./adr-068-preblock.md)
diff --git a/copy-of-sdk-docs/build/architecture/_category_.json b/copy-of-sdk-docs/build/architecture/_category_.json
new file mode 100644
index 00000000..e0b1907a
--- /dev/null
+++ b/copy-of-sdk-docs/build/architecture/_category_.json
@@ -0,0 +1,5 @@
+{
+ "label": "ADRs",
+ "position": 6,
+ "link": null
+}
\ No newline at end of file
diff --git a/copy-of-sdk-docs/build/architecture/adr-002-docs-structure.md b/copy-of-sdk-docs/build/architecture/adr-002-docs-structure.md
new file mode 100644
index 00000000..5819151f
--- /dev/null
+++ b/copy-of-sdk-docs/build/architecture/adr-002-docs-structure.md
@@ -0,0 +1,86 @@
+# ADR 002: SDK Documentation Structure
+
+## Context
+
+There is a need for a scalable structure for the Cosmos SDK documentation. The current documentation includes a lot of material unrelated to the Cosmos SDK, is difficult to maintain, and is hard to follow as a user.
+
+Ideally, we would have:
+
+* All docs related to dev frameworks or tools live in their respective GitHub repos (the SDK repo would contain SDK docs, the hub repo would contain hub docs, the lotion repo would contain lotion docs, etc.)
+* All other docs (faqs, whitepaper, high-level material about Cosmos) would live on the website.
+
+## Decision
+
+Re-structure the `/docs` folder of the Cosmos SDK GitHub repo as follows:
+
+```text
+docs/
+├── README
+├── intro/
+├── concepts/
+│ ├── baseapp
+│ ├── types
+│ ├── store
+│ ├── server
+│ ├── modules/
+│ │ ├── keeper
+│ │ ├── handler
+│ │ ├── cli
+│ ├── gas
+│ └── commands
+├── clients/
+│ ├── lite/
+│ ├── service-providers
+├── modules/
+├── spec/
+├── translations/
+└── architecture/
+```
+
+The files in each sub-folder do not matter and will likely change. What matters is the sectioning:
+
+* `README`: Landing page of the docs.
+* `intro`: Introductory material. The goal is to have a short explainer of the Cosmos SDK and then channel people to the resource they need. The [Cosmos SDK tutorial](https://github.com/cosmos/sdk-application-tutorial/) will be highlighted, as well as the `godocs`.
+* `concepts`: Contains high-level explanations of the abstractions of the Cosmos SDK. It does not contain specific code implementation and does not need to be updated often. **It is not an API specification of the interfaces**. API spec is the `godoc`.
+* `clients`: Contains specs and info about the various Cosmos SDK clients.
+* `spec`: Contains specs of modules, and others.
+* `modules`: Contains links to `godocs` and the spec of the modules.
+* `architecture`: Contains architecture-related docs like the present one.
+* `translations`: Contains different translations of the documentation.
+
+Website docs sidebar will only include the following sections:
+
+* `README`
+* `intro`
+* `concepts`
+* `clients`
+
+`architecture` need not be displayed on the website.
+
+## Status
+
+Accepted
+
+## Consequences
+
+### Positive
+
+* Much clearer organisation of the Cosmos SDK docs.
+* The `/docs` folder now only contains Cosmos SDK and gaia related material. Later, it will only contain Cosmos SDK related material.
+* Developers only have to update `/docs` folder when they open a PR (and not `/examples` for example).
+* Easier for developers to find what they need to update in the docs thanks to reworked architecture.
+* Cleaner vuepress build for website docs.
+* Will help build an executable doc (cf. https://github.com/cosmos/cosmos-sdk/issues/2611)
+
+### Neutral
+
+* We need to move a bunch of deprecated stuff to `/_attic` folder.
+* We need to integrate the content of `docs/sdk/docs/core` into `concepts`.
+* We need to move all the content that currently lives in `docs` and does not fit in new structure (like `lotion`, intro material, whitepaper) to the website repository.
+* Update `DOCS_README.md`
+
+## References
+
+* https://github.com/cosmos/cosmos-sdk/issues/1460
+* https://github.com/cosmos/cosmos-sdk/pull/2695
+* https://github.com/cosmos/cosmos-sdk/issues/2611
diff --git a/copy-of-sdk-docs/build/architecture/adr-003-dynamic-capability-store.md b/copy-of-sdk-docs/build/architecture/adr-003-dynamic-capability-store.md
new file mode 100644
index 00000000..f9ddd364
--- /dev/null
+++ b/copy-of-sdk-docs/build/architecture/adr-003-dynamic-capability-store.md
@@ -0,0 +1,344 @@
+# ADR 3: Dynamic Capability Store
+
+## Changelog
+
+* 12 December 2019: Initial version
+* 02 April 2020: Memory Store Revisions
+
+## Context
+
+Full implementation of the [IBC specification](https://github.com/cosmos/ibc) requires the ability to create and authenticate object-capability keys at runtime (i.e., during transaction execution),
+as described in [ICS 5](https://github.com/cosmos/ibc/tree/master/spec/core/ics-005-port-allocation#technical-specification). In the IBC specification, capability keys are created for each newly initialised
+port & channel, and are used to authenticate future usage of the port or channel. Since channels and potentially ports can be initialised during transaction execution, the state machine must be able to create
+object-capability keys at this time.
+
+At present, the Cosmos SDK does not have the ability to do this. Object-capability keys are currently pointers (memory addresses) of `StoreKey` structs created at application initialisation in `app.go` ([example](https://github.com/cosmos/gaia/blob/dcbddd9f04b3086c0ad07ee65de16e7adedc7da4/app/app.go#L132))
+and passed to Keepers as fixed arguments ([example](https://github.com/cosmos/gaia/blob/dcbddd9f04b3086c0ad07ee65de16e7adedc7da4/app/app.go#L160)). Keepers cannot create or store capability keys during transaction execution — although they could call `NewKVStoreKey` and take the memory address
+of the returned struct, storing this in the Merklised store would result in a consensus fault, since the memory address will be different on each machine (this is intentional — were this not the case, the keys would be predictable and couldn't serve as object capabilities).
+
+Keepers need a way to keep a private map of store keys which can be altered during transaction execution, a suitable mechanism for regenerating the unique memory addresses (capability keys) in this map whenever the application is started or restarted, and a mechanism to revert capability creation on tx failure.
+This ADR proposes such an interface & mechanism.
+
+## Decision
+
+The Cosmos SDK will include a new `CapabilityKeeper` abstraction, which is responsible for provisioning,
+tracking, and authenticating capabilities at runtime. During application initialisation in `app.go`,
+the `CapabilityKeeper` will be hooked up to modules through unique function references
+(by calling `ScopeToModule`, defined below) so that it can identify the calling module when later
+invoked.
+
+When the initial state is loaded from disk, the `CapabilityKeeper`'s `Initialise` function will create
+new capability keys for all previously allocated capability identifiers (allocated during execution of
+past transactions and assigned to particular modules), and keep them in a memory-only store while the
+chain is running.
+
+The `CapabilityKeeper` will include a persistent `KVStore`, a `MemoryStore`, and an in-memory map.
+The persistent `KVStore` tracks which capability is owned by which modules.
+The `MemoryStore` stores a forward mapping from (module name, capability) tuples to capability names, and
+a reverse mapping from (module name, capability name) tuples to capability indices.
+Since we cannot marshal a capability into a `KVStore` and unmarshal it without changing its memory location,
+the reverse mapping in the memory store simply maps to an index. This index can then be used as a key in the ephemeral
+go-map to retrieve the capability at the original memory location.
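+
+As an illustrative sketch (the addresses, module name, and capability name are hypothetical), a module `ibc` owning a capability named `ports/transfer` at index `42` would be represented as:
+
+```text
+persistent KVStore:  42 -> {"ibc/ports/transfer"}          (owner set, Merklised)
+memory store (fwd):  "ibc/fwd/0xc0003a8e10" -> "ports/transfer"
+memory store (rev):  "ibc/rev/ports/transfer" -> 42
+go-map:              42 -> 0xc0003a8e10                    (stable *Capability pointer)
+```
+
+Only the first row enters consensus state; the remaining rows are rebuilt from it on startup, so the unforgeable pointer itself is never Merklised.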
+
+The `CapabilityKeeper` will define the following types & functions:
+
+The `Capability` is similar to `StoreKey`, but has a globally unique `Index()` instead of
+a name. A `String()` method is provided for debugging.
+
+A `Capability` is simply a struct, the address of which is taken for the actual capability.
+
+```go
+type Capability struct {
+ index uint64
+}
+```
+
+A `CapabilityKeeper` contains a persistent store key, memory store key, and mapping of allocated module names.
+
+```go
+type CapabilityKeeper struct {
+ persistentKey StoreKey
+ memKey StoreKey
+ capMap map[uint64]*Capability
+ moduleNames map[string]interface{}
+ sealed bool
+}
+```
+
+The `CapabilityKeeper` provides the ability to create *scoped* sub-keepers which are tied to a
+particular module name. These `ScopedCapabilityKeeper`s must be created at application initialisation
+and passed to modules, which can then use them to claim capabilities they receive and retrieve
+capabilities which they own by name, in addition to creating new capabilities & authenticating capabilities
+passed by other modules.
+
+```go
+type ScopedCapabilityKeeper struct {
+ persistentKey StoreKey
+ memKey StoreKey
+ capMap map[uint64]*Capability
+ moduleName string
+}
+```
+
+`ScopeToModule` is used to create a scoped sub-keeper with a particular name, which must be unique.
+It MUST be called before `InitialiseAndSeal`.
+
+```go
+func (ck *CapabilityKeeper) ScopeToModule(moduleName string) ScopedCapabilityKeeper {
+    if ck.sealed {
+        panic("cannot scope to module via a sealed capability keeper")
+    }
+
+    if _, ok := ck.moduleNames[moduleName]; ok {
+        panic(fmt.Sprintf("cannot create multiple scoped keepers for the same module name: %s", moduleName))
+    }
+
+    ck.moduleNames[moduleName] = struct{}{}
+
+    return ScopedCapabilityKeeper{
+        persistentKey: ck.persistentKey,
+        memKey:        ck.memKey,
+        capMap:        ck.capMap,
+        moduleName:    moduleName,
+    }
+}
+```
+
+`InitialiseAndSeal` MUST be called exactly once, after loading the initial state and creating all
+necessary `ScopedCapabilityKeeper`s, in order to populate the memory store with newly-created
+capability keys in accordance with the keys previously claimed by particular modules and prevent the
+creation of any new `ScopedCapabilityKeeper`s.
+
+```go
+func (ck *CapabilityKeeper) InitialiseAndSeal(ctx Context) {
+    if ck.sealed {
+        panic("capability keeper is sealed")
+    }
+
+    persistentStore := ctx.KVStore(ck.persistentKey)
+    memStore := ctx.KVStore(ck.memKey)
+
+    // initialise memory store for all names in persistent store
+    for index, value := range persistentStore.Iter() {
+        capability := &Capability{index: index}
+
+        for moduleAndCapability := range value {
+            moduleName, capabilityName := moduleAndCapability.Split("/")
+            memStore.Set(moduleName + "/fwd/" + capability, capabilityName)
+            memStore.Set(moduleName + "/rev/" + capabilityName, index)
+
+            ck.capMap[index] = capability
+        }
+    }
+
+    ck.sealed = true
+}
+```
+
+`NewCapability` can be called by any module to create a new unique, unforgeable object-capability
+reference. The newly created capability is automatically persisted; the calling module need not
+call `ClaimCapability`.
+
+```go
+func (sck ScopedCapabilityKeeper) NewCapability(ctx Context, name string) (*Capability, error) {
+    memStore := ctx.KVStore(sck.memKey)
+    persistentStore := ctx.KVStore(sck.persistentKey)
+
+    // check name not taken in memory store
+    if memStore.Get(sck.moduleName + "/rev/" + name) != nil {
+        return nil, errors.New("name already taken")
+    }
+
+    // fetch the current index and reserve it for this capability
+    index := persistentStore.Get("index")
+
+    // create a new capability
+    capability := &Capability{index: index}
+
+    // record the owner set in the persistent store
+    persistentStore.Set(index, Set.singleton(sck.moduleName + "/" + name))
+
+    // bump the global index for the next capability
+    persistentStore.Set("index", index + 1)
+
+    // set forward mapping in memory store from capability to name
+    memStore.Set(sck.moduleName + "/fwd/" + capability, name)
+
+    // set reverse mapping in memory store from name to index
+    memStore.Set(sck.moduleName + "/rev/" + name, index)
+
+    // set the in-memory mapping from index to capability pointer
+    sck.capMap[index] = capability
+
+    // return the newly created capability
+    return capability, nil
+}
+```
+
+`AuthenticateCapability` can be called by any module to check that a capability
+does in fact correspond to a particular name (the name can be untrusted user input)
+with which the calling module previously associated it.
+
+```go
+func (sck ScopedCapabilityKeeper) AuthenticateCapability(name string, capability Capability) bool {
+ // return whether forward mapping in memory store matches name
+    // return whether forward mapping in memory store matches name
+    return memStore.Get(sck.moduleName + "/fwd/" + capability) == name
+}
+```
+
+`ClaimCapability` allows a module to claim a capability key which it has received from another module
+so that future `GetCapability` calls will succeed.
+
+`ClaimCapability` MUST be called if a module which receives a capability wishes to access it by name
+in the future. Capabilities are multi-owner, so if multiple modules have a single `Capability` reference,
+they will all own it.
+
+```go
+func (sck ScopedCapabilityKeeper) ClaimCapability(ctx Context, capability Capability, name string) error {
+    persistentStore := ctx.KVStore(sck.persistentKey)
+    memStore := ctx.KVStore(sck.memKey)
+
+    // set forward mapping in memory store from capability to name
+    memStore.Set(sck.moduleName + "/fwd/" + capability, name)
+
+    // set reverse mapping in memory store from name to index
+    memStore.Set(sck.moduleName + "/rev/" + name, capability.Index())
+
+    // update owner set in persistent store
+    owners := persistentStore.Get(capability.Index())
+    owners.add(sck.moduleName + "/" + name)
+    persistentStore.Set(capability.Index(), owners)
+
+    return nil
+}
+```
+
+`GetCapability` allows a module to fetch a capability which it has previously claimed by name.
+The module is not allowed to retrieve capabilities which it does not own.
+
+```go
+func (sck ScopedCapabilityKeeper) GetCapability(ctx Context, name string) (*Capability, error) {
+    memStore := ctx.KVStore(sck.memKey)
+
+    // fetch the index of the capability using the reverse mapping in the memory store
+    index := memStore.Get(sck.moduleName + "/rev/" + name)
+    if index == nil {
+        return nil, errors.New("capability not owned by module")
+    }
+
+    // fetch the capability from the go-map using the index
+    capability := sck.capMap[index]
+
+    // return the capability
+    return capability, nil
+}
+```
+
+`ReleaseCapability` allows a module to release a capability which it had previously claimed. If no
+more owners exist, the capability will be deleted globally.
+
+```go
+func (sck ScopedCapabilityKeeper) ReleaseCapability(ctx Context, capability Capability) error {
+    persistentStore := ctx.KVStore(sck.persistentKey)
+    memStore := ctx.KVStore(sck.memKey)
+
+    name := memStore.Get(sck.moduleName + "/fwd/" + capability)
+    if name == nil {
+        return errors.New("capability not owned by module")
+    }
+
+    // delete forward mapping in memory store
+    memStore.Delete(sck.moduleName + "/fwd/" + capability)
+
+    // delete reverse mapping in memory store
+    memStore.Delete(sck.moduleName + "/rev/" + name)
+
+    // update owner set in persistent store
+    owners := persistentStore.Get(capability.Index())
+    owners.remove(sck.moduleName + "/" + name)
+    if owners.size() > 0 {
+        // there are still other owners, keep the capability around
+        persistentStore.Set(capability.Index(), owners)
+    } else {
+        // no more owners, delete the capability
+        persistentStore.Delete(capability.Index())
+        delete(sck.capMap, capability.Index())
+    }
+
+    return nil
+}
+```
+
+### Usage patterns
+
+#### Initialisation
+
+Any modules which use dynamic capabilities must be provided a `ScopedCapabilityKeeper` in `app.go`:
+
+```go
+ck := NewCapabilityKeeper(persistentKey, memoryKey)
+mod1Keeper := NewMod1Keeper(ck.ScopeToModule("mod1"), ....)
+mod2Keeper := NewMod2Keeper(ck.ScopeToModule("mod2"), ....)
+
+// other initialisation logic ...
+
+// load initial state...
+
+ck.InitialiseAndSeal(initialContext)
+```
+
+#### Creating, passing, claiming and using capabilities
+
+Consider the case where `mod1` wants to create a capability, associate it with a resource (e.g. an IBC channel) by name, then pass it to `mod2` which will use it later:
+
+Module 1 would have the following code:
+
+```go
+capability, _ := scopedCapabilityKeeper.NewCapability(ctx, "resourceABC")
+mod2Keeper.SomeFunction(ctx, capability, args...)
+```
+
+`SomeFunction`, running in module 2, could then claim the capability:
+
+```go
+func (k Mod2Keeper) SomeFunction(ctx Context, capability Capability) {
+ k.sck.ClaimCapability(ctx, capability, "resourceABC")
+ // other logic...
+}
+```
+
+Later on, module 2 can retrieve that capability by name and pass it to module 1, which will authenticate it against the resource:
+
+```go
+func (k Mod2Keeper) SomeOtherFunction(ctx Context, name string) {
+    capability, _ := k.sck.GetCapability(ctx, name)
+ mod1.UseResource(ctx, capability, "resourceABC")
+}
+```
+
+Module 1 will then check that this capability key is authenticated to use the resource before allowing module 2 to use it:
+
+```go
+func (k Mod1Keeper) UseResource(ctx Context, capability Capability, resource string) error {
+    if !k.sck.AuthenticateCapability(resource, capability) {
+        return errors.New("unauthenticated")
+    }
+    // do something with the resource
+    return nil
+}
+```
+
+If module 2 passed the capability key to module 3, module 3 could then claim it and call module 1 just like module 2 did
+(in which case module 1, module 2, and module 3 would all be able to use this capability).
+
+## Status
+
+Proposed.
+
+## Consequences
+
+### Positive
+
+* Dynamic capability support.
+* Allows the `CapabilityKeeper` to return the same capability pointer from the go-map while reverting any writes to the persistent `KVStore` and in-memory `MemoryStore` on tx failure.
+
+### Negative
+
+* Requires an additional keeper.
+* Some overlap with existing `StoreKey` system (in the future they could be combined, since this is a superset functionality-wise).
+* Requires an extra level of indirection in the reverse mapping, since the `MemoryStore` must map to an index, which is then used as the key in a go-map to retrieve the actual capability.
+
+### Neutral
+
+(none known)
+
+## References
+
+* [Original discussion](https://github.com/cosmos/cosmos-sdk/pull/5230#discussion_r343978513)
diff --git a/copy-of-sdk-docs/build/architecture/adr-004-split-denomination-keys.md b/copy-of-sdk-docs/build/architecture/adr-004-split-denomination-keys.md
new file mode 100644
index 00000000..53c7b097
--- /dev/null
+++ b/copy-of-sdk-docs/build/architecture/adr-004-split-denomination-keys.md
@@ -0,0 +1,120 @@
+# ADR 004: Split Denomination Keys
+
+## Changelog
+
+* 2020-01-08: Initial version
+* 2020-01-09: Alterations to handle vesting accounts
+* 2020-01-14: Updates from review feedback
+* 2020-01-30: Updates from implementation
+
+### Glossary
+
+* denom / denomination key -- unique token identifier.
+
+## Context
+
+With permissionless IBC, anyone will be able to send arbitrary denominations to any other account. Currently, all non-zero balances are stored along with the account in an `sdk.Coins` struct, which creates a potential denial-of-service concern, as too many denominations will become expensive to load & store each time the account is modified. See issues [5467](https://github.com/cosmos/cosmos-sdk/issues/5467) and [4982](https://github.com/cosmos/cosmos-sdk/issues/4982) for additional context.
+
+Simply rejecting incoming deposits once a denomination count limit is exceeded doesn't work, since it opens up a griefing vector: someone could send a user lots of nonsensical coins over IBC and then prevent the user from receiving real denominations (such as staking rewards).
+
+## Decision
+
+Balances shall be stored per-account & per-denomination under a denomination- and account-unique key, thus enabling O(1) read & write access to the balance of a particular account in a particular denomination.
+
+### Account interface (x/auth)
+
+`GetCoins()` and `SetCoins()` will be removed from the account interface, since coin balances will
+now be stored in & managed by the bank module.
+
+The vesting account interface will replace `SpendableCoins` in favor of `LockedCoins` which does
+not require the account balance anymore. In addition, `TrackDelegation()` will now accept the
+account balance of all tokens denominated in the vesting balance instead of loading the entire
+account balance.
+
+Vesting accounts will continue to store original vesting, delegated free, and delegated
+vesting coins (which is safe since these cannot contain arbitrary denominations).
+
+### Bank keeper (x/bank)
+
+The following APIs will be added to the `x/bank` keeper:
+
+* `GetAllBalances(ctx Context, addr AccAddress) Coins`
+* `GetBalance(ctx Context, addr AccAddress, denom string) Coin`
+* `SetBalance(ctx Context, addr AccAddress, coin Coin)`
+* `LockedCoins(ctx Context, addr AccAddress) Coins`
+* `SpendableCoins(ctx Context, addr AccAddress) Coins`
+
+Additional APIs may be added to facilitate iteration and auxiliary functionality not essential to
+core functionality or persistence.
+
+Balances will be stored first by the address, then by the denomination (the reverse is also possible,
+but retrieval of all balances for a single account is presumed to be more frequent):
+
+```go
+var BalancesPrefix = []byte("balances")
+
+func (k Keeper) SetBalance(ctx Context, addr AccAddress, balance Coin) error {
+ if !balance.IsValid() {
+        return sdkerrors.Wrap(sdkerrors.ErrInvalidCoins, balance.String())
+ }
+
+ store := ctx.KVStore(k.storeKey)
+ balancesStore := prefix.NewStore(store, BalancesPrefix)
+ accountStore := prefix.NewStore(balancesStore, addr.Bytes())
+
+ bz := Marshal(balance)
+ accountStore.Set([]byte(balance.Denom), bz)
+
+ return nil
+}
+```
+
+This will result in the balances being indexed by the byte representation of
+`balances/{address}/{denom}`.
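+
+The corresponding read path (a sketch mirroring `SetBalance` above; `Unmarshal` and the zero-coin constructor are stand-ins, not APIs defined by this ADR) would look up a single denomination under the same prefixes:
+
+```go
+func (k Keeper) GetBalance(ctx Context, addr AccAddress, denom string) Coin {
+    store := ctx.KVStore(k.storeKey)
+    balancesStore := prefix.NewStore(store, BalancesPrefix)
+    accountStore := prefix.NewStore(balancesStore, addr.Bytes())
+
+    bz := accountStore.Get([]byte(denom))
+    if bz == nil {
+        // a missing key is interpreted as a zero balance in that denomination
+        return NewCoin(denom, ZeroInt())
+    }
+
+    var balance Coin
+    Unmarshal(bz, &balance)
+    return balance
+}
+```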
+
+`DelegateCoins()` and `UndelegateCoins()` will be altered to only load each individual
+account balance by denomination found in the (un)delegation amount. As a result,
+any mutations to the account balance will be made by denomination.
+
+`SubtractCoins()` and `AddCoins()` will be altered to read & write the balances
+directly instead of calling `GetCoins()` / `SetCoins()` (which no longer exist).
+
+`trackDelegation()` and `trackUndelegation()` will be altered to no longer update
+account balances.
+
+External APIs will need to scan all balances under an account to retain backwards-compatibility. It
+is advised that these APIs use `GetBalance` and `SetBalance` instead of `GetAllBalances` when
+possible, so as not to load the entire account balance.
+
+### Supply module
+
+The supply module, in order to implement the total supply invariant, will now need
+to scan all accounts & call `GetAllBalances` using the `x/bank` Keeper, then sum
+the balances and check that they match the expected total supply.
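+
+A sketch of that invariant check (the keeper names and iteration helpers here are assumptions, not part of this ADR):
+
+```go
+func TotalSupplyInvariant(ak AccountKeeper, bk BankKeeper) Invariant {
+    return func(ctx Context) error {
+        expectedTotal := bk.GetTotalSupply(ctx)
+
+        // sum the per-denomination balances of every account
+        computedTotal := NewCoins()
+        ak.IterateAccounts(ctx, func(acc Account) bool {
+            computedTotal = computedTotal.Add(bk.GetAllBalances(ctx, acc.GetAddress())...)
+            return false // continue iteration
+        })
+
+        if !computedTotal.IsEqual(expectedTotal) {
+            return errors.New("total supply invariant broken")
+        }
+        return nil
+    }
+}
+```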
+
+## Status
+
+Accepted.
+
+## Consequences
+
+### Positive
+
+* O(1) reads & writes of balances (with respect to the number of denominations for
+which an account has non-zero balances). Note, this does not relate to the actual
+I/O cost, rather the total number of direct reads needed.
+
+### Negative
+
+* Slightly less efficient reads/writes when reading & writing all balances of a
+single account in a transaction.
+
+### Neutral
+
+None in particular.
+
+## References
+
+* Ref: https://github.com/cosmos/cosmos-sdk/issues/4982
+* Ref: https://github.com/cosmos/cosmos-sdk/issues/5467
+* Ref: https://github.com/cosmos/cosmos-sdk/issues/5492
diff --git a/copy-of-sdk-docs/build/architecture/adr-006-secret-store-replacement.md b/copy-of-sdk-docs/build/architecture/adr-006-secret-store-replacement.md
new file mode 100644
index 00000000..500ba40c
--- /dev/null
+++ b/copy-of-sdk-docs/build/architecture/adr-006-secret-store-replacement.md
@@ -0,0 +1,54 @@
+# ADR 006: Secret Store Replacement
+
+## Changelog
+
+* July 29th, 2019: Initial draft
+* September 11th, 2019: Work has started
+* November 4th: Cosmos SDK changes merged in
+* November 18th: Gaia changes merged in
+
+## Context
+
+Currently, a Cosmos SDK application's CLI directory stores key material and metadata in a plain-text database in the user's home directory. Key material is encrypted under a passphrase, which is strengthened with the bcrypt hashing algorithm. Metadata (e.g. addresses, public keys, key storage details) is available in plain text.
+
+This is not desirable for a number of reasons. Perhaps the biggest is insufficient protection of key material and metadata. Leaking the plain text allows an attacker to surveil what keys a given computer controls via a number of techniques (such as compromised dependencies) without any privileged execution. This could be followed by a more targeted attack on a particular user/computer.
+
+All modern desktop operating systems (Ubuntu, Debian, macOS, Windows) provide a built-in secret store that is designed to allow applications to store information that is isolated from all other applications and requires passphrase entry to access the data.
+
+We are seeking a solution that provides a common abstraction layer over the many different backends, with a reasonable fallback for minimal platforms that don't provide a native secret store.
+
+## Decision
+
+We recommend replacing the current LevelDB-based Keybase backend with [Keyring](https://github.com/99designs/keyring) by 99designs. This library is designed to provide a common abstraction and uniform interface over many secret stores, and is used by 99designs' AWS Vault application.
+
+This appears to fulfill the requirement of protecting both key material and metadata from rogue software on a user’s machine.
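+
+For illustration, a hedged sketch of how key material might be stored through Keyring (the service name and item layout are assumptions; `keyring.Open`, `Set`, and `Get` are the library's documented entry points):
+
+```go
+ring, err := keyring.Open(keyring.Config{
+    ServiceName: "cosmos", // namespace within the OS secret store
+})
+if err != nil {
+    return err
+}
+
+// store armored (encrypted) private key material under a key name
+err = ring.Set(keyring.Item{
+    Key:  "validator",
+    Data: armoredPrivKey,
+})
+
+// later, retrieve it for signing
+item, err := ring.Get("validator")
+```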
+
+## Status
+
+Accepted
+
+## Consequences
+
+### Positive
+
+Increased safety for users.
+
+### Negative
+
+Users must manually migrate.
+
+Testing against all supported backends is difficult.
+
+Running tests locally on a Mac requires numerous repetitive password entries.
+
+### Neutral
+
+None known.
+
+## References
+
+* #4754 Switch secret store to the keyring secret store (original PR by @poldsam) [__CLOSED__]
+* #5029 Add support for github.com/99designs/keyring-backed keybases [__MERGED__]
+* #5097 Add keys migrate command [__MERGED__]
+* #5180 Drop on-disk keybase in favor of keyring [_PENDING_REVIEW_]
+* cosmos/gaia#164 Drop on-disk keybase in favor of keyring (gaia's changes) [_PENDING_REVIEW_]
diff --git a/copy-of-sdk-docs/build/architecture/adr-007-specialization-groups.md b/copy-of-sdk-docs/build/architecture/adr-007-specialization-groups.md
new file mode 100644
index 00000000..bafcc697
--- /dev/null
+++ b/copy-of-sdk-docs/build/architecture/adr-007-specialization-groups.md
@@ -0,0 +1,177 @@
+# ADR 007: Specialization Groups
+
+## Changelog
+
+* 2019 Jul 31: Initial Draft
+
+## Context
+
+This idea was first conceived of in order to fulfill the use case of the
+creation of a decentralized Computer Emergency Response Team (dCERT), whose
+members would be elected by a governing community and would fulfill the role of
+coordinating the community under emergency situations. This thinking
+can be further abstracted into the conception of "blockchain specialization
+groups".
+
+The creation of these groups is the beginning of specialization capabilities
+within a wider blockchain community which could be used to enable a certain
+level of delegated responsibilities. Examples of specialization which could be
+beneficial to a blockchain community include: code auditing, emergency response,
+code development etc. This type of community organization paves the way for
+individual stakeholders to delegate votes by issue type, if in the future
+governance proposals include a field for issue type.
+
+## Decision
+
+A specialization group can be broadly broken down into the following functions
+(herein containing examples):
+
+* Membership Admittance
+* Membership Acceptance
+* Membership Revocation
+ * (probably) Without Penalty
+ * member steps down (self-Revocation)
+ * replaced by new member from governance
+ * (probably) With Penalty
+ * due to breach of soft-agreement (determined through governance)
+ * due to breach of hard-agreement (determined by code)
+* Execution of Duties
+ * Special transactions which only execute for members of a specialization
+ group (for example, dCERT members voting to turn off transaction routes in
+ an emergency scenario)
+* Compensation
+ * Group compensation (further distribution decided by the specialization group)
+ * Individual compensation for all constituents of a group from the
+ greater community
+
+Membership admittance to a specialization group could take place over a wide
+variety of mechanisms. The most obvious example is through a general vote among
+the entire community, however in certain systems a community may want to allow
+the members already in a specialization group to internally elect new members,
+or maybe the community may assign a permission to a particular specialization
+group to appoint members to other 3rd party groups. The sky is really the limit
+as to how membership admittance can be structured. We attempt to capture
+some of these possibilities in a common interface dubbed the `Electionator`. For
+its initial implementation as a part of this ADR we recommend that the general
+election abstraction (`Electionator`) is provided as well as a basic
+implementation of that abstraction which allows for a continuous election of
+members of a specialization group.
+
+``` golang
+// The Electionator abstraction covers the concept space for
+// a wide variety of election kinds.
+type Electionator interface {
+
+ // is the election object accepting votes.
+ Active() bool
+
+ // functionality to execute for when a vote is cast in this election, here
+ // the vote field is anticipated to be marshalled into a vote type used
+ // by an election.
+ //
+ // NOTE There are no explicit ids here. Just votes which pertain specifically
+ // to one electionator. Anyone can create and send a vote to the electionator item
+ // which will presumably attempt to marshal those bytes into a particular struct
+ // and apply the vote information in some arbitrary way. There can be multiple
+ // Electionators within the Cosmos-Hub for multiple specialization groups, votes
+ // would need to be routed to the Electionator upstream of here.
+ Vote(addr sdk.AccAddress, vote []byte)
+
+ // here lies all functionality to authenticate and execute changes for
+ // when a member accepts being elected
+ AcceptElection(sdk.AccAddress)
+
+ // Register a revoker object
+ RegisterRevoker(Revoker)
+
+ // No more revokers may be registered after this function is called
+ SealRevokers()
+
+    // register hooks to call when election actions occur
+ RegisterHooks(ElectionatorHooks)
+
+ // query for the current winner(s) of this election based on arbitrary
+ // election ruleset
+ QueryElected() []sdk.AccAddress
+
+ // query metadata for an address in the election this
+ // could include for example position that an address
+ // is being elected for within a group
+ //
+ // this metadata may be directly related to
+ // voting information and/or privileges enabled
+ // to members within a group.
+ QueryMetadata(sdk.AccAddress) []byte
+}
+
+// ElectionatorHooks, once registered with an Electionator,
+// trigger execution of relevant interface functions when
+// Electionator events occur.
+type ElectionatorHooks interface {
+ AfterVoteCast(addr sdk.AccAddress, vote []byte)
+ AfterMemberAccepted(addr sdk.AccAddress)
+ AfterMemberRevoked(addr sdk.AccAddress, cause []byte)
+}
+
+// Revoker defines the function required for a membership revocation rule-set
+// used by a specialization group. This could be used to create self revoking,
+// and evidence based revoking, etc. Revokers types may be created and
+// reused for different election types.
+//
+// When revoking the "cause" bytes may be arbitrarily marshalled into evidence,
+// memos, etc.
+type Revoker interface {
+ RevokeName() string // identifier for this revoker type
+ RevokeMember(addr sdk.AccAddress, cause []byte) error
+}
+```
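+
+As a non-normative sketch (not part of this ADR's specification), a minimal continuous election satisfying part of this interface could keep a running tally and report the top vote recipients who have accepted election, with sorting and vote-weighting elided:
+
+``` golang
+type ContinuousElection struct {
+    tally    map[string]sdk.Int // candidate address -> accumulated voting power
+    accepted map[string]bool    // candidates who have accepted election
+    seats    int                // number of members in the group
+}
+
+func (e *ContinuousElection) Active() bool { return true }
+
+func (e *ContinuousElection) Vote(addr sdk.AccAddress, vote []byte) {
+    // here the vote bytes are assumed to unmarshal into a candidate address
+    candidate := string(vote)
+    e.tally[candidate] = e.tally[candidate].Add(votingPower(addr))
+}
+
+func (e *ContinuousElection) AcceptElection(addr sdk.AccAddress) {
+    e.accepted[addr.String()] = true
+}
+
+func (e *ContinuousElection) QueryElected() []sdk.AccAddress {
+    // return the top e.seats accepted candidates by tally
+    ...
+}
+```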
+
+A certain level of commonality likely exists between the existing code within
+`x/governance` and the required functionality of elections. This common
+functionality should be abstracted during implementation. Similarly, for each
+vote implementation, client CLI/REST functionality should be abstracted
+so it can be reused for multiple elections.
+
+The specialization group abstraction first extends the `Electionator`
+and then further defines traits of the group.
+
+``` golang
+type SpecializationGroup interface {
+ Electionator
+ GetName() string
+ GetDescription() string
+
+ // general soft contract the group is expected
+ // to fulfill with the greater community
+ GetContract() string
+
+ // messages which can be executed by the members of the group
+ Handler(ctx sdk.Context, msg sdk.Msg) sdk.Result
+
+ // logic to be executed at endblock, this may for instance
+ // include payment of a stipend to the group members
+ // for participation in the security group.
+ EndBlocker(ctx sdk.Context)
+}
+```
+
+## Status
+
+Proposed
+
+## Consequences
+
+### Positive
+
+* increases specialization capabilities of a blockchain
+* improve abstractions in `x/gov/` such that they can be used with specialization groups
+
+### Negative
+
+* could be used to increase centralization within a community
+
+### Neutral
+
+## References
+
+* [dCERT ADR](./adr-008-dCERT-group.md)
diff --git a/copy-of-sdk-docs/build/architecture/adr-008-dCERT-group.md b/copy-of-sdk-docs/build/architecture/adr-008-dCERT-group.md
new file mode 100644
index 00000000..5ee5670b
--- /dev/null
+++ b/copy-of-sdk-docs/build/architecture/adr-008-dCERT-group.md
@@ -0,0 +1,171 @@
+# ADR 008: Decentralized Computer Emergency Response Team (dCERT) Group
+
+## Changelog
+
+* 2019 Jul 31: Initial Draft
+
+## Context
+
+In order to reduce the number of parties involved with handling sensitive
+information in an emergency scenario, we propose the creation of a
+specialization group named The Decentralized Computer Emergency Response Team
+(dCERT). Initially this group's role is intended to be that of a coordinator
+between various actors within a blockchain community, such as validators,
+bug-hunters, and developers. During a time of crisis, the dCERT group would
+aggregate and relay input from a variety of stakeholders to the developers who
+are actively devising a patch to the software. This way, sensitive information
+does not need to be publicly disclosed while some input from the community can
+still be gained.
+
+Additionally, a special privilege is proposed for the dCERT group: the capacity
+to "circuit-break" (i.e. temporarily disable) a particular message path. Note
+that this privilege should be enabled/disabled globally with a governance
+parameter such that this privilege could start disabled and later be enabled
+through a parameter change proposal, once a dCERT group has been established.
+
+In the future it is foreseeable that the community may wish to expand the roles
+of dCERT with further responsibilities such as the capacity to "pre-approve" a
+security update on behalf of the community prior to a full community-wide
+vote, whereby the sensitive information would be revealed prior to a
+vulnerability being patched on the live network.
+
+## Decision
+
+The dCERT group is proposed to include an implementation of a `SpecializationGroup`
+as defined in [ADR 007](./adr-007-specialization-groups.md). This will include the
+implementation of:
+
+* continuous voting
+* slashing due to breach of soft contract
+* revoking a member due to breach of soft contract
+* emergency disband of the entire dCERT group (ex. for colluding maliciously)
+* compensation stipend from the community pool or other means decided by
+ governance
+
+This system necessitates the following new parameters:
+
+* per-block stipend allowance per dCERT member
+* maximum number of dCERT members
+* required staked slashable tokens for each dCERT member
+* quorum for suspending a particular member
+* proposal wager for disbanding the dCERT group
+* stabilization period for dCERT member transition
+* circuit break dCERT privileges enabled
+
+These parameters are expected to be implemented through the param keeper such
+that governance may change them at any given point.
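+
+As a rough pseudocode sketch, these parameters might be grouped as follows
+(all field names and types here are illustrative assumptions, not part of the
+ADR):
+
+```go
+// Hypothetical parameter set for the dCERT module; names are illustrative only
+type DCERTParams struct {
+    BlockStipend        sdk.Coins     // per-block stipend allowance per dCERT member
+    MaxMembers          uint16        // maximum number of dCERT members
+    RequiredStake       sdk.Coins     // required staked slashable tokens per member
+    SuspensionQuorum    sdk.Dec       // quorum for suspending a particular member
+    DisbandWager        sdk.Coins     // proposal wager for disbanding the group
+    StabilizationPeriod time.Duration // stabilization period for member transition
+    CircuitBreakEnabled bool          // whether circuit-break privileges are enabled
+}
+```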
+
+### Continuous Voting Electionator
+
+An `Electionator` object is to be implemented as continuous voting and with the
+following specifications:
+
+* All delegation addresses may submit votes at any point which updates their
+ preferred representation on the dCERT group.
+* Preferred representation may be arbitrarily split between addresses (ex. 50%
+ to John, 25% to Sally, 25% to Carol)
+* In order for a new member to be added to the dCERT group they must
+ send a transaction accepting their admission at which point the validity of
+ their admission is to be confirmed.
+ * A sequence number is assigned when a member is added to dCERT group.
+ If a member leaves the dCERT group and then enters back, a new sequence number
+ is assigned.
+* Addresses which control the greatest amount of preferred-representation are
+  eligible to join the dCERT group (up to the _maximum number of dCERT members_).
+  If the dCERT group is already full and a new member is admitted, the existing
+  dCERT member with the lowest amount of votes is kicked from the dCERT group.
+  * In the tie situation where the dCERT group is full but a vying candidate
+    has the same amount of votes as an existing dCERT member, the existing
+    member should maintain its position.
+  * In the tie situation where somebody must be kicked out but the two
+    addresses with the smallest number of votes have the same number of votes,
+    the address with the smallest sequence number maintains its position.
+* A stabilization period can be optionally included to reduce the
+ "flip-flopping" of the dCERT membership tail members. If a stabilization
+ period is provided which is greater than 0, when members are kicked due to
+ insufficient support, a queue entry is created which documents which member is
+ to replace which other member. While this entry is in the queue, no new entries
+ to kick that same dCERT member can be made. When the entry matures at the
+  duration of the stabilization period, the new member is instantiated and the
+  old member kicked.
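+
+As a pseudocode sketch, each queue entry might look like the following (the
+struct and field names are illustrative assumptions):
+
+```go
+// Hypothetical queue entry documenting a pending membership replacement;
+// it matures once the stabilization period has elapsed
+type ReplacementEntry struct {
+    OutgoingMember sdk.AccAddress // member to be kicked
+    IncomingMember sdk.AccAddress // member replacing them
+    MaturesAt      time.Time      // creation time + stabilization period
+}
+```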
+
+### Staking/Slashing
+
+All members of the dCERT group must stake tokens _specifically_ to maintain
+eligibility as a dCERT member. These tokens can be staked directly by the vying
+dCERT member or out of the good will of a 3rd party (who shall gain no on-chain
+benefits for doing so). This staking mechanism should use the existing global
+unbonding time of tokens staked for network validator security. A dCERT member
+can _only be_ a member if it has the required tokens staked under this
+mechanism. If those tokens are unbonded then the dCERT member must be
+automatically kicked from the group.
+
+Slashing of a particular dCERT member due to soft-contract breach should be
+performed by governance on a per member basis based on the magnitude of the
+breach. The process flow is anticipated to be that a dCERT member is suspended
+by the dCERT group prior to being slashed by governance.
+
+Membership suspension by the dCERT group takes place through a voting procedure
+by the dCERT group members. After this suspension has taken place, a governance
+proposal to slash the dCERT member must be submitted. If the proposal is not
+approved by the time the rescinding member has completed unbonding their
+tokens, then the tokens are no longer staked and cannot be slashed.
+
+Additionally in the case of an emergency situation of a colluding and malicious
+dCERT group, the community needs the capability to disband the entire dCERT
+group and likely fully slash them. This could be achieved through a special new
+proposal type (implemented as a general governance proposal) which would halt
+the functionality of the dCERT group until the proposal was concluded. This
+special proposal type would likely need to also have a fairly large wager which
+could be slashed if the proposal creator was malicious. The reason a large
+wager should be required is because as soon as the proposal is made, the
+capability of the dCERT group to halt message routes is temporarily
+suspended, meaning that a malicious actor who created such a proposal could
+then potentially exploit a bug during this period of time, with no dCERT group
+capable of shutting down the exploitable message routes.
+
+### dCERT membership transactions
+
+Active dCERT members may submit the following transactions:
+
+* change of the description of the dCERT group
+* circuit break a message route
+* vote to suspend a dCERT member
+
+Here circuit-breaking refers to the capability to disable a group of messages.
+This could for instance mean: "disable all staking-delegation messages", or
+"disable all distribution messages". This could be accomplished by verifying
+that the message route has not been "circuit-broken" at CheckTx time (in
+`baseapp/baseapp.go`).
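+
+A pseudocode sketch of such a CheckTx-time guard (the keeper and method names
+are assumptions for illustration, not an actual baseapp API):
+
+```go
+// Hypothetical guard run during CheckTx; circuitKeeper and IsBroken are illustrative
+func (app *BaseApp) checkCircuit(ctx sdk.Context, tx sdk.Tx) sdk.Result {
+    for _, msg := range tx.GetMsgs() {
+        if app.circuitKeeper.IsBroken(ctx, msg.Route()) {
+            return sdk.ErrUnauthorized("message route is circuit-broken").Result()
+        }
+    }
+    return sdk.Result{} // OK; continue with normal CheckTx processing
+}
+```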
+
+"Unbreaking" a circuit is anticipated to occur only during a hard-fork upgrade,
+meaning that no capability to unbreak a message route on a live chain is
+required.
+
+Note also that if there was a problem with governance voting (for instance a
+capability to vote many times) then governance would be broken and should be
+halted with this mechanism. It would then be up to the validator set to
+coordinate a hard-fork upgrade to a patched version of the software where
+governance is re-enabled (and fixed). If the dCERT group abuses this privilege
+they should all be severely slashed.
+
+## Status
+
+Proposed
+
+## Consequences
+
+### Positive
+
+* Potential to reduce the number of parties to coordinate with during an emergency
+* Reduction in possibility of disclosing sensitive information to malicious parties
+
+### Negative
+
+* Centralization risks
+
+### Neutral
+
+## References
+
+* [Specialization Groups ADR](./adr-007-specialization-groups.md)
diff --git a/copy-of-sdk-docs/build/architecture/adr-009-evidence-module.md b/copy-of-sdk-docs/build/architecture/adr-009-evidence-module.md
new file mode 100644
index 00000000..ded04a14
--- /dev/null
+++ b/copy-of-sdk-docs/build/architecture/adr-009-evidence-module.md
@@ -0,0 +1,182 @@
+# ADR 009: Evidence Module
+
+## Changelog
+
+* 2019 July 31: Initial draft
+* 2019 October 24: Initial implementation
+
+## Status
+
+Accepted
+
+## Context
+
+In order to support building highly secure, robust and interoperable blockchain
+applications, it is vital for the Cosmos SDK to expose a mechanism in which arbitrary
+evidence can be submitted, evaluated and verified resulting in some agreed upon
+penalty for any misbehavior committed by a validator, such as equivocation (double-voting),
+signing when unbonded, signing an incorrect state transition (in the future), etc.
+Furthermore, such a mechanism is paramount for any
+[IBC](https://github.com/cosmos/ics/blob/master/ibc/2_IBC_ARCHITECTURE.md) or
+cross-chain validation protocol implementation in order to support the ability
+for any misbehavior to be relayed back from a collateralized chain to a primary
+chain so that the equivocating validator(s) can be slashed.
+
+## Decision
+
+We will implement an evidence module in the Cosmos SDK supporting the following
+functionality:
+
+* Provide developers with the abstractions and interfaces necessary to define
+ custom evidence messages, message handlers, and methods to slash and penalize
+ accordingly for misbehavior.
+* Support the ability to route evidence messages to handlers in any module to
+ determine the validity of submitted misbehavior.
+* Support the ability, through governance, to modify slashing penalties of any
+ evidence type.
+* Querier implementation to support querying params, evidence types, and all
+  submitted valid misbehavior.
+
+### Types
+
+First, we define the `Evidence` interface type. The `x/evidence` module may implement
+its own types that can be used by many chains (e.g. `CounterFactualEvidence`).
+In addition, other modules may implement their own `Evidence` types in a similar
+manner in which governance is extensible. It is important to note any concrete
+type implementing the `Evidence` interface may include arbitrary fields such as
+an infraction time. We want the `Evidence` type to remain as flexible as possible.
+
+When submitting evidence to the `x/evidence` module, the concrete type must provide
+the validator's consensus address, which should be known by the `x/slashing`
+module (assuming the infraction is valid), the height at which the infraction
+occurred, and the validator's power at that same height.
+
+```go
+type Evidence interface {
+ Route() string
+ Type() string
+ String() string
+ Hash() HexBytes
+ ValidateBasic() error
+
+ // The consensus address of the malicious validator at time of infraction
+ GetConsensusAddress() ConsAddress
+
+ // Height at which the infraction occurred
+ GetHeight() int64
+
+ // The total power of the malicious validator at time of infraction
+ GetValidatorPower() int64
+
+ // The total validator set power at time of infraction
+ GetTotalPower() int64
+}
+```
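+
+As a sketch, a concrete equivocation evidence type might implement this
+interface as follows (the struct and its fields are illustrative, not
+prescribed by this ADR):
+
+```go
+// Hypothetical concrete Evidence implementation for double-signing
+type EquivocationEvidence struct {
+    ConsAddress    ConsAddress // offending validator's consensus address
+    Height         int64       // height at which the infraction occurred
+    ValidatorPower int64       // validator's power at the infraction height
+    TotalPower     int64       // total validator set power at that height
+}
+
+func (e EquivocationEvidence) Route() string                    { return "equivocation" }
+func (e EquivocationEvidence) GetConsensusAddress() ConsAddress { return e.ConsAddress }
+func (e EquivocationEvidence) GetHeight() int64                 { return e.Height }
+func (e EquivocationEvidence) GetValidatorPower() int64         { return e.ValidatorPower }
+func (e EquivocationEvidence) GetTotalPower() int64             { return e.TotalPower }
+// Type(), String(), Hash(), and ValidateBasic() omitted for brevity
+```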
+
+### Routing & Handling
+
+Each `Evidence` type must map to a specific unique route and be registered with
+the `x/evidence` module. It accomplishes this through the `Router` implementation.
+
+```go
+type Router interface {
+ AddRoute(r string, h Handler) Router
+ HasRoute(r string) bool
+ GetRoute(path string) Handler
+ Seal()
+}
+```
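+
+Modules would then register their evidence routes during app construction,
+along the lines of (the route name and handler are illustrative):
+
+```go
+// Hypothetical wiring at app setup time
+router := evidence.NewRouter()
+router.AddRoute("equivocation", handleEquivocationEvidence) // illustrative handler
+router.Seal() // no further routes may be added after sealing
+```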
+
+Upon successful routing through the `x/evidence` module, the `Evidence` type
+is passed through a `Handler`. This `Handler` is responsible for executing all
+corresponding business logic necessary for verifying the evidence as valid. In
+addition, the `Handler` may execute any necessary slashing and potential jailing.
+Since slashing fractions will typically result from some form of static functions,
+allowing the `Handler` to do this provides the greatest flexibility. An example could
+be `k * evidence.GetValidatorPower()` where `k` is an on-chain parameter controlled
+by governance. The `Evidence` type should provide all the external information
+necessary in order for the `Handler` to make the necessary state transitions.
+If no error is returned, the `Evidence` is considered valid.
+
+```go
+type Handler func(Context, Evidence) error
+```
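+
+The `k * evidence.GetValidatorPower()` example above could be sketched as a
+`Handler` like so (the slashing keeper methods are assumptions for
+illustration):
+
+```go
+// Hypothetical handler applying a governance-controlled slash fraction
+func handleEquivocationEvidence(ctx Context, e Evidence) error {
+    if err := e.ValidateBasic(); err != nil {
+        return err
+    }
+
+    k := slashingKeeper.SlashFractionEquivocation(ctx) // on-chain parameter
+    slashingKeeper.Slash(ctx, e.GetConsensusAddress(), k, e.GetValidatorPower(), e.GetHeight())
+    slashingKeeper.Jail(ctx, e.GetConsensusAddress())
+    return nil
+}
+```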
+
+### Submission
+
+`Evidence` is submitted through a `MsgSubmitEvidence` message type which is internally
+handled by the `x/evidence` module's `SubmitEvidence`.
+
+```go
+type MsgSubmitEvidence struct {
+ Evidence
+}
+
+func handleMsgSubmitEvidence(ctx Context, keeper Keeper, msg MsgSubmitEvidence) Result {
+ if err := keeper.SubmitEvidence(ctx, msg.Evidence); err != nil {
+ return err.Result()
+ }
+
+ // emit events...
+
+ return Result{
+ // ...
+ }
+}
+```
+
+The `x/evidence` module's keeper is responsible for matching the `Evidence` against
+the module's router and invoking the corresponding `Handler` which may include
+slashing and jailing the validator. Upon success, the submitted evidence is persisted.
+
+```go
+func (k Keeper) SubmitEvidence(ctx Context, evidence Evidence) error {
+    handler := k.router.GetRoute(evidence.Route())
+    if err := handler(ctx, evidence); err != nil {
+        return ErrInvalidEvidence(k.codespace, err)
+    }
+
+    k.setEvidence(ctx, evidence)
+    return nil
+}
+```
+
+### Genesis
+
+Finally, we need to represent the genesis state of the `x/evidence` module. The
+module only needs a list of all submitted valid infractions and any params the
+module needs in order to handle submitted evidence. The `x/evidence` module
+will naturally define and route native evidence types, for which it will most
+likely need slashing penalty constants.
+
+```go
+type GenesisState struct {
+ Params Params
+ Infractions []Evidence
+}
+```
+
+## Consequences
+
+### Positive
+
+* Allows the state machine to process misbehavior submitted on-chain and penalize
+ validators based on agreed upon slashing parameters.
+* Allows evidence types to be defined and handled by any module. This further allows
+ slashing and jailing to be defined by more complex mechanisms.
+* Does not solely rely on Tendermint to submit evidence.
+
+### Negative
+
+* No easy way to introduce new evidence types through governance on a live chain
+ due to the inability to introduce the new evidence type's corresponding handler
+
+### Neutral
+
+* Should we persist infractions indefinitely? Or should we rather rely on events?
+
+## References
+
+* [ICS](https://github.com/cosmos/ics)
+* [IBC Architecture](https://github.com/cosmos/ics/blob/master/ibc/1_IBC_ARCHITECTURE.md)
+* [Tendermint Fork Accountability](https://github.com/tendermint/spec/blob/7b3138e69490f410768d9b1ffc7a17abc23ea397/spec/consensus/fork-accountability.md)
diff --git a/copy-of-sdk-docs/build/architecture/adr-010-modular-antehandler.md b/copy-of-sdk-docs/build/architecture/adr-010-modular-antehandler.md
new file mode 100644
index 00000000..4eb5b885
--- /dev/null
+++ b/copy-of-sdk-docs/build/architecture/adr-010-modular-antehandler.md
@@ -0,0 +1,290 @@
+# ADR 010: Modular AnteHandler
+
+## Changelog
+
+* 2019 Aug 31: Initial draft
+* 2021 Sep 14: Superseded by ADR-045
+
+## Status
+
+SUPERSEDED by ADR-045
+
+## Context
+
+The current AnteHandler design allows users to either use the default AnteHandler provided in `x/auth` or to build their own AnteHandler from scratch. Ideally AnteHandler functionality is split into multiple, modular functions that can be chained together along with custom ante-functions so that users do not have to rewrite common antehandler logic when they want to implement custom behavior.
+
+For example, let's say a user wants to implement some custom signature verification logic. In the current codebase, the user would have to write their own Antehandler from scratch largely reimplementing much of the same code and then set their own custom, monolithic antehandler in the baseapp. Instead, we would like to allow users to specify custom behavior when necessary and combine them with default ante-handler functionality in a way that is as modular and flexible as possible.
+
+## Proposals
+
+### Per-Module AnteHandler
+
+One approach is to use the [ModuleManager](https://pkg.go.dev/github.com/cosmos/cosmos-sdk/types/module) and have each module implement its own antehandler if it requires custom antehandler logic. The ModuleManager can then be passed in an AnteHandler order in the same way it has an order for BeginBlockers and EndBlockers. The ModuleManager returns a single AnteHandler function that will take in a tx and run each module's `AnteHandle` in the specified order. The module manager's AnteHandler is set as the baseapp's AnteHandler.
+
+Pros:
+
+1. Simple to implement
+2. Utilizes the existing ModuleManager architecture
+
+Cons:
+
+1. Improves granularity but still cannot get more granular than a per-module basis. e.g. If auth's `AnteHandle` function is in charge of validating memo and signatures, users cannot swap the signature-checking functionality while keeping the rest of auth's `AnteHandle` functionality.
+2. Module AnteHandlers are run one after the other. There is no way for one AnteHandler to wrap or "decorate" another.
+
+### Decorator Pattern
+
+The [weave project](https://github.com/iov-one/weave) achieves AnteHandler modularity through the use of a decorator pattern. The interface is designed as follows:
+
+```go
+// Decorator wraps a Handler to provide common functionality
+// like authentication, or fee-handling, to many Handlers
+type Decorator interface {
+ Check(ctx Context, store KVStore, tx Tx, next Checker) (*CheckResult, error)
+ Deliver(ctx Context, store KVStore, tx Tx, next Deliverer) (*DeliverResult, error)
+}
+```
+
+Each decorator works like a modularized Cosmos SDK antehandler function, but it can take in a `next` argument that may be another decorator or a Handler (which does not take in a next argument). These decorators can be chained together, one decorator being passed in as the `next` argument of the previous decorator in the chain. The chain ends in a Router which can take a tx and route to the appropriate msg handler.
+
+A key benefit of this approach is that one Decorator can wrap its internal logic around the next Checker/Deliverer. A weave Decorator may do the following:
+
+```go
+// Example Decorator's Deliver function
+func (example Decorator) Deliver(ctx Context, store KVStore, tx Tx, next Deliverer) {
+ // Do some pre-processing logic
+
+ res, err := next.Deliver(ctx, store, tx)
+
+ // Do some post-processing logic given the result and error
+}
+```
+
+Pros:
+
+1. Weave Decorators can wrap over the next decorator/handler in the chain. The ability to both pre-process and post-process may be useful in certain settings.
+2. Provides a nested modular structure that isn't possible in the solution above, while also allowing for a linear one-after-the-other structure like the solution above.
+
+Cons:
+
+1. It is hard to understand at first glance the state updates that would occur after a Decorator runs given the `ctx`, `store`, and `tx`. A Decorator can have an arbitrary number of nested Decorators being called within its function body, each possibly doing some pre- and post-processing before calling the next decorator on the chain. Thus to understand what a Decorator is doing, one must also understand what every other decorator further along the chain is doing. This can get quite complicated to understand. A linear, one-after-the-other approach, while less powerful, may be much easier to reason about.
+
+### Chained Micro-Functions
+
+The benefit of Weave's approach is that the Decorators can be very concise, which when chained together allows for maximum customizability. However, the nested structure can get quite complex and thus hard to reason about.
+
+Another approach is to split the AnteHandler functionality into tightly scoped "micro-functions", while preserving the one-after-the-other ordering that would come from the ModuleManager approach.
+
+We can then have a way to chain these micro-functions so that they run one after the other. Modules may define multiple ante micro-functions and then also provide a default per-module AnteHandler that implements a default, suggested order for these micro-functions.
+
+Users can order the AnteHandlers easily by simply using the ModuleManager. The ModuleManager will take in a list of AnteHandlers and return a single AnteHandler that runs each AnteHandler in the order of the list provided. If the user is comfortable with the default ordering of each module, this is as simple as providing a list with each module's antehandler (exactly the same as BeginBlocker and EndBlocker).
+
+If, however, users wish to change the order or add, modify, or delete ante micro-functions in any way, they can always define their own ante micro-functions and add them explicitly to the list that gets passed into the module manager.
+
+#### Default Workflow
+
+This is an example of a user's AnteHandler if they choose not to make any custom micro-functions.
+
+##### Cosmos SDK code
+
+```go
+// Chains together a list of AnteHandler micro-functions that get run one after the other.
+// Returned AnteHandler will abort on first error.
+func Chainer(order []AnteHandler) AnteHandler {
+    return func(ctx Context, tx Tx, simulate bool) (newCtx Context, err error) {
+        for _, ante := range order {
+            // reassign (rather than shadow) ctx so changes propagate to the next ante function
+            ctx, err = ante(ctx, tx, simulate)
+            if err != nil {
+                return ctx, err
+            }
+        }
+        return ctx, nil
+    }
+}
+```
+
+```go
+// AnteHandler micro-function to verify signatures
+func VerifySignatures(ctx Context, tx Tx, simulate bool) (newCtx Context, err error) {
+ // verify signatures
+ // Returns InvalidSignature Result and abort=true if sigs invalid
+ // Return OK result and abort=false if sigs are valid
+}
+
+// AnteHandler micro-function to validate memo
+func ValidateMemo(ctx Context, tx Tx, simulate bool) (newCtx Context, err error) {
+ // validate memo
+}
+
+// Auth defines its own default ante-handler by chaining its micro-functions in a recommended order
+AuthModuleAnteHandler := Chainer([]AnteHandler{VerifySignatures, ValidateMemo})
+```
+
+```go
+// Distribution micro-function to deduct fees from tx
+func DeductFees(ctx Context, tx Tx, simulate bool) (newCtx Context, err error) {
+ // Deduct fees from tx
+ // Abort if insufficient funds in account to pay for fees
+}
+
+// Distribution micro-function to check if fees > mempool parameter
+func CheckMempoolFees(ctx Context, tx Tx, simulate bool) (newCtx Context, err error) {
+ // If CheckTx: Abort if the fees are less than the mempool's minFee parameter
+}
+
+// Distribution defines its own default ante-handler by chaining its micro-functions in a recommended order
+DistrModuleAnteHandler := Chainer([]AnteHandler{CheckMempoolFees, DeductFees})
+```
+
+```go
+type ModuleManager struct {
+ // other fields
+ AnteHandlerOrder []AnteHandler
+}
+
+func (mm ModuleManager) GetAnteHandler() AnteHandler {
+ return Chainer(mm.AnteHandlerOrder)
+}
+```
+
+##### User Code
+
+```go
+// Note: Since user is not making any custom modifications, we can just SetAnteHandlerOrder with the default AnteHandlers provided by each module in our preferred order
+moduleManager.SetAnteHandlerOrder([]AnteHandler{AuthModuleAnteHandler, DistrModuleAnteHandler})
+
+app.SetAnteHandler(moduleManager.GetAnteHandler())
+```
+
+#### Custom Workflow
+
+This is an example workflow for a user that wants to implement custom antehandler logic. In this example, the user wants to implement custom signature verification and change the order of antehandler so that validate memo runs before signature verification.
+
+##### User Code
+
+```go
+// User can implement their own custom signature verification antehandler micro-function
+func CustomSigVerify(ctx Context, tx Tx, simulate bool) (newCtx Context, err error) {
+ // do some custom signature verification logic
+}
+```
+
+```go
+// Micro-functions allow users to change order of when they get executed, and swap out default ante-functionality with their own custom logic.
+// Note that users can still chain the default distribution module handler, and auth micro-function along with their custom ante function
+moduleManager.SetAnteHandlerOrder([]AnteHandler{ValidateMemo, CustomSigVerify, DistrModuleAnteHandler})
+```
+
+Pros:
+
+1. Allows for ante functionality to be as modular as possible.
+2. For users that do not need custom ante-functionality, there is little difference between how antehandlers work and how BeginBlock and EndBlock work in ModuleManager.
+3. Still easy to understand
+
+Cons:
+
+1. Cannot wrap antehandlers with decorators like you can with Weave.
+
+### Simple Decorators
+
+This approach takes inspiration from Weave's decorator design while trying to minimize the number of breaking changes to the Cosmos SDK and maximizing simplicity. Like Weave decorators, this approach allows one `AnteDecorator` to wrap the next AnteHandler to do pre- and post-processing on the result. This is useful since decorators can do defer/cleanups after an AnteHandler returns as well as perform some setup beforehand. Unlike Weave decorators, these `AnteDecorator` functions can only wrap over the AnteHandler rather than the entire handler execution path. This is deliberate as we want decorators from different modules to perform authentication/validation on a `tx`. However, we do not want decorators being capable of wrapping and modifying the results of a `MsgHandler`.
+
+In addition, this approach will not break any core Cosmos SDK APIs. Since we preserve the notion of an AnteHandler and still set a single AnteHandler in baseapp, the decorator is simply an additional approach available for users that desire more customization. The API of modules (namely `x/auth`) may break with this approach, but the core API remains untouched.
+
+We introduce an `AnteDecorator` interface whose implementations can be chained together to create a Cosmos SDK AnteHandler.
+
+This allows users to choose between implementing an AnteHandler by themselves and setting it in the baseapp, or use the decorator pattern to chain their custom decorators with the Cosmos SDK provided decorators in the order they wish.
+
+```go
+// An AnteDecorator wraps an AnteHandler, and can do pre- and post-processing on the next AnteHandler
+type AnteDecorator interface {
+ AnteHandle(ctx Context, tx Tx, simulate bool, next AnteHandler) (newCtx Context, err error)
+}
+```
+
+```go
+// ChainAnteDecorators will recursively link all of the AnteDecorators in the chain and return a final AnteHandler function
+// This is done to preserve the ability to set a single AnteHandler function in the baseapp.
+func ChainAnteDecorators(chain ...AnteDecorator) AnteHandler {
+    if len(chain) == 1 {
+        return func(ctx Context, tx Tx, simulate bool) (Context, error) {
+            return chain[0].AnteHandle(ctx, tx, simulate, nil)
+        }
+    }
+    return func(ctx Context, tx Tx, simulate bool) (Context, error) {
+        return chain[0].AnteHandle(ctx, tx, simulate, ChainAnteDecorators(chain[1:]...))
+    }
+}
+```
+
+#### Example Code
+
+Define AnteDecorator functions
+
+```go
+// Setup GasMeter, catch OutOfGasPanic and handle appropriately
+type SetUpContextDecorator struct{}
+
+func (sud SetUpContextDecorator) AnteHandle(ctx Context, tx Tx, simulate bool, next AnteHandler) (newCtx Context, err error) {
+    ctx.GasMeter = NewGasMeter(tx.Gas)
+
+    defer func() {
+        // recover from OutOfGas panic and handle appropriately
+    }()
+
+    return next(ctx, tx, simulate)
+}
+
+// Signature Verification decorator. Verify Signatures and move on
+type SigVerifyDecorator struct{}
+
+func (svd SigVerifyDecorator) AnteHandle(ctx Context, tx Tx, simulate bool, next AnteHandler) (newCtx Context, err error) {
+ // verify sigs. Return error if invalid
+
+ // call next antehandler if sigs ok
+ return next(ctx, tx, simulate)
+}
+
+// User-defined Decorator. Can choose to pre- and post-process on AnteHandler
+type UserDefinedDecorator struct{
+ // custom fields
+}
+
+func (udd UserDefinedDecorator) AnteHandle(ctx Context, tx Tx, simulate bool, next AnteHandler) (newCtx Context, err error) {
+    // pre-processing logic
+
+    ctx, err = next(ctx, tx, simulate)
+
+    // post-processing logic
+
+    return ctx, err
+}
+```
+
+Link AnteDecorators to create a final AnteHandler. Set this AnteHandler in baseapp.
+
+```go
+// Create final antehandler by chaining the decorators together
+antehandler := ChainAnteDecorators(SetUpContextDecorator{}, SigVerifyDecorator{}, UserDefinedDecorator{})
+
+// Set chained Antehandler in the baseapp
+bapp.SetAnteHandler(antehandler)
+```
+
+Pros:
+
+1. Allows one decorator to pre- and post-process the next AnteHandler, similar to the Weave design.
+2. Do not need to break baseapp API. Users can still set a single AnteHandler if they choose.
+
+Cons:
+
+1. Decorator pattern may have a deeply nested structure that is hard to understand; this is mitigated by having the decorator order explicitly listed in the `ChainAnteDecorators` function.
+2. Does not make use of the ModuleManager design. Since this is already being used for BeginBlocker/EndBlocker, this proposal seems unaligned with that design pattern.
+
+## Consequences
+
+Since pros and cons are written for each approach, they are omitted from this section.
+
+## References
+
+* [#4572](https://github.com/cosmos/cosmos-sdk/issues/4572): Modular AnteHandler Issue
+* [#4582](https://github.com/cosmos/cosmos-sdk/pull/4583): Initial Implementation of Per-Module AnteHandler Approach
+* [Weave Decorator Code](https://github.com/iov-one/weave/blob/master/handler.go#L35)
+* [Weave Design Videos](https://vimeo.com/showcase/6189877)
diff --git a/copy-of-sdk-docs/build/architecture/adr-011-generalize-genesis-accounts.md b/copy-of-sdk-docs/build/architecture/adr-011-generalize-genesis-accounts.md
new file mode 100644
index 00000000..92a704ba
--- /dev/null
+++ b/copy-of-sdk-docs/build/architecture/adr-011-generalize-genesis-accounts.md
@@ -0,0 +1,170 @@
+# ADR 011: Generalize Genesis Accounts
+
+## Changelog
+
+* 2019-08-30: initial draft
+
+## Context
+
+Currently, the Cosmos SDK allows for custom account types; the `auth` keeper stores any type fulfilling its `Account` interface. However, `auth` does not handle exporting or loading accounts to/from a genesis file; this is done by `genaccounts`, which only handles one of four concrete account types (`BaseAccount`, `ContinuousVestingAccount`, `DelayedVestingAccount` and `ModuleAccount`).
+
+Projects desiring to use custom accounts (say custom vesting accounts) need to fork and modify `genaccounts`.
+
+## Decision
+
+In summary, we will (un)marshal all accounts (interface types) directly using amino, rather than converting to `genaccounts`’s `GenesisAccount` type. Since doing this removes the majority of `genaccounts`'s code, we will merge `genaccounts` into `auth`. Marshalled accounts will be stored in `auth`'s genesis state.
+
+Detailed changes:
+
+### 1) (Un)Marshal accounts directly using amino
+
+The `auth` module's `GenesisState` gains a new field `Accounts`. Note these aren't of type `exported.Account` for reasons outlined in section 3.
+
+```go
+// GenesisState - all auth state that must be provided at genesis
+type GenesisState struct {
+ Params Params `json:"params" yaml:"params"`
+ Accounts []GenesisAccount `json:"accounts" yaml:"accounts"`
+}
+```
+
+Now `auth`'s `InitGenesis` and `ExportGenesis` (un)marshal accounts as well as the defined params.
+
+```go
+// InitGenesis - Init store state from genesis data
+func InitGenesis(ctx sdk.Context, ak AccountKeeper, data GenesisState) {
+ ak.SetParams(ctx, data.Params)
+ // load the accounts
+ for _, a := range data.Accounts {
+ acc := ak.NewAccount(ctx, a) // set account number
+ ak.SetAccount(ctx, acc)
+ }
+}
+
+// ExportGenesis returns a GenesisState for a given context and keeper
+func ExportGenesis(ctx sdk.Context, ak AccountKeeper) GenesisState {
+ params := ak.GetParams(ctx)
+
+ var genAccounts []exported.GenesisAccount
+ ak.IterateAccounts(ctx, func(account exported.Account) bool {
+ genAccount := account.(exported.GenesisAccount)
+ genAccounts = append(genAccounts, genAccount)
+ return false
+ })
+
+ return NewGenesisState(params, genAccounts)
+}
+```
+
+### 2) Register custom account types on the `auth` codec
+
+The `auth` codec must have all custom account types registered to marshal them. We will follow the pattern established in `gov` for proposals.
+
+An example custom account definition:
+
+```go
+import authtypes "github.com/cosmos/cosmos-sdk/x/auth/types"
+
+// Register the module account type with the auth module codec so it can decode module accounts stored in a genesis file
+func init() {
+ authtypes.RegisterAccountTypeCodec(ModuleAccount{}, "cosmos-sdk/ModuleAccount")
+}
+
+type ModuleAccount struct {
+ ...
+```
+
+The `auth` codec definition:
+
+```go
+var ModuleCdc *codec.LegacyAmino
+
+func init() {
+ ModuleCdc = codec.NewLegacyAmino()
+ // register module msg's and Account interface
+ ...
+ // leave the codec unsealed
+}
+
+// RegisterAccountTypeCodec registers an external account type defined in another module for the internal ModuleCdc.
+func RegisterAccountTypeCodec(o interface{}, name string) {
+ ModuleCdc.RegisterConcrete(o, name, nil)
+}
+```
+
+### 3) Genesis validation for custom account types
+
+Modules implement a `ValidateGenesis` method. As `auth` does not know of account implementations, accounts will need to validate themselves.
+
+We will unmarshal accounts into a `GenesisAccount` interface that includes a `Validate` method.
+
+```go
+type GenesisAccount interface {
+ exported.Account
+ Validate() error
+}
+```
+
+Then the `auth` `ValidateGenesis` function becomes:
+
+```go
+// ValidateGenesis performs basic validation of auth genesis data returning an
+// error for any failed validation criteria.
+func ValidateGenesis(data GenesisState) error {
+ // Validate params
+ ...
+
+ // Validate accounts
+ addrMap := make(map[string]bool, len(data.Accounts))
+ for _, acc := range data.Accounts {
+
+ // check for duplicated accounts
+ addrStr := acc.GetAddress().String()
+ if _, ok := addrMap[addrStr]; ok {
+ return fmt.Errorf("duplicate account found in genesis state; address: %s", addrStr)
+ }
+ addrMap[addrStr] = true
+
+ // check account specific validation
+ if err := acc.Validate(); err != nil {
+ return fmt.Errorf("invalid account found in genesis state; address: %s, error: %s", addrStr, err.Error())
+ }
+
+ }
+ return nil
+}
+```
+
+### 4) Move add-genesis-account cli to `auth`
+
+The `genaccounts` module contains a cli command to add base or vesting accounts to a genesis file.
+
+This will be moved to `auth`. We will leave it to projects to write their own commands to add custom accounts. An extensible cli handler, similar to `gov`, could be created but it is not worth the complexity for this minor use case.
+
+### 5) Update module and vesting accounts
+
+Under the new scheme, module and vesting account types need some minor updates:
+
+* Type registration on `auth`'s codec (shown above)
+* A `Validate` method for each `Account` concrete type
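+As an illustrative sketch (the `BaseAccount` fields and checks here are hypothetical, not the SDK's actual types), such a `Validate` method might look like:
+
+```go
+import "errors"
+
+// BaseAccount is a hypothetical concrete account type used for illustration.
+type BaseAccount struct {
+	Address       string
+	AccountNumber uint64
+}
+
+// Validate performs stateless self-checks so that auth's ValidateGenesis can
+// reject malformed accounts without knowing the concrete account type.
+func (acc BaseAccount) Validate() error {
+	if acc.Address == "" {
+		return errors.New("account address cannot be empty")
+	}
+	return nil
+}
+```
+
+Because `Validate` takes no `Context`, it can only perform stateless checks, which is all genesis validation requires.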
+
+## Status
+
+Proposed
+
+## Consequences
+
+### Positive
+
+* custom accounts can be used without needing to fork `genaccounts`
+* reduction in lines of code
+
+### Negative
+
+### Neutral
+
+* `genaccounts` module no longer exists
+* accounts in genesis files are stored under `accounts` in `auth` rather than in the `genaccounts` module.
+* `add-genesis-account` CLI command is now in `auth`
+
+## References
diff --git a/copy-of-sdk-docs/build/architecture/adr-012-state-accessors.md b/copy-of-sdk-docs/build/architecture/adr-012-state-accessors.md
new file mode 100644
index 00000000..009e3492
--- /dev/null
+++ b/copy-of-sdk-docs/build/architecture/adr-012-state-accessors.md
@@ -0,0 +1,155 @@
+# ADR 012: State Accessors
+
+## Changelog
+
+* 2019 Sep 04: Initial draft
+
+## Context
+
+Cosmos SDK modules currently use the `KVStore` interface and `Codec` to access their respective state. While
+this provides a large degree of freedom to module developers, it is hard to modularize and the UX is
+mediocre.
+
+First, each time a module tries to access the state, it has to marshal the value and set or get the
+value and finally unmarshal. Usually this is done by declaring `Keeper.GetXXX` and `Keeper.SetXXX` functions,
+which are repetitive and hard to maintain.
+
+Second, this makes it harder to align with the object-capability model: the right to access the
+state is defined by a `StoreKey`, which gives full access to the entire Merkle tree, so a module cannot
+safely delegate the right to access a specific key-value pair (or a set of key-value pairs) to another module.
+
+Finally, because the getter/setter functions are defined as methods of a module's `Keeper`, reviewers
+have to consider the whole Merkle tree space when reviewing a function that accesses any part of the state.
+There is no static way to know which part of the state a function is accessing (and which it is not).
+
+## Decision
+
+We will define a type named `Value`:
+
+```go
+type Value struct {
+ m Mapping
+ key []byte
+}
+```
+
+The `Value` works as a reference for a key-value pair in the state, where `Value.m` defines the key-value
+space it will access and `Value.key` defines the exact key for the reference.
+
+We will define a type named `Mapping`:
+
+```go
+type Mapping struct {
+ storeKey sdk.StoreKey
+ cdc *codec.LegacyAmino
+ prefix []byte
+}
+```
+
+The `Mapping` works as a reference for a key-value space in the state, where `Mapping.storeKey` defines
+the IAVL (sub-)tree and `Mapping.prefix` defines the optional subspace prefix.
+
+We will define the following core methods for the `Value` type:
+
+```go
+// Get and unmarshal stored data, noop if not exists, panic if cannot unmarshal
+func (Value) Get(ctx Context, ptr interface{}) {}
+
+// Get and unmarshal stored data, return error if not exists or cannot unmarshal
+func (Value) GetSafe(ctx Context, ptr interface{}) {}
+
+// Get stored data as raw byte slice
+func (Value) GetRaw(ctx Context) []byte {}
+
+// Marshal and set a raw value
+func (Value) Set(ctx Context, o interface{}) {}
+
+// Check if a raw value exists
+func (Value) Exists(ctx Context) bool {}
+
+// Delete a raw value
+func (Value) Delete(ctx Context) {}
+```
+
+We will define the following core methods for the `Mapping` type:
+
+```go
+// Constructs key-value pair reference corresponding to the key argument in the Mapping space
+func (Mapping) Value(key []byte) Value {}
+
+// Get and unmarshal stored data, noop if not exists, panic if cannot unmarshal
+func (Mapping) Get(ctx Context, key []byte, ptr interface{}) {}
+
+// Get and unmarshal stored data, return error if not exists or cannot unmarshal
+func (Mapping) GetSafe(ctx Context, key []byte, ptr interface{}) {}
+
+// Get stored data as raw byte slice
+func (Mapping) GetRaw(ctx Context, key []byte) []byte {}
+
+// Marshal and set a raw value
+func (Mapping) Set(ctx Context, key []byte, o interface{}) {}
+
+// Check if a raw value exists
+func (Mapping) Has(ctx Context, key []byte) bool {}
+
+// Delete a raw value
+func (Mapping) Delete(ctx Context, key []byte) {}
+```
+
+Each method of the `Mapping` type that is passed the arguments `ctx`, `key`, and `args...` will proxy
+the call to `Mapping.Value(key)` with arguments `ctx` and `args...`.
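+As a self-contained sketch (using a toy map-backed store and JSON in place of the SDK's real `KVStore` and amino codec), this proxying looks like:
+
+```go
+import "encoding/json"
+
+// Context is a toy stand-in for sdk.Context, wrapping a raw key-value store.
+type Context struct{ store map[string][]byte }
+
+// Value references a single key-value pair in the state.
+type Value struct {
+	m   Mapping
+	key []byte
+}
+
+// Set marshals o and stores it under the Value's prefixed key.
+func (v Value) Set(ctx Context, o interface{}) {
+	bz, err := json.Marshal(o)
+	if err != nil {
+		panic(err)
+	}
+	ctx.store[string(v.m.prefix)+string(v.key)] = bz
+}
+
+// Get unmarshals the stored data into ptr; noop if the key does not exist.
+func (v Value) Get(ctx Context, ptr interface{}) {
+	bz, ok := ctx.store[string(v.m.prefix)+string(v.key)]
+	if !ok {
+		return
+	}
+	if err := json.Unmarshal(bz, ptr); err != nil {
+		panic(err)
+	}
+}
+
+// Mapping references a key-value space; its methods proxy to Value.
+type Mapping struct{ prefix []byte }
+
+func (m Mapping) Value(key []byte) Value { return Value{m: m, key: key} }
+
+// Get and Set proxy the call to Mapping.Value(key), as described above.
+func (m Mapping) Get(ctx Context, key []byte, ptr interface{}) { m.Value(key).Get(ctx, ptr) }
+func (m Mapping) Set(ctx Context, key []byte, o interface{})   { m.Value(key).Set(ctx, o) }
+```
+
+Here `Mapping{prefix: []byte("acc/")}` scopes access to keys under `acc/`, so handing another module this `Mapping` (or a single `Value`) grants access to exactly that key space and nothing else.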
+
+In addition, we will define and provide a common set of types derived from the `Value` type:
+
+```go
+type Boolean struct { Value }
+type Enum struct { Value }
+type Integer struct { Value; enc IntEncoding }
+type String struct { Value }
+// ...
+```
+
+These derived types may use different encoding schemes; their `o` arguments in the core methods are typed,
+and their `ptr` arguments are replaced by explicit return types.
+
+Finally, we will define a family of types derived from the `Mapping` type:
+
+```go
+type Indexer struct {
+ m Mapping
+ enc IntEncoding
+}
+```
+
+Here the `key` argument in the core methods is typed (as an integer, per `IntEncoding`).
+
+Some of the properties of the accessor types are:
+
+* State access happens only when a function that takes a `Context` argument is invoked
+* An accessor struct grants access only to the part of the state it refers to, and nothing else
+* Marshalling/unmarshalling happens implicitly within the core methods
+
+## Status
+
+Proposed
+
+## Consequences
+
+### Positive
+
+* Serialization will be done automatically
+* Shorter code size, less boilerplate, better UX
+* References to the state can be transferred safely
+* Explicit scope of accessing
+
+### Negative
+
+* Serialization format will be hidden
+* Different architecture from the current, but the use of accessor types can be opt-in
+* Type-specific types (e.g. `Boolean` and `Integer`) have to be defined manually
+
+### Neutral
+
+## References
+
+* [#4554](https://github.com/cosmos/cosmos-sdk/issues/4554)
diff --git a/copy-of-sdk-docs/build/architecture/adr-013-metrics.md b/copy-of-sdk-docs/build/architecture/adr-013-metrics.md
new file mode 100644
index 00000000..b0808d46
--- /dev/null
+++ b/copy-of-sdk-docs/build/architecture/adr-013-metrics.md
@@ -0,0 +1,157 @@
+# ADR 013: Observability
+
+## Changelog
+
+* 20-01-2020: Initial Draft
+
+## Status
+
+Proposed
+
+## Context
+
+Telemetry is paramount to debugging and understanding what the application is doing and how it is
+performing. We aim to expose metrics from modules and other core parts of the Cosmos SDK.
+
+In addition, we should aim to support multiple configurable sinks that an operator may choose from.
+By default, when telemetry is enabled, the application should track and expose metrics that are
+stored in-memory. The operator may choose to enable additional sinks; for now we support only
+[Prometheus](https://prometheus.io/), as it's battle-tested, simple to set up, open source,
+and rich with ecosystem tooling.
+
+We must also aim to integrate metrics into the Cosmos SDK in the most seamless way possible such that
+metrics may be added or removed at will and without much friction. To do this, we will use the
+[go-metrics](https://github.com/hashicorp/go-metrics) library.
+
+Finally, operators may enable telemetry along with specific configuration options. If enabled, metrics
+will be exposed at `/metrics?format={text|prometheus}` through the API server.
+
+## Decision
+
+We will add an additional configuration block to `app.toml` that defines telemetry settings:
+
+```toml
+###############################################################################
+### Telemetry Configuration ###
+###############################################################################
+
+[telemetry]
+
+# Prefixed with keys to separate services
+service-name = {{ .Telemetry.ServiceName }}
+
+# Enabled enables the application telemetry functionality. When enabled,
+# an in-memory sink is also enabled by default. Operators may also enable
+# other sinks such as Prometheus.
+enabled = {{ .Telemetry.Enabled }}
+
+# Enable prefixing gauge values with hostname
+enable-hostname = {{ .Telemetry.EnableHostname }}
+
+# Enable adding hostname to labels
+enable-hostname-label = {{ .Telemetry.EnableHostnameLabel }}
+
+# Enable adding service to labels
+enable-service-label = {{ .Telemetry.EnableServiceLabel }}
+
+# PrometheusRetentionTime, when positive, enables a Prometheus metrics sink.
+prometheus-retention-time = {{ .Telemetry.PrometheusRetentionTime }}
+```
+
+The given configuration allows for two sinks -- in-memory and Prometheus. We create a `Metrics`
+type that performs all the bootstrapping for the operator, so capturing metrics becomes seamless.
+
+```go
+// Metrics defines a wrapper around application telemetry functionality. It allows
+// metrics to be gathered at any point in time. When creating a Metrics object,
+// internally, a global metrics is registered with a set of sinks as configured
+// by the operator. In addition to the sinks, when a process gets a SIGUSR1, a
+// dump of formatted recent metrics will be sent to STDERR.
+type Metrics struct {
+ memSink *metrics.InmemSink
+ prometheusEnabled bool
+}
+
+// Gather collects all registered metrics and returns a GatherResponse where the
+// metrics are encoded depending on the type. Metrics are either encoded via
+// Prometheus or JSON if in-memory.
+func (m *Metrics) Gather(format string) (GatherResponse, error) {
+ switch format {
+ case FormatPrometheus:
+ return m.gatherPrometheus()
+
+ case FormatText:
+ return m.gatherGeneric()
+
+ case FormatDefault:
+ return m.gatherGeneric()
+
+ default:
+ return GatherResponse{}, fmt.Errorf("unsupported metrics format: %s", format)
+ }
+}
+```
+
+In addition, `Metrics` allows us to gather the current set of metrics at any given point in time. An
+operator may also choose to send a signal, SIGUSR1, to dump and print formatted metrics to STDERR.
+
+During an application's bootstrapping and construction phase, if `Telemetry.Enabled` is `true`, the
+API server will create a `Metrics` instance and register a metrics handler accordingly.
+
+```go
+func (s *Server) Start(cfg config.Config) error {
+ // ...
+
+ if cfg.Telemetry.Enabled {
+ m, err := telemetry.New(cfg.Telemetry)
+ if err != nil {
+ return err
+ }
+
+ s.metrics = m
+ s.registerMetrics()
+ }
+
+ // ...
+}
+
+func (s *Server) registerMetrics() {
+ metricsHandler := func(w http.ResponseWriter, r *http.Request) {
+ format := strings.TrimSpace(r.FormValue("format"))
+
+ gr, err := s.metrics.Gather(format)
+ if err != nil {
+ rest.WriteErrorResponse(w, http.StatusBadRequest, fmt.Sprintf("failed to gather metrics: %s", err))
+ return
+ }
+
+ w.Header().Set("Content-Type", gr.ContentType)
+ _, _ = w.Write(gr.Metrics)
+ }
+
+ s.Router.HandleFunc("/metrics", metricsHandler).Methods("GET")
+}
+```
+
+Application developers may track counters, gauges, summaries, and key/value metrics. There is no
+additional lifting required by modules to leverage profiling metrics. To do so, it's as simple as:
+
+```go
+func (k BaseKeeper) MintCoins(ctx sdk.Context, moduleName string, amt sdk.Coins) error {
+ defer metrics.MeasureSince(time.Now(), "MintCoins")
+ // ...
+}
+```
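+As a stdlib-only sketch of this pattern (the `record` callback stands in for a real go-metrics sink; it is not the library's API): the `defer`-based timing works because the arguments to a deferred call are evaluated at `defer` time, while the call itself runs when the function returns.
+
+```go
+import "time"
+
+// measureSince records the elapsed time since start under the given key.
+// When invoked via defer, start is captured when the defer statement runs,
+// and the elapsed time is recorded when the surrounding function returns.
+func measureSince(record func(key string, d time.Duration), key string, start time.Time) {
+	record(key, time.Since(start))
+}
+
+// mintCoins simulates a keeper method instrumented like MintCoins above.
+func mintCoins(record func(key string, d time.Duration)) {
+	defer measureSince(record, "MintCoins", time.Now())
+	time.Sleep(5 * time.Millisecond) // simulated work
+}
+```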
+
+## Consequences
+
+### Positive
+
+* Exposure into the performance and behavior of an application
+
+### Negative
+
+### Neutral
+
+## References
diff --git a/copy-of-sdk-docs/build/architecture/adr-014-proportional-slashing.md b/copy-of-sdk-docs/build/architecture/adr-014-proportional-slashing.md
new file mode 100644
index 00000000..976136a9
--- /dev/null
+++ b/copy-of-sdk-docs/build/architecture/adr-014-proportional-slashing.md
@@ -0,0 +1,85 @@
+# ADR 14: Proportional Slashing
+
+## Changelog
+
+* 2019-10-15: Initial draft
+* 2020-05-25: Removed correlation root slashing
+* 2020-07-01: Updated to include S-curve function instead of linear
+
+## Context
+
+In Proof of Stake-based chains, centralization of consensus power amongst a small set of validators can cause harm to the network due to increased risk of censorship, liveness failure, fork attacks, etc. However, while this centralization imposes a negative externality on the network, it is not directly felt by the delegators who delegate to already-large validators. We would like a way to pass the negative externality cost of centralization on to those large validators and their delegators.
+
+## Decision
+
+### Design
+
+To solve this problem, we will implement a procedure called Proportional Slashing. The desire is that the larger a validator is, the more they should be slashed. The first naive attempt is to make a validator's slash percent proportional to their share of consensus voting power.
+
+```text
+slash_amount = k * power // power is the faulting validator's voting power and k is some on-chain constant
+```
+
+However, this will incentivize validators with large amounts of stake to split up their voting power amongst accounts (sybil attack), so that if they fault, they all get slashed at a lower percent. The solution to this is to take into account not just a validator's own voting percentage, but also the voting percentage of all the other validators who get slashed in a specified time frame.
+
+```text
+slash_amount = k * (power_1 + power_2 + ... + power_n) // where power_i is the voting power of the ith validator faulting in the specified time frame and k is some on-chain constant
+```
+
+Now, if someone splits a validator with 10% of the voting power into two validators of 5% each and both fault in the same time frame, they will both be slashed at the combined 10% rate.
+
+However, in practice we likely don't want a linear relation between the amount of stake at fault and the percentage of stake to slash. In particular, 5% of stake double signing does relatively little to threaten security, whereas 30% of stake being at fault clearly merits a large slashing factor, as it is very close to the point at which Tendermint security is threatened. A linear relation would require only a factor-of-6 gap between these two, whereas the difference in risk posed to the network is much larger. We propose solving this with S-curves (formally, [logistic functions](https://en.wikipedia.org/wiki/Logistic_function)). S-curves capture the desired criterion quite well: they keep the slashing factor minimal for small values and then grow very rapidly near a threshold point where the risk posed becomes notable.
+
+#### Parameterization
+
+This requires parameterizing a logistic function. It is very well understood how to parameterize this. It has four parameters:
+
+1) A minimum slashing factor
+2) A maximum slashing factor
+3) The inflection point of the S-curve (essentially where do you want to center the S)
+4) The rate of growth of the S-curve (How elongated is the S)
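+A minimal sketch of such a parameterization (the parameter values below are illustrative placeholders, not proposed defaults):
+
+```go
+import "math"
+
+// slashFactor evaluates a logistic (S-curve) slashing function over the
+// faulting fraction of voting power: min and max bound the output, mid is
+// the inflection point, and rate controls how sharply the curve rises.
+func slashFactor(power, min, max, mid, rate float64) float64 {
+	return min + (max-min)/(1+math.Exp(-rate*(power-mid)))
+}
+```
+
+With, say, `min = 0.0001`, `max = 1.0`, `mid = 0.33`, and `rate = 25`, a 5% fault is slashed at roughly the minimum factor, while a fault near 33% of voting power is slashed at around half the maximum.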
+
+#### Correlation across non-sybil validators
+
+One will note that this model doesn't differentiate between multiple validators run by the same operators and validators run by different operators. This can in fact be seen as an additional benefit. It incentivizes validators to differentiate their setups from other validators, to avoid having correlated faults with them, or else they risk a higher slash. For example, operators should avoid using the same popular cloud hosting platforms or the same Staking-as-a-Service providers. This will lead to a more resilient and decentralized network.
+
+#### Griefing
+
+Griefing, the act of intentionally getting oneself slashed in order to make another's slash worse, could be a concern here. However, using the protocol described here, the attacker also gets equally impacted by the grief as the victim, so it would not provide much benefit to the griefer.
+
+### Implementation
+
+In the slashing module, we will add two queues that will track all of the recent slash events. For double sign faults, we will define "recent slashes" as ones that have occurred within the last `unbonding period`. For liveness faults, we will define "recent slashes" as ones that have occurred within the last `jail period`.
+
+```go
+type SlashEvent struct {
+ Address sdk.ValAddress
+ ValidatorVotingPercent sdk.Dec
+ SlashedSoFar sdk.Dec
+}
+```
+
+These slash events will be pruned from the queue once they are older than their respective "recent slash period".
+
+Whenever a new slash occurs, a `SlashEvent` struct is created with the faulting validator's voting percent and a `SlashedSoFar` of 0. Because recent slash events are pruned before the unbonding period and unjail period expires, it should not be possible for the same validator to have multiple SlashEvents in the same Queue at the same time.
+
+We then iterate over all the `SlashEvent`s in the queue, summing their `ValidatorVotingPercent` to calculate the new percent at which to slash all the validators in the queue, using the slashing formula introduced above.
+
+Once we have the `NewSlashPercent`, we iterate over all the `SlashEvent`s in the queue once again, and if `NewSlashPercent > SlashedSoFar` for a given SlashEvent, we call `staking.Slash(slashEvent.Address, slashEvent.Power, Math.Min(Math.Max(minSlashPercent, NewSlashPercent - SlashedSoFar), maxSlashPercent))` (we pass in the power of the validator before any slashes occurred, so that we slash the right amount of tokens). We then set the `SlashEvent.SlashedSoFar` amount to `NewSlashPercent`.
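+The two iterations above can be sketched as follows (using `float64` percentages and a plain callback in place of `sdk.Dec` and `staking.Slash`, and omitting the min/max clamping):
+
+```go
+// SlashEventF mirrors the SlashEvent struct above with float64 fields.
+type SlashEventF struct {
+	Address      string
+	VotingPct    float64
+	SlashedSoFar float64
+}
+
+// applyNewSlash sums the voting percentages of all recent slash events to
+// compute NewSlashPercent = k * (p1 + ... + pn), then tops each validator up
+// so its total slashed amount reaches the new percent.
+func applyNewSlash(queue []SlashEventF, k float64, slash func(addr string, pct float64)) {
+	total := 0.0
+	for _, e := range queue {
+		total += e.VotingPct
+	}
+	newPct := k * total
+	for i := range queue {
+		if newPct > queue[i].SlashedSoFar {
+			slash(queue[i].Address, newPct-queue[i].SlashedSoFar)
+			queue[i].SlashedSoFar = newPct
+		}
+	}
+}
+```
+
+For example, with `k = 3`, a validator already slashed at 15% for a solo 5% fault is topped up by a further 15% when a second 5% validator faults in the same window, so both end at the combined 30%.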
+
+## Status
+
+Proposed
+
+## Consequences
+
+### Positive
+
+* Increases decentralization by disincentivizing delegating to large validators
+* Incentivizes Decorrelation of Validators
+* More severely punishes attacks than accidental faults
+* More flexibility in slashing rates parameterization
+
+### Negative
+
+* More computationally expensive than current implementation. Will require more data about "recent slashing events" to be stored on chain.
diff --git a/copy-of-sdk-docs/build/architecture/adr-016-validator-consensus-key-rotation.md b/copy-of-sdk-docs/build/architecture/adr-016-validator-consensus-key-rotation.md
new file mode 100644
index 00000000..37ba3e52
--- /dev/null
+++ b/copy-of-sdk-docs/build/architecture/adr-016-validator-consensus-key-rotation.md
@@ -0,0 +1,125 @@
+# ADR 016: Validator Consensus Key Rotation
+
+## Changelog
+
+* 2019 Oct 23: Initial draft
+* 2019 Nov 28: Add key rotation fee
+
+## Context
+
+The validator consensus key rotation feature has been discussed and requested for a long time, for the sake of a safer validator key management policy (e.g. https://github.com/tendermint/tendermint/issues/1136). So, we suggest one of the simplest forms of validator consensus key rotation, implemented mostly in the Cosmos SDK.
+
+We don't need to make any update to the consensus logic in Tendermint, because Tendermint does not keep any mapping between consensus keys and validator operator keys; from Tendermint's point of view, a consensus key rotation of a validator is simply the replacement of one consensus key with another.
+
+Also, it should be noted that this ADR covers only the simplest form of consensus key rotation, without considering a multiple consensus keys concept. Such a concept shall remain a long-term goal of Tendermint and the Cosmos SDK.
+
+## Decision
+
+### Pseudo procedure for consensus key rotation
+
+* create new random consensus key.
+* create and broadcast a transaction with a `MsgRotateConsPubKey` that states the new consensus key is now coupled with the validator operator with a signature from the validator's operator key.
+* old consensus key becomes unable to participate on consensus immediately after the update of key mapping state on-chain.
+* start validating with new consensus key.
+* validators using HSM and KMS should update the consensus key in HSM to use the new rotated key after the height `h` when `MsgRotateConsPubKey` is committed to the blockchain.
+
+### Considerations
+
+* consensus key mapping information management strategy
+ * store history of each key mapping changes in the kvstore.
+ * the state machine can look up the consensus key paired with a given validator operator for any arbitrary height within the recent unbonding period.
+ * the state machine does not need any historical mapping information older than the unbonding period.
+* key rotation costs related to LCD and IBC
+ * LCD and IBC will bear a traffic/computation burden when there are frequent power changes
+ * In the current Tendermint design, consensus key rotations are seen as power changes from the LCD or IBC perspective
+ * Therefore, to minimize unnecessarily frequent key rotations, we limit the maximum number of rotations in the recent unbonding period and also apply an exponentially increasing rotation fee
+* limits
+ * a validator cannot rotate its consensus key more than `MaxConsPubKeyRotations` times within any unbonding period, to prevent spam.
+ * the parameters can be decided by governance and stored in the genesis file.
+* key rotation fee
+ * a validator should pay `KeyRotationFee` to rotate its consensus key, calculated as below
+ * `KeyRotationFee` = max(`VotingPowerPercentage` * 100, 1) * `InitialKeyRotationFee` * 2^(number of rotations in `ConsPubKeyRotationHistory` within the recent unbonding period)
+* evidence module
+ * the evidence module can look up the corresponding consensus key for any height via the slashing keeper, so that it can decide which consensus key was in use at the given height.
+* abci.ValidatorUpdate
+ * tendermint already has the ability to change a consensus key via ABCI communication (`ValidatorUpdate`).
+ * a validator consensus key update can be done by creating a new validator entry and deleting the old one by changing its power to zero.
+ * therefore, we expect we will not need to change the Tendermint codebase at all to implement this feature.
+* new genesis parameters in `staking` module
+ * `MaxConsPubKeyRotations` : the maximum number of rotations a validator can execute within the recent unbonding period. A default value of 10 is suggested (the 11th key rotation will be rejected)
+ * `InitialKeyRotationFee` : the initial key rotation fee when no key rotation has happened within the recent unbonding period. A default value of 1atom is suggested (a 1atom fee for the first key rotation in the recent unbonding period)
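+The fee formula above can be sketched as follows (treating the fee as a plain float rather than an `sdk.Coin`):
+
+```go
+import "math"
+
+// keyRotationFee computes
+// max(votingPowerPercentage * 100, 1) * initialFee * 2^pastRotations,
+// where pastRotations counts entries in ConsPubKeyRotationHistory within
+// the recent unbonding period.
+func keyRotationFee(votingPowerPercentage, initialFee float64, pastRotations int) float64 {
+	return math.Max(votingPowerPercentage*100, 1) * initialFee * math.Pow(2, float64(pastRotations))
+}
+```
+
+With a 1atom `InitialKeyRotationFee`, a validator with 5% of voting power pays 5atom for its first rotation and 20atom for its third within the same unbonding period, making spammy rotations rapidly more expensive.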
+
+### Workflow
+
+1. The validator generates a new consensus keypair.
+2. The validator generates and signs a `MsgRotateConsPubKey` tx with their operator key and new ConsPubKey
+
+ ```go
+ type MsgRotateConsPubKey struct {
+ ValidatorAddress sdk.ValAddress
+ NewPubKey crypto.PubKey
+ }
+ ```
+
+3. `handleMsgRotateConsPubKey` receives the `MsgRotateConsPubKey`, calls `RotateConsPubKey`, and emits an event
+4. `RotateConsPubKey`
+ * checks that `NewPubKey` is not already present in `ValidatorsByConsAddr`
+ * checks that the validator does not exceed the parameter `MaxConsPubKeyRotations` by iterating `ConsPubKeyRotationHistory`
+ * checks if the signing account has enough balance to pay `KeyRotationFee`
+ * pays `KeyRotationFee` to community fund
+ * overwrites `NewPubKey` in `validator.ConsPubKey`
+ * deletes old `ValidatorByConsAddr`
+ * `SetValidatorByConsAddr` for `NewPubKey`
+ * Add `ConsPubKeyRotationHistory` for tracking rotation
+
+ ```go
+ type ConsPubKeyRotationHistory struct {
+ OperatorAddress sdk.ValAddress
+ OldConsPubKey crypto.PubKey
+ NewConsPubKey crypto.PubKey
+ RotatedHeight int64
+ }
+ ```
+
+5. `ApplyAndReturnValidatorSetUpdates` checks whether there is a `ConsPubKeyRotationHistory` with `ConsPubKeyRotationHistory.RotatedHeight == ctx.BlockHeight()`, and if so, generates two `ValidatorUpdate`s: one to remove the old validator and one to create the new one
+
+ ```go
+ abci.ValidatorUpdate{
+ PubKey: cmttypes.TM2PB.PubKey(OldConsPubKey),
+ Power: 0,
+ }
+
+ abci.ValidatorUpdate{
+ PubKey: cmttypes.TM2PB.PubKey(NewConsPubKey),
+ Power: v.ConsensusPower(),
+ }
+ ```
+
+6. in the `previousVotes` iteration logic of `AllocateTokens`, match each `previousVote` using `OldConsPubKey` against `ConsPubKeyRotationHistory`, and replace the validator for token allocation
+7. Migrate `ValidatorSigningInfo` and `ValidatorMissedBlockBitArray` from `OldConsPubKey` to `NewConsPubKey`
+
+* Note: all of the above features shall be implemented in the `staking` module.
+
+## Status
+
+Proposed
+
+## Consequences
+
+### Positive
+
+* Validators can immediately or periodically rotate their consensus key to have a better security policy
+* Improved security against long-range attacks (https://nearprotocol.com/blog/long-range-attacks-and-a-new-fork-choice-rule), provided validators throw away their old consensus key(s)
+
+### Negative
+
+* Slash module needs more computation because it needs to look up the corresponding consensus key of validators for each height
+* frequent key rotations will make light client bisection less efficient
+
+### Neutral
+
+## References
+
+* on tendermint repo : https://github.com/tendermint/tendermint/issues/1136
+* on cosmos-sdk repo : https://github.com/cosmos/cosmos-sdk/issues/5231
+* about multiple consensus keys : https://github.com/tendermint/tendermint/issues/1758#issuecomment-545291698
diff --git a/copy-of-sdk-docs/build/architecture/adr-017-historical-header-module.md b/copy-of-sdk-docs/build/architecture/adr-017-historical-header-module.md
new file mode 100644
index 00000000..573c632c
--- /dev/null
+++ b/copy-of-sdk-docs/build/architecture/adr-017-historical-header-module.md
@@ -0,0 +1,61 @@
+# ADR 17: Historical Header Module
+
+## Changelog
+
+* 26 November 2019: Start of first version
+* 2 December 2019: Final draft of first version
+
+## Context
+
+In order for the Cosmos SDK to implement the [IBC specification](https://github.com/cosmos/ics), modules within the Cosmos SDK must have the ability to introspect recent consensus states (validator sets & commitment roots) as proofs of these values on other chains must be checked during the handshakes.
+
+## Decision
+
+The application MUST store the most recent `n` headers in a persistent store. At first, this store MAY be the current Merklised store. A non-Merklised store MAY be used later as no proofs are necessary.
+
+The application MUST store this information by storing new headers immediately when handling `abci.RequestBeginBlock`:
+
+```go
+func BeginBlock(ctx sdk.Context, keeper HistoricalHeaderKeeper, req abci.RequestBeginBlock) abci.ResponseBeginBlock {
+ info := HistoricalInfo{
+ Header: ctx.BlockHeader(),
+ ValSet: keeper.StakingKeeper.GetAllValidators(ctx), // note that this must be stored in a canonical order
+ }
+ keeper.SetHistoricalInfo(ctx, ctx.BlockHeight(), info)
+ n := keeper.GetParamRecentHeadersToStore()
+ keeper.PruneHistoricalInfo(ctx, ctx.BlockHeight() - n)
+ // continue handling request
+}
+```
+
+Alternatively, the application MAY store only the hash of the validator set.
+
+The application MUST make these past `n` committed headers available for querying by Cosmos SDK modules through the `Keeper`'s `GetHistoricalInfo` function. This MAY be implemented in a new module, or it MAY also be integrated into an existing one (likely `x/staking` or `x/ibc`).
+
+`n` MAY be configured as a parameter store parameter, in which case it could be changed by `ParameterChangeProposal`s, although it will take some blocks for the stored information to catch up if `n` is increased.
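+A toy map-backed sketch of this keep-the-last-`n` window (the keeper shape here is illustrative, not the final API):
+
+```go
+// HistoricalInfo stands in for the header/validator-set record stored above.
+type HistoricalInfo struct{ Height int64 }
+
+// Keeper retains the most recent n historical entries.
+type Keeper struct {
+	n       int64
+	entries map[int64]HistoricalInfo
+}
+
+func (k Keeper) SetHistoricalInfo(h int64, info HistoricalInfo) { k.entries[h] = info }
+
+// PruneHistoricalInfo deletes the entry at the given height; called each
+// block with height-n, it keeps only the last n heights stored.
+func (k Keeper) PruneHistoricalInfo(h int64) { delete(k.entries, h) }
+
+func (k Keeper) GetHistoricalInfo(h int64) (HistoricalInfo, bool) {
+	info, ok := k.entries[h]
+	return info, ok
+}
+```
+
+Simulating `BeginBlock` for heights 1 through 5 with `n = 2` leaves only heights 4 and 5 retrievable; if `n` were raised mid-run, older heights would only become available again as new blocks are stored.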
+
+## Status
+
+Proposed.
+
+## Consequences
+
+Implementation of this ADR will require changes to the Cosmos SDK. It will not require changes to Tendermint.
+
+### Positive
+
+* Easy retrieval of headers & state roots for recent past heights by modules anywhere in the Cosmos SDK.
+* No RPC calls to Tendermint required.
+* No ABCI alterations required.
+
+### Negative
+
+* Duplicates `n` headers data in Tendermint & the application (additional disk usage) - in the long term, an approach such as [this](https://github.com/tendermint/tendermint/issues/4210) might be preferable.
+
+### Neutral
+
+(none known)
+
+## References
+
+* [ICS 2: "Consensus state introspection"](https://github.com/cosmos/ibc/tree/master/spec/core/ics-002-client-semantics#consensus-state-introspection)
diff --git a/copy-of-sdk-docs/build/architecture/adr-018-extendable-voting-period.md b/copy-of-sdk-docs/build/architecture/adr-018-extendable-voting-period.md
new file mode 100644
index 00000000..2624e21e
--- /dev/null
+++ b/copy-of-sdk-docs/build/architecture/adr-018-extendable-voting-period.md
@@ -0,0 +1,66 @@
+# ADR 18: Extendable Voting Periods
+
+## Changelog
+
+* 1 January 2020: Start of first version
+
+## Context
+
+Currently the voting period for all governance proposals is the same. However, this is suboptimal, as not all governance proposals require the same time period. Non-contentious proposals can be dealt with more efficiently with a faster period, while more contentious or complex proposals may need a longer period for extended discussion/consideration.
+
+## Decision
+
+We would like to design a mechanism for making the voting period of a governance proposal variable based on the demand of voters. We would like it to be based on the view of the governance participants, rather than just the proposer of a governance proposal (thus, allowing the proposer to select the voting period length is not sufficient).
+
+However, we would like to avoid creating an entire second voting process to determine the length of the voting period, as that merely pushes the problem back to determining the length of the first voting period.
+
+Thus, we propose the following mechanism:
+
+### Params
+
+* The current gov param `VotingPeriod` is to be replaced by a `MinVotingPeriod` param. This is the default voting period that all governance proposal voting periods start with.
+* There is a new gov param called `MaxVotingPeriodExtension`.
+
+### Mechanism
+
+There is a new `Msg` type called `MsgExtendVotingPeriod`, which can be sent by any staked account during a proposal's voting period. It allows the sender to unilaterally extend the length of the voting period by `MaxVotingPeriodExtension * sender's share of voting power`. Every address can only call `MsgExtendVotingPeriod` once per proposal.
+
+So for example, if the `MaxVotingPeriodExtension` is set to 100 days, then anyone with 1% of voting power can extend the voting period by 1 day. If 33% of voting power has sent the message, the voting period will be extended by 33 days. Thus, if absolutely everyone chooses to extend the voting period, the absolute maximum voting period will be `MinVotingPeriod + MaxVotingPeriodExtension`.
+
+This system acts as a sort of distributed coordination, where individual stakers choosing whether or not to extend allows the system to gauge the contentiousness/complexity of the proposal. Since it is extremely unlikely that many stakers will choose to extend at the exact same time, stakers can view how long others have already extended thus far in order to decide whether or not to extend further.
+
+### Dealing with Unbonding/Redelegation
+
+There is one issue that needs to be addressed: how to deal with redelegation/unbonding during the voting period. If a staker with 5% of voting power calls `MsgExtendVotingPeriod` and then unbonds, does the voting period then decrease by 5 days again? This is not good, as it can give people a false sense of how long they have to make their decision. For this reason, we want to design it such that the voting period length can only be extended, never shortened. To do this, the current extension amount is based on the highest percentage that voted to extend at any point in time. This is best explained by example:
+
+1. Let's say 2 stakers of voting power 4% and 3% respectively vote to extend. The voting period will be extended by 7 days.
+2. Now the staker of 3% decides to unbond before the end of the voting period. The voting period extension remains 7 days.
+3. Now, let's say another staker of 2% voting power decides to extend the voting period. There is now 6% of active voting power choosing to extend. The voting period extension remains 7 days.
+4. If a fourth staker of 10% chooses to extend now, there is a total of 16% of active voting power wishing to extend. The voting period will be extended to 16 days.
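
The ratchet rule walked through above can be sketched in Go. This is an illustrative sketch only, not SDK code; the `extensionTracker` type and its method names are hypothetical, and `MaxVotingPeriodExtension` is taken to be 100 days as in the example:

```go
package main

import "fmt"

// extensionTracker implements the "ratchet" rule from the example above:
// the effective extension is based on the highest percentage of voting power
// that has voted to extend at any point in time, so it can never decrease
// when an extender later unbonds or redelegates.
type extensionTracker struct {
	highestPct int // highest extension percentage observed so far
}

// Observe records the current percentage of active voting power that has
// voted to extend; the effective extension only ratchets upward.
func (t *extensionTracker) Observe(currentPct int) {
	if currentPct > t.highestPct {
		t.highestPct = currentPct
	}
}

// ExtensionDays returns the extension length for a given
// MaxVotingPeriodExtension in days (100 days => 1% buys 1 day).
func (t *extensionTracker) ExtensionDays(maxExtensionDays int) int {
	return t.highestPct * maxExtensionDays / 100
}

func main() {
	t := &extensionTracker{}
	t.Observe(7)  // stakers of 4% and 3% extend
	t.Observe(4)  // the 3% staker unbonds; active extenders drop to 4%
	t.Observe(6)  // a 2% staker extends; active extenders now 6%
	fmt.Println(t.ExtensionDays(100)) // still 7 days
	t.Observe(16) // a 10% staker extends; active extenders now 16%
	fmt.Println(t.ExtensionDays(100)) // now 16 days
}
```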
+
+### Delegators
+
+Just like votes in the actual voting period, delegators automatically inherit the extension of their validators. If their validator chooses to extend, their voting power will be counted toward the validator's extension. However, a delegator is unable to override their validator and "unextend", as that would contradict the "voting period length can only be ratcheted up" principle described in the previous section. A delegator may, however, choose to extend using their personal voting power if their validator has not done so.
+
+## Status
+
+Proposed
+
+## Consequences
+
+### Positive
+
+* More complex/contentious governance proposals will have more time to be properly digested and deliberated
+
+### Negative
+
+* Governance process becomes more complex and requires more understanding to interact with effectively
+* The end time of a governance proposal can no longer be predicted, nor can the order in which governance proposals will end.
+
+### Neutral
+
+* The minimum voting period can be made shorter
+
+## References
+
+* [Cosmos Forum post where idea first originated](https://forum.cosmos.network/t/proposal-draft-reduce-governance-voting-period-to-7-days/3032/9)
diff --git a/copy-of-sdk-docs/build/architecture/adr-019-protobuf-state-encoding.md b/copy-of-sdk-docs/build/architecture/adr-019-protobuf-state-encoding.md
new file mode 100644
index 00000000..d0fc506e
--- /dev/null
+++ b/copy-of-sdk-docs/build/architecture/adr-019-protobuf-state-encoding.md
@@ -0,0 +1,379 @@
+# ADR 019: Protocol Buffer State Encoding
+
+## Changelog
+
+* 2020 Feb 15: Initial Draft
+* 2020 Feb 24: Updates to handle messages with interface fields
+* 2020 Apr 27: Convert usages of `oneof` for interfaces to `Any`
+* 2020 May 15: Describe `cosmos_proto` extensions and amino compatibility
+* 2020 Dec 4: Move and rename `MarshalAny` and `UnmarshalAny` into the `codec.Codec` interface.
+* 2021 Feb 24: Remove mentions of `HybridCodec`, which has been abandoned in [#6843](https://github.com/cosmos/cosmos-sdk/pull/6843).
+
+## Status
+
+Accepted
+
+## Context
+
+Currently, the Cosmos SDK utilizes [go-amino](https://github.com/tendermint/go-amino/) for binary
+and JSON object encoding over the wire bringing parity between logical objects and persistence objects.
+
+From the Amino docs:
+
+> Amino is an object encoding specification. It is a subset of Proto3 with an extension for interface
+> support. See the [Proto3 spec](https://developers.google.com/protocol-buffers/docs/proto3) for more
+> information on Proto3, which Amino is largely compatible with (but not with Proto2).
+>
+> The goal of the Amino encoding protocol is to bring parity into logic objects and persistence objects.
+
+Amino also aims to have the following goals (not a complete list):
+
+* Binary bytes must be decodable with a schema.
+* Schema must be upgradeable.
+* The encoder and decoder logic must be reasonably simple.
+
+However, we believe that Amino does not fulfill these goals completely and does not fully meet the
+needs of a truly flexible cross-language and multi-client compatible encoding protocol in the Cosmos SDK.
+Namely, Amino has proven to be a big pain point with regard to supporting object serialization across
+clients written in various languages, while providing little in the way of true backwards
+compatibility and upgradeability. Furthermore, through profiling and various benchmarks, Amino has
+been shown to be an extremely large performance bottleneck in the Cosmos SDK [1]. This is
+largely reflected in the performance of simulations and application transaction throughput.
+
+Thus, we need to adopt an encoding protocol that meets the following criteria for state serialization:
+
+* Language agnostic
+* Platform agnostic
+* Rich client support and thriving ecosystem
+* High performance
+* Minimal encoded message size
+* Codegen-based over reflection-based
+* Supports backward and forward compatibility
+
+Note, migrating away from Amino should be viewed as a two-pronged approach: state and client encoding.
+This ADR focuses on state serialization in the Cosmos SDK state machine. A corresponding ADR will be
+made to address client-side encoding.
+
+## Decision
+
+We will adopt [Protocol Buffers](https://developers.google.com/protocol-buffers) for serializing
+persisted structured data in the Cosmos SDK while providing a clean mechanism and developer UX for
+applications wishing to continue to use Amino. We will provide this mechanism by updating modules to
+accept a codec interface, `Marshaler`, instead of a concrete Amino codec. Furthermore, the Cosmos SDK
+will provide two concrete implementations of the `Marshaler` interface: `AminoCodec` and `ProtoCodec`.
+
+* `AminoCodec`: Uses Amino for both binary and JSON encoding.
+* `ProtoCodec`: Uses Protobuf for both binary and JSON encoding.
+
+Modules will use whichever codec is instantiated in the app. By default, the Cosmos SDK's `simapp`
+instantiates a `ProtoCodec` as the concrete implementation of `Marshaler`, inside the `MakeTestEncodingConfig`
+function. This can easily be overridden by app developers if they so desire.
+
+The ultimate goal will be to replace Amino JSON encoding with Protobuf encoding and thus have
+modules accept and/or extend `ProtoCodec`. Until then, Amino JSON is still provided for legacy use-cases.
+A handful of places in the Cosmos SDK still have Amino JSON hardcoded, such as the Legacy API REST endpoints
+and the `x/params` store. They are planned to be converted to Protobuf in a gradual manner.
+
+### Module Codecs
+
+For modules that do not require the ability to work with and serialize interfaces, the path to Protobuf
+migration is pretty straightforward. These modules simply migrate any existing types that
+are encoded and persisted via their concrete Amino codec to Protobuf and have their keeper accept a
+`Marshaler` that will be a `ProtoCodec`. This migration is simple as things will just work as-is.
+
+Note, any business logic that needs to encode primitive types like `bool` or `int64` should use
+[gogoprotobuf](https://github.com/cosmos/gogoproto) Value types.
+
+Example:
+
+```go
+ ts, err := gogotypes.TimestampProto(completionTime)
+ if err != nil {
+ // ...
+ }
+
+ bz := cdc.MustMarshal(ts)
+```
+
+However, modules can vary greatly in purpose and design, so we must support the ability for modules
+to encode and work with interfaces (e.g. `Account` or `Content`). These modules
+must define their own codec interface that extends `Marshaler`. These specific interfaces are unique
+to the module and will contain method contracts that know how to serialize the needed interfaces.
+
+Example:
+
+```go
+// x/auth/types/codec.go
+
+type Codec interface {
+ codec.Codec
+
+ MarshalAccount(acc exported.Account) ([]byte, error)
+ UnmarshalAccount(bz []byte) (exported.Account, error)
+
+ MarshalAccountJSON(acc exported.Account) ([]byte, error)
+ UnmarshalAccountJSON(bz []byte) (exported.Account, error)
+}
+```
+
+### Usage of `Any` to encode interfaces
+
+In general, module-level .proto files should define messages which encode interfaces
+using [`google.protobuf.Any`](https://github.com/protocolbuffers/protobuf/blob/master/src/google/protobuf/any.proto).
+After [extension discussion](https://github.com/cosmos/cosmos-sdk/issues/6030),
+this was chosen as the preferred alternative to application-level `oneof`s
+as in our original protobuf design. The arguments in favor of `Any` can be
+summarized as follows:
+
+* `Any` provides a simpler, more consistent client UX for dealing with
+interfaces than app-level `oneof`s that will need to be coordinated more
+carefully across applications. Creating a generic transaction
+signing library using `oneof`s may be cumbersome and critical logic may need
+to be reimplemented for each chain
+* `Any` provides more resistance against human error than `oneof`
+* `Any` is generally simpler to implement for both modules and apps
+
+The main counter-argument to using `Any` centers around its additional space
+and possibly performance overhead. The space overhead could be dealt with using
+compression at the persistence layer in the future and the performance impact
+is likely to be small. Thus, not using `Any` is seen as a premature optimization,
+with user experience as the higher-order concern.
+
+Note that, given the Cosmos SDK's decision to adopt the `Codec` interfaces described
+above, apps can still choose to use `oneof` to encode state and transactions
+but it is not the recommended approach. If apps do choose to use `oneof`s
+instead of `Any` they will likely lose compatibility with client apps that
+support multiple chains. Thus developers should think carefully about whether
+they care more about what is possibly a premature optimization or end-user
+and client developer UX.
+
+### Safe usage of `Any`
+
+By default, the [gogo protobuf implementation of `Any`](https://pkg.go.dev/github.com/cosmos/gogoproto/types)
+uses [global type registration]( https://github.com/cosmos/gogoproto/blob/master/proto/properties.go#L540)
+to decode values packed in `Any` into concrete
+go types. This introduces a vulnerability where any malicious module
+in the dependency tree could register a type with the global protobuf registry
+and cause it to be loaded and unmarshaled by a transaction that referenced
+it in the `type_url` field.
+
+To prevent this, we introduce a type registration mechanism for decoding `Any`
+values into concrete types through the `InterfaceRegistry` interface which
+bears some similarity to type registration with Amino:
+
+```go
+type InterfaceRegistry interface {
+ // RegisterInterface associates protoName as the public name for the
+ // interface passed in as iface
+ // Ex:
+ // registry.RegisterInterface("cosmos_sdk.Msg", (*sdk.Msg)(nil))
+ RegisterInterface(protoName string, iface interface{})
+
+ // RegisterImplementations registers impls as concrete implementations of
+ // the interface iface
+ // Ex:
+ // registry.RegisterImplementations((*sdk.Msg)(nil), &MsgSend{}, &MsgMultiSend{})
+ RegisterImplementations(iface interface{}, impls ...proto.Message)
+
+}
+```
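
The whitelist property described above can be illustrated with a small, stdlib-only sketch. This is not the SDK's actual implementation; the `registry` type and its methods are hypothetical stand-ins for `InterfaceRegistry`, showing why resolving `type_url`s against an explicit per-app whitelist closes the malicious-global-registration hole:

```go
package main

import (
	"errors"
	"fmt"
)

// registry maps a type URL to a constructor for its concrete type.
// Only explicitly registered type URLs may be decoded, instead of
// trusting whatever was registered in a global protobuf registry.
type registry struct {
	impls map[string]func() any
}

func newRegistry() *registry {
	return &registry{impls: map[string]func() any{}}
}

// RegisterImplementation whitelists a concrete type under its type URL.
func (r *registry) RegisterImplementation(typeURL string, ctor func() any) {
	r.impls[typeURL] = ctor
}

// Resolve refuses any type URL that was not registered, so a transaction
// referencing an unregistered type_url cannot cause that type to be loaded.
func (r *registry) Resolve(typeURL string) (any, error) {
	ctor, ok := r.impls[typeURL]
	if !ok {
		return nil, errors.New("unregistered type URL: " + typeURL)
	}
	return ctor(), nil
}

type msgSend struct{} // placeholder concrete message type

func main() {
	r := newRegistry()
	r.RegisterImplementation("/cosmos.bank.v1beta1.MsgSend", func() any { return &msgSend{} })

	if _, err := r.Resolve("/cosmos.bank.v1beta1.MsgSend"); err == nil {
		fmt.Println("known type resolved")
	}
	if _, err := r.Resolve("/evil.Injected"); err != nil {
		fmt.Println("unknown type rejected")
	}
}
```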
+
+In addition to serving as a whitelist, `InterfaceRegistry` can also serve
+to communicate the list of concrete types that satisfy an interface to clients.
+
+In .proto files:
+
+* fields which accept interfaces should be annotated with `cosmos_proto.accepts_interface`
+using the same full-qualified name passed as `protoName` to `InterfaceRegistry.RegisterInterface`
+* interface implementations should be annotated with `cosmos_proto.implements_interface`
+using the same full-qualified name passed as `protoName` to `InterfaceRegistry.RegisterInterface`
+
+In the future, `protoName`, `cosmos_proto.accepts_interface`, `cosmos_proto.implements_interface`
+may be used via code generation, reflection &/or static linting.
+
+The same struct that implements `InterfaceRegistry` will also implement an
+interface `InterfaceUnpacker` to be used for unpacking `Any`s:
+
+```go
+type InterfaceUnpacker interface {
+ // UnpackAny unpacks the value in any to the interface pointer passed in as
+ // iface. Note that the type in any must have been registered with
+ // RegisterImplementations as a concrete type for that interface
+ // Ex:
+ // var msg sdk.Msg
+ // err := ctx.UnpackAny(any, &msg)
+ // ...
+ UnpackAny(any *Any, iface interface{}) error
+}
+```
+
+Note that `InterfaceRegistry` usage does not deviate from standard protobuf
+usage of `Any`; it just introduces a security and introspection layer for
+golang usage.
+
+`InterfaceRegistry` will be a member of `ProtoCodec`
+described above. In order for modules to register interface types, app modules
+can optionally implement the following interface:
+
+```go
+type InterfaceModule interface {
+ RegisterInterfaceTypes(InterfaceRegistry)
+}
+```
+
+The module manager will include a method to call `RegisterInterfaceTypes` on
+every module that implements it in order to populate the `InterfaceRegistry`.
+
+### Using `Any` to encode state
+
+The Cosmos SDK will provide support methods `MarshalInterface` and `UnmarshalInterface` to hide the complexity of wrapping interface types into `Any` and allow easy serialization.
+
+```go
+import "github.com/cosmos/cosmos-sdk/codec"
+
+// note: eviexported.Evidence is an interface type
+func MarshalEvidence(cdc codec.BinaryCodec, e eviexported.Evidence) ([]byte, error) {
+ return cdc.MarshalInterface(e)
+}
+
+func UnmarshalEvidence(cdc codec.BinaryCodec, bz []byte) (eviexported.Evidence, error) {
+ var evi eviexported.Evidence
+ err := cdc.UnmarshalInterface(&evi, bz)
+    return evi, err
+}
+```
+
+### Using `Any` in `sdk.Msg`s
+
+A similar concept is to be applied for messages that contain interface fields.
+For example, we can define `MsgSubmitEvidence` as follows where `Evidence` is
+an interface:
+
+```protobuf
+// x/evidence/types/types.proto
+
+message MsgSubmitEvidence {
+ bytes submitter = 1
+ [
+ (gogoproto.casttype) = "github.com/cosmos/cosmos-sdk/types.AccAddress"
+ ];
+ google.protobuf.Any evidence = 2;
+}
+```
+
+Note that in order to unpack the evidence from `Any` we do need a reference to
+`InterfaceRegistry`. In order to reference evidence in methods like
+`ValidateBasic` which shouldn't have to know about the `InterfaceRegistry`, we
+introduce an `UnpackInterfaces` phase to deserialization which unpacks
+interfaces before they're needed.
+
+### Unpacking Interfaces
+
+To implement the `UnpackInterfaces` phase of deserialization which unpacks
+interfaces wrapped in `Any` before they're needed, we create an interface
+that `sdk.Msg`s and other types can implement:
+
+```go
+type UnpackInterfacesMessage interface {
+ UnpackInterfaces(InterfaceUnpacker) error
+}
+```
+
+We also introduce a private `cachedValue interface{}` field onto the `Any`
+struct itself with a public getter `GetCachedValue() interface{}`.
+
+The `UnpackInterfaces` method is to be invoked during message deserialization right
+after `Unmarshal` and any interface values packed in `Any`s will be decoded
+and stored in `cachedValue` for reference later.
+
+Then unpacked interface values can safely be used in any code afterwards
+without knowledge of the `InterfaceRegistry`
+and messages can introduce a simple getter to cast the cached value to the
+correct interface type.
+
+This has the added benefit that unmarshaling of `Any` values only happens once
+during initial deserialization rather than every time the value is read. Also,
+when `Any` values are first packed (for instance in a call to
+`NewMsgSubmitEvidence`), the original interface value is cached so that
+unmarshaling isn't needed to read it again.
+
+`MsgSubmitEvidence` could implement `UnpackInterfaces`, plus a convenience getter
+`GetEvidence` as follows:
+
+```go
+func (msg MsgSubmitEvidence) UnpackInterfaces(ctx sdk.InterfaceRegistry) error {
+ var evi eviexported.Evidence
+    return ctx.UnpackAny(msg.Evidence, &evi)
+}
+
+func (msg MsgSubmitEvidence) GetEvidence() eviexported.Evidence {
+ return msg.Evidence.GetCachedValue().(eviexported.Evidence)
+}
+```
+
+### Amino Compatibility
+
+Our custom implementation of `Any` can be used transparently with Amino if used
+with the proper codec instance. What this means is that interfaces packed within
+`Any`s will be amino marshaled like regular Amino interfaces (assuming they
+have been registered properly with Amino).
+
+In order for this functionality to work:
+
+* **all legacy code must use `*codec.LegacyAmino` instead of `*amino.Codec` which is
+ now a wrapper which properly handles `Any`**
+* **all new code should use `Marshaler` which is compatible with both amino and
+ protobuf**
+* Also, before v0.39, `codec.Codec` will be renamed to `codec.LegacyAmino`.
+
+### Why Wasn't X Chosen Instead
+
+For a more complete comparison to alternative protocols, see [here](https://codeburst.io/json-vs-protocol-buffers-vs-flatbuffers-a4247f8bda6f).
+
+### Cap'n Proto
+
+While [Cap’n Proto](https://capnproto.org/) does seem like an advantageous alternative to Protobuf
+due to its native support for interfaces/generics and built-in canonicalization, it lacks
+Protobuf's rich client ecosystem and is a bit less mature.
+
+### FlatBuffers
+
+[FlatBuffers](https://google.github.io/flatbuffers/) is also a potentially viable alternative, with the
+primary difference being that FlatBuffers does not need a parsing/unpacking step to a secondary
+representation before you can access data, often coupled with per-object memory allocation.
+
+However, it would require significant research and a full understanding of the scope of the migration
+and the path forward -- which isn't immediately clear. In addition, FlatBuffers aren't designed for
+untrusted inputs.
+
+## Future Improvements & Roadmap
+
+In the future we may consider a compression layer right above the persistence
+layer which doesn't change tx or merkle tree hashes, but reduces the storage
+overhead of `Any`. In addition, we may adopt protobuf naming conventions which
+make type URLs a bit more concise while remaining descriptive.
+
+Additional code generation support around the usage of `Any` is something that
+could also be explored in the future to make the UX for go developers more
+seamless.
+
+## Consequences
+
+### Positive
+
+* Significant performance gains.
+* Supports backward and forward type compatibility.
+* Better support for cross-language clients.
+
+### Negative
+
+* Learning curve required to understand and implement Protobuf messages.
+* Slightly larger message size due to use of `Any`, although this could be offset
+ by a compression layer in the future
+
+### Neutral
+
+## References
+
+1. https://github.com/cosmos/cosmos-sdk/issues/4977
+2. https://github.com/cosmos/cosmos-sdk/issues/5444
diff --git a/copy-of-sdk-docs/build/architecture/adr-020-protobuf-transaction-encoding.md b/copy-of-sdk-docs/build/architecture/adr-020-protobuf-transaction-encoding.md
new file mode 100644
index 00000000..9633fb20
--- /dev/null
+++ b/copy-of-sdk-docs/build/architecture/adr-020-protobuf-transaction-encoding.md
@@ -0,0 +1,464 @@
+# ADR 020: Protocol Buffer Transaction Encoding
+
+## Changelog
+
+* 2020 March 06: Initial Draft
+* 2020 March 12: API Updates
+* 2020 April 13: Added details on interface `oneof` handling
+* 2020 April 30: Switch to `Any`
+* 2020 May 14: Describe public key encoding
+* 2020 June 08: Store `TxBody` and `AuthInfo` as bytes in `SignDoc`; Document `TxRaw` as broadcast and storage type.
+* 2020 August 07: Use ADR 027 for serializing `SignDoc`.
+* 2020 August 19: Move sequence field from `SignDoc` to `SignerInfo`, as discussed in [#6966](https://github.com/cosmos/cosmos-sdk/issues/6966).
+* 2020 September 25: Remove `PublicKey` type in favor of `secp256k1.PubKey`, `ed25519.PubKey` and `multisig.LegacyAminoPubKey`.
+* 2020 October 15: Add `GetAccount` and `GetAccountWithHeight` methods to the `AccountRetriever` interface.
+* 2021 Feb 24: The Cosmos SDK does not use Tendermint's `PubKey` interface anymore, but its own `cryptotypes.PubKey`. Updates to reflect this.
+* 2021 May 3: Rename `clientCtx.JSONMarshaler` to `clientCtx.JSONCodec`.
+* 2021 June 10: Add `clientCtx.Codec: codec.Codec`.
+
+## Status
+
+Accepted
+
+## Context
+
+This ADR is a continuation of the motivation, design, and context established in
+[ADR 019](./adr-019-protobuf-state-encoding.md), namely, we aim to design the
+Protocol Buffer migration path for the client-side of the Cosmos SDK.
+
+Specifically, the client-side migration path primarily includes tx generation and
+signing, message construction and routing, in addition to CLI & REST handlers and
+business logic (i.e. queriers).
+
+With this in mind, we will tackle the migration path via two main areas, txs and
+querying. However, this ADR solely focuses on transactions. Querying should be
+addressed in a future ADR, but it should build off of these proposals.
+
+Based on detailed discussions ([\#6030](https://github.com/cosmos/cosmos-sdk/issues/6030)
+and [\#6078](https://github.com/cosmos/cosmos-sdk/issues/6078)), the original
+design for transactions was changed substantially from a `oneof`/JSON-signing
+approach to the approach described below.
+
+## Decision
+
+### Transactions
+
+Since interface values are encoded with `google.protobuf.Any` in state (see [ADR 019](adr-019-protobuf-state-encoding.md)),
+`sdk.Msg`s are encoded with `Any` in transactions.
+
+One of the main goals of using `Any` to encode interface values is to have a
+core set of types which is reused by apps so that
+clients can safely be compatible with as many chains as possible.
+
+It is one of the goals of this specification to provide a flexible cross-chain transaction
+format that can serve a wide variety of use cases without breaking the client
+compatibility.
+
+In order to facilitate signing, transactions are separated into `TxBody`,
+which will be reused by `SignDoc` below, and `signatures`:
+
+```protobuf
+// types/types.proto
+package cosmos_sdk.v1;
+
+message Tx {
+ TxBody body = 1;
+ AuthInfo auth_info = 2;
+ // A list of signatures that matches the length and order of AuthInfo's signer_infos to
+ // allow connecting signature meta information like public key and signing mode by position.
+ repeated bytes signatures = 3;
+}
+
+// A variant of Tx that pins the signer's exact binary representation of body and
+// auth_info. This is used for signing, broadcasting and verification. The binary
+// `serialize(tx: TxRaw)` is stored in Tendermint and the hash `sha256(serialize(tx: TxRaw))`
+// becomes the "txhash", commonly used as the transaction ID.
+message TxRaw {
+ // A protobuf serialization of a TxBody that matches the representation in SignDoc.
+ bytes body = 1;
+ // A protobuf serialization of an AuthInfo that matches the representation in SignDoc.
+ bytes auth_info = 2;
+ // A list of signatures that matches the length and order of AuthInfo's signer_infos to
+ // allow connecting signature meta information like public key and signing mode by position.
+ repeated bytes signatures = 3;
+}
+
+message TxBody {
+ // A list of messages to be executed. The required signers of those messages define
+ // the number and order of elements in AuthInfo's signer_infos and Tx's signatures.
+ // Each required signer address is added to the list only the first time it occurs.
+ //
+ // By convention, the first required signer (usually from the first message) is referred
+ // to as the primary signer and pays the fee for the whole transaction.
+ repeated google.protobuf.Any messages = 1;
+ string memo = 2;
+ int64 timeout_height = 3;
+ repeated google.protobuf.Any extension_options = 1023;
+}
+
+message AuthInfo {
+ // This list defines the signing modes for the required signers. The number
+ // and order of elements must match the required signers from TxBody's messages.
+ // The first element is the primary signer and the one which pays the fee.
+ repeated SignerInfo signer_infos = 1;
+ // The fee can be calculated based on the cost of evaluating the body and doing signature verification of the signers. This can be estimated via simulation.
+ Fee fee = 2;
+}
+
+message SignerInfo {
+ // The public key is optional for accounts that already exist in state. If unset, the
+ // verifier can use the required signer address for this position and lookup the public key.
+ google.protobuf.Any public_key = 1;
+ // ModeInfo describes the signing mode of the signer and is a nested
+ // structure to support nested multisig pubkey's
+ ModeInfo mode_info = 2;
+ // sequence is the sequence of the account, which describes the
+ // number of committed transactions signed by a given address. It is used to prevent
+ // replay attacks.
+ uint64 sequence = 3;
+}
+
+message ModeInfo {
+ oneof sum {
+ Single single = 1;
+ Multi multi = 2;
+ }
+
+ // Single is the mode info for a single signer. It is structured as a message
+ // to allow for additional fields such as locale for SIGN_MODE_TEXTUAL in the future
+ message Single {
+ SignMode mode = 1;
+ }
+
+ // Multi is the mode info for a multisig public key
+ message Multi {
+ // bitarray specifies which keys within the multisig are signing
+ CompactBitArray bitarray = 1;
+ // mode_infos is the corresponding modes of the signers of the multisig
+ // which could include nested multisig public keys
+ repeated ModeInfo mode_infos = 2;
+ }
+}
+
+enum SignMode {
+ SIGN_MODE_UNSPECIFIED = 0;
+
+ SIGN_MODE_DIRECT = 1;
+
+ SIGN_MODE_TEXTUAL = 2;
+
+ SIGN_MODE_LEGACY_AMINO_JSON = 127;
+}
+```
+
+As will be discussed below, in order to include as much of the `Tx` as possible
+in the `SignDoc`, `SignerInfo` is separated from signatures so that only the
+raw signatures themselves live outside of what is signed over.
+
+Because we are aiming for a flexible, extensible cross-chain transaction
+format, new transaction processing options should be added to `TxBody` as soon
+as those use cases are discovered, even if they can't be implemented yet.
+
+Because there is coordination overhead in this, `TxBody` includes an
+`extension_options` field which can be used for any transaction processing
+options that are not already covered. App developers should, nevertheless,
+attempt to upstream important improvements to `Tx`.
+
+### Signing
+
+All of the signing modes below aim to provide the following guarantees:
+
+* **No Malleability**: `TxBody` and `AuthInfo` cannot change once the transaction
+ is signed
+* **Predictable Gas**: if I am signing a transaction where I am paying a fee,
+ the final gas is fully dependent on what I am signing
+
+These guarantees give the maximum amount of confidence to message signers that
+manipulation of `Tx`s by intermediaries can't result in any meaningful changes.
+
+#### `SIGN_MODE_DIRECT`
+
+The "direct" signing behavior is to sign the raw `TxBody` bytes as broadcast over
+the wire. This has the advantages of:
+
+* requiring the minimum additional client capabilities beyond a standard protocol
+ buffers implementation
+* leaving effectively zero holes for transaction malleability (i.e. there are no
+ subtle differences between the signing and encoding formats which could
+ potentially be exploited by an attacker)
+
+Signatures are structured using the `SignDoc` below which reuses the serialization of
+`TxBody` and `AuthInfo` and only adds the fields which are needed for signatures:
+
+```protobuf
+// types/types.proto
+message SignDoc {
+ // A protobuf serialization of a TxBody that matches the representation in TxRaw.
+ bytes body = 1;
+ // A protobuf serialization of an AuthInfo that matches the representation in TxRaw.
+ bytes auth_info = 2;
+ string chain_id = 3;
+ uint64 account_number = 4;
+}
+```
+
+In order to sign in the default mode, clients take the following steps:
+
+1. Serialize `TxBody` and `AuthInfo` using any valid protobuf implementation.
+2. Create a `SignDoc` and serialize it using [ADR 027](./adr-027-deterministic-protobuf-serialization.md).
+3. Sign the encoded `SignDoc` bytes.
+4. Build a `TxRaw` and serialize it for broadcasting.
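
The client-side steps above can be illustrated by hand-encoding the `SignDoc` shape defined earlier. This is an illustrative, stdlib-only sketch; a real client would use a protobuf library, the helper names are hypothetical, and the `TxBody`/`AuthInfo` bytes below are placeholders. Fields are emitted in ascending field-number order with default values omitted (shown here only for `chain_id` and `account_number`, for brevity), which for this message shape matches the deterministic serialization idea of ADR 027:

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
)

// appendVarint appends a protobuf base-128 varint.
func appendVarint(buf []byte, v uint64) []byte {
	for v >= 0x80 {
		buf = append(buf, byte(v)|0x80)
		v >>= 7
	}
	return append(buf, byte(v))
}

// appendBytesField appends a length-delimited field (wire type 2).
func appendBytesField(buf []byte, fieldNum int, b []byte) []byte {
	buf = appendVarint(buf, uint64(fieldNum)<<3|2) // tag: field number + wire type
	buf = appendVarint(buf, uint64(len(b)))
	return append(buf, b...)
}

// signDocBytes hand-encodes the SignDoc message from this ADR:
// bytes body = 1; bytes auth_info = 2; string chain_id = 3; uint64 account_number = 4.
func signDocBytes(body, authInfo []byte, chainID string, accountNumber uint64) []byte {
	var buf []byte
	buf = appendBytesField(buf, 1, body)
	buf = appendBytesField(buf, 2, authInfo)
	if chainID != "" {
		buf = appendBytesField(buf, 3, []byte(chainID))
	}
	if accountNumber != 0 {
		buf = appendVarint(buf, 4<<3|0) // field 4, varint wire type
		buf = appendVarint(buf, accountNumber)
	}
	return buf
}

func main() {
	body := []byte{0x0a, 0x00}     // placeholder serialized TxBody
	authInfo := []byte{0x12, 0x00} // placeholder serialized AuthInfo
	doc := signDocBytes(body, authInfo, "test-chain", 7)
	digest := sha256.Sum256(doc) // the sign bytes are then fed to the signing algorithm
	fmt.Println(hex.EncodeToString(digest[:]))
}
```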
+
+Signature verification is based on comparing the raw `TxBody` and `AuthInfo`
+bytes encoded in `TxRaw`, not on any ["canonicalization"](https://github.com/regen-network/canonical-proto3)
+algorithm, which would create added complexity for clients in addition to preventing
+some forms of upgradeability (to be addressed later in this document).
+
+Signature verifiers do:
+
+1. Deserialize a `TxRaw` and pull out `body` and `auth_info`.
+2. Create a list of required signer addresses from the messages.
+3. For each required signer:
+ * Pull account number and sequence from the state.
+ * Obtain the public key either from state or `AuthInfo`'s `signer_infos`.
+ * Create a `SignDoc` and serialize it using [ADR 027](./adr-027-deterministic-protobuf-serialization.md).
+ * Verify the signature at the same list position against the serialized `SignDoc`.
+
+#### `SIGN_MODE_LEGACY_AMINO`
+
+In order to support legacy wallets and exchanges, Amino JSON will be temporarily
+supported for transaction signing. Once wallets and exchanges have had a
+chance to upgrade to protobuf-based signing, this option will be disabled. In
+the meantime, it is foreseen that disabling the current Amino signing would cause
+too much breakage to be feasible. Note that this is mainly a requirement of the
+Cosmos Hub and other chains may choose to disable Amino signing immediately.
+
+Legacy clients will be able to sign a transaction using the current Amino
+JSON format and have it encoded to protobuf using the REST `/tx/encode`
+endpoint before broadcasting.
+
+#### `SIGN_MODE_TEXTUAL`
+
+As was discussed extensively in [\#6078](https://github.com/cosmos/cosmos-sdk/issues/6078),
+there is a desire for a human-readable signing encoding, especially for hardware
+wallets like the [Ledger](https://www.ledger.com) which display
+transaction contents to users before signing. JSON was an attempt at this but
+falls short of the ideal.
+
+`SIGN_MODE_TEXTUAL` is intended as a placeholder for a human-readable
+encoding which will replace Amino JSON. This new encoding should be even more
+focused on readability than JSON, possibly based on formatting strings like
+[MessageFormat](http://userguide.icu-project.org/formatparse/messages).
+
+In order to ensure that the new human-readable format does not suffer from
+transaction malleability issues, `SIGN_MODE_TEXTUAL`
+requires that the _human-readable bytes are concatenated with the raw `SignDoc`_
+to generate sign bytes.
+
+Multiple human-readable formats (maybe even localized messages) may be supported
+by `SIGN_MODE_TEXTUAL` when it is implemented.
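+
+The concatenation requirement above can be illustrated with a minimal sketch; the human-readable rendering is a placeholder string, since the actual encoding is left unspecified by this ADR:
+
+```go
+package main
+
+import (
+	"crypto/sha256"
+	"fmt"
+)
+
+// textualSignBytes concatenates the human-readable rendering with the raw
+// SignDoc bytes, so a signature commits to both representations and a
+// malleated rendering invalidates the signature.
+func textualSignBytes(humanReadable string, rawSignDoc []byte) []byte {
+	return append([]byte(humanReadable), rawSignDoc...)
+}
+
+func main() {
+	raw := []byte{0x0a, 0x03, 0x01, 0x02, 0x03} // placeholder SignDoc bytes
+	sb := textualSignBytes("send 10 atom from A to B", raw)
+	fmt.Printf("%x\n", sha256.Sum256(sb)) // digest that would actually be signed
+}
+```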
+
+### Unknown Field Filtering
+
+Unknown fields in protobuf messages should generally be rejected by the transaction
+processors because:
+
+* important data may be present in the unknown fields which, if ignored, would
+ cause unexpected behavior for clients
+* they present a malleability vulnerability where attackers can bloat tx size
+ by adding random uninterpreted data to unsigned content (i.e. the master `Tx`,
+ not `TxBody`)
+
+There are also scenarios where we may choose to safely ignore unknown fields
+(https://github.com/cosmos/cosmos-sdk/issues/6078#issuecomment-624400188) to
+provide graceful forwards compatibility with newer clients.
+
+We propose that field numbers with bit 11 set (for most use cases this is
+the range of 1024-2047) be considered non-critical fields that can safely be
+ignored if unknown.
+
+To handle this we will need an unknown field filter that:
+
+* always rejects unknown fields in unsigned content (i.e. top-level `Tx` and
+ unsigned parts of `AuthInfo` if present based on the signing mode)
+* rejects unknown fields in all messages (including nested `Any`s) other than
+ fields with bit 11 set
+
+This will likely need to be a custom protobuf parser pass that takes message bytes
+and `FileDescriptor`s and returns a boolean result.
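+
+A minimal sketch of such a filter pass over raw message bytes (simplified assumptions: only top-level varint and length-delimited fields are walked, and a plain set of known field numbers stands in for `FileDescriptor`s):
+
+```go
+package main
+
+import (
+	"errors"
+	"fmt"
+)
+
+// isNonCritical reports whether a field number has bit 11 set (the range
+// 1024-2047 for most use cases), the proposed marker for ignorable fields.
+func isNonCritical(fieldNum uint64) bool {
+	return fieldNum&(1<<10) != 0
+}
+
+// readVarint decodes a protobuf base-128 varint from b.
+func readVarint(b []byte) (v uint64, n int, err error) {
+	for shift := uint(0); n < len(b); shift += 7 {
+		c := b[n]
+		n++
+		v |= uint64(c&0x7f) << shift
+		if c&0x80 == 0 {
+			return v, n, nil
+		}
+	}
+	return 0, 0, errors.New("truncated varint")
+}
+
+// rejectUnknownCritical walks top-level fields of raw message bytes and fails
+// on any field that is neither known nor marked non-critical. Only varint (0)
+// and length-delimited (2) wire types are handled in this sketch.
+func rejectUnknownCritical(raw []byte, known map[uint64]bool) error {
+	for len(raw) > 0 {
+		tag, n, err := readVarint(raw)
+		if err != nil {
+			return err
+		}
+		raw = raw[n:]
+		fieldNum, wireType := tag>>3, tag&7
+		if !known[fieldNum] && !isNonCritical(fieldNum) {
+			return fmt.Errorf("unknown critical field %d", fieldNum)
+		}
+		switch wireType {
+		case 0: // varint payload
+			_, n, err := readVarint(raw)
+			if err != nil {
+				return err
+			}
+			raw = raw[n:]
+		case 2: // length-delimited payload
+			l, n, err := readVarint(raw)
+			if err != nil || uint64(len(raw[n:])) < l {
+				return errors.New("truncated field")
+			}
+			raw = raw[n+int(l):]
+		default:
+			return fmt.Errorf("unhandled wire type %d", wireType)
+		}
+	}
+	return nil
+}
+
+func main() {
+	known := map[uint64]bool{1: true}
+	// field 1 (varint) = 5, then unknown critical field 2 (varint) = 7
+	fmt.Println(rejectUnknownCritical([]byte{0x08, 0x05, 0x10, 0x07}, known))
+	// unknown but non-critical field 1025: tag = 1025<<3 = 8200 -> varint 0x88 0x40
+	fmt.Println(rejectUnknownCritical([]byte{0x88, 0x40, 0x07}, known))
+}
+```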
+
+### Public Key Encoding
+
+Public keys in the Cosmos SDK implement the `cryptotypes.PubKey` interface.
+We propose to use `Any` for protobuf encoding as we are doing with other interfaces (for example, in `BaseAccount.PubKey` and `SignerInfo.PublicKey`).
+The following public keys are implemented: secp256k1, secp256r1, ed25519 and legacy-multisignature.
+
+Ex:
+
+```protobuf
+message PubKey {
+ bytes key = 1;
+}
+```
+
+`multisig.LegacyAminoPubKey` has an `Any` array member so that it can wrap any
+protobuf public key type.
+
+Apps should only attempt to handle a registered set of public keys that they
+have tested. The provided signature verification ante handler decorators will
+enforce this.
+
+### CLI & REST
+
+Currently, the REST and CLI handlers encode and decode types and txs via Amino
+JSON encoding using a concrete Amino codec. Since some of the types dealt with
+in the client can be interfaces, similar to how we described in [ADR 019](./adr-019-protobuf-state-encoding.md),
+the client logic will now need to take a codec interface that knows not only how
+to handle all the types, but also knows how to generate transactions, signatures,
+and messages.
+
+```go
+type AccountRetriever interface {
+ GetAccount(clientCtx Context, addr sdk.AccAddress) (client.Account, error)
+ GetAccountWithHeight(clientCtx Context, addr sdk.AccAddress) (client.Account, int64, error)
+ EnsureExists(clientCtx client.Context, addr sdk.AccAddress) error
+ GetAccountNumberSequence(clientCtx client.Context, addr sdk.AccAddress) (uint64, uint64, error)
+}
+
+type Generator interface {
+ NewTx() TxBuilder
+ NewFee() ClientFee
+ NewSignature() ClientSignature
+ MarshalTx(tx types.Tx) ([]byte, error)
+}
+
+type TxBuilder interface {
+ GetTx() sdk.Tx
+
+ SetMsgs(...sdk.Msg) error
+ GetSignatures() []sdk.Signature
+ SetSignatures(...sdk.Signature)
+ GetFee() sdk.Fee
+ SetFee(sdk.Fee)
+ GetMemo() string
+ SetMemo(string)
+}
+```
+
+We then update `Context` to have new fields: `Codec`, `TxGenerator`,
+and `AccountRetriever`, and we update `AppModuleBasic.GetTxCmd` to take
+a `Context` which should have all of these fields pre-populated.
+
+Each client method should then use one of the `Init` methods to re-initialize
+the pre-populated `Context`. `tx.GenerateOrBroadcastTx` can be used to
+generate or broadcast a transaction. For example:
+
+```go
+import "github.com/spf13/cobra"
+import "github.com/cosmos/cosmos-sdk/client"
+import "github.com/cosmos/cosmos-sdk/client/tx"
+
+func NewCmdDoSomething(clientCtx client.Context) *cobra.Command {
+	return &cobra.Command{
+		RunE: func(cmd *cobra.Command, args []string) error {
+			clientCtx = clientCtx.InitWithInput(cmd.InOrStdin())
+			msg := NewSomeMsg{...}
+			return tx.GenerateOrBroadcastTx(clientCtx, msg)
+		},
+	}
+}
+```
+
+## Future Improvements
+
+### `SIGN_MODE_TEXTUAL` specification
+
+A concrete specification and implementation of `SIGN_MODE_TEXTUAL` is intended
+as a near-term future improvement so that the ledger app and other wallets
+can gracefully transition away from Amino JSON.
+
+### `SIGN_MODE_DIRECT_AUX`
+
+(_Documented as option (3) in https://github.com/cosmos/cosmos-sdk/issues/6078#issuecomment-628026933_)
+
+We could add a mode `SIGN_MODE_DIRECT_AUX`
+to support scenarios where multiple signatures
+are being gathered into a single transaction but the message composer does not
+yet know which signatures will be included in the final transaction. For instance,
+I may have a 3/5 multisig wallet and want to send a `TxBody` to all 5
+signers to see who signs first. As soon as I have 3 signatures then I will go
+ahead and build the full transaction.
+
+With `SIGN_MODE_DIRECT`, each signer needs
+to sign the full `AuthInfo` which includes the full list of all signers and
+their signing modes, making the above scenario very hard.
+
+`SIGN_MODE_DIRECT_AUX` would allow "auxiliary" signers to create their signature
+using only `TxBody` and their own `PublicKey`. This allows the full list of
+signers in `AuthInfo` to be delayed until signatures have been collected.
+
+An "auxiliary" signer is any signer besides the primary signer who is paying
+the fee. For the primary signer, the full `AuthInfo` is actually needed to calculate gas and fees
+because that is dependent on how many signers and which key types and signing
+modes they are using. Auxiliary signers, however, do not need to worry about
+fees or gas and thus can just sign `TxBody`.
+
+To generate a signature in `SIGN_MODE_DIRECT_AUX` these steps would be followed:
+
+1. Encode `SignDocAux` (with the same requirement that fields must be serialized
+ in order):
+
+ ```protobuf
+ // types/types.proto
+ message SignDocAux {
+ bytes body_bytes = 1;
+ // PublicKey is included in SignDocAux :
+ // 1. as a special case for multisig public keys. For multisig public keys,
+ // the signer should use the top-level multisig public key they are signing
+ // against, not their own public key. This is to prevent a form
+ // of malleability where a signature could be taken out of context of the
+ // multisig key that was intended to be signed for
+ // 2. to guard against scenario where configuration information is encoded
+ // in public keys (it has been proposed) such that two keys can generate
+ // the same signature but have different security properties
+ //
+ // By including it here, the composer of AuthInfo cannot reference
+ // a public key variant the signer did not intend to use
+ PublicKey public_key = 2;
+ string chain_id = 3;
+ uint64 account_number = 4;
+ }
+ ```
+
+2. Sign the encoded `SignDocAux` bytes
+3. Send their signature and `SignerInfo` to the primary signer who will then
+ sign and broadcast the final transaction (with `SIGN_MODE_DIRECT` and `AuthInfo`
+ added) once enough signatures have been collected
+
+### `SIGN_MODE_DIRECT_RELAXED`
+
+(_Documented as option (1)(a) in https://github.com/cosmos/cosmos-sdk/issues/6078#issuecomment-628026933_)
+
+This is a variation of `SIGN_MODE_DIRECT` in which multiple signers wouldn't need to
+coordinate public keys and signing modes in advance. It would involve an alternate
+`SignDoc` similar to `SignDocAux` above, but including the fee. This could be added in the
+future if client developers find collecting public keys and modes in advance too
+burdensome.
+
+## Consequences
+
+### Positive
+
+* Significant performance gains.
+* Supports backward and forward type compatibility.
+* Better support for cross-language clients.
+* Multiple signing modes allow for greater protocol evolution
+
+### Negative
+
+* `google.protobuf.Any` type URLs increase transaction size although the effect
+ may be negligible or compression may be able to mitigate it.
+
+### Neutral
+
+## References
diff --git a/copy-of-sdk-docs/build/architecture/adr-021-protobuf-query-encoding.md b/copy-of-sdk-docs/build/architecture/adr-021-protobuf-query-encoding.md
new file mode 100644
index 00000000..ba155cba
--- /dev/null
+++ b/copy-of-sdk-docs/build/architecture/adr-021-protobuf-query-encoding.md
@@ -0,0 +1,256 @@
+# ADR 021: Protocol Buffer Query Encoding
+
+## Changelog
+
+* 2020 March 27: Initial Draft
+
+## Status
+
+Accepted
+
+## Context
+
+This ADR is a continuation of the motivation, design, and context established in
+[ADR 019](./adr-019-protobuf-state-encoding.md) and
+[ADR 020](./adr-020-protobuf-transaction-encoding.md), namely, we aim to design the
+Protocol Buffer migration path for the client-side of the Cosmos SDK.
+
+This ADR continues from [ADR 020](./adr-020-protobuf-transaction-encoding.md)
+to specify the encoding of queries.
+
+## Decision
+
+### Custom Query Definition
+
+Modules define custom queries through a protocol buffers `service` definition.
+These `service` definitions are generally associated with and used by the
+GRPC protocol. However, the protocol buffers specification indicates that
+they can be used more generically by any request/response protocol that uses
+protocol buffer encoding. Thus, we can use `service` definitions for specifying
+custom ABCI queries and even reuse a substantial amount of the GRPC infrastructure.
+
+Each module with custom queries should define a service canonically named `Query`:
+
+```protobuf
+// x/bank/types/types.proto
+
+service Query {
+ rpc QueryBalance(QueryBalanceParams) returns (cosmos_sdk.v1.Coin) { }
+ rpc QueryAllBalances(QueryAllBalancesParams) returns (QueryAllBalancesResponse) { }
+}
+```
+
+#### Handling of Interface Types
+
+Modules that use interface types and need true polymorphism generally force a
+`oneof` up to the app-level that provides the set of concrete implementations of
+that interface that the app supports. While apps are welcome to do the same for
+queries and implement an app-level query service, it is recommended that modules
+provide query methods that expose these interfaces via `google.protobuf.Any`.
+There is a concern on the transaction level that the overhead of `Any` is too
+high to justify its usage. For queries, however, this is not a concern, and
+providing generic module-level queries that use `Any` does not preclude apps
+from also providing app-level queries that return using the app-level `oneof`s.
+
+A hypothetical example for the `gov` module would look something like:
+
+```protobuf
+// x/gov/types/types.proto
+
+import "google/protobuf/any.proto";
+
+service Query {
+ rpc GetProposal(GetProposalParams) returns (AnyProposal) { }
+}
+
+message AnyProposal {
+ ProposalBase base = 1;
+ google.protobuf.Any content = 2;
+}
+```
+
+### Custom Query Implementation
+
+In order to implement the query service, we can reuse the existing [gogo protobuf](https://github.com/cosmos/gogoproto)
+grpc plugin, which for a service named `Query` generates an interface named
+`QueryServer` as below:
+
+```go
+type QueryServer interface {
+ QueryBalance(context.Context, *QueryBalanceParams) (*types.Coin, error)
+ QueryAllBalances(context.Context, *QueryAllBalancesParams) (*QueryAllBalancesResponse, error)
+}
+```
+
+The custom queries for our module are implemented by implementing this interface.
+
+The first parameter in this generated interface is a generic `context.Context`,
+whereas querier methods generally need an instance of `sdk.Context` to read
+from the store. Since arbitrary values can be attached to `context.Context`
+using the `WithValue` and `Value` methods, the Cosmos SDK should provide a function
+`sdk.UnwrapSDKContext` to retrieve the `sdk.Context` from the provided
+`context.Context`.
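+
+The wrap/unwrap mechanics can be sketched with a toy context key; `sdkContext` and the helper names here are stand-ins for illustration, not the SDK's actual definitions:
+
+```go
+package main
+
+import (
+	"context"
+	"fmt"
+)
+
+// sdkContext is a stand-in for sdk.Context.
+type sdkContext struct {
+	BlockHeight int64
+}
+
+// contextKey is an unexported key type, avoiding collisions with other values.
+type contextKey struct{}
+
+// wrapSDKContext attaches an sdkContext to a context.Context, mirroring how
+// the SDK would bridge generated grpc handlers and store access.
+func wrapSDKContext(ctx sdkContext) context.Context {
+	return context.WithValue(context.Background(), contextKey{}, ctx)
+}
+
+// unwrapSDKContext retrieves the attached sdkContext.
+func unwrapSDKContext(ctx context.Context) sdkContext {
+	return ctx.Value(contextKey{}).(sdkContext)
+}
+
+func main() {
+	goCtx := wrapSDKContext(sdkContext{BlockHeight: 42})
+	fmt.Println(unwrapSDKContext(goCtx).BlockHeight) // 42
+}
+```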
+
+An example implementation of `QueryBalance` for the bank module as above would
+look something like:
+
+```go
+type Querier struct {
+ Keeper
+}
+
+func (q Querier) QueryBalance(ctx context.Context, params *types.QueryBalanceParams) (*sdk.Coin, error) {
+ balance := q.GetBalance(sdk.UnwrapSDKContext(ctx), params.Address, params.Denom)
+ return &balance, nil
+}
+```
+
+### Custom Query Registration and Routing
+
+Query server implementations as above would be registered with `AppModule`s using
+a new method `RegisterQueryService(grpc.Server)` which could be implemented simply
+as below:
+
+```go
+// x/bank/module.go
+func (am AppModule) RegisterQueryService(server grpc.Server) {
+ types.RegisterQueryServer(server, keeper.Querier{am.keeper})
+}
+```
+
+Underneath the hood, a new method `RegisterService(sd *grpc.ServiceDesc, handler interface{})`
+will be added to the existing `baseapp.QueryRouter` to add the queries to the custom
+query routing table (with the routing method being described below).
+The signature for this method matches the existing
+`RegisterService` method on the GRPC `Server` type, where `handler` is the custom
+query server implementation described above.
+
+GRPC-like requests are routed by the service name (ex. `cosmos_sdk.x.bank.v1.Query`)
+and method name (ex. `QueryBalance`) combined with `/`s to form a full
+method name (ex. `/cosmos_sdk.x.bank.v1.Query/QueryBalance`). This gets translated
+into an ABCI query as `custom/cosmos_sdk.x.bank.v1.Query/QueryBalance`. Service handlers
+registered with `QueryRouter.RegisterService` will be routed this way.
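+
+The route construction above amounts to simple string assembly (the helper names here are illustrative, not SDK APIs):
+
+```go
+package main
+
+import "fmt"
+
+// fullMethodName joins a grpc service and method name the way grpc does.
+func fullMethodName(service, method string) string {
+	return "/" + service + "/" + method
+}
+
+// abciQueryPath prefixes the full method name with the custom query route.
+func abciQueryPath(service, method string) string {
+	return "custom" + fullMethodName(service, method)
+}
+
+func main() {
+	fmt.Println(fullMethodName("cosmos_sdk.x.bank.v1.Query", "QueryBalance"))
+	// /cosmos_sdk.x.bank.v1.Query/QueryBalance
+	fmt.Println(abciQueryPath("cosmos_sdk.x.bank.v1.Query", "QueryBalance"))
+	// custom/cosmos_sdk.x.bank.v1.Query/QueryBalance
+}
+```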
+
+Beyond the method name, GRPC requests carry a protobuf encoded payload, which maps naturally
+to `RequestQuery.Data`, and receive a protobuf encoded response or error. Thus
+there is a quite natural mapping of GRPC-like rpc methods to the existing
+`sdk.Query` and `QueryRouter` infrastructure.
+
+This basic specification allows us to reuse protocol buffer `service` definitions
+for ABCI custom queries substantially reducing the need for manual decoding and
+encoding in query methods.
+
+### GRPC Protocol Support
+
+In addition to providing an ABCI query pathway, we can easily provide a GRPC
+proxy server that routes requests in the GRPC protocol to ABCI query requests
+under the hood. In this way, clients could use their host languages' existing
+GRPC implementations to make direct queries against Cosmos SDK apps using
+these `service` definitions. In order for this server to work, the `QueryRouter`
+on `BaseApp` will need to expose the service handlers registered with
+`QueryRouter.RegisterService` to the proxy server implementation. Nodes could
+launch the proxy server on a separate port in the same process as the ABCI app
+with a command-line flag.
+
+### REST Queries and Swagger Generation
+
+[grpc-gateway](https://github.com/grpc-ecosystem/grpc-gateway) is a project that
+translates REST calls into GRPC calls using special annotations on service
+methods. Modules that want to expose REST queries should add `google.api.http`
+annotations to their `rpc` methods as in this example below.
+
+```protobuf
+// x/bank/types/types.proto
+
+service Query {
+ rpc QueryBalance(QueryBalanceParams) returns (cosmos_sdk.v1.Coin) {
+ option (google.api.http) = {
+ get: "/x/bank/v1/balance/{address}/{denom}"
+ };
+ }
+ rpc QueryAllBalances(QueryAllBalancesParams) returns (QueryAllBalancesResponse) {
+ option (google.api.http) = {
+ get: "/x/bank/v1/balances/{address}"
+ };
+ }
+}
+```
+
+grpc-gateway will work directly against the GRPC proxy described above which will
+translate requests to ABCI queries under the hood. grpc-gateway can also
+generate Swagger definitions automatically.
+
+In the current implementation of REST queries, each module needs to implement
+REST queries manually in addition to ABCI querier methods. Using the grpc-gateway
+approach, there will be no need to write separate REST query handlers, just the
+query servers described above, since grpc-gateway handles the translation of protobuf
+to REST as well as Swagger definitions.
+
+The Cosmos SDK should provide CLI commands for apps to start GRPC gateway either in
+a separate process or the same process as the ABCI app, as well as provide a
+command for generating grpc-gateway proxy `.proto` files and the `swagger.json`
+file.
+
+### Client Usage
+
+The gogo protobuf grpc plugin generates client interfaces in addition to server
+interfaces. For the `Query` service defined above we would get a `QueryClient`
+interface like:
+
+```go
+type QueryClient interface {
+ QueryBalance(ctx context.Context, in *QueryBalanceParams, opts ...grpc.CallOption) (*types.Coin, error)
+ QueryAllBalances(ctx context.Context, in *QueryAllBalancesParams, opts ...grpc.CallOption) (*QueryAllBalancesResponse, error)
+}
+```
+
+Via a small patch to gogo protobuf ([gogo/protobuf#675](https://github.com/gogo/protobuf/pull/675))
+we have tweaked the grpc codegen to use an interface rather than a concrete type
+for the generated client struct. This allows us to also reuse the GRPC infrastructure
+for ABCI client queries.
+
+`Context` will receive a new method `QueryConn` that returns a `ClientConn`
+that routes calls to ABCI queries.
+
+Clients (such as CLI methods) will then be able to call query methods like this:
+
+```go
+clientCtx := client.NewContext()
+queryClient := types.NewQueryClient(clientCtx.QueryConn())
+params := &types.QueryBalanceParams{addr, denom}
+result, err := queryClient.QueryBalance(gocontext.Background(), params)
+```
+
+### Testing
+
+Tests would be able to create a query client directly from keeper and `sdk.Context`
+references using a `QueryServerTestHelper` as below:
+
+```go
+queryHelper := baseapp.NewQueryServerTestHelper(ctx)
+types.RegisterQueryServer(queryHelper, keeper.Querier{app.BankKeeper})
+queryClient := types.NewQueryClient(queryHelper)
+```
+
+## Future Improvements
+
+## Consequences
+
+### Positive
+
+* greatly simplified querier implementation (no manual encoding/decoding)
+* easy query client generation (can use existing grpc and swagger tools)
+* no need for REST query implementations
+* type safe query methods (generated via grpc plugin)
+* going forward, there will be less breakage of query methods because of the
+backwards compatibility guarantees provided by buf
+
+### Negative
+
+* all clients using the existing ABCI/REST queries will need to be refactored
+for both the new GRPC/REST query paths as well as protobuf/proto-json encoded
+data, but this is more or less unavoidable in the protobuf refactoring
+
+### Neutral
+
+## References
diff --git a/copy-of-sdk-docs/build/architecture/adr-022-custom-panic-handling.md b/copy-of-sdk-docs/build/architecture/adr-022-custom-panic-handling.md
new file mode 100644
index 00000000..a99868b2
--- /dev/null
+++ b/copy-of-sdk-docs/build/architecture/adr-022-custom-panic-handling.md
@@ -0,0 +1,218 @@
+# ADR 022: Custom BaseApp panic handling
+
+## Changelog
+
+* 2020 Apr 24: Initial Draft
+* 2021 Sep 14: Superseded by ADR-045
+
+## Status
+
+SUPERSEDED by ADR-045
+
+## Context
+
+The current implementation of BaseApp does not allow developers to write custom error handlers during panic recovery
+in the [runTx()](https://github.com/cosmos/cosmos-sdk/blob/bad4ca75f58b182f600396ca350ad844c18fc80b/baseapp/baseapp.go#L539)
+method. We think that this method can be made more flexible and can give Cosmos SDK users more options for customization without
+the need to rewrite the whole BaseApp. There is also one special case, `sdk.ErrorOutOfGas` error handling, which
+could be handled in the same "standard" (middleware) way as the other cases.
+
+We propose a middleware solution, which could help developers implement the following cases:
+
+* add external logging (let's say sending reports to external services like [Sentry](https://sentry.io));
+* call panic for specific error cases;
+
+This will also make the `OutOfGas` and `default` cases two of the middlewares.
+The `default` middleware wraps the recovery object into an error and logs it ([example middleware implementation](#recovery-middleware)).
+
+Our project has a sidecar service running alongside the blockchain node (smart contracts virtual machine). It is
+essential that node <-> sidecar connectivity stays stable for TXs processing. So when the communication breaks we need
+to crash the node and reboot it once the problem is solved. That behaviour makes the node's state machine execution
+deterministic. As all keeper panics are caught by runTx's `defer()` handler, we have to adjust the BaseApp code
+in order to customize it.
+
+## Decision
+
+### Design
+
+#### Overview
+
+Instead of hardcoding custom error handling into BaseApp we suggest using a set of middlewares which can be customized
+externally and will allow developers to use as many custom error handlers as they want. Implementation with tests
+can be found [here](https://github.com/cosmos/cosmos-sdk/pull/6053).
+
+#### Implementation details
+
+##### Recovery handler
+
+New `RecoveryHandler` type added. `recoveryObj` input argument is an object returned by the standard Go function
+`recover()` from the `builtin` package.
+
+```go
+type RecoveryHandler func(recoveryObj interface{}) error
+```
+
+The handler should use a type assertion (or other means) to determine whether it should handle the object.
+`nil` should be returned if the input object can't be handled by that `RecoveryHandler` (it is not the handler's target type).
+A non-`nil` error should be returned if the input object was handled and the middleware chain execution should be stopped.
+
+An example:
+
+```go
+func exampleErrHandler(recoveryObj interface{}) error {
+ err, ok := recoveryObj.(error)
+ if !ok { return nil }
+
+ if someSpecificError.Is(err) {
+ panic(customPanicMsg)
+ } else {
+ return nil
+ }
+}
+```
+
+This example breaks the application execution, but it also might enrich the error's context like the `OutOfGas` handler.
+
+##### Recovery middleware
+
+We also add a middleware type (decorator). That function type wraps `RecoveryHandler` and returns the next middleware in
+execution chain and handler's `error`. Type is used to separate actual `recovery()` object handling from middleware
+chain processing.
+
+```go
+type recoveryMiddleware func(recoveryObj interface{}) (recoveryMiddleware, error)
+
+func newRecoveryMiddleware(handler RecoveryHandler, next recoveryMiddleware) recoveryMiddleware {
+ return func(recoveryObj interface{}) (recoveryMiddleware, error) {
+ if err := handler(recoveryObj); err != nil {
+ return nil, err
+ }
+ return next, nil
+ }
+}
+```
+
+The function receives a `recoveryObj` object and returns:
+
+* (next `recoveryMiddleware`, `nil`) if the object wasn't handled (not a target type) by the `RecoveryHandler`;
+* (`nil`, non-nil `error`) if the input object was handled and other middlewares in the chain should not be executed;
+* (`nil`, `nil`) in case of invalid behavior, where the panic recovery might not have been properly handled;
+this can be avoided by always using a `default` middleware as the rightmost one in the chain (it always returns an `error`).
+
+`OutOfGas` middleware example:
+
+```go
+func newOutOfGasRecoveryMiddleware(gasWanted uint64, ctx sdk.Context, next recoveryMiddleware) recoveryMiddleware {
+ handler := func(recoveryObj interface{}) error {
+ err, ok := recoveryObj.(sdk.ErrorOutOfGas)
+ if !ok { return nil }
+
+ return errorsmod.Wrap(
+ sdkerrors.ErrOutOfGas, fmt.Sprintf(
+ "out of gas in location: %v; gasWanted: %d, gasUsed: %d", err.Descriptor, gasWanted, ctx.GasMeter().GasConsumed(),
+ ),
+ )
+ }
+
+ return newRecoveryMiddleware(handler, next)
+}
+```
+
+`Default` middleware example:
+
+```go
+func newDefaultRecoveryMiddleware() recoveryMiddleware {
+ handler := func(recoveryObj interface{}) error {
+ return errorsmod.Wrap(
+ sdkerrors.ErrPanic, fmt.Sprintf("recovered: %v\nstack:\n%v", recoveryObj, string(debug.Stack())),
+ )
+ }
+
+ return newRecoveryMiddleware(handler, nil)
+}
+```
+
+##### Recovery processing
+
+Basic chain of middlewares processing would look like:
+
+```go
+func processRecovery(recoveryObj interface{}, middleware recoveryMiddleware) error {
+ if middleware == nil { return nil }
+
+ next, err := middleware(recoveryObj)
+ if err != nil { return err }
+ if next == nil { return nil }
+
+ return processRecovery(recoveryObj, next)
+}
+```
+
+That way we can create a middleware chain which is executed from left to right, the rightmost middleware is a
+`default` handler which must return an `error`.
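+
+Wiring the types above together, a self-contained run of such a chain (with simplified handlers standing in for the `OutOfGas` and `default` middlewares) behaves like this:
+
+```go
+package main
+
+import (
+	"errors"
+	"fmt"
+)
+
+type RecoveryHandler func(recoveryObj interface{}) error
+
+type recoveryMiddleware func(recoveryObj interface{}) (recoveryMiddleware, error)
+
+func newRecoveryMiddleware(handler RecoveryHandler, next recoveryMiddleware) recoveryMiddleware {
+	return func(recoveryObj interface{}) (recoveryMiddleware, error) {
+		if err := handler(recoveryObj); err != nil {
+			return nil, err
+		}
+		return next, nil
+	}
+}
+
+func processRecovery(recoveryObj interface{}, middleware recoveryMiddleware) error {
+	if middleware == nil {
+		return nil
+	}
+	next, err := middleware(recoveryObj)
+	if err != nil {
+		return err
+	}
+	if next == nil {
+		return nil
+	}
+	return processRecovery(recoveryObj, next)
+}
+
+func main() {
+	// rightmost default middleware: always produces an error
+	chain := newRecoveryMiddleware(func(obj interface{}) error {
+		return fmt.Errorf("recovered: %v", obj)
+	}, nil)
+	// specific middleware prepended to the left: only handles strings
+	chain = newRecoveryMiddleware(func(obj interface{}) error {
+		if s, ok := obj.(string); ok {
+			return errors.New("string panic: " + s)
+		}
+		return nil
+	}, chain)
+
+	fmt.Println(processRecovery("boom", chain)) // string panic: boom
+	fmt.Println(processRecovery(123, chain))    // recovered: 123
+}
+```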
+
+##### BaseApp changes
+
+The `default` middleware chain must exist in a `BaseApp` object. `Baseapp` modifications:
+
+```go
+type BaseApp struct {
+ // ...
+ runTxRecoveryMiddleware recoveryMiddleware
+}
+
+func NewBaseApp(...) {
+ // ...
+ app.runTxRecoveryMiddleware = newDefaultRecoveryMiddleware()
+}
+
+func (app *BaseApp) runTx(...) {
+ // ...
+ defer func() {
+ if r := recover(); r != nil {
+ recoveryMW := newOutOfGasRecoveryMiddleware(gasWanted, ctx, app.runTxRecoveryMiddleware)
+ err, result = processRecovery(r, recoveryMW), nil
+ }
+
+ gInfo = sdk.GasInfo{GasWanted: gasWanted, GasUsed: ctx.GasMeter().GasConsumed()}
+ }()
+ // ...
+}
+```
+
+Developers can add their custom `RecoveryHandler`s via `AddRunTxRecoveryHandler`, provided as a BaseApp option to the `NewBaseApp` constructor:
+
+```go
+func (app *BaseApp) AddRunTxRecoveryHandler(handlers ...RecoveryHandler) {
+ for _, h := range handlers {
+ app.runTxRecoveryMiddleware = newRecoveryMiddleware(h, app.runTxRecoveryMiddleware)
+ }
+}
+```
+
+This method would prepend handlers to an existing chain.
+
+## Consequences
+
+### Positive
+
+* Developers of Cosmos SDK-based projects can add custom panic handlers to:
+ * add error context for custom panic sources (panic inside of custom keepers);
+ * re-`panic()` to pass the recovery object through to the Tendermint core;
+ * other necessary handling;
+* Developers can use standard Cosmos SDK `BaseApp` implementation, rather than rewriting it in their projects;
+* Proposed solution doesn't break the current "standard" `runTx()` flow;
+
+### Negative
+
+* Introduces changes to the execution model design.
+
+### Neutral
+
+* `OutOfGas` error handler becomes one of the middlewares;
+* Default panic handler becomes one of the middlewares;
+
+## References
+
+* [PR-6053 with proposed solution](https://github.com/cosmos/cosmos-sdk/pull/6053)
+* [Similar solution. ADR-010 Modular AnteHandler](https://github.com/cosmos/cosmos-sdk/blob/main/docs/architecture/adr-010-modular-antehandler.md)
diff --git a/copy-of-sdk-docs/build/architecture/adr-023-protobuf-naming.md b/copy-of-sdk-docs/build/architecture/adr-023-protobuf-naming.md
new file mode 100644
index 00000000..46620760
--- /dev/null
+++ b/copy-of-sdk-docs/build/architecture/adr-023-protobuf-naming.md
@@ -0,0 +1,263 @@
+# ADR 023: Protocol Buffer Naming and Versioning Conventions
+
+## Changelog
+
+* 2020 April 27: Initial Draft
+* 2020 August 5: Update guidelines
+
+## Status
+
+Accepted
+
+## Context
+
+Protocol Buffers provide a basic [style guide](https://developers.google.com/protocol-buffers/docs/style)
+and [Buf](https://buf.build/docs/style-guide) builds upon that. To the
+extent possible, we want to follow industry accepted guidelines and wisdom for
+the effective usage of protobuf, deviating from those only when there is clear
+rationale for our use case.
+
+### Adoption of `Any`
+
+The adoption of `google.protobuf.Any` as the recommended approach for encoding
+interface types (as opposed to `oneof`) makes package naming a central part
+of the encoding as fully-qualified message names now appear in encoded
+messages.
+
+### Current Directory Organization
+
+Thus far we have mostly followed [Buf's](https://buf.build) [DEFAULT](https://buf.build/docs/lint-checkers#default)
+recommendations, with the minor deviation of disabling [`PACKAGE_DIRECTORY_MATCH`](https://buf.build/docs/lint-checkers#file_layout)
+which, although convenient for developing code, comes with this warning
+from Buf:
+
+> you will have a very bad time with many Protobuf plugins across various languages if you do not do this
+
+### Adoption of gRPC Queries
+
+In [ADR 021](adr-021-protobuf-query-encoding.md), gRPC was adopted for Protobuf
+native queries. The full gRPC service path thus becomes a key part of ABCI query
+path. In the future, gRPC queries may be allowed from within persistent scripts
+by technologies such as CosmWasm and these query routes would be stored within
+script binaries.
+
+## Decision
+
+The goal of this ADR is to provide thoughtful naming conventions that:
+
+* encourage a good user experience for when users interact directly with
+.proto files and fully-qualified protobuf names
+* balance conciseness against the possibility of either over-optimizing (making
+names too short and cryptic) or under-optimizing (just accepting bloated names
+with lots of redundant information)
+
+These guidelines are meant to act as a style guide for both the Cosmos SDK and
+third-party modules.
+
+As a starting point, we should adopt all of [Buf's](https://buf.build) [DEFAULT](https://buf.build/docs/lint-checkers#default)
+checkers, including [`PACKAGE_DIRECTORY_MATCH`](https://buf.build/docs/lint-checkers#file_layout),
+except:
+
+* [PACKAGE_VERSION_SUFFIX](https://buf.build/docs/lint-checkers#package_version_suffix)
+* [SERVICE_SUFFIX](https://buf.build/docs/lint-checkers#service_suffix)
+
+Further guidelines to be described below.
+
+### Principles
+
+#### Concise and Descriptive Names
+
+Names should be descriptive enough to convey their meaning and distinguish
+them from other names.
+
+Given that we are using fully-qualified names within
+`google.protobuf.Any` as well as within gRPC query routes, we should aim to
+keep names concise, without going overboard. The general rule of thumb should
+be: if a shorter name conveys more or less the same thing, pick the shorter
+name.
+
+For instance, `cosmos.bank.MsgSend` (19 bytes) conveys roughly the same information
+as `cosmos_sdk.x.bank.v1.MsgSend` (28 bytes) but is more concise.
+
+Such conciseness makes names more pleasant to work with and reduces the space
+they take up within transactions and on the wire.
+
+We should also resist the temptation to over-optimize, by making names
+cryptically short with abbreviations. For instance, we shouldn't try to
+reduce `cosmos.bank.MsgSend` to `csm.bk.MSnd` just to save a few bytes.
+
+The goal is to make names **_concise but not cryptic_**.
+
+#### Names are for Clients First
+
+Package and type names should be chosen for the benefit of users, not
+necessarily because of legacy concerns related to the go code-base.
+
+#### Plan for Longevity
+
+In the interests of long-term support, we should plan on the names we do
+choose to be in usage for a long time, so now is the opportunity to make
+the best choices for the future.
+
+### Versioning
+
+#### Guidelines on Stable Package Versions
+
+In general, schema evolution is the way to update protobuf schemas. That means that new fields,
+messages, and RPC methods are _added_ to existing schemas and old fields, messages and RPC methods
+are maintained as long as possible.
+
+Breaking things is often unacceptable in a blockchain scenario. For instance, immutable smart contracts
+may depend on certain data schemas on the host chain. If the host chain breaks those schemas, the smart
+contract may be irreparably broken. Even when things can be fixed (for instance in client software),
+this often comes at a high cost.
+
+Instead of breaking things, we should make every effort to evolve schemas rather than just breaking them.
+[Buf](https://buf.build) breaking change detection should be used on all stable (non-alpha or beta) packages
+to prevent such breakage.
+
+With that in mind, different stable versions (i.e. `v1` or `v2`) of a package should more or less be considered
+different packages, and creating a new stable version should be a last-resort approach for upgrading protobuf
+schemas. Scenarios where creating a `v2` may make sense are:
+
+* we want to create a new module with similar functionality to an existing module and adding `v2` is the most natural
+way to do this. In that case, there are really just two different, but similar, modules with different APIs.
+* we want to add a new revamped API for an existing module and it's just too cumbersome to add it to the existing package,
+so putting it in `v2` is cleaner for users. In this case, care should be taken not to deprecate support for
+`v1` if it is actively used in immutable smart contracts.
+
+#### Guidelines on unstable (alpha and beta) package versions
+
+The following guidelines are recommended for marking packages as alpha or beta:
+
+* marking something as `alpha` or `beta` should be a last resort and just putting something in the
+stable package (i.e. `v1` or `v2`) should be preferred
+* a package _should_ be marked as `alpha` _if and only if_ there are active discussions to remove
+or significantly alter the package in the near future
+* a package _should_ be marked as `beta` _if and only if_ there is an active discussion to
+significantly refactor/rework the functionality in the near future but not remove it
+* modules _can and should_ have types in both stable (i.e. `v1` or `v2`) and unstable (`alpha` or `beta`) packages.
+
+_`alpha` and `beta` should not be used to avoid responsibility for maintaining compatibility._
+Whenever code is released into the wild, especially on a blockchain, there is a high cost to changing things. In some
+cases, for instance with immutable smart contracts, a breaking change may be impossible to fix.
+
+When marking something as `alpha` or `beta`, maintainers should ask the following questions:
+
+* what is the cost of asking others to change their code vs the benefit of us maintaining the optionality to change it?
+* what is the plan for moving this to `v1` and how will that affect users?
+
+`alpha` or `beta` should really be used to communicate "changes are planned".
+
+As a case study, gRPC reflection lives in the package `grpc.reflection.v1alpha`. It hasn't changed since
+2017 and is now used by other widely used software such as gRPCurl. Some people likely depend on it in production
+services, so if the maintainers renamed the package to `grpc.reflection.v1`, some software would break, and
+they probably won't do that. The `v1alpha` package is thus the de facto `v1`. Let's not repeat that.
+
+The following are guidelines for working with non-stable packages:
+
+* [Buf's recommended version suffix](https://buf.build/docs/lint-checkers#package_version_suffix)
+(e.g. `v1alpha1`) _should_ be used for non-stable packages
+* non-stable packages should generally be excluded from breaking change detection
+* immutable smart contract modules (e.g. CosmWasm) _should_ block smart contracts/persistent
+scripts from interacting with `alpha`/`beta` packages
+
+#### Omit v1 suffix
+
+Instead of using [Buf's recommended version suffix](https://buf.build/docs/lint-checkers#package_version_suffix),
+we can omit `v1` for packages that don't actually have a second version. This
+allows for more concise names for common use cases like `cosmos.bank.Send`.
+Packages that do have a second or third version can indicate that with `.v2`
+or `.v3`.
+
+### Package Naming
+
+#### Adopt a short, unique top-level package name
+
+Top-level packages should adopt a short name that is known not to collide with
+other names in common usage within the Cosmos ecosystem. In the near future, a
+registry should be created to reserve and index top-level package names used
+within the Cosmos ecosystem. Because the Cosmos SDK is intended to provide
+the top-level types for the Cosmos project, the top-level package name `cosmos`
+is recommended for usage within the Cosmos SDK instead of the longer `cosmos_sdk`.
+[ICS](https://github.com/cosmos/ics) specifications could consider a
+short top-level package like `ics23` based upon the standard number.
+
+#### Limit sub-package depth
+
+Sub-package depth should be increased with caution. Generally a single
+sub-package is needed for a module or a library. Even though `x` or `modules`
+is used in source code to denote modules, this is often unnecessary for .proto
+files as modules are the primary thing sub-packages are used for. Only items which
+are known to be used infrequently should have deep sub-package depths.
+
+For the Cosmos SDK, it is recommended that we simply write `cosmos.bank`,
+`cosmos.gov`, etc. rather than `cosmos.x.bank`. In practice, most non-module
+types can go straight in the `cosmos` package or we can introduce a
+`cosmos.base` package if needed. Note that this naming _will not_ change
+go package names, i.e. the `cosmos.bank` protobuf package will still live in
+`x/bank`.
+
+### Message Naming
+
+Message type names should be as concise as possible without losing clarity. `sdk.Msg`
+types which are used in transactions will retain the `Msg` prefix as that provides
+helpful context.
+
+### Service and RPC Naming
+
+[ADR 021](adr-021-protobuf-query-encoding.md) specifies that modules should
+implement a gRPC query service. We should consider the principle of conciseness
+for query service and RPC names as these may be called from persistent script
+modules such as CosmWasm. Also, users may use these query paths from tools like
+[gRPCurl](https://github.com/fullstorydev/grpcurl). As an example, we can shorten
+`/cosmos_sdk.x.bank.v1.QueryService/QueryBalance` to
+`/cosmos.bank.Query/Balance` without losing much useful information.
+
+RPC request and response types _should_ follow the `ServiceNameMethodNameRequest`/
+`ServiceNameMethodNameResponse` naming convention. For example, for an RPC method named `Balance`
+on the `Query` service, the request and response types would be `QueryBalanceRequest`
+and `QueryBalanceResponse`. This is more self-explanatory than `BalanceRequest`
+and `BalanceResponse`.
+
+#### Use just `Query` for the query service
+
+Instead of [Buf's default service suffix recommendation](https://github.com/cosmos/cosmos-sdk/pull/6033),
+we should simply use the shorter `Query` for query services.
+
+For other types of gRPC services, we should consider sticking with Buf's
+default recommendation.
+
+#### Omit `Get` and `Query` from query service RPC names
+
+`Get` and `Query` should be omitted from `Query` service names because they are
+redundant in the fully-qualified name. For instance, `/cosmos.bank.Query/QueryBalance`
+just says `Query` twice without any new information.
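Putting these naming guidelines together, a bank query service might look like the following sketch (the field layout is illustrative only):

```protobuf
syntax = "proto3";
package cosmos.bank; // no `.v1` suffix while only one version exists

// `Query` rather than `QueryService` or `BankQuery`.
service Query {
  // `Balance`, not `GetBalance` or `QueryBalance`: the full path
  // /cosmos.bank.Query/Balance already says "query" once.
  rpc Balance(QueryBalanceRequest) returns (QueryBalanceResponse);
}

// Request/response types keep the ServiceNameMethodName prefix
// so generated code stays self-explanatory.
message QueryBalanceRequest {
  string address = 1;
  string denom   = 2;
}

message QueryBalanceResponse {
  string amount = 1;
}
```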
+
+## Future Improvements
+
+A registry of top-level package names should be created to coordinate naming
+across the ecosystem, prevent collisions, and also help developers discover
+useful schemas. A simple starting point would be a git repository with
+community-based governance.
+
+## Consequences
+
+### Positive
+
+* names will be more concise and easier to read and type
+* all transactions using `Any` will be shorter (`_sdk.x` and `.v1` will be removed)
+* `.proto` file imports will be more standard (without `"third_party/proto"` in
+the path)
+* code generation will be easier for clients because .proto files will be
+in a single `proto/` directory which can be copied rather than scattered
+throughout the Cosmos SDK
+
+### Negative
+
+### Neutral
+
+* `.proto` files will need to be reorganized and refactored
+* some modules may need to be marked as alpha or beta
+
+## References
diff --git a/copy-of-sdk-docs/build/architecture/adr-024-coin-metadata.md b/copy-of-sdk-docs/build/architecture/adr-024-coin-metadata.md
new file mode 100644
index 00000000..71bedac5
--- /dev/null
+++ b/copy-of-sdk-docs/build/architecture/adr-024-coin-metadata.md
@@ -0,0 +1,140 @@
+# ADR 024: Coin Metadata
+
+## Changelog
+
+* 05/19/2020: Initial draft
+
+## Status
+
+Proposed
+
+## Context
+
+Assets in the Cosmos SDK are represented via a `Coins` type that consists of an `amount` and a `denom`,
+where the `amount` can be any arbitrarily large or small value. In addition, the Cosmos SDK uses an
+account-based model where there are two types of primary accounts -- basic accounts and module accounts.
+All account types have a set of balances that are composed of `Coins`. The `x/bank` module keeps
+track of all balances for all accounts and also keeps track of the total supply of balances in an
+application.
+
+With regards to a balance `amount`, the Cosmos SDK assumes a static and fixed unit of denomination,
+regardless of the denomination itself. In other words, clients and apps built atop a Cosmos-SDK-based
+chain may choose to define and use arbitrary units of denomination to provide a richer UX, however, by
+the time a tx or operation reaches the Cosmos SDK state machine, the `amount` is treated as a single
+unit. For example, for the Cosmos Hub (Gaia), clients assume 1 ATOM = 10^6 uatom, and so all txs and
+operations in the Cosmos SDK work off of units of 10^6.
+
+This clearly provides a poor and limited UX, especially as network interoperability increases and,
+as a result, the total number of asset types grows. We propose to have `x/bank` additionally keep
+track of metadata per `denom` in order to help clients, wallet providers, and explorers improve their
+UX and remove the requirement for making any assumptions on the unit of denomination.
+
+## Decision
+
+The `x/bank` module will be updated to store and index metadata by `denom`, specifically the "base" or
+smallest unit -- the unit the Cosmos SDK state-machine works with.
+
+Metadata may also include a non-zero length list of denominations. Each entry contains the name of
+the denomination `denom`, the exponent to the base and a list of aliases. An entry is to be
+interpreted as `1 denom = 10^exponent base_denom` (e.g. `1 ETH = 10^18 wei` and `1 uatom = 10^0 uatom`).
+
+There are two denominations that are of high importance for clients: the `base`, which is the smallest
+possible unit and the `display`, which is the unit that is commonly referred to in human communication
+and on exchanges. The values in those fields link to an entry in the list of denominations.
+
+The list in `denom_units` and the `display` entry may be changed via governance.
+
+As a result, we can define the type as follows:
+
+```protobuf
+message DenomUnit {
+ string denom = 1;
+ uint32 exponent = 2;
+ repeated string aliases = 3;
+}
+
+message Metadata {
+ string description = 1;
+ repeated DenomUnit denom_units = 2;
+ string base = 3;
+ string display = 4;
+}
+```
+
+As an example, the ATOM's metadata can be defined as follows:
+
+```json
+{
+ "name": "atom",
+ "description": "The native staking token of the Cosmos Hub.",
+ "denom_units": [
+ {
+ "denom": "uatom",
+ "exponent": 0,
+ "aliases": [
+ "microatom"
+ ],
+ },
+ {
+ "denom": "matom",
+ "exponent": 3,
+ "aliases": [
+ "milliatom"
+ ]
+ },
+ {
+ "denom": "atom",
+ "exponent": 6,
+ }
+ ],
+ "base": "uatom",
+ "display": "atom",
+}
+```
+
+Given the above metadata, a client may infer the following things:
+
+* 4.3atom = 4.3 * (10^6) = 4,300,000uatom
+* The string "atom" can be used as a display name in a list of tokens.
+* The balance 4300000 can be displayed as 4,300,000uatom or 4,300matom or 4.3atom.
+ The `display` denomination 4.3atom is a good default if the authors of the client don't make
+ an explicit decision to choose a different representation.
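As a sketch of how a client might apply this metadata (the `Metadata` struct and `toBase` helper here are illustrative, not SDK APIs):

```go
package main

import (
	"fmt"
	"math"
)

// DenomUnit and Metadata mirror the protobuf definitions above.
type DenomUnit struct {
	Denom    string
	Exponent uint32
}

type Metadata struct {
	DenomUnits []DenomUnit
	Base       string
	Display    string
}

// toBase converts an amount denominated in `denom` into the base
// denomination, using 1 denom = 10^exponent base.
func toBase(md Metadata, denom string, amount float64) (float64, bool) {
	for _, u := range md.DenomUnits {
		if u.Denom == denom {
			return amount * math.Pow10(int(u.Exponent)), true
		}
	}
	return 0, false
}

func main() {
	md := Metadata{
		DenomUnits: []DenomUnit{
			{Denom: "uatom", Exponent: 0},
			{Denom: "matom", Exponent: 3},
			{Denom: "atom", Exponent: 6},
		},
		Base:    "uatom",
		Display: "atom",
	}

	base, _ := toBase(md, "atom", 4.3)
	fmt.Printf("4.3atom = %.0fuatom\n", base) // 4.3atom = 4300000uatom
}
```

A production client would use exact decimal arithmetic rather than `float64`, but the lookup by `denom` and the `10^exponent` scaling are the essential steps.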
+
+A client should be able to query for metadata by denom both via the CLI and REST interfaces. In
+addition, we will add handlers to these interfaces to convert from any unit to another given unit,
+as the base framework for this already exists in the Cosmos SDK.
+
+Finally, we need to ensure metadata exists in the `GenesisState` of the `x/bank` module which is also
+indexed by the base `denom`.
+
+```go
+type GenesisState struct {
+ SendEnabled bool `json:"send_enabled" yaml:"send_enabled"`
+ Balances []Balance `json:"balances" yaml:"balances"`
+ Supply sdk.Coins `json:"supply" yaml:"supply"`
+ DenomMetadata []Metadata `json:"denom_metadata" yaml:"denom_metadata"`
+}
+```
+
+## Future Work
+
+In order for clients to avoid having to convert assets to the base denomination -- either manually or
+via an endpoint -- we may consider supporting automatic conversion of a given unit input.
+
+## Consequences
+
+### Positive
+
+* Provides clients, wallet providers and block explorers with additional data on
+ asset denomination to improve UX and remove any need to make assumptions on
+ denomination units.
+
+### Negative
+
+* A small amount of required additional storage in the `x/bank` module. The amount
+ of additional storage should be minimal as the amount of total assets should not
+ be large.
+
+### Neutral
+
+## References
diff --git a/copy-of-sdk-docs/build/architecture/adr-027-deterministic-protobuf-serialization.md b/copy-of-sdk-docs/build/architecture/adr-027-deterministic-protobuf-serialization.md
new file mode 100644
index 00000000..0b0b4c9f
--- /dev/null
+++ b/copy-of-sdk-docs/build/architecture/adr-027-deterministic-protobuf-serialization.md
@@ -0,0 +1,314 @@
+# ADR 027: Deterministic Protobuf Serialization
+
+## Changelog
+
+* 2020-08-07: Initial Draft
+* 2020-09-01: Further clarify rules
+
+## Status
+
+Proposed
+
+## Abstract
+
+Fully deterministic structure serialization, which works across many languages and clients,
+is needed when signing messages. We need to be sure that whenever we serialize
+a data structure, no matter in which supported language, the raw bytes
+will stay the same.
+[Protobuf](https://developers.google.com/protocol-buffers/docs/proto3)
+serialization is not bijective (i.e. there exists a practically unlimited number of
+valid binary representations for a given protobuf document)<sup>1</sup>.
+
+This document describes a deterministic serialization scheme for
+a subset of protobuf documents, that covers this use case but can be reused in
+other cases as well.
+
+### Context
+
+For signature verification in Cosmos SDK, the signer and verifier need to agree on
+the same serialization of a `SignDoc` as defined in
+[ADR-020](./adr-020-protobuf-transaction-encoding.md) without transmitting the
+serialization.
+
+Currently, for block signatures we are using a workaround: we create a new [TxRaw](https://github.com/cosmos/cosmos-sdk/blob/9e85e81e0e8140067dd893421290c191529c148c/proto/cosmos/tx/v1beta1/tx.proto#L30)
+instance (as defined in [adr-020-protobuf-transaction-encoding](https://github.com/cosmos/cosmos-sdk/blob/main/docs/architecture/adr-020-protobuf-transaction-encoding.md#transactions))
+by converting all [Tx](https://github.com/cosmos/cosmos-sdk/blob/9e85e81e0e8140067dd893421290c191529c148c/proto/cosmos/tx/v1beta1/tx.proto#L13)
+fields to bytes on the client side. This adds an additional manual
+step when sending and signing transactions.
+
+### Decision
+
+The following encoding scheme is to be used by other ADRs,
+and in particular for `SignDoc` serialization.
+
+## Specification
+
+### Scope
+
+This ADR defines a protobuf3 serializer. The output is a valid protobuf
+serialization, such that every protobuf parser can parse it.
+
+No maps are supported in version 1 due to the complexity of defining a
+deterministic serialization. This might change in the future. Implementations must
+reject documents containing maps as invalid input.
+
+### Background - Protobuf3 Encoding
+
+Most numeric types in protobuf3 are encoded as
+[varints](https://developers.google.com/protocol-buffers/docs/encoding#varints).
+Varints are at most 10 bytes, and since each varint byte has 7 bits of data,
+varints are a representation of `uint70` (70-bit unsigned integer). When
+encoding, numeric values are cast from their base type to `uint70`, and when
+decoding, the parsed `uint70` is cast to the appropriate numeric type.
+
+The maximum valid value for a varint that complies with protobuf3 is
+`FF FF FF FF FF FF FF FF FF 7F` (i.e. `2**70 - 1`). If the field type is
+`{,u,s}int64`, the highest 6 bits of the 70 are dropped during decoding,
+introducing 6 bits of malleability. If the field type is `{,u,s}int32`, the
+highest 38 bits of the 70 are dropped during decoding, introducing 38 bits of
+malleability.
+
+Among other sources of non-determinism, this ADR eliminates the possibility of
+encoding malleability.
+
+### Serialization rules
+
+The serialization is based on the
+[protobuf3 encoding](https://developers.google.com/protocol-buffers/docs/encoding)
+with the following additions:
+
+1. Fields must be serialized only once in ascending order
+2. Extra fields or any extra data must not be added
+3. [Default values](https://developers.google.com/protocol-buffers/docs/proto3#default)
+ must be omitted
+4. `repeated` fields of scalar numeric types must use
+ [packed encoding](https://developers.google.com/protocol-buffers/docs/encoding#packed)
+5. Varint encoding must not be longer than needed:
+ * No trailing zero bytes (in little endian, i.e. no leading zeroes in big
+ endian). Per rule 3 above, the default value of `0` must be omitted, so
+ this rule does not apply in such cases.
+ * The maximum value for a varint must be `FF FF FF FF FF FF FF FF FF 01`.
+ In other words, when decoded, the highest 6 bits of the 70-bit unsigned
+ integer must be `0`. (10-byte varints are 10 groups of 7 bits, i.e.
+ 70 bits, of which only the lowest 70-6=64 are useful.)
+ * The maximum value for 32-bit values in varint encoding must be `FF FF FF FF 0F`
+ with one exception (below). In other words, when decoded, the highest 38
+ bits of the 70-bit unsigned integer must be `0`.
+ * The one exception to the above is _negative_ `int32`, which must be
+    encoded using the full 10 bytes for sign extension<sup>2</sup>.
+ * The maximum value for Boolean values in varint encoding must be `01` (i.e.
+ it must be `0` or `1`). Per rule 3 above, the default value of `0` must
+ be omitted, so if a Boolean is included it must have a value of `1`.
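The minimality part of rule 5 can be checked mechanically. The following is a sketch of such a validator (not SDK code; it checks only that the encoding is shortest-form, not the 64-bit or 32-bit range rules):

```go
package main

import "fmt"

// minimalVarint reports whether buf is the shortest possible varint
// encoding of the value it contains.
func minimalVarint(buf []byte) bool {
	if len(buf) == 0 || len(buf) > 10 {
		return false
	}
	// Every byte except the last must have the continuation bit set,
	// and the last byte must not.
	for _, b := range buf[:len(buf)-1] {
		if b&0x80 == 0 {
			return false
		}
	}
	if buf[len(buf)-1]&0x80 != 0 {
		return false
	}
	// A zero final group means a shorter encoding exists. (The single
	// byte 0x00 encodes the default value 0, which rule 3 forbids on
	// the wire anyway.)
	return len(buf) == 1 || buf[len(buf)-1] != 0
}

func main() {
	fmt.Println(minimalVarint([]byte{0x96, 0x01}))       // true: 150, minimal
	fmt.Println(minimalVarint([]byte{0x96, 0x81, 0x00})) // false: 150, padded
}
```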
+
+While rules 1 and 2 should be pretty straightforward and describe the
+default behavior of all protobuf encoders the author is aware of, the 3rd rule
+is more interesting. After a protobuf3 deserialization you cannot differentiate
+between unset fields and fields set to the default value<sup>3</sup>. At the
+serialization level, however, it is possible to set the fields with an empty
+value or omit them entirely. This is a significant difference from e.g. JSON,
+where a property can be empty (`""`, `0`), `null` or undefined, leading to 3
+different documents.
+
+Omitting fields set to default values is valid because the parser must assign
+the default value to fields missing in the serialization<sup>4</sup>. For scalar
+types, omitting defaults is required by the spec<sup>5</sup>. For `repeated`
+fields, not serializing them is the only way to express empty lists. Enums must
+have a first element of numeric value 0, which is the default<sup>6</sup>. And
+message fields default to unset<sup>7</sup>.
+
+Omitting defaults allows for some amount of forward compatibility: users of
+newer versions of a protobuf schema produce the same serialization as users of
+older versions as long as newly added fields are not used (i.e. set to their
+default value).
+
+### Implementation
+
+There are three main implementation strategies, ordered from the least to the
+most custom development:
+
+* **Use a protobuf serializer that follows the above rules by default.** E.g.
+ [gogoproto](https://pkg.go.dev/github.com/cosmos/gogoproto/gogoproto) is known to
+ be compliant in most cases, but not when certain annotations such as
+ `nullable = false` are used. It might also be an option to configure an
+ existing serializer accordingly.
+* **Normalize default values before encoding them.** If your serializer follows
+ rules 1. and 2. and allows you to explicitly unset fields for serialization,
+ you can normalize default values to unset. This can be done when working with
+ [protobuf.js](https://www.npmjs.com/package/protobufjs):
+
+ ```js
+ const bytes = SignDoc.encode({
+ bodyBytes: body.length > 0 ? body : null, // normalize empty bytes to unset
+ authInfoBytes: authInfo.length > 0 ? authInfo : null, // normalize empty bytes to unset
+ chainId: chainId || null, // normalize "" to unset
+ accountNumber: accountNumber || null, // normalize 0 to unset
+ accountSequence: accountSequence || null, // normalize 0 to unset
+ }).finish();
+ ```
+
+* **Use a hand-written serializer for the types you need.** If none of the above
+  ways works for you, you can write a serializer yourself. For `SignDoc` this
+  could look something like the following in Go (a sketch using the
+  `google.golang.org/protobuf/encoding/protowire` helpers rather than any
+  SDK-specific utilities):
+
+  ```go
+  var buf []byte
+
+  if len(signDoc.BodyBytes) != 0 {
+      buf = protowire.AppendTag(buf, 1, protowire.BytesType) // field 1: body_bytes
+      buf = protowire.AppendBytes(buf, signDoc.BodyBytes)    // length-prefixed bytes
+  }
+
+  if len(signDoc.AuthInfoBytes) != 0 {
+      buf = protowire.AppendTag(buf, 2, protowire.BytesType) // field 2: auth_info_bytes
+      buf = protowire.AppendBytes(buf, signDoc.AuthInfoBytes)
+  }
+
+  if len(signDoc.ChainId) != 0 {
+      buf = protowire.AppendTag(buf, 3, protowire.BytesType) // field 3: chain_id
+      buf = protowire.AppendBytes(buf, []byte(signDoc.ChainId))
+  }
+
+  if signDoc.AccountNumber != 0 {
+      buf = protowire.AppendTag(buf, 4, protowire.VarintType) // field 4: account_number
+      buf = protowire.AppendVarint(buf, signDoc.AccountNumber)
+  }
+
+  if signDoc.AccountSequence != 0 {
+      buf = protowire.AppendTag(buf, 5, protowire.VarintType) // field 5: account_sequence
+      buf = protowire.AppendVarint(buf, signDoc.AccountSequence)
+  }
+  ```
+
+### Test vectors
+
+Given the protobuf definition `Article.proto`
+
+```protobuf
+syntax = "proto3";
+package blog;
+
+enum Type {
+  TYPE_UNSPECIFIED = 0;
+  IMAGES = 1;
+  NEWS = 2;
+}
+
+enum Review {
+  REVIEW_UNSPECIFIED = 0;
+  ACCEPTED = 1;
+  REJECTED = 2;
+}
+
+message Article {
+  string title = 1;
+  string description = 2;
+  uint64 created = 3;
+  uint64 updated = 4;
+  bool public = 5;
+  bool promoted = 6;
+  Type type = 7;
+  Review review = 8;
+  repeated string comments = 9;
+  repeated string backlinks = 10;
+}
+```
+
+serializing the values
+
+```yaml
+title: "The world needs change 🌳"
+description: ""
+created: 1596806111080
+updated: 0
+public: true
+promoted: false
+type: Type.NEWS
+review: Review.REVIEW_UNSPECIFIED
+comments: ["Nice one", "Thank you"]
+backlinks: []
+```
+
+must result in the serialization
+
+```text
+0a1b54686520776f726c64206e65656473206368616e676520f09f8cb318e8bebec8bc2e280138024a084e696365206f6e654a095468616e6b20796f75
+```
+
+When inspecting the serialized document, you see that every second field is
+omitted:
+
+```shell
+$ echo 0a1b54686520776f726c64206e65656473206368616e676520f09f8cb318e8bebec8bc2e280138024a084e696365206f6e654a095468616e6b20796f75 | xxd -r -p | protoc --decode_raw
+1: "The world needs change \360\237\214\263"
+3: 1596806111080
+5: 1
+7: 2
+9: "Nice one"
+9: "Thank you"
+```
+
+## Consequences
+
+Having such an encoding available allows us to get deterministic serialization
+for all protobuf documents we need in the context of Cosmos SDK signing.
+
+### Positive
+
+* Well defined rules that can be verified independently of a reference
+ implementation
+* Simple enough to keep the barrier to implementing transaction signing low
+* It allows us to continue to use 0 and other empty values in `SignDoc`, avoiding
+  the need to work around 0 sequences. This does not imply that the change from
+  https://github.com/cosmos/cosmos-sdk/pull/6949 should not be merged, but it is
+  no longer as important.
+
+### Negative
+
+* When implementing transaction signing, the encoding rules above must be
+ understood and implemented.
+* The need for rule number 3. adds some complexity to implementations.
+* Some data structures may require custom code for serialization. Thus
+ the code is not very portable - it will require additional work for each
+ client implementing serialization to properly handle custom data structures.
+
+### Neutral
+
+### Usage in Cosmos SDK
+
+For the reasons mentioned above (see the "Negative" section), we prefer to keep workarounds
+for shared data structures. Example: the aforementioned `TxRaw` uses raw bytes
+as a workaround. This allows clients to use any valid protobuf library without
+implementing a custom serializer that adheres to this standard (and the related risk of bugs).
+
+## References
+
+* <sup>1</sup> _When a message is serialized, there is no guaranteed order for
+  how its known or unknown fields should be written. Serialization order is an
+  implementation detail and the details of any particular implementation may
+  change in the future. Therefore, protocol buffer parsers must be able to parse
+  fields in any order._ from
+  https://developers.google.com/protocol-buffers/docs/encoding#order
+* <sup>2</sup> https://developers.google.com/protocol-buffers/docs/encoding#signed_integers
+* <sup>3</sup> _Note that for scalar message fields, once a message is parsed
+  there's no way of telling whether a field was explicitly set to the default
+  value (for example whether a boolean was set to false) or just not set at all:
+  you should bear this in mind when defining your message types. For example,
+  don't have a boolean that switches on some behavior when set to false if you
+  don't want that behavior to also happen by default._ from
+  https://developers.google.com/protocol-buffers/docs/proto3#default
+* <sup>4</sup> _When a message is parsed, if the encoded message does not
+  contain a particular singular element, the corresponding field in the parsed
+  object is set to the default value for that field._ from
+  https://developers.google.com/protocol-buffers/docs/proto3#default
+* <sup>5</sup> _Also note that if a scalar message field is set to its default,
+  the value will not be serialized on the wire._ from
+  https://developers.google.com/protocol-buffers/docs/proto3#default
+* <sup>6</sup> _For enums, the default value is the first defined enum value,
+  which must be 0._ from
+  https://developers.google.com/protocol-buffers/docs/proto3#default
+* <sup>7</sup> _For message fields, the field is not set. Its exact value is
+  language-dependent._ from
+  https://developers.google.com/protocol-buffers/docs/proto3#default
+* Encoding rules and parts of the reasoning taken from
+  [canonical-proto3 by Aaron Craelius](https://github.com/regen-network/canonical-proto3)
diff --git a/copy-of-sdk-docs/build/architecture/adr-028-public-key-addresses.md b/copy-of-sdk-docs/build/architecture/adr-028-public-key-addresses.md
new file mode 100644
index 00000000..f24d24ae
--- /dev/null
+++ b/copy-of-sdk-docs/build/architecture/adr-028-public-key-addresses.md
@@ -0,0 +1,342 @@
+# ADR 028: Public Key Addresses
+
+## Changelog
+
+* 2020/08/18: Initial version
+* 2021/01/15: Analysis and algorithm update
+
+## Status
+
+Proposed
+
+## Abstract
+
+This ADR defines an address format for all addressable Cosmos SDK accounts. That includes: new public key algorithms, multisig public keys, and module accounts.
+
+## Context
+
+Issue [\#3685](https://github.com/cosmos/cosmos-sdk/issues/3685) identified that public key
+address spaces are currently overlapping. We confirmed that this significantly decreases the security of the Cosmos SDK.
+
+### Problem
+
+An attacker can control an input for an address generation function. This leads to a birthday attack, which significantly decreases the security space.
+To overcome this, we need to separate the inputs for different kinds of account types:
+a security break of one account type shouldn't impact the security of other account types.
+
+### Initial proposals
+
+One initial proposal was to extend the address length and
+to add prefixes for different types of addresses.
+
+@ethanfrey explained an alternate approach originally used in https://github.com/iov-one/weave:
+
+> I spent quite a bit of time thinking about this issue while building weave... The other cosmos Sdk.
+> Basically I define a condition to be a type and format as human readable string with some binary data appended. This condition is hashed into an Address (again at 20 bytes). The use of this prefix makes it impossible to find a preimage for a given address with a different condition (eg ed25519 vs secp256k1).
+> This is explained in depth here https://weave.readthedocs.io/en/latest/design/permissions.html
+> And the code is here, look mainly at the top where we process conditions. https://github.com/iov-one/weave/blob/master/conditions.go
+
+And explained how this approach should be sufficiently collision resistant:
+
+> Yeah, AFAIK, 20 bytes should be collision resistance when the preimages are unique and not malleable. A space of 2^160 would expect some collision to be likely around 2^80 elements (birthday paradox). And if you want to find a collision for some existing element in the database, it is still 2^160. 2^80 only if all these elements are written to state.
+> The good example you brought up was eg. a public key bytes being a valid public key on two algorithms supported by the codec. Meaning if either was broken, you would break accounts even if they were secured with the safer variant. This is only as the issue when no differentiating type info is present in the preimage (before hashing into an address).
+> I would like to hear an argument if the 20 bytes space is an actual issue for security, as I would be happy to increase my address sizes in weave. I just figured cosmos and ethereum and bitcoin all use 20 bytes, it should be good enough. And the arguments above which made me feel it was secure. But I have not done a deeper analysis.
+
+This led to the first proposal (which we proved to be not good enough):
+we concatenate a key type with a public key, hash it and take the first 20 bytes of that hash, summarized as `sha256(keyTypePrefix || keybytes)[:20]`.
+
+### Review and Discussions
+
+In [\#5694](https://github.com/cosmos/cosmos-sdk/issues/5694) we discussed various solutions.
+We agreed that 20 bytes is not future-proof, and that extending the address length is the only way to allow addresses of different types, various signature types, etc.
+This disqualifies the initial proposal.
+
+In the issue we discussed various modifications:
+
+* Choice of the hash function.
+* Move the prefix out of the hash function: `keyTypePrefix + sha256(keybytes)[:20]` [post-hash-prefix-proposal].
+* Use double hashing: `sha256(keyTypePrefix + sha256(keybytes)[:20])`.
+* Increase the keybytes hash slice from 20 bytes to 32 or 40 bytes. We concluded that 32 bytes, produced by a good hash function, is secure enough for the future.
+
+### Requirements
+
+* Support currently used tools - we don't want to break an ecosystem, or add a long adaptation period. Ref: https://github.com/cosmos/cosmos-sdk/issues/8041
+* Try to keep the address length small - addresses are widely used in state, both as part of a key and object value.
+
+### Scope
+
+This ADR only defines a process for the generation of address bytes. For end-user interactions with addresses (through the API, or CLI, etc.), we still use bech32 to format these addresses as strings. This ADR doesn't change that.
+Using Bech32 for string encoding gives us support for checksum error codes and handling of user typos.
+
+## Decision
+
+We define the following account types, for which we define the address function:
+
+1. simple accounts: represented by a regular public key (e.g. secp256k1, sr25519)
+2. naive multisig: accounts composed of other addressable objects (e.g. naive multisig)
+3. composed accounts with a native address key (e.g. bls, group module accounts)
+4. module accounts: basically any accounts which cannot sign transactions and which are managed internally by modules
+
+### Legacy Public Key Addresses Don't Change
+
+Currently (Jan 2021), the only officially supported Cosmos SDK user accounts are `secp256k1` basic accounts and legacy amino multisig.
+They are used in existing Cosmos SDK zones. They use the following address formats:
+
+* secp256k1: `ripemd160(sha256(pk_bytes))[:20]`
+* legacy amino multisig: `sha256(aminoCdc.Marshal(pk))[:20]`
+
+We don't want to change existing addresses. So the addresses for these two key types will remain the same.
+
+The current multisig public keys use amino serialization to generate the address. We will retain
+those public keys and their address formatting, and call them "legacy amino" multisig public keys
+in protobuf. We will also create multisig public keys without amino addresses to be described below.
+
+### Hash Function Choice
+
+As in other parts of the Cosmos SDK, we will use `sha256`.
+
+### Basic Address
+
+We start by defining a base algorithm for generating addresses, which we will call `Hash`. Notably, it's used for accounts represented by a single key pair. For each public key schema we must have an associated `typ` string, explained in the next section. `hash` is the cryptographic hash function defined in the previous section.
+
+```go
+const A_LEN = 32
+
+func Hash(typ string, key []byte) []byte {
+ return hash(hash(typ) + key)[:A_LEN]
+}
+```
+
+The `+` denotes byte concatenation, without any separator.
+
+This algorithm is the outcome of a consultation session with a professional cryptographer.
+Motivation: this algorithm keeps the address relatively small (length of the `typ` doesn't impact the length of the final address)
+and it's more secure than [post-hash-prefix-proposal] (which uses the first 20 bytes of a pubkey hash, significantly reducing the address space).
+Moreover, the cryptographer motivated the choice of adding `typ` to the hash input to protect against switch table attacks.
+
+`address.Hash` is a low level function to generate _base_ addresses for new key types. Example:
+
+* BLS: `address.Hash("bls", pubkey)`
+
+### Composed Addresses
+
+For simple composed accounts (like a new naive multisig) we generalize the `address.Hash`. The address is constructed by recursively creating addresses for the sub accounts, sorting the addresses and composing them into a single address. It ensures that the ordering of keys doesn't impact the resulting address.
+
+```go
+// We don't need a PubKey interface - we need anything which is addressable.
+type Addressable interface {
+ Address() []byte
+}
+
+func Composed(typ string, subaccounts []Addressable) []byte {
+ addresses = map(subaccounts, \a -> LengthPrefix(a.Address()))
+ addresses = sort(addresses)
+ return address.Hash(typ, addresses[0] + ... + addresses[n])
+}
+```
+
+The `typ` parameter should be a schema descriptor, containing all significant attributes with deterministic serialization (eg: utf8 string).
+`LengthPrefix` is a function which prepends 1 byte to the address. The value of that byte is the length of the address bits before prepending. The address must be at most 255 bits long.
+We are using `LengthPrefix` to eliminate conflicts - it assures, that for 2 lists of addresses: `as = {a1, a2, ..., an}` and `bs = {b1, b2, ..., bm}` such that every `bi` and `ai` is at most 255 long, `concatenate(map(as, (a) => LengthPrefix(a))) = map(bs, (b) => LengthPrefix(b))` if `as = bs`.
+
+Implementation Tip: account implementations should cache addresses.
+
+#### Multisig Addresses
+
+For new multisig public keys, we define the `typ` parameter not based on any encoding scheme (amino or protobuf). This avoids issues with non-determinism in the encoding scheme.
+
+Example:
+
+```protobuf
+package cosmos.crypto.multisig;
+
+message PubKey {
+ uint32 threshold = 1;
+ repeated google.protobuf.Any pubkeys = 2;
+}
+```
+
+```go
+func (multisig PubKey) Address() []byte {
+    // first gather all nested pub keys
+    var keys []address.Addressable // cryptotypes.PubKey implements Addressable
+    for _, key := range multisig.Pubkeys {
+        keys = append(keys, key.GetCachedValue().(cryptotypes.PubKey))
+    }
+
+ // form the type from the message name (cosmos.crypto.multisig.PubKey) and the threshold joined together
+ prefix := fmt.Sprintf("%s/%d", proto.MessageName(multisig), multisig.Threshold)
+
+ // use the Composed function defined above
+ return address.Composed(prefix, keys)
+}
+```
+
+### Derived Addresses
+
+We must be able to cryptographically derive one address from another one. The derivation process must guarantee hash properties, hence we use the already defined `Hash` function:
+
+```go
+func Derive(address, derivationKey []byte) []byte {
+ return Hash(address, derivationKey)
+}
+```
+
+### Module Account Addresses
+
+A module account will have `"module"` type. Module accounts can have sub accounts. The submodule account will be created based on module name, and sequence of derivation keys. Typically, the first derivation key should be a class of the derived accounts. The derivation process has a defined order: module name, submodule key, subsubmodule key... An example module account is created using:
+
+```go
+address.Module(moduleName, key)
+```
+
+An example sub-module account is created using:
+
+```go
+groupPolicyAddresses := []byte{1}
+address.Module(moduleName, groupPolicyAddresses, policyID)
+```
+
+The `address.Module` function uses `address.Hash` with `"module"` as the type argument, and the byte representation of the module name concatenated with the submodule key. The two components must be uniquely separated to avoid potential clashes (example: moduleName="ab" & submoduleKey="bc" would otherwise collide with moduleName="a" & submoduleKey="bbc").
+We use a null byte (`'\x00'`) to separate the module name from the submodule key. This works because the null byte is not part of any valid module name. Finally, sub-submodule accounts are created by applying the `Derive` function recursively.
+We could use the `Derive` function in the first step as well (rather than concatenating the module name with a null byte and the submodule key). We decided on concatenation to avoid one level of derivation and speed up computation.
+
+For backward compatibility with the existing `authtypes.NewModuleAddress`, we add a special case in `Module` function: when no derivation key is provided, we fallback to the "legacy" implementation.
+
+```go
+func Module(moduleName string, derivationKeys ...[]byte) []byte {
+    if len(derivationKeys) == 0 {
+        return authtypes.NewModuleAddress(moduleName) // legacy case
+    }
+    submoduleAddress := Hash("module", []byte(moduleName) + 0 + derivationKeys[0])
+    return fold((a, k) => Derive(a, k), derivationKeys[1:], submoduleAddress)
+}
+```
+
+**Example 1** A lending BTC pool address would be:
+
+```go
+btcPool := address.Module("lending", btc.Address())
+```
+
+If we want to create an address for a module account depending on more than one key, we can concatenate them:
+
+```go
+btcAtomAMM := address.Module("amm", btc.Address() + atom.Address())
+```
+
+**Example 2** A smart-contract address could be constructed by:
+
+```go
+smartContractAddr := Module("mySmartContractVM", smartContractsNamespace, smartContractKey)
+
+// which is equivalent to:
+smartContractAddr = Derive(
+    Module("mySmartContractVM", smartContractsNamespace),
+    smartContractKey)
+```
+
+### Schema Types
+
+The `typ` parameter used in the `Hash` function SHOULD be unique for each account type.
+Since all Cosmos SDK account types are serialized in the state, we propose to use the protobuf message name string.
+
+Example: all public key types have a unique protobuf message type similar to:
+
+```protobuf
+package cosmos.crypto.sr25519;
+
+message PubKey {
+ bytes key = 1;
+}
+```
+
+All protobuf messages have unique fully qualified names, in this example `cosmos.crypto.sr25519.PubKey`.
+These names are derived directly from .proto files in a standardized way and used
+in other places such as the type URL in `Any`s. We can easily obtain the name using
+`proto.MessageName(msg)`.
+
+## Consequences
+
+### Backwards Compatibility
+
+This ADR is compatible with what was committed and directly supported in the Cosmos SDK repository.
+
+### Positive
+
+* a simple algorithm for generating addresses for new public keys, complex accounts and modules
+* the algorithm generalizes _native composed keys_
+* increased security and collision resistance of addresses
+* the approach is extensible for future use-cases: one can use other address types, as long as they don't conflict with the address lengths specified here (20 or 32 bytes).
+* support for new account types.
+
+### Negative
+
+* addresses do not communicate key type, a prefixed approach would have done this
+* addresses are 60% longer and will consume more storage space
+* requires a refactor of KVStore store keys to handle variable length addresses
+
+### Neutral
+
+* protobuf message names are used as key type prefixes
+
+## Further Discussions
+
+Some accounts can have a fixed name or may be constructed in another way (eg: modules). We were discussing an idea of an account with a predefined name (eg: `me.regen`), which could be used by institutions.
+Without going into details, these kinds of addresses are compatible with the hash based addresses described here as long as they don't have the same length.
+More specifically, any special account address must not have a length equal to 20 or 32 bytes.
+
+## Appendix: Consulting session
+
+At the end of December 2020 we had a session with [Alan Szepieniec](https://scholar.google.be/citations?user=4LyZn8oAAAAJ&hl=en) to consult on the approach presented above.
+
+Alan's general observations:
+
+* we don’t need 2-preimage resistance
+* we need a 32-byte address space for collision resistance
+* when an attacker can control an input for an object with an address then we have a problem with a birthday attack
+* there is an issue with smart-contracts for hashing
+* sha2 mining can be used to break the address pre-image
+
+Hashing algorithm
+
+* any attack breaking blake3 will break blake2
+* Alan is pretty confident about the current security analysis of the blake hash algorithm. It was a finalist, and the author is well known in security analysis.
+
+Algorithm:
+
+* Alan recommends hashing the prefix: `address(pub_key) = hash(hash(key_type) + pub_key)[:32]`, main benefits:
+    * we are free to use arbitrarily long prefix names
+    * we still don’t risk collisions
+    * it protects against switch table attacks
+* discussion about penalization -> about adding prefix post hash
+* Aaron asked about post hash prefixes (`address(pub_key) = key_type + hash(pub_key)`) and differences. Alan noted that this approach has a longer address space and is stronger.
+
+Algorithm for complex / composed keys:
+
+* merging tree-like addresses with the same algorithm is fine
+
+Module addresses: should module addresses have a different size to differentiate them?
+
+* we will need to set a pre-image prefix for module addresses to keep them in 32-byte space: `hash(hash('module') + module_key)`
+* Aaron observation: we already need to deal with variable length (to not break secp256k1 keys).
+
+Discussion about an arithmetic hash function for ZKP
+
+* Poseidon / Rescue
+* Problem: the risk is much bigger because we don’t have many techniques and there is little history of cryptanalysis of arithmetic constructions. It’s still new ground and an area of active research.
+
+Post quantum signature size
+
+* Alan's suggestion: Falcon - very good speed / size ratio.
+* Aaron - should we think about it?
+  Alan: based on early extrapolation, quantum computers should be able to break EC cryptography by 2050. But there’s a lot of uncertainty. There is also magic happening with recursions / linking / simulation that can speed up progress.
+
+Other ideas
+
+* Let’s say we use the same key and two different address algorithms for 2 different use cases. Is it still safe to use it? Alan: if we want to hide the public key (which is not our use case), then it’s less secure but there are fixes.
+
+### References
+
+* [Notes](https://hackmd.io/_NGWI4xZSbKzj1BkCqyZMw)
diff --git a/copy-of-sdk-docs/build/architecture/adr-029-fee-grant-module.md b/copy-of-sdk-docs/build/architecture/adr-029-fee-grant-module.md
new file mode 100644
index 00000000..597ea5f7
--- /dev/null
+++ b/copy-of-sdk-docs/build/architecture/adr-029-fee-grant-module.md
@@ -0,0 +1,153 @@
+# ADR 029: Fee Grant Module
+
+## Changelog
+
+* 2020/08/18: Initial Draft
+* 2021/05/05: Removed height based expiration support and simplified naming.
+
+## Status
+
+Accepted
+
+## Context
+
+In order to make blockchain transactions, the signing account must possess a sufficient balance of the right denomination
+in order to pay fees. There are classes of transactions where needing to maintain a wallet with sufficient fees is a
+barrier to adoption.
+
+For instance, when proper permissions are set up, someone may temporarily delegate the ability to vote on proposals to
+a "burner" account that is stored on a mobile phone with only minimal security.
+
+Other use cases include workers tracking items in a supply chain or farmers submitting field data for analytics
+or compliance purposes.
+
+For all of these use cases, UX would be significantly enhanced by obviating the need for these accounts to always
+maintain the appropriate fee balance. This is especially true if we want to achieve enterprise adoption for something
+like supply chain tracking.
+
+While one solution would be to have a service that fills up these accounts automatically with the appropriate fees, a better UX
+would be provided by allowing these accounts to pull from a common fee pool account with proper spending limits.
+A single pool would reduce the churn of making lots of small "fill up" transactions and also more effectively leverage
+the resources of the organization setting up the pool.
+
+## Decision
+
+As a solution we propose a module, `x/feegrant` which allows one account, the "granter" to grant another account, the "grantee"
+an allowance to spend the granter's account balance for fees within certain well-defined limits.
+
+Fee allowances are defined by the extensible `FeeAllowanceI` interface:
+
+```go
+type FeeAllowanceI interface {
+    // Accept can use the fee payment requested as well as the timestamp of the current block
+    // to determine whether or not to process this. This is checked in
+    // Keeper.UseGrantedFees and the return values should match how it is handled there.
+    //
+    // If it returns an error, the fee payment is rejected, otherwise it is accepted.
+    // The FeeAllowance implementation is expected to update its internal state
+    // and will be saved again after an acceptance.
+    //
+    // If remove is true (regardless of the error), the FeeAllowance will be deleted from storage
+    // (eg. when it is used up). (See call to RevokeFeeAllowance in Keeper.UseGrantedFees)
+    Accept(ctx sdk.Context, fee sdk.Coins, msgs []sdk.Msg) (remove bool, err error)
+
+ // ValidateBasic should evaluate this FeeAllowance for internal consistency.
+ // Don't allow negative amounts, or negative periods for example.
+ ValidateBasic() error
+}
+```
+
+Two basic fee allowance types, `BasicAllowance` and `PeriodicAllowance` are defined to support known use cases:
+
+```protobuf
+// BasicAllowance implements FeeAllowanceI with a one-time grant of tokens
+// that optionally expires. The delegatee can use up to SpendLimit to cover fees.
+message BasicAllowance {
+ // spend_limit specifies the maximum amount of tokens that can be spent
+ // by this allowance and will be updated as tokens are spent. If it is
+ // empty, there is no spend limit and any amount of coins can be spent.
+ repeated cosmos_sdk.v1.Coin spend_limit = 1;
+
+ // expiration specifies an optional time when this allowance expires
+ google.protobuf.Timestamp expiration = 2;
+}
+
+// PeriodicAllowance extends FeeAllowanceI to allow for both a maximum cap,
+// as well as a limit per time period.
+message PeriodicAllowance {
+ BasicAllowance basic = 1;
+
+ // period specifies the time duration in which period_spend_limit coins can
+ // be spent before that allowance is reset
+ google.protobuf.Duration period = 2;
+
+ // period_spend_limit specifies the maximum number of coins that can be spent
+ // in the period
+ repeated cosmos_sdk.v1.Coin period_spend_limit = 3;
+
+ // period_can_spend is the number of coins left to be spent before the period_reset time
+ repeated cosmos_sdk.v1.Coin period_can_spend = 4;
+
+ // period_reset is the time at which this period resets and a new one begins,
+ // it is calculated from the start time of the first transaction after the
+ // last period ended
+ google.protobuf.Timestamp period_reset = 5;
+}
+```
+
+Allowances can be granted and revoked using `MsgGrantAllowance` and `MsgRevokeAllowance`:
+
+```protobuf
+// MsgGrantAllowance adds permission for Grantee to spend up to Allowance
+// of fees from the account of Granter.
+message MsgGrantAllowance {
+ string granter = 1;
+ string grantee = 2;
+ google.protobuf.Any allowance = 3;
+ }
+
+ // MsgRevokeAllowance removes any existing FeeAllowance from Granter to Grantee.
+ message MsgRevokeAllowance {
+ string granter = 1;
+ string grantee = 2;
+ }
+```
+
+In order to use allowances in transactions, we add a new field `granter` to the transaction `Fee` type:
+
+```protobuf
+package cosmos.tx.v1beta1;
+
+message Fee {
+ repeated cosmos.base.v1beta1.Coin amount = 1;
+ uint64 gas_limit = 2;
+ string payer = 3;
+ string granter = 4;
+}
+```
+
+`granter` must either be left empty or must correspond to an account which has granted
+a fee allowance to the fee payer (either the first signer or the value of the `payer` field).
+
+A new `AnteDecorator` named `DeductGrantedFeeDecorator` will be created in order to process transactions with `fee_payer`
+set and correctly deduct fees based on fee allowances.
+
+## Consequences
+
+### Positive
+
+* improved UX for use cases where it is cumbersome to maintain an account balance just for fees
+
+### Negative
+
+### Neutral
+
+* a new field must be added to the transaction `Fee` message and a new `AnteDecorator` must be
+created to use it
+
+## References
+
+* Blog article describing initial work: https://medium.com/regen-network/hacking-the-cosmos-cosmwasm-and-key-management-a08b9f561d1b
+* Initial public specification: https://gist.github.com/aaronc/b60628017352df5983791cad30babe56
+* Original subkeys proposal from B-harvest which influenced this design: https://github.com/cosmos/cosmos-sdk/issues/4480
diff --git a/copy-of-sdk-docs/build/architecture/adr-030-authz-module.md b/copy-of-sdk-docs/build/architecture/adr-030-authz-module.md
new file mode 100644
index 00000000..e8b64f18
--- /dev/null
+++ b/copy-of-sdk-docs/build/architecture/adr-030-authz-module.md
@@ -0,0 +1,258 @@
+# ADR 030: Authorization Module
+
+## Changelog
+
+* 2019-11-06: Initial Draft
+* 2020-10-12: Updated Draft
+* 2020-11-13: Accepted
+* 2021-05-06: proto API updates, use `sdk.Msg` instead of `sdk.ServiceMsg` (the latter concept was removed from Cosmos SDK)
+* 2022-04-20: Updated the `SendAuthorization` proto docs to clarify the `SpendLimit` is a required field. (Generic authorization can be used with bank msg type url to create limit less bank authorization)
+
+## Status
+
+Accepted
+
+## Abstract
+
+This ADR defines the `x/authz` module which allows accounts to grant authorizations to perform actions
+on behalf of that account to other accounts.
+
+## Context
+
+The concrete use cases which motivated this module include:
+
+* the desire to delegate the ability to vote on proposals to other accounts besides the account which one has
+delegated stake
+* "sub-keys" functionality, as originally proposed in [\#4480](https://github.com/cosmos/cosmos-sdk/issues/4480) which
+is a term used to describe the functionality provided by this module together with
+the `fee_grant` module from [ADR 029](./adr-029-fee-grant-module.md) and the [group module](https://github.com/cosmos/cosmos-sdk/tree/main/x/group).
+
+The "sub-keys" functionality roughly refers to the ability for one account to grant some subset of its capabilities to
+other accounts with possibly less robust, but easier to use security measures. For instance, a master account representing
+an organization could grant the ability to spend small amounts of the organization's funds to individual employee accounts.
+Or an individual (or group) with a multisig wallet could grant the ability to vote on proposals to any one of the member
+keys.
+
+The current implementation is based on work done by the [Gaian's team at Hackatom Berlin 2019](https://github.com/cosmos-gaians/cosmos-sdk/tree/hackatom/x/delegation).
+
+## Decision
+
+We will create a module named `authz` which provides functionality for
+granting arbitrary privileges from one account (the _granter_) to another account (the _grantee_). Authorizations
+must be granted for particular `Msg` service methods, one by one, using an implementation
+of the `Authorization` interface.
+
+### Types
+
+Authorizations determine exactly what privileges are granted. They are extensible
+and can be defined for any `Msg` service method even outside of the module where
+the `Msg` method is defined. `Authorization`s reference `Msg`s using their TypeURL.
+
+#### Authorization
+
+```go
+type Authorization interface {
+ proto.Message
+
+ // MsgTypeURL returns the fully-qualified Msg TypeURL (as described in ADR 020),
+ // which will process and accept or reject a request.
+ MsgTypeURL() string
+
+ // Accept determines whether this grant permits the provided sdk.Msg to be performed, and if
+ // so provides an upgraded authorization instance.
+ Accept(ctx sdk.Context, msg sdk.Msg) (AcceptResponse, error)
+
+ // ValidateBasic does a simple validation check that
+ // doesn't require access to any other information.
+ ValidateBasic() error
+}
+
+// AcceptResponse instruments the controller of an authz message if the request is accepted
+// and if it should be updated or deleted.
+type AcceptResponse struct {
+    // If Accept=true, the controller can accept the authorization and handle the update.
+ Accept bool
+ // If Delete=true, the controller must delete the authorization object and release
+ // storage resources.
+ Delete bool
+    // The controller calling Authorization.Accept must check if `Updated != nil`. If so,
+    // it must use the updated version and handle the update on the storage level.
+ Updated Authorization
+}
+```
+
+For example, a `SendAuthorization` like the following is defined for `MsgSend`; it takes
+a `SpendLimit` and updates it down to zero:
+
+```go
+type SendAuthorization struct {
+ // SpendLimit specifies the maximum amount of tokens that can be spent
+ // by this authorization and will be updated as tokens are spent. This field is required. (Generic authorization
+ // can be used with bank msg type url to create limit less bank authorization).
+ SpendLimit sdk.Coins
+}
+
+func (a SendAuthorization) MsgTypeURL() string {
+ return sdk.MsgTypeURL(&MsgSend{})
+}
+
+func (a SendAuthorization) Accept(ctx sdk.Context, msg sdk.Msg) (authz.AcceptResponse, error) {
+ mSend, ok := msg.(*MsgSend)
+ if !ok {
+ return authz.AcceptResponse{}, sdkerrors.ErrInvalidType.Wrap("type mismatch")
+ }
+ limitLeft, isNegative := a.SpendLimit.SafeSub(mSend.Amount)
+ if isNegative {
+ return authz.AcceptResponse{}, sdkerrors.ErrInsufficientFunds.Wrapf("requested amount is more than spend limit")
+ }
+ if limitLeft.IsZero() {
+ return authz.AcceptResponse{Accept: true, Delete: true}, nil
+ }
+
+ return authz.AcceptResponse{Accept: true, Delete: false, Updated: &SendAuthorization{SpendLimit: limitLeft}}, nil
+}
+```
+
+A different type of capability for `MsgSend` could be implemented
+using the `Authorization` interface with no need to change the underlying
+`bank` module.
+
+##### Small notes on `AcceptResponse`
+
+* The `AcceptResponse.Accept` field will be set to `true` if the authorization is accepted.
+However, if it is rejected, the function `Accept` will raise an error (without setting `AcceptResponse.Accept` to `false`).
+
+* The `AcceptResponse.Updated` field will be set to a non-nil value only if there is a real change to the authorization.
+If authorization remains the same (as is, for instance, always the case for a [`GenericAuthorization`](#genericauthorization)),
+the field will be `nil`.
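For contrast with `SendAuthorization`, the `GenericAuthorization` behavior described above can be modeled in a minimal, self-contained sketch (the SDK types are simplified away; `acceptResponse` and the lowercase names are local stand-ins, not the actual `x/authz` API):

```go
package main

import "fmt"

// acceptResponse is a local stand-in for authz.AcceptResponse.
type acceptResponse struct {
	Accept  bool
	Delete  bool
	Updated any // nil means the authorization is unchanged
}

// genericAuthorization grants unrestricted permission to execute a single
// Msg type, identified by its type URL.
type genericAuthorization struct {
	msgTypeURL string
}

func (g genericAuthorization) MsgTypeURL() string { return g.msgTypeURL }

// accept always approves a matching msg and never updates or deletes the
// grant, so Updated stays nil, as described in the notes above.
func (g genericAuthorization) accept(msgTypeURL string) (acceptResponse, error) {
	if msgTypeURL != g.msgTypeURL {
		return acceptResponse{}, fmt.Errorf("type mismatch: %s", msgTypeURL)
	}
	return acceptResponse{Accept: true}, nil
}

func main() {
	g := genericAuthorization{msgTypeURL: "/cosmos.gov.v1beta1.MsgVote"}
	resp, err := g.accept("/cosmos.gov.v1beta1.MsgVote")
	fmt.Println(resp.Accept, resp.Updated == nil, err) // true true <nil>
}
```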
+
+### `Msg` Service
+
+```protobuf
+service Msg {
+ // Grant grants the provided authorization to the grantee on the granter's
+ // account with the provided expiration time.
+ rpc Grant(MsgGrant) returns (MsgGrantResponse);
+
+ // Exec attempts to execute the provided messages using
+ // authorizations granted to the grantee. Each message should have only
+ // one signer corresponding to the granter of the authorization.
+ rpc Exec(MsgExec) returns (MsgExecResponse);
+
+ // Revoke revokes any authorization corresponding to the provided method name on the
+ // granter's account that has been granted to the grantee.
+ rpc Revoke(MsgRevoke) returns (MsgRevokeResponse);
+}
+
+// Grant gives permissions to execute
+// the provided method with expiration time.
+message Grant {
+ google.protobuf.Any authorization = 1 [(cosmos_proto.accepts_interface) = "cosmos.authz.v1beta1.Authorization"];
+ google.protobuf.Timestamp expiration = 2 [(gogoproto.stdtime) = true, (gogoproto.nullable) = false];
+}
+
+message MsgGrant {
+ string granter = 1;
+ string grantee = 2;
+
+ Grant grant = 3 [(gogoproto.nullable) = false];
+}
+
+message MsgExecResponse {
+ cosmos.base.abci.v1beta1.Result result = 1;
+}
+
+message MsgExec {
+ string grantee = 1;
+ // Authorization Msg requests to execute. Each msg must implement Authorization interface
+ repeated google.protobuf.Any msgs = 2 [(cosmos_proto.accepts_interface) = "cosmos.base.v1beta1.Msg"];
+}
+```
+
+### Router Middleware
+
+The `authz` `Keeper` will expose a `DispatchActions` method which allows other modules to send `Msg`s
+to the router based on `Authorization` grants:
+
+```go
+type Keeper interface {
+ // DispatchActions routes the provided msgs to their respective handlers if the grantee was granted an authorization
+ // to send those messages by the first (and only) signer of each msg.
+    DispatchActions(ctx sdk.Context, grantee sdk.AccAddress, msgs []sdk.Msg) sdk.Result
+}
+```
+
+### CLI
+
+#### `tx exec` Method
+
+When a CLI user wants to run a transaction on behalf of another account using `MsgExec`, they
+can use the `exec` method. For instance `gaiacli tx gov vote 1 yes --from <grantee> --generate-only | gaiacli tx authz exec --send-as <granter> --from <grantee>`
+would send a transaction like this:
+
+```go
+MsgExec {
+    Grantee: mykey,
+    Msgs: []sdk.Msg{
+        MsgVote{
+            ProposalID: 1,
+            Voter: cosmos3thsdgh983egh823,
+            Option: Yes,
+        },
+    },
+}
+```
+
+#### `tx grant <grantee> <authorization> --from <granter>`
+
+This CLI command will send a `MsgGrant` transaction. `authorization` should be encoded as
+JSON on the CLI.
+
+#### `tx revoke <grantee> <method-name> --from <granter>`
+
+This CLI command will send a `MsgRevoke` transaction.
+
+### Built-in Authorizations
+
+#### `SendAuthorization`
+
+```protobuf
+// SendAuthorization allows the grantee to spend up to spend_limit coins from
+// the granter's account.
+message SendAuthorization {
+ repeated cosmos.base.v1beta1.Coin spend_limit = 1;
+}
+```
+
+#### `GenericAuthorization`
+
+```protobuf
+// GenericAuthorization gives the grantee unrestricted permissions to execute
+// the provided method on behalf of the granter's account.
+message GenericAuthorization {
+ option (cosmos_proto.implements_interface) = "Authorization";
+
+    // Msg, identified by its type URL, to grant unrestricted permissions to execute
+ string msg = 1;
+}
+```
+
+## Consequences
+
+### Positive
+
+* Users will be able to authorize arbitrary actions on behalf of their accounts to other
+users, improving key management for many use cases
+* The solution is more generic than previously considered approaches and the
+`Authorization` interface approach can be extended to cover other use cases by
+SDK users
+
+### Negative
+
+### Neutral
+
+## References
+
+* Initial Hackatom implementation: https://github.com/cosmos-gaians/cosmos-sdk/tree/hackatom/x/delegation
+* Post-Hackatom spec: https://gist.github.com/aaronc/b60628017352df5983791cad30babe56#delegation-module
+* B-Harvest subkeys spec: https://github.com/cosmos/cosmos-sdk/issues/4480
diff --git a/copy-of-sdk-docs/build/architecture/adr-031-msg-service.md b/copy-of-sdk-docs/build/architecture/adr-031-msg-service.md
new file mode 100644
index 00000000..65d3bc5c
--- /dev/null
+++ b/copy-of-sdk-docs/build/architecture/adr-031-msg-service.md
@@ -0,0 +1,202 @@
+# ADR 031: Protobuf Msg Services
+
+## Changelog
+
+* 2020-10-05: Initial Draft
+* 2021-04-21: Remove `ServiceMsg`s to follow Protobuf `Any`'s spec, see [#9063](https://github.com/cosmos/cosmos-sdk/issues/9063).
+
+## Status
+
+Accepted
+
+## Abstract
+
+We want to leverage protobuf `service` definitions for defining `Msg`s, which will give us significant developer UX
+improvements in terms of the code that is generated and the fact that return types will now be well defined.
+
+## Context
+
+Currently `Msg` handlers in the Cosmos SDK have return values that are placed in the `data` field of the response.
+These return values, however, are not specified anywhere except in the golang handler code.
+
+In early conversations [it was proposed](https://docs.google.com/document/d/1eEgYgvgZqLE45vETjhwIw4VOqK-5hwQtZtjVbiXnIGc/edit)
+that `Msg` return types be captured using a protobuf extension field, ex:
+
+```protobuf
+package cosmos.gov;
+
+message MsgSubmitProposal {
+    option (cosmos_proto.msg_return) = "uint64";
+    string delegator_address = 1;
+    string validator_address = 2;
+    repeated sdk.Coin amount = 3;
+}
+```
+
+This was never adopted, however.
+
+Having a well-specified return value for `Msg`s would improve client UX. For instance,
+in `x/gov`, `MsgSubmitProposal` returns the proposal ID as a big-endian `uint64`.
+This isn’t really documented anywhere and clients would need to know the internals
+of the Cosmos SDK to parse that value and return it to users.
+
+Also, there may be cases where we want to use these return values programmatically.
+For instance, https://github.com/cosmos/cosmos-sdk/issues/7093 proposes a method for
+doing inter-module Ocaps using the `Msg` router. A well-defined return type would
+improve the developer UX for this approach.
+
+In addition, handler registration of `Msg` types tends to add a bit of
+boilerplate on top of keepers and is usually done through manual type switches.
+This isn't necessarily bad, but it does add overhead to creating modules.
+
+## Decision
+
+We decide to use protobuf `service` definitions for defining `Msg`s as well as
+the code generated by them as a replacement for `Msg` handlers.
+
+Below we define how this will look for the `SubmitProposal` message from `x/gov` module.
+We start with a `Msg` `service` definition:
+
+```protobuf
+package cosmos.gov;
+
+service Msg {
+ rpc SubmitProposal(MsgSubmitProposal) returns (MsgSubmitProposalResponse);
+}
+
+// Note that for backwards compatibility this uses MsgSubmitProposal as the request
+// type instead of the more canonical MsgSubmitProposalRequest
+message MsgSubmitProposal {
+ google.protobuf.Any content = 1;
+ string proposer = 2;
+}
+
+message MsgSubmitProposalResponse {
+    uint64 proposal_id = 1;
+}
+```
+
+While this is most commonly used for gRPC, overloading protobuf `service` definitions like this does not violate
+the intent of the [protobuf spec](https://developers.google.com/protocol-buffers/docs/proto3#services) which says:
+> If you don’t want to use gRPC, it’s also possible to use protocol buffers with your own RPC implementation.
+
+With this approach, we would get an auto-generated `MsgServer` interface:
+
+In addition to clearly specifying return types, this has the benefit of generating client and server code. On the server
+side, this is almost like an automatically generated keeper method and could maybe be used instead of keepers eventually
+(see [\#7093](https://github.com/cosmos/cosmos-sdk/issues/7093)):
+
+```go
+package gov
+
+type MsgServer interface {
+ SubmitProposal(context.Context, *MsgSubmitProposal) (*MsgSubmitProposalResponse, error)
+}
+```
+
+On the client side, developers could take advantage of this by creating RPC implementations that encapsulate transaction
+logic. Protobuf libraries that use asynchronous callbacks, like [protobuf.js](https://github.com/protobufjs/protobuf.js#using-services)
+could use this to register callbacks for specific messages even for transactions that include multiple `Msg`s.
+
+Each `Msg` service method should have exactly one request parameter: its corresponding `Msg` type. For example, the `Msg` service method `/cosmos.gov.v1beta1.Msg/SubmitProposal` above has exactly one request parameter, namely the `Msg` type `/cosmos.gov.v1beta1.MsgSubmitProposal`. It is important the reader understands clearly the nomenclature difference between a `Msg` service (a Protobuf service) and a `Msg` type (a Protobuf message), and the differences in their fully-qualified name.
+
+This convention has been decided over the more canonical `Msg...Request` names mainly for backwards compatibility, but also for better readability in `TxBody.messages` (see [Encoding section](#encoding) below): transactions containing `/cosmos.gov.MsgSubmitProposal` read better than those containing `/cosmos.gov.v1beta1.MsgSubmitProposalRequest`.
+
+One consequence of this convention is that each `Msg` type can be the request parameter of only one `Msg` service method. However, we consider this limitation a good practice in explicitness.
+
+### Encoding
+
+Encoding of transactions generated with `Msg` services does not differ from the current Protobuf transaction encoding as defined in [ADR-020](./adr-020-protobuf-transaction-encoding.md). We encode `Msg` types (which are exactly `Msg` service methods' request parameters) as `Any` in `Tx`s, which involves packing the
+binary-encoded `Msg` with its type URL.
+
+### Decoding
+
+Since `Msg` types are packed into `Any`, decoding transaction messages is done by unpacking `Any`s into `Msg` types. For more information, please refer to [ADR-020](./adr-020-protobuf-transaction-encoding.md#transactions).
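+
+The pack/unpack flow above can be sketched with a stdlib-only Go analogue, where JSON stands in for protobuf binary encoding and a plain constructor registry stands in for the proto type registry; the `pack`/`unpack` names are illustrative, not SDK APIs:
+
+```go
+package main
+
+import (
+	"encoding/json"
+	"fmt"
+)
+
+// Any mirrors the shape of google.protobuf.Any: a type URL plus the
+// encoded message bytes (JSON stands in for protobuf encoding here).
+type Any struct {
+	TypeURL string
+	Value   []byte
+}
+
+type MsgSubmitProposal struct {
+	Proposer string
+}
+
+// pack encodes a message and records its type URL so it can be routed later.
+func pack(typeURL string, msg interface{}) (Any, error) {
+	bz, err := json.Marshal(msg)
+	if err != nil {
+		return Any{}, err
+	}
+	return Any{TypeURL: typeURL, Value: bz}, nil
+}
+
+// unpack rebuilds the concrete message from its Any wrapper using a
+// registry of constructors keyed by type URL.
+func unpack(registry map[string]func() interface{}, wrapped Any) (interface{}, error) {
+	ctor, ok := registry[wrapped.TypeURL]
+	if !ok {
+		return nil, fmt.Errorf("unknown type URL %q", wrapped.TypeURL)
+	}
+	msg := ctor()
+	if err := json.Unmarshal(wrapped.Value, msg); err != nil {
+		return nil, err
+	}
+	return msg, nil
+}
+
+func main() {
+	registry := map[string]func() interface{}{
+		"/cosmos.gov.v1beta1.MsgSubmitProposal": func() interface{} { return &MsgSubmitProposal{} },
+	}
+	wrapped, _ := pack("/cosmos.gov.v1beta1.MsgSubmitProposal", MsgSubmitProposal{Proposer: "alice"})
+	msg, err := unpack(registry, wrapped)
+	fmt.Println(err, msg.(*MsgSubmitProposal).Proposer)
+}
+```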
+
+### Routing
+
+We propose to add a `msg_service_router` in BaseApp. This router is a key/value map which maps `Msg` types' `type_url`s to their corresponding `Msg` service method handlers. Since there is a 1-to-1 mapping between `Msg` types and `Msg` service methods, the `msg_service_router` has exactly one entry per `Msg` service method.
+
+When a transaction is processed by BaseApp (in CheckTx or in DeliverTx), its `TxBody.messages` are decoded as `Msg`s. Each `Msg`'s `type_url` is matched against an entry in the `msg_service_router`, and the respective `Msg` service method handler is called.
+
+For backward compatibility, the old handlers are not removed yet. If BaseApp receives a legacy `Msg` with no corresponding entry in the `msg_service_router`, it will be routed via its legacy `Route()` method into the legacy handler.
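+
+The lookup order described above (exact `type_url` match first, then the legacy `Route()` fallback) can be sketched as follows; the `Msg` struct, `Handler` signature, and router layout here are illustrative stand-ins, not the actual BaseApp types:
+
+```go
+package main
+
+import "fmt"
+
+type Msg struct {
+	TypeURL string // matched against msg_service_router entries
+	Route   string // legacy Route() value, used only as a fallback
+}
+
+type Handler func(Msg) (string, error)
+
+type MsgServiceRouter struct {
+	routes       map[string]Handler // one entry per Msg service method
+	legacyRoutes map[string]Handler // not-yet-migrated modules
+}
+
+// Route dispatches a Msg: type_url first, legacy route second.
+func (r *MsgServiceRouter) Route(msg Msg) (string, error) {
+	if h, ok := r.routes[msg.TypeURL]; ok {
+		return h(msg)
+	}
+	if h, ok := r.legacyRoutes[msg.Route]; ok {
+		return h(msg)
+	}
+	return "", fmt.Errorf("no route for %q", msg.TypeURL)
+}
+
+func main() {
+	r := &MsgServiceRouter{
+		routes: map[string]Handler{
+			"/cosmos.gov.v1beta1.MsgSubmitProposal": func(Msg) (string, error) { return "gov MsgServer", nil },
+		},
+		legacyRoutes: map[string]Handler{
+			"legacymodule": func(Msg) (string, error) { return "legacy handler", nil },
+		},
+	}
+	out, _ := r.Route(Msg{TypeURL: "/cosmos.gov.v1beta1.MsgSubmitProposal"})
+	fmt.Println(out)
+	out, _ = r.Route(Msg{TypeURL: "/legacy.MsgFoo", Route: "legacymodule"})
+	fmt.Println(out)
+}
+```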
+
+### Module Configuration
+
+In [ADR 021](./adr-021-protobuf-query-encoding.md), we introduced a method `RegisterQueryService`
+to `AppModule` which allows for modules to register gRPC queriers.
+
+To register `Msg` services, we attempt a more extensible approach by converting `RegisterQueryService`
+to a more generic `RegisterServices` method:
+
+```go
+type AppModule interface {
+ RegisterServices(Configurator)
+ ...
+}
+
+type Configurator interface {
+ QueryServer() grpc.Server
+ MsgServer() grpc.Server
+}
+
+// example module:
+func (am AppModule) RegisterServices(cfg Configurator) {
+ types.RegisterQueryServer(cfg.QueryServer(), keeper)
+ types.RegisterMsgServer(cfg.MsgServer(), keeper)
+}
+```
+
+The `RegisterServices` method and the `Configurator` interface are intended to
+evolve to satisfy the use cases discussed in [\#7093](https://github.com/cosmos/cosmos-sdk/issues/7093)
+and [\#7421](https://github.com/cosmos/cosmos-sdk/issues/7421).
+
+When `Msg` services are registered, the framework _should_ verify that all `Msg` types
+implement the `sdk.Msg` interface and throw an error during initialization rather
+than later when transactions are processed.
+
+### `Msg` Service Implementation
+
+Just like query services, `Msg` service methods can retrieve the `sdk.Context`
+from the `context.Context` parameter using the `sdk.UnwrapSDKContext`
+method:
+
+```go
+package gov
+
+func (k Keeper) SubmitProposal(goCtx context.Context, params *types.MsgSubmitProposal) (*MsgSubmitProposalResponse, error) {
+ ctx := sdk.UnwrapSDKContext(goCtx)
+ ...
+}
+```
+
+The `sdk.Context` should have an `EventManager` already attached by BaseApp's `msg_service_router`.
+
+Separate handler definition is no longer needed with this approach.
+
+## Consequences
+
+This design changes how module functionality is exposed and accessed. It deprecates the existing `Handler` interface and `AppModule.Route` in favor of [Protocol Buffer Services](https://developers.google.com/protocol-buffers/docs/proto3#services) and the Service Routing described above. This dramatically simplifies the code. We no longer need to create handlers and keepers. Use of Protocol Buffer auto-generated clients clearly separates the communication interfaces between a module and its users. The control logic (i.e. handlers and keepers) is no longer exposed. A module interface can be seen as a black box accessible through a client API. It's worth noting that the client interfaces are also generated by Protocol Buffers.
+
+This also allows us to change how we perform functional tests. Instead of mocking AppModules and the Router, we will mock a client (the server will stay hidden). More specifically: we will never mock `moduleA.MsgServer` in `moduleB`, but rather `moduleA.MsgClient`. One can think of it as working with external services (e.g. DBs or online servers). We assume that the transmission between clients and servers is correctly handled by generated Protocol Buffers.
+
+Finally, closing a module to its client API opens desirable OCAP patterns discussed in ADR-033. Since the server implementation and interface are hidden, nobody can hold "keepers"/servers and everyone will be forced to rely on the client interface, which will drive developers toward correct encapsulation and software engineering patterns.
+
+### Pros
+
+* communicates return type clearly
+* manual handler registration and return type marshaling is no longer needed, just implement the interface and register it
+* communication interface is automatically generated, the developer can now focus only on the state transition methods - this would improve the UX of [\#7093](https://github.com/cosmos/cosmos-sdk/issues/7093) approach (1) if we chose to adopt that
+* generated client code could be useful for clients and tests
+* dramatically reduces and simplifies the code
+
+### Cons
+
+* using `service` definitions outside the context of gRPC could be confusing (but doesn’t violate the proto3 spec)
+
+## References
+
+* [Initial Github Issue \#7122](https://github.com/cosmos/cosmos-sdk/issues/7122)
+* [proto 3 Language Guide: Defining Services](https://developers.google.com/protocol-buffers/docs/proto3#services)
+* [Initial pre-`Any` `Msg` designs](https://docs.google.com/document/d/1eEgYgvgZqLE45vETjhwIw4VOqK-5hwQtZtjVbiXnIGc)
+* [ADR 020](./adr-020-protobuf-transaction-encoding.md)
+* [ADR 021](./adr-021-protobuf-query-encoding.md)
diff --git a/copy-of-sdk-docs/build/architecture/adr-032-typed-events.md b/copy-of-sdk-docs/build/architecture/adr-032-typed-events.md
new file mode 100644
index 00000000..0a5122da
--- /dev/null
+++ b/copy-of-sdk-docs/build/architecture/adr-032-typed-events.md
@@ -0,0 +1,319 @@
+# ADR 032: Typed Events
+
+## Changelog
+
+* 28-Sept-2020: Initial Draft
+
+## Authors
+
+* Anil Kumar (@anilcse)
+* Jack Zampolin (@jackzampolin)
+* Adam Bozanich (@boz)
+
+## Status
+
+Proposed
+
+## Abstract
+
+Currently in the Cosmos SDK, events are defined in the handlers for each message as well as in `BeginBlock` and `EndBlock`. Modules do not have types defined for each event; they are implemented as `map[string]string`. Above all else, this makes these events difficult to consume, as it requires a great deal of raw string matching and parsing. This proposal focuses on updating the events to use **typed events** defined in each module such that emitting and subscribing to events will be much easier. This workflow comes from the experience of the Akash Network team.
+
+## Context
+
+Currently in the Cosmos SDK, events are defined in the handlers for each message, meaning each module doesn't have a canonical set of types for each event. Above all else this makes these events difficult to consume as it requires a great deal of raw string matching and parsing. This proposal focuses on updating the events to use **typed events** defined in each module such that emitting and subscribing to events will be much easier. This workflow comes from the experience of the Akash Network team.
+
+[Our platform](http://github.com/ovrclk/akash) requires a number of programmatic on chain interactions both on the provider (datacenter - to bid on new orders and listen for leases created) and user (application developer - to send the app manifest to the provider) side. In addition the Akash team is now maintaining the IBC [`relayer`](https://github.com/ovrclk/relayer), another very event driven process. In working on these core pieces of infrastructure, and integrating lessons learned from Kubernetes development, our team has developed a standard method for defining and consuming typed events in Cosmos SDK modules. We have found that it is extremely useful in building this type of event driven application.
+
+As the Cosmos SDK gets used more extensively for apps like `peggy`, other peg zones, IBC, DeFi, etc... there will be an exploding demand for event driven applications to support new features desired by users. We propose upstreaming our findings into the Cosmos SDK to enable all Cosmos SDK applications to quickly and easily build event driven apps to aid their core application. Wallets, exchanges, explorers, and defi protocols all stand to benefit from this work.
+
+If this proposal is accepted, users will be able to build event driven Cosmos SDK apps in Go by just writing `EventHandler`s for their specific event types and passing them to `EventEmitter`s that are defined in the Cosmos SDK.
+
+The end of this proposal contains a detailed example of how to consume events after this refactor.
+
+This proposal is specifically about how to consume these events as a client of the blockchain, not for intermodule communication.
+
+## Decision
+
+**Step-1**: Implement additional functionality in the `types` package: `EmitTypedEvent` and `ParseTypedEvent` functions
+
+```go
+// types/events.go
+
+// EmitTypedEvent takes typed event and emits converting it into sdk.Event
+func (em *EventManager) EmitTypedEvent(event proto.Message) error {
+ evtType := proto.MessageName(event)
+ evtJSON, err := codec.ProtoMarshalJSON(event)
+ if err != nil {
+ return err
+ }
+
+ var attrMap map[string]json.RawMessage
+ err = json.Unmarshal(evtJSON, &attrMap)
+ if err != nil {
+ return err
+ }
+
+ var attrs []abci.EventAttribute
+ for k, v := range attrMap {
+ attrs = append(attrs, abci.EventAttribute{
+ Key: []byte(k),
+ Value: v,
+ })
+ }
+
+ em.EmitEvent(Event{
+ Type: evtType,
+ Attributes: attrs,
+ })
+
+ return nil
+}
+
+// ParseTypedEvent converts abci.Event back to typed event
+func ParseTypedEvent(event abci.Event) (proto.Message, error) {
+ concreteGoType := proto.MessageType(event.Type)
+ if concreteGoType == nil {
+ return nil, fmt.Errorf("failed to retrieve the message of type %q", event.Type)
+ }
+
+ var value reflect.Value
+ if concreteGoType.Kind() == reflect.Ptr {
+ value = reflect.New(concreteGoType.Elem())
+ } else {
+ value = reflect.Zero(concreteGoType)
+ }
+
+ protoMsg, ok := value.Interface().(proto.Message)
+ if !ok {
+ return nil, fmt.Errorf("%q does not implement proto.Message", event.Type)
+ }
+
+ attrMap := make(map[string]json.RawMessage)
+ for _, attr := range event.Attributes {
+ attrMap[string(attr.Key)] = attr.Value
+ }
+
+ attrBytes, err := json.Marshal(attrMap)
+ if err != nil {
+ return nil, err
+ }
+
+ err = jsonpb.Unmarshal(strings.NewReader(string(attrBytes)), protoMsg)
+ if err != nil {
+ return nil, err
+ }
+
+ return protoMsg, nil
+}
+```
+
+Here, `EmitTypedEvent` is a method on `EventManager` which takes a typed event as input and applies JSON serialization to it. It then maps the JSON key/value pairs to `event.Attributes` and emits it in the form of an `sdk.Event`. `Event.Type` will be the type URL of the proto message.
+
+When we subscribe to emitted events on the CometBFT websocket, they are emitted in the form of an `abci.Event`. `ParseTypedEvent` parses the event back to its original proto message.
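+
+The emit/parse round trip can be illustrated with a stdlib-only sketch, where a constructor registry and `encoding/json` stand in for `proto.MessageType` and `jsonpb`; the `emitTyped`/`parseTyped` names are hypothetical:
+
+```go
+package main
+
+import (
+	"encoding/json"
+	"fmt"
+)
+
+// Event mirrors the abci.Event shape: a type name plus key/value attributes.
+type Event struct {
+	Type       string
+	Attributes map[string]json.RawMessage
+}
+
+type EventSubmitProposal struct {
+	FromAddress string `json:"from_address"`
+	ProposalId  uint64 `json:"proposal_id"`
+}
+
+// emitTyped flattens a typed event into per-field attributes, the way
+// EmitTypedEvent does via JSON serialization.
+func emitTyped(typeName string, ev interface{}) (Event, error) {
+	bz, err := json.Marshal(ev)
+	if err != nil {
+		return Event{}, err
+	}
+	attrs := map[string]json.RawMessage{}
+	if err := json.Unmarshal(bz, &attrs); err != nil {
+		return Event{}, err
+	}
+	return Event{Type: typeName, Attributes: attrs}, nil
+}
+
+// parseTyped rebuilds the typed event from its attributes using a registry
+// keyed by event type name, the way ParseTypedEvent uses proto.MessageType.
+func parseTyped(registry map[string]func() interface{}, ev Event) (interface{}, error) {
+	ctor, ok := registry[ev.Type]
+	if !ok {
+		return nil, fmt.Errorf("unknown event type %q", ev.Type)
+	}
+	bz, err := json.Marshal(ev.Attributes)
+	if err != nil {
+		return nil, err
+	}
+	out := ctor()
+	if err := json.Unmarshal(bz, out); err != nil {
+		return nil, err
+	}
+	return out, nil
+}
+
+func main() {
+	reg := map[string]func() interface{}{
+		"cosmos.gov.v1beta1.EventSubmitProposal": func() interface{} { return &EventSubmitProposal{} },
+	}
+	ev, _ := emitTyped("cosmos.gov.v1beta1.EventSubmitProposal", EventSubmitProposal{FromAddress: "alice", ProposalId: 7})
+	back, err := parseTyped(reg, ev)
+	fmt.Println(err, back.(*EventSubmitProposal).ProposalId)
+}
+```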
+
+**Step-2**: Add proto definitions for typed events for msgs in each module:
+
+For example, let's take `MsgSubmitProposal` of the `gov` module and implement this event's type.
+
+```protobuf
+// proto/cosmos/gov/v1beta1/gov.proto
+// Add typed event definition
+
+package cosmos.gov.v1beta1;
+
+message EventSubmitProposal {
+ string from_address = 1;
+ uint64 proposal_id = 2;
+ TextProposal proposal = 3;
+}
+```
+
+**Step-3**: Refactor event emission to use the typed event created and emit using `sdk.EmitTypedEvent`:
+
+```go
+// x/gov/handler.go
+func handleMsgSubmitProposal(ctx sdk.Context, keeper keeper.Keeper, msg types.MsgSubmitProposalI) (*sdk.Result, error) {
+ ...
+ ctx.EventManager().EmitTypedEvent(
+ &EventSubmitProposal{
+ FromAddress: fromAddress,
+ ProposalId: id,
+ Proposal: proposal,
+ },
+ )
+ ...
+}
+```
+
+### How to subscribe to these typed events in `Client`
+
+> NOTE: Full code example below
+
+Users will be able to subscribe using `client.Context.Client.Subscribe` and consume events which are emitted using `EventHandler`s.
+
+Akash Network has built a simple [`pubsub`](https://github.com/ovrclk/akash/blob/90d258caeb933b611d575355b8df281208a214f8/pubsub/bus.go#L20). This can be used to subscribe to `abci.Events` and [publish](https://github.com/ovrclk/akash/blob/90d258caeb933b611d575355b8df281208a214f8/events/publish.go#L21) them as typed events.
+
+Please see the below code sample for more detail on how this flow looks for clients.
+
+## Consequences
+
+### Positive
+
+* Improves consistency of implementation for the events currently in the Cosmos SDK
+* Provides a much more ergonomic way to handle events and facilitates writing event driven applications
+* This implementation will support a middleware ecosystem of `EventHandler`s
+
+### Negative
+
+## Detailed code example of publishing events
+
+This ADR also proposes adding affordances to emit and consume these events. This way developers will only need to write
+`EventHandler`s which define the actions they desire to take.
+
+```go
+// EventEmitter is a type that describes event emitter functions
+// This should be defined in `types/events.go`
+type EventEmitter func(context.Context, client.Context, ...EventHandler) error
+
+// EventHandler is a type of function that handles events coming out of the event bus
+// This should be defined in `types/events.go`
+type EventHandler func(proto.Message) error
+
+// Sample use of the functions below
+func main() {
+ ctx, cancel := context.WithCancel(context.Background())
+
+ if err := TxEmitter(ctx, client.Context{}.WithNodeURI("tcp://localhost:26657"), SubmitProposalEventHandler); err != nil {
+ cancel()
+ panic(err)
+ }
+
+ return
+}
+
+// SubmitProposalEventHandler is an example of an event handler that prints proposal details
+// when any EventSubmitProposal is emitted.
+func SubmitProposalEventHandler(ev proto.Message) (err error) {
+ switch event := ev.(type) {
+ // Handle governance proposal creation events
+ case govtypes.EventSubmitProposal:
+ // Users define business logic here e.g.
+ fmt.Println(event.FromAddress, event.ProposalId, event.Proposal)
+ return nil
+ default:
+ return nil
+ }
+}
+
+// TxEmitter is an example of an event emitter that emits just transaction events. This can and
+// should be implemented somewhere in the Cosmos SDK. The Cosmos SDK can include EventEmitters for tm.event='Tx'
+// and/or tm.event='NewBlock' (the new block events may contain typed events)
+func TxEmitter(ctx context.Context, cliCtx client.Context, ehs ...EventHandler) (err error) {
+ // Instantiate and start CometBFT RPC client
+ client, err := cliCtx.GetNode()
+ if err != nil {
+ return err
+ }
+
+ if err = client.Start(); err != nil {
+ return err
+ }
+
+ // Start the pubsub bus
+ bus := pubsub.NewBus()
+ defer bus.Close()
+
+ // Initialize a new error group
+ eg, ctx := errgroup.WithContext(ctx)
+
+ // Publish chain events to the pubsub bus
+ eg.Go(func() error {
+ return PublishChainTxEvents(ctx, client, bus, simapp.ModuleBasics)
+ })
+
+ // Subscribe to the bus events
+ subscriber, err := bus.Subscribe()
+ if err != nil {
+ return err
+ }
+
+ // Handle all the events coming out of the bus
+ eg.Go(func() error {
+ var err error
+ for {
+ select {
+ case <-ctx.Done():
+ return nil
+ case <-subscriber.Done():
+ return nil
+ case ev := <-subscriber.Events():
+ for _, eh := range ehs {
+ if err = eh(ev); err != nil {
+ break
+ }
+ }
+ }
+ }
+ return nil
+ })
+
+ return eg.Wait()
+}
+
+// PublishChainTxEvents publishes chain transaction events using cmtclient. Waits on context shutdown signals to exit.
+func PublishChainTxEvents(ctx context.Context, client cmtclient.EventsClient, bus pubsub.Bus, mb module.BasicManager) (err error) {
+ // Subscribe to transaction events
+ txch, err := client.Subscribe(ctx, "txevents", "tm.event='Tx'", 100)
+ if err != nil {
+ return err
+ }
+
+ // Unsubscribe from transaction events on function exit
+ defer func() {
+ err = client.UnsubscribeAll(ctx, "txevents")
+ }()
+
+ // Use errgroup to manage concurrency
+ g, ctx := errgroup.WithContext(ctx)
+
+ // Publish transaction events in a goroutine
+ g.Go(func() error {
+ for {
+ select {
+ case <-ctx.Done():
+ return nil
+ case ed := <-txch:
+ switch evt := ed.Data.(type) {
+ case cmttypes.EventDataTx:
+ if !evt.Result.IsOK() {
+ continue
+ }
+ // range over the result events, parse them and
+ // send them to the pubsub bus
+ for _, abciEv := range evt.Result.Events {
+ typedEvent, err := sdk.ParseTypedEvent(abciEv)
+ if err != nil {
+ return err
+ }
+ if err := bus.Publish(typedEvent); err != nil {
+ bus.Close()
+ return err
+ }
+ }
+ }
+ }
+ }
+ })
+
+ // Exit on error or context cancellation
+ return g.Wait()
+}
+```
+
+## References
+
+* [Publish Custom Events via a bus](https://github.com/ovrclk/akash/blob/90d258caeb933b611d575355b8df281208a214f8/events/publish.go#L19-L58)
+* [Consuming the events in `Client`](https://github.com/ovrclk/deploy/blob/bf6c633ab6c68f3026df59efd9982d6ca1bf0561/cmd/event-handlers.go#L57)
diff --git a/copy-of-sdk-docs/build/architecture/adr-033-protobuf-inter-module-comm.md b/copy-of-sdk-docs/build/architecture/adr-033-protobuf-inter-module-comm.md
new file mode 100644
index 00000000..acbc98e1
--- /dev/null
+++ b/copy-of-sdk-docs/build/architecture/adr-033-protobuf-inter-module-comm.md
@@ -0,0 +1,400 @@
+# ADR 033: Protobuf-based Inter-Module Communication
+
+## Changelog
+
+* 2020-10-05: Initial Draft
+
+## Status
+
+Proposed
+
+## Abstract
+
+This ADR introduces a system for permissioned inter-module communication leveraging the protobuf `Query` and `Msg`
+service definitions defined in [ADR 021](./adr-021-protobuf-query-encoding.md) and
+[ADR 031](./adr-031-msg-service.md) which provides:
+
+* stable protobuf based module interfaces to potentially later replace the keeper paradigm
+* stronger inter-module object capabilities (OCAPs) guarantees
+* module accounts and sub-account authorization
+
+## Context
+
+In the current Cosmos SDK documentation on the [Object-Capability Model](../docs/learn/advanced/10-ocap.md), it is stated that:
+
+> We assume that a thriving ecosystem of Cosmos SDK modules that are easy to compose into a blockchain application will contain faulty or malicious modules.
+
+There is currently not a thriving ecosystem of Cosmos SDK modules. We hypothesize that this is in part due to:
+
+1. lack of a stable v1.0 Cosmos SDK to build modules off of. Module interfaces are changing, sometimes dramatically, from
+point release to point release, often for good reasons, but this does not create a stable foundation to build on.
+2. lack of a properly implemented object capability or even object-oriented encapsulation system which makes refactors
+of module keeper interfaces inevitable because the current interfaces are poorly constrained.
+
+### `x/bank` Case Study
+
+Currently the `x/bank` keeper gives pretty much unrestricted access to any module which references it. For instance, the
+`SetBalance` method allows the caller to set the balance of any account to anything, bypassing even proper tracking of supply.
+
+There appears to have been some later attempts to implement some semblance of OCAPs using module-level minting, staking
+and burning permissions. These permissions allow a module to mint, burn or delegate tokens with reference to the module’s
+own account. These permissions are actually stored as a `[]string` array on the `ModuleAccount` type in state.
+
+However, these permissions don’t really do much. They control what modules can be referenced in the `MintCoins`,
+`BurnCoins` and `DelegateCoins***` methods, but for one there is no unique object capability token that controls access —
+just a simple string. So the `x/upgrade` module could mint tokens for the `x/staking` module simply by calling
+`MintCoins(“staking”)`. Furthermore, all modules which have access to these keeper methods, also have access to
+`SetBalance` negating any other attempt at OCAPs and breaking even basic object-oriented encapsulation.
+
+## Decision
+
+Based on [ADR-021](./adr-021-protobuf-query-encoding.md) and [ADR-031](./adr-031-msg-service.md), we introduce the
+Inter-Module Communication framework for secure module authorization and OCAPs.
+When implemented, this could also serve as an alternative to the existing paradigm of passing keepers between
+modules. The approach outlined herein is intended to form the basis of a Cosmos SDK v1.0 that provides the necessary
+stability and encapsulation guarantees that allow a thriving module ecosystem to emerge.
+
+Of particular note — the decision is to _enable_ this functionality for modules to adopt at their own discretion.
+Proposals to migrate existing modules to this new paradigm will have to be a separate conversation, potentially
+addressed as amendments to this ADR.
+
+### New "Keeper" Paradigm
+
+In [ADR 021](./adr-021-protobuf-query-encoding.md), a mechanism for using protobuf service definitions to define queriers
+was introduced and in [ADR 31](./adr-031-msg-service.md), a mechanism for using protobuf service to define `Msg`s was added.
+Protobuf service definitions generate two golang interfaces representing the client and server sides of a service plus
+some helper code. Here is a minimal example for the bank `cosmos.bank.Msg/Send` message type:
+
+```go
+package bank
+
+type MsgClient interface {
+ Send(ctx context.Context, in *MsgSend, opts ...grpc.CallOption) (*MsgSendResponse, error)
+}
+
+type MsgServer interface {
+ Send(context.Context, *MsgSend) (*MsgSendResponse, error)
+}
+```
+
+[ADR 021](./adr-021-protobuf-query-encoding.md) and [ADR 31](./adr-031-msg-service.md) specify how modules can implement the generated `QueryServer`
+and `MsgServer` interfaces as replacements for the legacy queriers and `Msg` handlers respectively.
+
+In this ADR we explain how modules can make queries and send `Msg`s to other modules using the generated `QueryClient`
+and `MsgClient` interfaces and propose this mechanism as a replacement for the existing `Keeper` paradigm. To be clear,
+this ADR does not necessitate the creation of new protobuf definitions or services. Rather, it leverages the same proto
+based service interfaces already used by clients for inter-module communication.
+
+Using this `QueryClient`/`MsgClient` approach has the following key benefits over exposing keepers to external modules:
+
+1. Protobuf types are checked for breaking changes using [buf](https://buf.build/docs/breaking-overview) and because of
+the way protobuf is designed this will give us strong backwards compatibility guarantees while allowing for forward
+evolution.
+2. The separation between the client and server interfaces will allow us to insert permission checking code in between
+the two which checks if one module is authorized to send the specified `Msg` to the other module providing a proper
+object capability system (see below).
+3. The router for inter-module communication gives us a convenient place to handle rollback of transactions,
+enabling atomicity of operations ([currently a problem](https://github.com/cosmos/cosmos-sdk/issues/8030)). Any failure within a module-to-module call would result in a failure of the entire
+transaction.
+
+This mechanism has the added benefits of:
+
+* reducing boilerplate through code generation, and
+* allowing for modules in other languages either via a VM like CosmWasm or sub-processes using gRPC
+
+### Inter-module Communication
+
+To use the `Client` generated by the protobuf compiler we need a `grpc.ClientConn` [interface](https://github.com/grpc/grpc-go/blob/v1.49.x/clientconn.go#L441-L450)
+implementation. For this we introduce
+a new type, `ModuleKey`, which implements the `grpc.ClientConn` interface. `ModuleKey` can be thought of as the "private
+key" corresponding to a module account, where authentication is provided through use of a special `Invoker()` function,
+described in more detail below.
+
+Blockchain users (external clients) use their account's private key to sign transactions containing `Msg`s where they are listed as signers (each
+message specifies required signers with `Msg.GetSigners`). The authentication check is performed by the `AnteHandler`.
+
+Here, we extend this process by allowing modules to be identified in `Msg.GetSigners`. When a module wants to trigger the execution of a `Msg` in another module,
+its `ModuleKey` acts as the sender (through the `ClientConn` interface we describe below) and is set as the sole "signer". It's worth noting
+that we don't use any cryptographic signature in this case.
+For example, module `A` could use its `A.ModuleKey` to create a `MsgSend` object for a `/cosmos.bank.Msg/Send` transaction. `MsgSend` validation
+will ensure that the `from` account (`A.ModuleKey` in this case) is the signer.
+
+Here's an example of a hypothetical module `foo` interacting with `x/bank`:
+
+```go
+package foo
+
+type FooMsgServer struct {
+ // ...
+
+ moduleKey RootModuleKey
+ bankQuery bank.QueryClient
+ bankMsg bank.MsgClient
+}
+
+func NewFooMsgServer(moduleKey RootModuleKey, ...) FooMsgServer {
+ // ...
+
+ return FooMsgServer{
+ // ...
+ moduleKey: moduleKey,
+ bankQuery: bank.NewQueryClient(moduleKey),
+ bankMsg: bank.NewMsgClient(moduleKey),
+ }
+}
+
+func (foo *FooMsgServer) Bar(ctx context.Context, req *MsgBarRequest) (*MsgBarResponse, error) {
+ balance, err := foo.bankQuery.Balance(ctx, &bank.QueryBalanceRequest{Address: foo.moduleKey.Address(), Denom: "foo"})
+
+ ...
+
+ res, err := foo.bankMsg.Send(ctx, &bank.MsgSendRequest{FromAddress: foo.moduleKey.Address(), ...})
+
+ ...
+}
+```
+
+This design is also intended to be extensible to cover use cases of more fine grained permissioning like minting by
+denom prefix being restricted to certain modules (as discussed in
+[#7459](https://github.com/cosmos/cosmos-sdk/pull/7459#discussion_r529545528)).
+
+### `ModuleKey`s and `ModuleID`s
+
+A `ModuleKey` can be thought of as a "private key" for a module account and a `ModuleID` can be thought of as the
+corresponding "public key". As discussed in [ADR 028](./adr-028-public-key-addresses.md), modules can have both a root module account and any number of sub-accounts
+or derived accounts that can be used for different pools (ex. staking pools) or managed accounts (ex. group
+accounts). We can also think of module sub-accounts as similar to derived keys - there is a root key and then some
+derivation path. `ModuleID` is a simple struct which contains the module name and optional "derivation" path,
+and forms its address based on the `AddressHash` method from [ADR-028](https://github.com/cosmos/cosmos-sdk/blob/main/docs/architecture/adr-028-public-key-addresses.md):
+
+```go
+type ModuleID struct {
+ ModuleName string
+ Path []byte
+}
+
+func (key ModuleID) Address() []byte {
+ return AddressHash(key.ModuleName, key.Path)
+}
+```
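+
+As a sketch of the properties this derivation needs (determinism, fixed-length output, distinct addresses per derivation path), the following uses a plain SHA-256 composition; the exact construction is the one specified in ADR-028, not this one:
+
+```go
+package main
+
+import (
+	"crypto/sha256"
+	"encoding/hex"
+	"fmt"
+)
+
+// addressHash is an illustrative stand-in for the ADR-028 AddressHash:
+// hash the module name, then extend with the optional derivation path.
+func addressHash(moduleName string, path []byte) []byte {
+	h := sha256.Sum256([]byte(moduleName))
+	if len(path) == 0 {
+		return h[:]
+	}
+	h2 := sha256.Sum256(append(h[:], path...))
+	return h2[:]
+}
+
+func main() {
+	root := addressHash("foo", nil)
+	sub := addressHash("foo", []byte("pool/1"))
+	// sub-accounts get distinct, fixed-length addresses
+	fmt.Println(hex.EncodeToString(root) != hex.EncodeToString(sub))
+	fmt.Println(len(root), len(sub))
+}
+```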
+
+In addition to being able to generate a `ModuleID` and address, a `ModuleKey` contains a special function called
+`Invoker` which is the key to safe inter-module access. The `Invoker` creates an `InvokeFn` closure which is used as an `Invoke` method in
+the `grpc.ClientConn` interface and under the hood is able to route messages to the appropriate `Msg` and `Query` handlers
+performing appropriate security checks on `Msg`s. This allows for even safer inter-module access than keepers, whose
+private member variables could be manipulated through reflection. Golang does not support reflection on a function
+closure's captured variables and direct manipulation of memory would be needed for a truly malicious module to bypass
+the `ModuleKey` security.
+
+The two `ModuleKey` types are `RootModuleKey` and `DerivedModuleKey`:
+
+```go
+type Invoker func(callInfo CallInfo) func(ctx context.Context, request, response interface{}, opts ...interface{}) error
+
+type CallInfo struct {
+ Method string
+ Caller ModuleID
+}
+
+type RootModuleKey struct {
+ moduleName string
+ invoker Invoker
+}
+
+func (rm RootModuleKey) Derive(path []byte) DerivedModuleKey { /* ... */}
+
+type DerivedModuleKey struct {
+ moduleName string
+ path []byte
+ invoker Invoker
+}
+```
+
+A module can get access to a `DerivedModuleKey`, using the `Derive(path []byte)` method on `RootModuleKey` and then
+would use this key to authenticate `Msg`s from a sub-account. Ex:
+
+```go
+package foo
+
+func (fooMsgServer *MsgServer) Bar(ctx context.Context, req *MsgBar) (*MsgBarResponse, error) {
+ derivedKey := fooMsgServer.moduleKey.Derive(req.SomePath)
+ bankMsgClient := bank.NewMsgClient(derivedKey)
+ res, err := bankMsgClient.Send(ctx, &bank.MsgSend{FromAddress: derivedKey.Address(), ...})
+ ...
+}
+```
+
+In this way, a module can gain permissioned access to a root account and any number of sub-accounts and send
+authenticated `Msg`s from these accounts. The `Invoker` `callInfo.Caller` parameter is used under the hood to
+distinguish between different module accounts, but either way the function returned by `Invoker` only allows `Msg`s
+from either the root or a derived module account to pass through.
+
+Note that `Invoker` itself returns a function closure based on the `CallInfo` passed in. This will allow future client implementations
+to cache the invoke function for each method type, avoiding the overhead of a hash table lookup.
+This would reduce the performance overhead of this inter-module communication method to the bare minimum required for
+checking permissions.
+
+To reiterate, the closure only allows access to authorized calls. There is no access to anything else regardless of any
+name impersonation.
+
+Below is a rough sketch of the implementation of `grpc.ClientConn.Invoke` for `RootModuleKey`:
+
+```go
+func (key RootModuleKey) Invoke(ctx context.Context, method string, args, reply interface{}, opts ...grpc.CallOption) error {
+ f := key.invoker(CallInfo {Method: method, Caller: ModuleID {ModuleName: key.moduleName}})
+ return f(ctx, args, reply)
+}
+```
+
+### `AppModule` Wiring and Requirements
+
+In [ADR 031](./adr-031-msg-service.md), the `AppModule.RegisterServices(Configurator)` method was introduced. To support
+inter-module communication, we extend the `Configurator` interface to pass in the `ModuleKey` and to allow modules to
+specify their dependencies on other modules using `RequireServer()`:
+
+```go
+type Configurator interface {
+ MsgServer() grpc.Server
+ QueryServer() grpc.Server
+
+ ModuleKey() ModuleKey
+ RequireServer(msgServer interface{})
+}
+```
+
+The `ModuleKey` is passed to modules in the `RegisterServices` method itself so that it serves as a single
+entry point for configuring module services. This is intended to also have the side-effect of greatly reducing boilerplate in
+`app.go`. For now, `ModuleKey`s will be created based on `AppModuleBasic.Name()`, but a more flexible system may be
+introduced in the future. The `ModuleManager` will handle creation of module accounts behind the scenes.
+
+Because modules do not get direct access to each other anymore, modules may have unfulfilled dependencies. To make sure
+that module dependencies are resolved at startup, the `Configurator.RequireServer` method should be added. The `ModuleManager`
+will make sure that all dependencies declared with `RequireServer` can be resolved before the app starts. An example
+module `foo` could declare its dependency on `x/bank` like this:
+
+```go
+package foo
+
+func (am AppModule) RegisterServices(cfg Configurator) {
+ cfg.RequireServer((*bank.QueryServer)(nil))
+ cfg.RequireServer((*bank.MsgServer)(nil))
+}
+```
+
+### Security Considerations
+
+In addition to checking for `ModuleKey` permissions, a few additional security precautions will need to be taken by
+the underlying router infrastructure.
+
+#### Recursion and Re-entry
+
+Recursive or re-entrant method invocations pose a potential security threat. This can be a problem if Module A
+calls Module B and Module B calls Module A again within the same call.
+
+One basic way for the router system to deal with this is to maintain a call stack which prevents a module from
+being referenced more than once in the call stack so that there is no re-entry. A `map[string]interface{}` table
+in the router could be used to perform this security check.
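+
+As a rough, hedged sketch (the `callStack` type and its method names below are illustrative, not part of the SDK), such a guard could look like:
+
+```go
+import "fmt"
+
+// callStack tracks which modules are on the current inter-module call chain.
+// A module may appear at most once, which rules out re-entry.
+type callStack struct {
+    active map[string]bool
+}
+
+func newCallStack() *callStack {
+    return &callStack{active: make(map[string]bool)}
+}
+
+// push registers moduleName on the stack, failing if it is already present.
+func (cs *callStack) push(moduleName string) error {
+    if cs.active[moduleName] {
+        return fmt.Errorf("re-entrant call into module %q rejected", moduleName)
+    }
+    cs.active[moduleName] = true
+    return nil
+}
+
+// pop removes moduleName from the stack once its invocation returns.
+func (cs *callStack) pop(moduleName string) {
+    delete(cs.active, moduleName)
+}
+```
+
+The router would call `push` before dispatching an inter-module call and `pop` once it returns.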
+
+#### Queries
+
+Queries in the Cosmos SDK are generally unpermissioned, so allowing one module to query another module should not pose
+any major security threats, assuming basic precautions are taken. The basic precaution that the router system will
+need to take is making sure that the `sdk.Context` passed to query methods does not allow writing to the store. This
+can be done for now with a `CacheMultiStore` as is currently done for `BaseApp` queries.
+
+### Internal Methods
+
+In many cases, we may wish for modules to call methods on other modules which are not exposed to clients at all. For this
+purpose, we add the `InternalServer` method to `Configurator`:
+
+```go
+type Configurator interface {
+ MsgServer() grpc.Server
+ QueryServer() grpc.Server
+ InternalServer() grpc.Server
+}
+```
+
+As an example, x/slashing's Slash must call x/staking's Slash, but we don't want to expose x/staking's Slash to end users
+and clients.
+
+Internal protobuf services will be defined in a corresponding `internal.proto` file in the given module's
+proto package.
+
+Services registered against `InternalServer` will be callable from other modules but not by external clients.
+
+An alternative solution to internal-only methods could involve hooks / plugins as discussed [here](https://github.com/cosmos/cosmos-sdk/pull/7459#issuecomment-733807753).
+A more detailed evaluation of a hooks / plugin system will be addressed later in follow-ups to this ADR or as a separate
+ADR.
+
+### Authorization
+
+By default, the inter-module router requires that messages are sent by the first signer returned by `GetSigners`. The
+inter-module router should also accept authorization middleware such as that provided by [ADR 030](https://github.com/cosmos/cosmos-sdk/blob/main/docs/architecture/adr-030-authz-module.md).
+This middleware will allow accounts to authorize specific module accounts to perform actions on their behalf.
+Authorization middleware should take into account the need to grant certain modules effectively "admin" privileges to
+other modules. This will be addressed in separate ADRs or updates to this ADR.
+
+### Future Work
+
+Other future improvements may include:
+
+* custom code generation that:
+ * simplifies interfaces (ex. generates code with `sdk.Context` instead of `context.Context`)
+ * optimizes inter-module calls - for instance caching resolved methods after first invocation
+* combining `StoreKey`s and `ModuleKey`s into a single interface so that modules have a single OCAPs handle
+* code generation which makes inter-module communication more performant
+* decoupling `ModuleKey` creation from `AppModuleBasic.Name()` so that apps can override root module account names
+* inter-module hooks and plugins
+
+## Alternatives
+
+### MsgServices vs `x/capability`
+
+The `x/capability` module does provide a proper object-capability implementation that can be used by any module in the
+Cosmos SDK and could even be used for inter-module OCAPs as described in [\#5931](https://github.com/cosmos/cosmos-sdk/issues/5931).
+
+The advantages of the approach described in this ADR are mostly around how it integrates with other parts of the Cosmos SDK,
+specifically:
+
+* protobuf so that:
+ * code generation of interfaces can be leveraged for a better dev UX
+ * module interfaces are versioned and checked for breakage using [buf](https://docs.buf.build/breaking-overview)
+* sub-module accounts as per ADR 028
+* the general `Msg` passing paradigm and the way signers are specified by `GetSigners`
+
+Also, this is a complete replacement for keepers and could be applied to _all_ inter-module communication whereas the
+`x/capability` approach in #5931 would need to be applied method by method.
+
+## Consequences
+
+### Backwards Compatibility
+
+This ADR is intended to provide a pathway to a scenario where there is greater long term compatibility between modules.
+In the short-term, this will likely result in breaking certain `Keeper` interfaces which are too permissive and/or
+replacing `Keeper` interfaces altogether.
+
+### Positive
+
+* an alternative to keepers which can more easily lead to stable inter-module interfaces
+* proper inter-module OCAPs
+* improved module developer DevX, as commented on by several participants on
+ [Architecture Review Call, Dec 3](https://hackmd.io/E0wxxOvRQ5qVmTf6N_k84Q)
+* lays the groundwork for what can be a greatly simplified `app.go`
+* router can be setup to enforce atomic transactions for module-to-module calls
+
+### Negative
+
+* modules which adopt this will need significant refactoring
+
+### Neutral
+
+## Test Cases [optional]
+
+## References
+
+* [ADR 021](./adr-021-protobuf-query-encoding.md)
+* [ADR 031](./adr-031-msg-service.md)
+* [ADR 028](./adr-028-public-key-addresses.md)
+* [ADR 030 draft](https://github.com/cosmos/cosmos-sdk/pull/7105)
+* [Object-Capability Model](https://docs.cosmos.network/main/core/ocap)
diff --git a/copy-of-sdk-docs/build/architecture/adr-034-account-rekeying.md b/copy-of-sdk-docs/build/architecture/adr-034-account-rekeying.md
new file mode 100644
index 00000000..06825c5d
--- /dev/null
+++ b/copy-of-sdk-docs/build/architecture/adr-034-account-rekeying.md
@@ -0,0 +1,76 @@
+# ADR 034: Account Rekeying
+
+## Changelog
+
+* 30-09-2020: Initial Draft
+
+## Status
+
+PROPOSED
+
+## Abstract
+
+Account rekeying is a process that allows an account to replace its authentication pubkey with a new one.
+
+## Context
+
+Currently, in the Cosmos SDK, the address of an auth `BaseAccount` is based on the hash of the public key. Once an account is created, the public key for the account is set in stone, and cannot be changed. This can be a problem for users, as key rotation is a useful security practice, but is not possible currently. Furthermore, as multisigs are a type of pubkey, once a multisig for an account is set, it cannot be updated. This is problematic, as multisigs are often used by organizations or companies, who may need to change their set of multisig signers for internal reasons.
+
+Transferring all the assets of an account to a new account with the updated pubkey is not sufficient, because some "engagements" of an account are not easily transferable. For example, in staking, to transfer bonded Atoms, an account would have to unbond all delegations and wait the three-week unbonding period. Even more significantly, for validator operators, ownership over a validator is not transferable at all, meaning that the operator key for a validator can never be updated, leading to poor operational security for validators.
+
+## Decision
+
+We propose the addition of a new feature to `x/auth` that allows accounts to update the public key associated with their account, while keeping the address the same.
+
+This is possible because the Cosmos SDK `BaseAccount` stores the public key for an account in state, instead of making the assumption that the public key is included in the transaction (whether explicitly or implicitly through the signature) as in other blockchains such as Bitcoin and Ethereum. Because the public key is stored on chain, it is okay for the public key to not hash to the address of an account, as the address is not pertinent to the signature checking process.
+
+To build this system, we design a new Msg type as follows:
+
+```protobuf
+service Msg {
+ rpc ChangePubKey(MsgChangePubKey) returns (MsgChangePubKeyResponse);
+}
+
+message MsgChangePubKey {
+ string address = 1;
+ google.protobuf.Any pub_key = 2;
+}
+
+message MsgChangePubKeyResponse {}
+```
+
+The MsgChangePubKey transaction needs to be signed by the existing pubkey in state.
+
+Once approved, the handler for this message type, which takes in the AccountKeeper, will update the in-state pubkey for the account and replace it with the pubkey from the Msg.
+
+An account that has had its pubkey changed cannot be automatically pruned from state. This is because if pruned, the original pubkey of the account would be needed to recreate the same address, but the owner of the address may not have the original pubkey anymore. Currently, we do not automatically prune any accounts anyway, but we would like to keep this option open down the road (this is the purpose of account numbers). To resolve this, we charge an additional gas fee for this operation to compensate for this externality (this bound gas amount is configured as the parameter `PubKeyChangeCost`). The bonus gas is charged inside the handler, using the `ConsumeGas` function. Furthermore, in the future, we can allow accounts that have rekeyed to manually prune themselves using a new Msg type such as `MsgDeleteAccount`. Manually pruning accounts could give a gas refund as an incentive for performing the action.
+
+```go
+ amount := ak.GetParams(ctx).PubKeyChangeCost
+ ctx.GasMeter().ConsumeGas(amount, "pubkey change fee")
+```
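+
+A minimal sketch of the rekeying logic itself, using a map-backed stand-in for the real `AccountKeeper` (the `account`, `keeper`, and `changePubKey` names below are illustrative, not the SDK's API):
+
+```go
+import "fmt"
+
+// account is a stand-in for auth's BaseAccount: the address is fixed at
+// creation, while the pubkey may later be replaced.
+type account struct {
+    Address string
+    PubKey  string
+}
+
+// keeper is a map-backed stand-in for the AccountKeeper.
+type keeper struct {
+    accounts map[string]*account
+}
+
+// changePubKey replaces the in-state pubkey for the account at addr,
+// leaving the address itself untouched.
+func (k *keeper) changePubKey(addr, newPubKey string) error {
+    acc, ok := k.accounts[addr]
+    if !ok {
+        return fmt.Errorf("account %s not found", addr)
+    }
+    acc.PubKey = newPubKey
+    return nil
+}
+```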
+
+Every time a key for an address is changed, we will store a log of this change in the state of the chain, thus creating a stack of all previous keys for an address and the time intervals for which they were active. This allows dapps and clients to easily query past keys for an account which may be useful for features such as verifying timestamped off-chain signed messages.
+
+## Consequences
+
+### Positive
+
+* Will allow users and validator operators to employ better operational security practices with key rotation.
+* Will allow organizations or groups to easily change and add/remove multisig signers.
+
+### Negative
+
+Breaks the current assumed relationship between address and pubkey as H(pubkey) = address. This has a couple of consequences.
+
+* This makes wallets that support this feature more complicated. For example, if an address on-chain was updated, the corresponding key in the CLI wallet also needs to be updated.
+* Cannot automatically prune accounts with 0 balance that have had their pubkey changed.
+
+### Neutral
+
+* While this feature is intended to allow the owner of an account to update to a new pubkey they own, it could technically also be used to transfer ownership of an account to a new owner. For example, this could be used to sell a staked position without unbonding, or an account that has vesting tokens. However, the friction of this is very high, as it would essentially have to be done as a very specific OTC trade. Furthermore, additional constraints could be added to prevent accounts with vesting tokens from using this feature.
+* Will require that PubKeys for an account are included in the genesis exports.
+
+## References
+
+* https://www.algorand.com/resources/blog/announcing-rekeying
diff --git a/copy-of-sdk-docs/build/architecture/adr-035-rosetta-api-support.md b/copy-of-sdk-docs/build/architecture/adr-035-rosetta-api-support.md
new file mode 100644
index 00000000..5b910262
--- /dev/null
+++ b/copy-of-sdk-docs/build/architecture/adr-035-rosetta-api-support.md
@@ -0,0 +1,211 @@
+# ADR 035: Rosetta API Support
+
+## Authors
+
+* Jonathan Gimeno (@jgimeno)
+* David Grierson (@senormonito)
+* Alessio Treglia (@alessio)
+* Frojdy Dymylja (@fdymylja)
+
+## Changelog
+
+* 2021-05-12: the external library [cosmos-rosetta-gateway](https://github.com/tendermint/cosmos-rosetta-gateway) has been moved within the Cosmos SDK.
+
+## Context
+
+[Rosetta API](https://www.rosetta-api.org/) is an open-source specification and set of tools developed by Coinbase to
+standardise blockchain interactions.
+
+Through the use of a standard API for integrating blockchain applications, it will:
+
+* Be easier for a user to interact with a given blockchain
+* Allow exchanges to integrate new blockchains quickly and easily
+* Enable application developers to build cross-blockchain applications such as block explorers, wallets and dApps at
+ considerably lower cost and effort.
+
+## Decision
+
+It is clear that adding Rosetta API support to the Cosmos SDK will bring value to all the developers and
+Cosmos SDK based chains in the ecosystem. How it is implemented is key.
+
+The driving principles of the proposed design are:
+
+1. **Extensibility:** it must be as riskless and painless as possible for application developers to set up network
+ configurations to expose Rosetta API-compliant services.
+2. **Long term support:** This proposal aims to provide support for all the Cosmos SDK release series.
+3. **Cost-efficiency:** Backporting changes to Rosetta API specifications from `master` to the various stable
+ branches of Cosmos SDK is a cost that needs to be reduced.
+
+We will deliver on these principles as follows:
+
+1. There will be a package `rosetta/lib`
+ for the implementation of the core Rosetta API features, particularly:
+ a. The types and interfaces (`Client`, `OfflineClient`...), which separate design from implementation detail.
+ b. The `Server` functionality as this is independent of the Cosmos SDK version.
+ c. The `Online/OfflineNetwork`, which is not exported, and implements the rosetta API using the `Client` interface to query the node, build tx and so on.
+ d. The `errors` package to extend rosetta errors.
+2. Due to differences between the Cosmos release series, each series will have its own specific implementation of `Client` interface.
+3. There will be two options for starting an API service in applications:
+ a. API shares the application process
+ b. API-specific process.
+
+## Architecture
+
+### The External Repo
+
+This section will describe the proposed external library, including the service implementation, plus the defined types and interfaces.
+
+#### Server
+
+`Server` is a simple `struct` that is started and listens to the port specified in the settings. This is meant to be used across all the Cosmos SDK versions that are actively supported.
+
+The constructor follows:
+
+`func NewServer(settings Settings) (Server, error)`
+
+`Settings`, which are used to construct a new server, are the following:
+
+```go
+// Settings define the rosetta server settings
+type Settings struct {
+ // Network contains the information regarding the network
+ Network *types.NetworkIdentifier
+ // Client is the online API handler
+ Client crgtypes.Client
+ // Listen is the address the handler will listen at
+ Listen string
+ // Offline defines if the rosetta service should be exposed in offline mode
+ Offline bool
+ // Retries is the number of readiness checks that will be attempted when instantiating the handler
+ // valid only for online API
+ Retries int
+ // RetryWait is the time that will be waited between retries
+ RetryWait time.Duration
+}
+```
+
+#### Types
+
+Package types uses a mixture of rosetta types and custom-defined type wrappers that the client must parse and return while executing operations.
+
+##### Interfaces
+
+Every SDK version uses a different format to connect (RPC, gRPC, etc.), query, and build transactions, so we have abstracted this into the `Client` interface.
+The client uses rosetta types, whilst the `Online/OfflineNetwork` takes care of returning correctly parsed rosetta responses and errors.
+
+Each Cosmos SDK release series will have their own `Client` implementations.
+Developers can implement their own custom `Client`s as required.
+
+```go
+// Client defines the API the client implementation should provide.
+type Client interface {
+ // Needed if the client needs to perform some action before connecting.
+ Bootstrap() error
+ // Ready checks if the servicer constraints for queries are satisfied
+ // for example the node might still not be ready, it's useful in process
+ // when the rosetta instance might come up before the node itself
+ // the servicer must return nil if the node is ready
+ Ready() error
+
+ // Data API
+
+ // Balances fetches the balance of the given address
+ // if height is not nil, then the balance will be displayed
+ // at the provided height, otherwise last block balance will be returned
+ Balances(ctx context.Context, addr string, height *int64) ([]*types.Amount, error)
+ // BlockByHash gets a block and its transactions given the block hash
+ BlockByHash(ctx context.Context, hash string) (BlockResponse, error)
+ // BlockByHeight gets a block given its height; if height is nil then the last block is returned
+ BlockByHeight(ctx context.Context, height *int64) (BlockResponse, error)
+ // BlockTransactionsByHash gets the block, parent block and transactions
+ // given the block hash.
+ BlockTransactionsByHash(ctx context.Context, hash string) (BlockTransactionsResponse, error)
+ // BlockTransactionsByHeight gets the block, parent block and transactions
+ // given the block height.
+ BlockTransactionsByHeight(ctx context.Context, height *int64) (BlockTransactionsResponse, error)
+ // GetTx gets a transaction given its hash
+ GetTx(ctx context.Context, hash string) (*types.Transaction, error)
+ // GetUnconfirmedTx gets an unconfirmed Tx given its hash
+ // NOTE(fdymylja): NOT IMPLEMENTED YET!
+ GetUnconfirmedTx(ctx context.Context, hash string) (*types.Transaction, error)
+ // Mempool returns the list of current unconfirmed transactions
+ Mempool(ctx context.Context) ([]*types.TransactionIdentifier, error)
+ // Peers gets the peers currently connected to the node
+ Peers(ctx context.Context) ([]*types.Peer, error)
+ // Status returns the node status, such as sync data, version etc
+ Status(ctx context.Context) (*types.SyncStatus, error)
+
+ // Construction API
+
+ // PostTx posts txBytes to the node and returns the transaction identifier plus metadata related
+ // to the transaction itself.
+ PostTx(txBytes []byte) (res *types.TransactionIdentifier, meta map[string]interface{}, err error)
+ // ConstructionMetadataFromOptions
+ ConstructionMetadataFromOptions(ctx context.Context, options map[string]interface{}) (meta map[string]interface{}, err error)
+ OfflineClient
+}
+
+// OfflineClient defines the functionalities supported without having access to the node
+type OfflineClient interface {
+ NetworkInformationProvider
+ // SignedTx returns the signed transaction given the tx bytes (msgs) plus the signatures
+ SignedTx(ctx context.Context, txBytes []byte, sigs []*types.Signature) (signedTxBytes []byte, err error)
+ // TxOperationsAndSignersAccountIdentifiers returns the operations related to a transaction and the account
+ // identifiers if the transaction is signed
+ TxOperationsAndSignersAccountIdentifiers(signed bool, hexBytes []byte) (ops []*types.Operation, signers []*types.AccountIdentifier, err error)
+ // ConstructionPayload returns the construction payload given the request
+ ConstructionPayload(ctx context.Context, req *types.ConstructionPayloadsRequest) (resp *types.ConstructionPayloadsResponse, err error)
+ // PreprocessOperationsToOptions returns the options given the preprocess operations
+ PreprocessOperationsToOptions(ctx context.Context, req *types.ConstructionPreprocessRequest) (options map[string]interface{}, err error)
+ // AccountIdentifierFromPublicKey returns the account identifier given the public key
+ AccountIdentifierFromPublicKey(pubKey *types.PublicKey) (*types.AccountIdentifier, error)
+}
+```
+
+### 2. Cosmos SDK Implementation
+
+Each Cosmos SDK release series takes care of satisfying the `Client` interface for its version.
+For Stargate, Launchpad, and 0.37, we have introduced the concept of `rosetta.Msg`; this message is not in the shared repository because the `sdk.Msg` type differs between Cosmos SDK versions.
+
+The rosetta.Msg interface follows:
+
+```go
+// Msg represents a cosmos-sdk message that can be converted from and to a rosetta operation.
+type Msg interface {
+ sdk.Msg
+ ToOperations(withStatus, hasError bool) []*types.Operation
+ FromOperations(ops []*types.Operation) (sdk.Msg, error)
+}
+```
+
+Hence developers who want to extend the rosetta set of supported operations just need to extend their module's sdk.Msgs with the `ToOperations` and `FromOperations` methods.
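+
+As a hedged illustration of that extension point, a toy bank-send message might map to rosetta operations as below. The `operation` struct is a simplified stand-in for rosetta's `types.Operation`, and the field and type names are assumptions, not the actual API:
+
+```go
+// operation is a simplified stand-in for rosetta's types.Operation.
+type operation struct {
+    Type    string
+    Address string
+    Amount  string
+}
+
+// msgSend is a toy bank-send message; ToOperations mirrors the rosetta.Msg
+// method of the same name, emitting one debit and one credit operation.
+type msgSend struct {
+    From, To, Amount string
+}
+
+func (m msgSend) ToOperations(withStatus, hasError bool) []operation {
+    return []operation{
+        {Type: "cosmos-sdk/MsgSend", Address: m.From, Amount: "-" + m.Amount},
+        {Type: "cosmos-sdk/MsgSend", Address: m.To, Amount: m.Amount},
+    }
+}
+```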
+
+### 3. API service invocation
+
+As stated at the start, application developers will have two methods for invocation of the Rosetta API service:
+
+1. Shared process for both application and API
+2. Standalone API service
+
+#### Shared Process (Only Stargate)
+
+The Rosetta API service could run within the same execution process as the application. This would be enabled via `app.toml` settings, and if gRPC is not enabled the rosetta instance would be spun up in offline mode (tx building capabilities only).
+
+#### Separate API service
+
+Client application developers can write a new command to launch a Rosetta API server as a separate process too, using the rosetta command contained in the `/server/rosetta` package. Construction of the command depends on Cosmos SDK version. Examples can be found inside `simd` for stargate, and `contrib/rosetta/simapp` for other release series.
+
+## Status
+
+Proposed
+
+## Consequences
+
+### Positive
+
+* Out-of-the-box Rosetta API support within Cosmos SDK.
+* Blockchain interface standardisation
+
+## References
+
+* https://www.rosetta-api.org/
diff --git a/copy-of-sdk-docs/build/architecture/adr-036-arbitrary-signature.md b/copy-of-sdk-docs/build/architecture/adr-036-arbitrary-signature.md
new file mode 100644
index 00000000..187a34e5
--- /dev/null
+++ b/copy-of-sdk-docs/build/architecture/adr-036-arbitrary-signature.md
@@ -0,0 +1,132 @@
+# ADR 036: Arbitrary Message Signature Specification
+
+## Changelog
+
+* 28/10/2020 - Initial draft
+
+## Authors
+
+* Antoine Herzog (@antoineherzog)
+* Zaki Manian (@zmanian)
+* Aleksandr Bezobchuk (@alexanderbez) [1]
+* Frojdi Dymylja (@fdymylja)
+
+## Status
+
+Draft
+
+## Abstract
+
+Currently, in the Cosmos SDK, there is no convention to sign arbitrary messages as there is in Ethereum. With this specification, we propose a way for the Cosmos SDK ecosystem to sign and validate arbitrary off-chain messages.
+
+This specification serves the purpose of covering every use case; this means that Cosmos SDK application developers decide how to serialize and represent `Data` to users.
+
+## Context
+
+Having the ability to sign messages off-chain has proven to be a fundamental aspect of nearly any blockchain. The notion of signing messages off-chain has many added benefits, such as saving on computational costs and reducing transaction throughput and overhead. Within the context of the Cosmos ecosystem, some of the major applications of signing such data include, but are not limited to, providing a cryptographically secure and verifiable means of proving validator identity, possibly associating it with some other framework or organization, and being able to sign Cosmos messages with a Ledger or similar HSM device.
+
+Further context and use cases can be found in the reference links.
+
+## Decision
+
+The aim is to be able to sign arbitrary messages, even using a Ledger or similar HSM device.
+
+As a result, signed messages should look roughly like Cosmos SDK messages but **must not** be a valid on-chain transaction. `chain-id`, `account_number` and `sequence` can all be assigned invalid values.
+
+Cosmos SDK 0.40 also introduces the concept of `auth_info`, which can specify SIGN_MODEs.
+
+The spec should include an `auth_info` that supports `SIGN_MODE_DIRECT` and `SIGN_MODE_LEGACY_AMINO`.
+
+To create the `offchain` proto definitions, we extend the auth module with `offchain` package to offer functionalities to verify and sign offline messages.
+
+An offchain transaction follows these rules:
+
+* the memo must be empty
+* the nonce/sequence number must be equal to 0
+* the chain-id must be equal to “”
+* the fee gas must be equal to 0
+* the fee amount must be an empty array
+
+Verification of an offchain transaction follows the same rules as an onchain one, except for the spec differences highlighted above.
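+
+These rules can be checked mechanically. A rough sketch, using a simplified `offchainTx` stand-in for the real transaction type (the type and function names are illustrative):
+
+```go
+import "errors"
+
+// offchainTx captures only the fields this spec constrains.
+type offchainTx struct {
+    Memo      string
+    Sequence  uint64
+    ChainID   string
+    FeeGas    uint64
+    FeeAmount []string
+}
+
+// validateOffchain enforces the offchain transaction rules listed above.
+func validateOffchain(tx offchainTx) error {
+    switch {
+    case tx.Memo != "":
+        return errors.New("memo must be empty")
+    case tx.Sequence != 0:
+        return errors.New("sequence must be 0")
+    case tx.ChainID != "":
+        return errors.New(`chain-id must be ""`)
+    case tx.FeeGas != 0:
+        return errors.New("fee gas must be 0")
+    case len(tx.FeeAmount) != 0:
+        return errors.New("fee amount must be empty")
+    }
+    return nil
+}
+```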
+
+The first message added to the `offchain` package is `MsgSignData`.
+
+`MsgSignData` allows developers to sign arbitrary bytes that are validatable offchain only. `Signer` is the account address of the signer. `Data` is arbitrary bytes which can represent `text`, `files`, or `object`s.
+
+It is the application developer's decision how `Data` should be serialized and deserialized, and which object it represents in their context.
+
+Proto definition:
+
+```protobuf
+// MsgSignData defines an arbitrary, general-purpose, off-chain message
+message MsgSignData {
+ // Signer is the sdk.AccAddress of the message signer
+ bytes Signer = 1 [(gogoproto.jsontag) = "signer", (gogoproto.casttype) = "github.com/cosmos/cosmos-sdk/types.AccAddress"];
+ // Data represents the raw bytes of the content that is signed (text, json, etc)
+ bytes Data = 2 [(gogoproto.jsontag) = "data"];
+}
+```
+
+Signed MsgSignData json example:
+
+```json
+{
+ "type": "cosmos-sdk/StdTx",
+ "value": {
+ "msg": [
+ {
+ "type": "sign/MsgSignData",
+ "value": {
+ "signer": "cosmos1hftz5ugqmpg9243xeegsqqav62f8hnywsjr4xr",
+ "data": "cmFuZG9t"
+ }
+ }
+ ],
+ "fee": {
+ "amount": [],
+ "gas": "0"
+ },
+ "signatures": [
+ {
+ "pub_key": {
+ "type": "tendermint/PubKeySecp256k1",
+ "value": "AqnDSiRoFmTPfq97xxEb2VkQ/Hm28cPsqsZm9jEVsYK9"
+ },
+ "signature": "8y8i34qJakkjse9pOD2De+dnlc4KvFgh0wQpes4eydN66D9kv7cmCEouRrkka9tlW9cAkIL52ErB+6ye7X5aEg=="
+ }
+ ],
+ "memo": ""
+ }
+}
+```
+
+## Consequences
+
+There is now a specification for how messages that are not meant to be broadcast to a live chain should be formed.
+
+### Backwards Compatibility
+
+Backwards compatibility is maintained as this is a new message spec definition.
+
+### Positive
+
+* A common format that can be used by multiple applications to sign and verify off-chain messages.
+* The specification is primitive which means it can cover every use case without limiting what is possible to fit inside it.
+* It gives room for other off-chain messages specifications that aim to target more specific and common use cases such as off-chain-based authN/authZ layers [2].
+
+### Negative
+
+* The current proposal requires a fixed relationship between an account address and a public key.
+* Doesn't work with multisig accounts.
+
+## Further discussion
+
+* Regarding security in `MsgSignData`, the developer using `MsgSignData` is in charge of making the content contained in `Data` non-replayable when, and if, needed.
+* The offchain package will be further extended with extra messages that target specific use cases such as, but not limited to, authentication in applications, payment channels, L2 solutions in general.
+
+## References
+
+1. https://github.com/cosmos/ics/pull/33
+2. https://github.com/cosmos/cosmos-sdk/pull/7727#discussion_r515668204
+3. https://github.com/cosmos/cosmos-sdk/pull/7727#issuecomment-722478477
+4. https://github.com/cosmos/cosmos-sdk/pull/7727#issuecomment-721062923
diff --git a/copy-of-sdk-docs/build/architecture/adr-037-gov-split-vote.md b/copy-of-sdk-docs/build/architecture/adr-037-gov-split-vote.md
new file mode 100644
index 00000000..e7d6e693
--- /dev/null
+++ b/copy-of-sdk-docs/build/architecture/adr-037-gov-split-vote.md
@@ -0,0 +1,111 @@
+# ADR 037: Governance split votes
+
+## Changelog
+
+* 2020/10/28: Initial draft
+
+## Status
+
+Accepted
+
+## Abstract
+
+This ADR defines a modification to the governance module that would allow a staker to split their votes into several voting options. For example, a staker could use 70% of their voting power to vote Yes and 30% to vote No.
+
+## Context
+
+Currently, an address can cast a vote with only one option (Yes/No/Abstain/NoWithVeto), putting its full voting power behind that choice.
+
+However, oftentimes the entity owning that address might not be a single individual. For example, a company might have different stakeholders who want to vote differently, so it makes sense to allow them to split their voting power. Another example use case is exchanges. Many centralized exchanges often stake a portion of their users' tokens in their custody. Currently, it is not possible for them to do "passthrough voting" and give their users voting rights over their tokens. With this system, however, exchanges can poll their users for voting preferences and then vote on-chain proportionally to the results of the poll.
+
+## Decision
+
+We modify the vote structs to be
+
+```go
+type WeightedVoteOption struct {
+ Option string
+ Weight sdk.Dec
+}
+
+type Vote struct {
+ ProposalID int64
+ Voter sdk.Address
+ Options []WeightedVoteOption
+}
+```
+
+And for backwards compatibility, we introduce `MsgVoteWeighted` while keeping `MsgVote`.
+
+```go
+type MsgVote struct {
+ ProposalID int64
+ Voter sdk.Address
+ Option Option
+}
+
+type MsgVoteWeighted struct {
+ ProposalID int64
+ Voter sdk.Address
+ Options []WeightedVoteOption
+}
+```
+
+The `ValidateBasic` of a `MsgVoteWeighted` struct would require that
+
+1. The sum of all the rates is equal to 1.0
+2. No Option is repeated
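+
+A hedged sketch of those two checks, using `*big.Rat` for exact weight arithmetic in place of `sdk.Dec` so the example is self-contained (the type and helper names are illustrative):
+
+```go
+import (
+    "fmt"
+    "math/big"
+)
+
+// weightedOption mirrors WeightedVoteOption, with *big.Rat standing in
+// for sdk.Dec in this standalone sketch.
+type weightedOption struct {
+    Option string
+    Weight *big.Rat
+}
+
+// validateOptions checks that no option repeats and the weights sum to 1.
+func validateOptions(opts []weightedOption) error {
+    sum := new(big.Rat)
+    seen := make(map[string]bool)
+    for _, o := range opts {
+        if seen[o.Option] {
+            return fmt.Errorf("duplicate vote option %q", o.Option)
+        }
+        seen[o.Option] = true
+        sum.Add(sum, o.Weight)
+    }
+    if sum.Cmp(big.NewRat(1, 1)) != 0 {
+        return fmt.Errorf("weights must sum to 1, got %s", sum.RatString())
+    }
+    return nil
+}
+```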
+
+The governance tally function will iterate over all the options in a vote and add to the tally the voter's voting power multiplied by the weight for that option.
+
+```go
+tally() {
+    results := make(map[types.VoteOption]sdk.Dec)
+
+    for _, vote := range votes {
+        for _, weightedOption := range vote.Options {
+            results[weightedOption.Option] += getVotingPower(vote.Voter) * weightedOption.Weight
+        }
+    }
+}
+```
+
+The CLI command for creating a multi-option vote would be as such:
+
+```shell
+simd tx gov vote 1 "yes=0.6,no=0.3,abstain=0.05,no_with_veto=0.05" --from mykey
+```
+
+To create a single-option vote a user can do either
+
+```shell
+simd tx gov vote 1 "yes=1" --from mykey
+```
+
+or
+
+```shell
+simd tx gov vote 1 yes --from mykey
+```
+
+to maintain backwards compatibility.
+
+## Consequences
+
+### Backwards Compatibility
+
+* The previous `MsgVote` type will remain the same, so clients will not have to update their procedure unless they want to support the `MsgVoteWeighted` feature.
+* When querying a Vote struct from state, its structure will be different, and so clients wanting to display all voters and their respective votes will have to handle the new format and the fact that a single voter can have split votes.
+* The result of querying the tally function should have the same API for clients.
+
+### Positive
+
+* Can make the voting process more accurate for addresses representing multiple stakeholders, often some of the largest addresses.
+
+### Negative
+
+* Is more complex than simple voting, and so may be harder to explain to users. However, this is mostly mitigated because the feature is opt-in.
+
+### Neutral
+
+* Relatively minor change to governance tally function.
diff --git a/copy-of-sdk-docs/build/architecture/adr-038-state-listening.md b/copy-of-sdk-docs/build/architecture/adr-038-state-listening.md
new file mode 100644
index 00000000..63f2ec16
--- /dev/null
+++ b/copy-of-sdk-docs/build/architecture/adr-038-state-listening.md
@@ -0,0 +1,724 @@
+# ADR 038: KVStore state listening
+
+## Changelog
+
+* 11/23/2020: Initial draft
+* 10/06/2022: Introduce plugin system based on hashicorp/go-plugin
+* 10/14/2022:
+ * Add `ListenCommit`, flatten the state writes in a block to a single batch.
+ * Remove listeners from cache stores, should only listen to `rootmulti.Store`.
+ * Remove `HaltAppOnDeliveryError()`, the errors are propagated by default, the implementations should return nil if they don't want to propagate errors.
+* 26/05/2023: Update with ABCI 2.0
+
+## Status
+
+Proposed
+
+## Abstract
+
+This ADR defines a set of changes to enable listening to state changes of individual KVStores and exposing this data to consumers.
+
+## Context
+
+Currently, KVStore data can be remotely accessed through [Queries](https://docs.cosmos.network/main/build/building-modules/messages-and-queries#queries)
+which proceed either through Tendermint and the ABCI, or through the gRPC server.
+In addition to these request/response queries, it would be beneficial to have a means of listening to state changes as they occur in real time.
+
+## Decision
+
+We will modify the `CommitMultiStore` interface and its concrete (`rootmulti`) implementations and introduce a new `listenkv.Store` to allow listening to state changes in underlying KVStores. We don't need to listen to cache stores, because we can't be sure their writes will eventually be committed, and those writes are eventually duplicated in `rootmulti.Store`, so we should only listen to `rootmulti.Store`.
+We will introduce a plugin system for configuring and running streaming services that write these state changes and their surrounding ABCI message context to different destinations.
+
+### Listening
+
+In a new file, `store/types/listening.go`, we will create a `MemoryListener` struct for streaming out protobuf encoded KV pairs state changes from a KVStore.
+The `MemoryListener` will be used internally by the concrete `rootmulti` implementation to collect state changes from KVStores.
+
+```go
+// MemoryListener listens to the state writes and accumulates the records in memory.
+type MemoryListener struct {
+ stateCache []StoreKVPair
+}
+
+// NewMemoryListener creates a listener that accumulates the state writes in memory.
+func NewMemoryListener() *MemoryListener {
+ return &MemoryListener{}
+}
+
+// OnWrite writes state change events to the internal cache
+func (fl *MemoryListener) OnWrite(storeKey StoreKey, key []byte, value []byte, delete bool) {
+ fl.stateCache = append(fl.stateCache, StoreKVPair{
+ StoreKey: storeKey.Name(),
+ Delete: delete,
+ Key: key,
+ Value: value,
+ })
+}
+
+// PopStateCache returns the current state cache and resets it to nil
+func (fl *MemoryListener) PopStateCache() []StoreKVPair {
+ res := fl.stateCache
+ fl.stateCache = nil
+ return res
+}
+```
+
+We will also define a protobuf type for the KV pairs. In addition to the key and value fields this message
+will include the StoreKey for the originating KVStore so that we can collect information from separate KVStores and determine the source of each KV pair.
+
+```protobuf
+message StoreKVPair {
+ optional string store_key = 1; // the store key for the KVStore this pair originates from
+ required bool delete = 2; // true indicates a delete operation, false indicates a set operation
+ required bytes key = 3;
+ required bytes value = 4;
+}
+```
+
+### ListenKVStore
+
+We will create a new `Store` type `listenkv.Store` that the `rootmulti` store will use to wrap a `KVStore` to enable state listening.
+We will configure the `Store` with a `MemoryListener` which will collect state changes for output to specific destinations.
+
+```go
+// Store implements the KVStore interface with listening enabled.
+// Operations are traced on each core KVStore call and written to any of the
+// underlying listeners with the proper key and operation permissions
+type Store struct {
+ parent types.KVStore
+ listener *types.MemoryListener
+ parentStoreKey types.StoreKey
+}
+
+// NewStore returns a reference to a new listenkv.Store given a parent
+// KVStore implementation, a StoreKey, and a MemoryListener.
+func NewStore(parent types.KVStore, psk types.StoreKey, listener *types.MemoryListener) *Store {
+ return &Store{parent: parent, listener: listener, parentStoreKey: psk}
+}
+
+// Set implements the KVStore interface. It traces a write operation and
+// delegates the Set call to the parent KVStore.
+func (s *Store) Set(key []byte, value []byte) {
+ types.AssertValidKey(key)
+ s.parent.Set(key, value)
+ s.listener.OnWrite(s.parentStoreKey, key, value, false)
+}
+
+// Delete implements the KVStore interface. It traces a write operation and
+// delegates the Delete call to the parent KVStore.
+func (s *Store) Delete(key []byte) {
+ s.parent.Delete(key)
+ s.listener.OnWrite(s.parentStoreKey, key, nil, true)
+}
+```
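To see the wrap-and-record flow end to end, here is a self-contained sketch using a toy `KVStore` interface and plain-string store keys in place of the SDK types (illustrative only):

```go
package main

import "fmt"

// KVStore is a toy subset of the SDK's KVStore interface.
type KVStore interface {
	Set(key, value []byte)
	Delete(key []byte)
}

// memStore is an in-memory parent store.
type memStore map[string][]byte

func (m memStore) Set(key, value []byte) { m[string(key)] = value }
func (m memStore) Delete(key []byte)     { delete(m, string(key)) }

// StoreKVPair mirrors the shape of the protobuf message above.
type StoreKVPair struct {
	StoreKey string
	Delete   bool
	Key      []byte
	Value    []byte
}

// MemoryListener accumulates state writes, like the ADR's listener.
type MemoryListener struct{ stateCache []StoreKVPair }

func (l *MemoryListener) OnWrite(storeKey string, key, value []byte, del bool) {
	l.stateCache = append(l.stateCache, StoreKVPair{StoreKey: storeKey, Delete: del, Key: key, Value: value})
}

// PopStateCache returns the accumulated writes and resets the cache.
func (l *MemoryListener) PopStateCache() []StoreKVPair {
	res := l.stateCache
	l.stateCache = nil
	return res
}

// listenStore wraps a parent store and records every write, like listenkv.Store.
type listenStore struct {
	parent   KVStore
	listener *MemoryListener
	storeKey string
}

func (s *listenStore) Set(key, value []byte) {
	s.parent.Set(key, value)
	s.listener.OnWrite(s.storeKey, key, value, false)
}

func (s *listenStore) Delete(key []byte) {
	s.parent.Delete(key)
	s.listener.OnWrite(s.storeKey, key, nil, true)
}

func main() {
	ml := &MemoryListener{}
	store := &listenStore{parent: memStore{}, listener: ml, storeKey: "bank"}
	store.Set([]byte("k"), []byte("v"))
	store.Delete([]byte("k"))
	for _, p := range ml.PopStateCache() {
		fmt.Println(p.StoreKey, string(p.Key), p.Delete)
	}
	// bank k false
	// bank k true
}
```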
+
+### MultiStore interface updates
+
+We will update the `CommitMultiStore` interface to allow us to attach a `MemoryListener` to a specific `KVStore`.
+Note that the `MemoryListener` will be attached internally by the concrete `rootmulti` implementation.
+
+```go
+type CommitMultiStore interface {
+ ...
+
+ // AddListeners adds a listener for the KVStore belonging to the provided StoreKey
+ AddListeners(keys []StoreKey)
+
+ // PopStateCache returns the accumulated state change messages from MemoryListener
+ PopStateCache() []StoreKVPair
+}
+```
+
+### MultiStore implementation updates
+
+We will adjust the `rootmulti` `GetKVStore` method to wrap the returned `KVStore` with a `listenkv.Store` if listening is turned on for that `Store`.
+
+```go
+func (rs *Store) GetKVStore(key types.StoreKey) types.KVStore {
+ store := rs.stores[key].(types.KVStore)
+
+ if rs.TracingEnabled() {
+ store = tracekv.NewStore(store, rs.traceWriter, rs.traceContext)
+ }
+ if rs.ListeningEnabled(key) {
+ store = listenkv.NewStore(store, key, rs.listeners[key])
+ }
+
+ return store
+}
+```
+
+We will implement `AddListeners` to manage KVStore listeners internally and implement `PopStateCache`
+as a means of retrieving the accumulated state changes.
+
+```go
+// AddListeners adds state change listener for a specific KVStore
+func (rs *Store) AddListeners(keys []types.StoreKey) {
+ listener := types.NewMemoryListener()
+ for i := range keys {
+ rs.listeners[keys[i]] = listener
+ }
+}
+```
+
+```go
+func (rs *Store) PopStateCache() []types.StoreKVPair {
+ var cache []types.StoreKVPair
+ for _, ls := range rs.listeners {
+ cache = append(cache, ls.PopStateCache()...)
+ }
+ sort.SliceStable(cache, func(i, j int) bool {
+ return cache[i].StoreKey < cache[j].StoreKey
+ })
+ return cache
+}
+```
+
+We will also adjust the `rootmulti` `CacheMultiStore` and `CacheMultiStoreWithVersion` methods to enable listening in
+the cache layer.
+
+```go
+func (rs *Store) CacheMultiStore() types.CacheMultiStore {
+ stores := make(map[types.StoreKey]types.CacheWrapper)
+ for k, v := range rs.stores {
+ store := v.(types.KVStore)
+ // Wire the listenkv.Store to allow listeners to observe the writes from the cache store;
+ // setting the same listeners on the cache store itself would observe duplicated writes.
+ if rs.ListeningEnabled(k) {
+ store = listenkv.NewStore(store, k, rs.listeners[k])
+ }
+ stores[k] = store
+ }
+ return cachemulti.NewStore(rs.db, stores, rs.keysByName, rs.traceWriter, rs.getTracingContext())
+}
+```
+
+```go
+func (rs *Store) CacheMultiStoreWithVersion(version int64) (types.CacheMultiStore, error) {
+ // ...
+
+ // Wire the listenkv.Store to allow listeners to observe the writes from the cache store;
+ // setting the same listeners on the cache store itself would observe duplicated writes.
+ if rs.ListeningEnabled(key) {
+ cacheStore = listenkv.NewStore(cacheStore, key, rs.listeners[key])
+ }
+
+ cachedStores[key] = cacheStore
+ }
+
+ return cachemulti.NewStore(rs.db, cachedStores, rs.keysByName, rs.traceWriter, rs.getTracingContext()), nil
+}
+```
+
+### Exposing the data
+
+#### Streaming Service
+
+We will introduce a new `ABCIListener` interface that plugs into the BaseApp and relays ABCI requests and responses
+so that the service can group the state changes with the ABCI requests.
+
+```go
+// baseapp/streaming.go
+
+// ABCIListener is the interface that we're exposing as a streaming service.
+type ABCIListener interface {
+ // ListenFinalizeBlock updates the streaming service with the latest FinalizeBlock messages
+ ListenFinalizeBlock(ctx context.Context, req abci.FinalizeBlockRequest, res abci.FinalizeBlockResponse) error
+ // ListenCommit updates the streaming service with the latest Commit messages and state changes
+ ListenCommit(ctx context.Context, res abci.CommitResponse, changeSet []*StoreKVPair) error
+}
+```
+
+#### BaseApp Registration
+
+We will add a new method to the `BaseApp` to enable the registration of `StreamingService`s:
+
+```go
+// SetStreamingService is used to set a streaming service into the BaseApp hooks and load the listeners into the multistore
+func (app *BaseApp) SetStreamingService(s ABCIListener) {
+ // register the StreamingService within the BaseApp
+ // BaseApp will pass FinalizeBlock and Commit requests and responses to the streaming services to update their ABCI context
+ app.abciListeners = append(app.abciListeners, s)
+}
+```
+
+We will add two new fields to the `BaseApp` struct:
+
+```go
+type BaseApp struct {
+
+ ...
+
+ // abciListenersAsync for determining if abciListeners will run asynchronously.
+ // When abciListenersAsync=false and stopNodeOnABCIListenerErr=false listeners will run synchronized but will not stop the node.
+ // When abciListenersAsync=true stopNodeOnABCIListenerErr will be ignored.
+ abciListenersAsync bool
+
+ // stopNodeOnABCIListenerErr halts the node when ABCI streaming service listening results in an error.
+ // stopNodeOnABCIListenerErr=true must be paired with abciListenersAsync=false.
+ stopNodeOnABCIListenerErr bool
+}
+```
+
+#### ABCI Event Hooks
+
+We will modify the `FinalizeBlock` and `Commit` methods to pass ABCI requests and responses
+to any streaming service hooks registered with the `BaseApp`.
+
+```go
+func (app *BaseApp) FinalizeBlock(req abci.FinalizeBlockRequest) abci.FinalizeBlockResponse {
+
+	var abciRes abci.FinalizeBlockResponse
+	defer func() {
+		// call the streaming service hooks with the FinalizeBlock messages
+		for _, abciListener := range app.abciListeners {
+			ctx := app.finalizeState.ctx
+			blockHeight := ctx.BlockHeight()
+			if app.abciListenersAsync {
+				go func(listener ABCIListener, req abci.FinalizeBlockRequest, res abci.FinalizeBlockResponse) {
+					if err := listener.ListenFinalizeBlock(ctx, req, res); err != nil {
+						app.logger.Error("FinalizeBlock listening hook failed", "height", blockHeight, "err", err)
+					}
+				}(abciListener, req, abciRes)
+			} else {
+				if err := abciListener.ListenFinalizeBlock(ctx, req, abciRes); err != nil {
+					app.logger.Error("FinalizeBlock listening hook failed", "height", blockHeight, "err", err)
+					if app.stopNodeOnABCIListenerErr {
+						os.Exit(1)
+					}
+				}
+			}
+		}
+	}()
+
+	...
+
+	return abciRes
+}
+```
+
+```go
+func (app *BaseApp) Commit() abci.CommitResponse {
+
+ ...
+
+ res := abci.CommitResponse{
+ Data: commitID.Hash,
+ RetainHeight: retainHeight,
+ }
+
+ // call the streaming service hook with the Commit messages
+ for _, abciListener := range app.abciListeners {
+ ctx := app.deliverState.ctx
+ blockHeight := ctx.BlockHeight()
+ changeSet := app.cms.PopStateCache()
+ if app.abciListenersAsync {
+			go func(listener ABCIListener, res abci.CommitResponse, changeSet []store.StoreKVPair) {
+				if err := listener.ListenCommit(ctx, res, changeSet); err != nil {
+					app.logger.Error("ListenCommit listening hook failed", "height", blockHeight, "err", err)
+				}
+			}(abciListener, res, changeSet)
+ } else {
+			if err := abciListener.ListenCommit(ctx, res, changeSet); err != nil {
+ app.logger.Error("ListenCommit listening hook failed", "height", blockHeight, "err", err)
+ if app.stopNodeOnABCIListenerErr {
+ os.Exit(1)
+ }
+ }
+ }
+ }
+
+ ...
+
+ return res
+}
+```
+
+#### Go Plugin System
+
+We propose a plugin architecture to load and run `Streaming` plugins and other types of implementations. We will introduce a plugin
+system over gRPC that is used to load and run Cosmos-SDK plugins. The plugin system uses [hashicorp/go-plugin](https://github.com/hashicorp/go-plugin).
+Each plugin must have a struct that implements the `plugin.Plugin` interface and an `Impl` interface for processing messages over gRPC.
+Each plugin must also have a message protocol defined for the gRPC service:
+
+```go
+// streaming/plugins/abci/{plugin_version}/interface.go
+
+// Handshake is a common handshake that is shared by streaming and host.
+// This prevents users from executing bad plugins or executing a plugin
+// directly. It is a UX feature, not a security feature.
+var Handshake = plugin.HandshakeConfig{
+ ProtocolVersion: 1,
+ MagicCookieKey: "ABCI_LISTENER_PLUGIN",
+ MagicCookieValue: "ef78114d-7bdf-411c-868f-347c99a78345",
+}
+
+// ABCIListenerGRPCPlugin is the base struct for all kinds of go-plugin implementations.
+// It will be included in interfaces of different Plugins.
+type ABCIListenerGRPCPlugin struct {
+ // GRPCPlugin must still implement the Plugin interface
+ plugin.Plugin
+ // Concrete implementation, written in Go. This is only used for plugins
+ // that are written in Go.
+ Impl baseapp.ABCIListener
+}
+
+func (p *ABCIListenerGRPCPlugin) GRPCServer(_ *plugin.GRPCBroker, s *grpc.Server) error {
+ RegisterABCIListenerServiceServer(s, &GRPCServer{Impl: p.Impl})
+ return nil
+}
+
+func (p *ABCIListenerGRPCPlugin) GRPCClient(
+ _ context.Context,
+ _ *plugin.GRPCBroker,
+ c *grpc.ClientConn,
+) (interface{}, error) {
+ return &GRPCClient{client: NewABCIListenerServiceClient(c)}, nil
+}
+```
+
+The `plugin.Plugin` interface has two methods, `Client` and `Server`. For our gRPC service these are `GRPCClient` and `GRPCServer`.
+The `Impl` field holds the concrete implementation of our `baseapp.ABCIListener` interface written in Go.
+Note: this is only used for plugin implementations written in Go.
+
+The advantage of having such a plugin system is that within each plugin authors can define the message protocol in a way that fits their use case.
+For example, when state change listening is desired, the `ABCIListener` message protocol can be defined as below (*for illustrative purposes only*).
+When state change listening is not desired, `ListenCommit` can be omitted from the protocol.
+
+```protobuf
+syntax = "proto3";
+
+...
+
+message Empty {}
+
+message ListenFinalizeBlockRequest {
+ RequestFinalizeBlock req = 1;
+ ResponseFinalizeBlock res = 2;
+}
+message ListenCommitRequest {
+ int64 block_height = 1;
+ ResponseCommit res = 2;
+ repeated StoreKVPair changeSet = 3;
+}
+
+// plugin that listens to state changes
+service ABCIListenerService {
+ rpc ListenFinalizeBlock(ListenFinalizeBlockRequest) returns (Empty);
+ rpc ListenCommit(ListenCommitRequest) returns (Empty);
+}
+```
+
+```protobuf
+...
+// plugin that doesn't listen to state changes
+service ABCIListenerService {
+ rpc ListenFinalizeBlock(ListenFinalizeBlockRequest) returns (Empty);
+}
+```
+
+Implementing the service above:
+
+```go
+// streaming/plugins/abci/{plugin_version}/grpc.go
+
+var (
+ _ baseapp.ABCIListener = (*GRPCClient)(nil)
+)
+
+// GRPCClient is an implementation of the ABCIListener and ABCIListenerPlugin interfaces that talks over RPC.
+type GRPCClient struct {
+ client ABCIListenerServiceClient
+}
+
+func (m *GRPCClient) ListenFinalizeBlock(goCtx context.Context, req abci.FinalizeBlockRequest, res abci.FinalizeBlockResponse) error {
+ ctx := sdk.UnwrapSDKContext(goCtx)
+ _, err := m.client.ListenFinalizeBlock(ctx, &ListenFinalizeBlockRequest{Req: req, Res: res})
+ return err
+}
+
+func (m *GRPCClient) ListenCommit(goCtx context.Context, res abci.CommitResponse, changeSet []store.StoreKVPair) error {
+ ctx := sdk.UnwrapSDKContext(goCtx)
+ _, err := m.client.ListenCommit(ctx, &ListenCommitRequest{BlockHeight: ctx.BlockHeight(), Res: res, ChangeSet: changeSet})
+ return err
+}
+
+// GRPCServer is the gRPC server that GRPCClient talks to.
+type GRPCServer struct {
+ // This is the real implementation
+ Impl baseapp.ABCIListener
+}
+
+func (m *GRPCServer) ListenFinalizeBlock(ctx context.Context, req *ListenFinalizeBlockRequest) (*Empty, error) {
+ return &Empty{}, m.Impl.ListenFinalizeBlock(ctx, req.Req, req.Res)
+}
+
+func (m *GRPCServer) ListenCommit(ctx context.Context, req *ListenCommitRequest) (*Empty, error) {
+ return &Empty{}, m.Impl.ListenCommit(ctx, req.Res, req.ChangeSet)
+}
+
+```
+
+And the pre-compiled Go plugin `Impl` (*this is only used for plugins that are written in Go*):
+
+```go
+// streaming/plugins/abci/{plugin_version}/impl/plugin.go
+
+// Plugins are pre-compiled and loaded by the plugin system
+
+// ABCIListenerPlugin is the implementation of the baseapp.ABCIListener interface
+type ABCIListenerPlugin struct{}
+
+func (m *ABCIListenerPlugin) ListenFinalizeBlock(ctx context.Context, req abci.FinalizeBlockRequest, res abci.FinalizeBlockResponse) error {
+ // send data to external system
+}
+
+func (m *ABCIListenerPlugin) ListenCommit(ctx context.Context, res abci.CommitResponse, changeSet []store.StoreKVPair) error {
+ // send data to external system
+}
+
+func main() {
+ plugin.Serve(&plugin.ServeConfig{
+ HandshakeConfig: grpc_abci_v1.Handshake,
+ Plugins: map[string]plugin.Plugin{
+ "grpc_plugin_v1": &grpc_abci_v1.ABCIListenerGRPCPlugin{Impl: &ABCIListenerPlugin{}},
+ },
+
+ // A non-nil value here enables gRPC serving for this streaming...
+ GRPCServer: plugin.DefaultGRPCServer,
+ })
+}
+```
+
+We will introduce a plugin loading system that will return `(interface{}, error)`.
+This provides the advantage of using versioned plugins where the plugin interface and gRPC protocol change over time.
+In addition, it allows for building independent plugins that can expose different parts of the system over gRPC.
+
+```go
+func NewStreamingPlugin(name string, logLevel string) (interface{}, error) {
+ logger := hclog.New(&hclog.LoggerOptions{
+ Output: hclog.DefaultOutput,
+ Level: toHclogLevel(logLevel),
+ Name: fmt.Sprintf("plugin.%s", name),
+ })
+
+ // We're a host. Start by launching the streaming process.
+ env := os.Getenv(GetPluginEnvKey(name))
+ client := plugin.NewClient(&plugin.ClientConfig{
+ HandshakeConfig: HandshakeMap[name],
+ Plugins: PluginMap,
+ Cmd: exec.Command("sh", "-c", env),
+ Logger: logger,
+ AllowedProtocols: []plugin.Protocol{
+ plugin.ProtocolNetRPC, plugin.ProtocolGRPC},
+ })
+
+ // Connect via RPC
+ rpcClient, err := client.Client()
+ if err != nil {
+ return nil, err
+ }
+
+ // Request streaming plugin
+ return rpcClient.Dispense(name)
+}
+```
+
+We propose a `RegisterStreamingPlugin` function for the App to register `NewStreamingPlugin`s with the App's BaseApp.
+Streaming plugins can be of any type; therefore, the function takes an interface rather than a concrete type.
+For example, we could have plugins of `ABCIListener`, `WasmListener` or `IBCListener`. Note that the `RegisterStreamingPlugin`
+function is a helper function and not a requirement. Plugin registration can easily be moved from the App to the BaseApp directly.
+
+```go
+// baseapp/streaming.go
+
+// RegisterStreamingPlugin registers streaming plugins with the App.
+// This method returns an error if a plugin is not supported.
+func RegisterStreamingPlugin(
+ bApp *BaseApp,
+ appOpts servertypes.AppOptions,
+ keys map[string]*types.KVStoreKey,
+ streamingPlugin interface{},
+) error {
+ switch t := streamingPlugin.(type) {
+ case ABCIListener:
+ registerABCIListenerPlugin(bApp, appOpts, keys, t)
+ default:
+ return fmt.Errorf("unexpected plugin type %T", t)
+ }
+ return nil
+}
+```
+
+```go
+func registerABCIListenerPlugin(
+ bApp *BaseApp,
+ appOpts servertypes.AppOptions,
+ keys map[string]*store.KVStoreKey,
+ abciListener ABCIListener,
+) {
+ asyncKey := fmt.Sprintf("%s.%s.%s", StreamingTomlKey, StreamingABCITomlKey, StreamingABCIAsync)
+ async := cast.ToBool(appOpts.Get(asyncKey))
+ stopNodeOnErrKey := fmt.Sprintf("%s.%s.%s", StreamingTomlKey, StreamingABCITomlKey, StreamingABCIStopNodeOnErrTomlKey)
+ stopNodeOnErr := cast.ToBool(appOpts.Get(stopNodeOnErrKey))
+ keysKey := fmt.Sprintf("%s.%s.%s", StreamingTomlKey, StreamingABCITomlKey, StreamingABCIKeysTomlKey)
+ exposeKeysStr := cast.ToStringSlice(appOpts.Get(keysKey))
+ exposedKeys := exposeStoreKeys(exposeKeysStr, keys)
+ bApp.cms.AddListeners(exposedKeys)
+ bApp.SetStreamingManager(
+ storetypes.StreamingManager{
+ ABCIListeners: []storetypes.ABCIListener{abciListener},
+ StopNodeOnErr: stopNodeOnErr,
+ },
+ )
+}
+```
+
+```go
+func exposeAll(list []string) bool {
+ for _, ele := range list {
+ if ele == "*" {
+ return true
+ }
+ }
+ return false
+}
+
+func exposeStoreKeys(keysStr []string, keys map[string]*types.KVStoreKey) []types.StoreKey {
+ var exposeStoreKeys []types.StoreKey
+ if exposeAll(keysStr) {
+ exposeStoreKeys = make([]types.StoreKey, 0, len(keys))
+ for _, storeKey := range keys {
+ exposeStoreKeys = append(exposeStoreKeys, storeKey)
+ }
+ } else {
+ exposeStoreKeys = make([]types.StoreKey, 0, len(keysStr))
+ for _, keyStr := range keysStr {
+ if storeKey, ok := keys[keyStr]; ok {
+ exposeStoreKeys = append(exposeStoreKeys, storeKey)
+ }
+ }
+ }
+ // sort storeKeys for deterministic output
+ sort.SliceStable(exposeStoreKeys, func(i, j int) bool {
+ return exposeStoreKeys[i].Name() < exposeStoreKeys[j].Name()
+ })
+
+ return exposeStoreKeys
+}
+```
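A runnable version of the same selection logic, using plain strings in place of `types.StoreKey` (illustrative only):

```go
package main

import (
	"fmt"
	"sort"
)

// exposeStoreKeys returns the sorted subset of known keys selected by keysStr;
// the wildcard "*" selects every key.
func exposeStoreKeys(keysStr []string, keys map[string]bool) []string {
	exposeAll := false
	for _, k := range keysStr {
		if k == "*" {
			exposeAll = true
			break
		}
	}
	var out []string
	if exposeAll {
		for k := range keys {
			out = append(out, k)
		}
	} else {
		for _, k := range keysStr {
			if keys[k] {
				out = append(out, k)
			}
		}
	}
	sort.Strings(out) // sort for deterministic output
	return out
}

func main() {
	known := map[string]bool{"bank": true, "staking": true, "gov": true}
	fmt.Println(exposeStoreKeys([]string{"*"}, known))         // [bank gov staking]
	fmt.Println(exposeStoreKeys([]string{"bank", "x"}, known)) // [bank]
}
```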
+
+The `NewStreamingPlugin` and `RegisterStreamingPlugin` functions are used to register a plugin with the App's BaseApp.
+
+e.g. in `NewSimApp`:
+
+```go
+func NewSimApp(
+ logger log.Logger,
+ db dbm.DB,
+ traceStore io.Writer,
+ loadLatest bool,
+ appOpts servertypes.AppOptions,
+ baseAppOptions ...func(*baseapp.BaseApp),
+) *SimApp {
+
+ ...
+
+ keys := sdk.NewKVStoreKeys(
+ authtypes.StoreKey, banktypes.StoreKey, stakingtypes.StoreKey,
+ minttypes.StoreKey, distrtypes.StoreKey, slashingtypes.StoreKey,
+ govtypes.StoreKey, paramstypes.StoreKey, ibchost.StoreKey, upgradetypes.StoreKey,
+ evidencetypes.StoreKey, ibctransfertypes.StoreKey, capabilitytypes.StoreKey,
+ )
+
+ ...
+
+ // register streaming services
+ streamingCfg := cast.ToStringMap(appOpts.Get(baseapp.StreamingTomlKey))
+ for service := range streamingCfg {
+ pluginKey := fmt.Sprintf("%s.%s.%s", baseapp.StreamingTomlKey, service, baseapp.StreamingPluginTomlKey)
+ pluginName := strings.TrimSpace(cast.ToString(appOpts.Get(pluginKey)))
+ if len(pluginName) > 0 {
+ logLevel := cast.ToString(appOpts.Get(flags.FlagLogLevel))
+ plugin, err := streaming.NewStreamingPlugin(pluginName, logLevel)
+ if err != nil {
+ tmos.Exit(err.Error())
+ }
+ if err := baseapp.RegisterStreamingPlugin(bApp, appOpts, keys, plugin); err != nil {
+ tmos.Exit(err.Error())
+ }
+ }
+ }
+
+ return app
+}
+```
+
+#### Configuration
+
+The plugin system will be configured within an App's TOML configuration files.
+
+```toml
+# gRPC streaming
+[streaming]
+
+# ABCI streaming service
+[streaming.abci]
+
+# The plugin version to use for ABCI listening
+plugin = "abci_v1"
+
+# List of kv store keys to listen to for state changes.
+# Set to ["*"] to expose all keys.
+keys = ["*"]
+
+# Enable abciListeners to run asynchronously.
+# When abciListenersAsync=false and stopNodeOnABCIListenerErr=false listeners will run synchronized but will not stop the node.
+# When abciListenersAsync=true stopNodeOnABCIListenerErr will be ignored.
+async = false
+
+# Whether to stop the node on message deliver error.
+stop-node-on-err = true
+```
+
+There will be four parameters for configuring the `ABCIListener` plugin: `streaming.abci.plugin`, `streaming.abci.keys`, `streaming.abci.async` and `streaming.abci.stop-node-on-err`.
+`streaming.abci.plugin` is the name of the plugin we want to use for streaming, `streaming.abci.keys` is the set of store keys to listen to,
+`streaming.abci.async` is a bool that enables asynchronous listening, and `streaming.abci.stop-node-on-err` is a bool that, when true and operating
+in synchronized mode (`streaming.abci.async=false`), stops the node on a listening error. Note that `streaming.abci.stop-node-on-err=true` will be ignored if `streaming.abci.async=true`.
+
+The configuration above supports additional streaming plugins by adding the plugin to the `[streaming]` configuration section
+and registering the plugin with `RegisterStreamingPlugin` helper function.
+
+Note that each plugin must include the `streaming.{service}.plugin` property, as it is a requirement for doing the lookup and registration of the plugin
+with the App. All other properties are unique to the individual services.
+
+#### Encoding and decoding streams
+
+ADR-038 introduces the interfaces and types for streaming state changes out from KVStores, associating this
+data with their related ABCI requests and responses, and registering a service for consuming this data and streaming it to some destination in a final format.
+Instead of prescribing a final data format in this ADR, it is left to a specific plugin implementation to define and document this format.
+We take this approach because flexibility in the final format is necessary to support a wide range of streaming service plugins. For example,
+the data format for a streaming service that writes the data out to a set of files will differ from the data format that is written to a Kafka topic.
+
+## Consequences
+
+These changes will provide a means of subscribing to KVStore state changes in real time.
+
+### Backwards Compatibility
+
+* This ADR changes the `CommitMultiStore` interface, implementations supporting the previous version of this interface will not support the new one
+
+### Positive
+
+* Ability to listen to KVStore state changes in real time and expose these events to external consumers
+
+### Negative
+
+* Changes `CommitMultiStore` interface and its implementations
+
+### Neutral
+
+* Introduces additional—but optional—complexity to configuring and running a cosmos application
+* If an application developer opts to use these features to expose data, they need to be aware of the ramifications/risks of that data exposure as it pertains to the specifics of their application
diff --git a/copy-of-sdk-docs/build/architecture/adr-039-epoched-staking.md b/copy-of-sdk-docs/build/architecture/adr-039-epoched-staking.md
new file mode 100644
index 00000000..bc74b6ab
--- /dev/null
+++ b/copy-of-sdk-docs/build/architecture/adr-039-epoched-staking.md
@@ -0,0 +1,122 @@
+# ADR 039: Epoched Staking
+
+## Changelog
+
+* 10-Feb-2021: Initial Draft
+
+## Authors
+
+* Dev Ojha (@valardragon)
+* Sunny Aggarwal (@sunnya97)
+
+## Status
+
+Proposed
+
+## Abstract
+
+This ADR updates the proof of stake module to buffer the staking weight updates for a number of blocks before updating the consensus' staking weights. The length of the buffer is dubbed an epoch. The prior functionality of the staking module is then a special case of the abstracted module, with the epoch being set to 1 block.
+
+## Context
+
+The current proof of stake module takes the design decision to apply staking weight changes to the consensus engine immediately. This means that delegations and unbonds get applied immediately to the validator set. This decision was made primarily because it was the simplest from an implementation perspective, and because we believed at the time that this would lead to better UX for clients.
+
+An alternative design choice is to allow buffering staking updates (delegations, unbonds, validators joining) for a number of blocks. This epoched proof of stake consensus provides the guarantee that the consensus weights for validators will not change mid-epoch, except in the event of a slash condition.
+
+Additionally, the UX hurdle may not be as significant as was previously thought. This is because it is possible to provide users immediate acknowledgement that their bond was recorded and will be executed.
+
+Furthermore, it has become clearer over time that immediate execution of staking events comes with limitations, such as:
+
+* Threshold based cryptography. One of the main limitations is that because the validator set can change so regularly, it makes the running of multiparty computation by a fixed validator set difficult. Many threshold-based cryptographic features for blockchains such as randomness beacons and threshold decryption require a computationally-expensive DKG process (will take much longer than 1 block to create). To productively use these, we need to guarantee that the result of the DKG will be used for a reasonably long time. It wouldn't be feasible to rerun the DKG every block. By epoching staking, it guarantees we'll only need to run a new DKG once every epoch.
+
+* Light client efficiency. This would lessen the overhead for IBC when there is high churn in the validator set. In the Tendermint light client bisection algorithm, the number of headers you need to verify is related to bounding the difference in validator sets between a trusted header and the latest header. If the difference is too great, you verify more headers in between the two. By limiting the frequency of validator set changes, we can reduce the worst-case size of IBC light client proofs, which occurs when a validator set has high churn.
+
+* Fairness of deterministic leader election. Currently we have no ways of reasoning about fairness of deterministic leader election in the presence of staking changes without epochs (tendermint/spec#217). Breaking fairness of leader election is profitable for validators, as they earn additional rewards from being the proposer. Adding epochs at least makes it easier for our deterministic leader election to match something we can prove secure. (Albeit, we still haven’t proven if our current algorithm is fair with > 2 validators in the presence of stake changes)
+
+* Staking derivative design. Currently, reward distribution is done lazily using the F1 fee distribution. While saving computational complexity, lazy accounting requires a more stateful staking implementation. Right now, each delegation entry has to track the time of last withdrawal. Handling this can be a challenge for some staking derivatives designs that seek to provide fungibility for all tokens staked to a single validator. Force-withdrawing rewards to users can help solve this, however it is infeasible to force-withdraw rewards to users on a per block basis. With epochs, a chain could more easily alter the design to have rewards be forcefully withdrawn (iterating over delegator accounts only once per-epoch), and can thus remove delegation timing from state. This may be useful for certain staking derivative designs.
+
+## Design considerations
+
+### Slashing
+
+There is a design consideration for whether to apply a slash immediately or at the end of an epoch. A slash event should apply only to those who were actually staked during the time of the infraction, namely during the epoch in which the slash event occurred.
+
+Applying it immediately can be viewed as offering greater consensus layer security, at potential costs to the aforementioned use cases. The benefits of immediate slashing for consensus layer security can all be obtained by executing the validator jailing immediately (thus removing it from the validator set), and delaying the actual slash change to the validator's weight until the epoch boundary. For the use cases mentioned above, workarounds can be integrated to avoid problems, as follows:
+
+* For threshold based cryptography, this setting will have the threshold cryptography use the original epoch weights, while consensus has an update that lets it more rapidly benefit from additional security. If the threshold based cryptography blocks liveness of the chain, then we have effectively raised the liveness threshold of the remaining validators for the rest of the epoch. (Alternatively, jailed nodes could still contribute shares.) This plan will fail in the extreme case that more than 1/3rd of the validators have been jailed within a single epoch. For such an extreme scenario, the chain should already have its own custom incident response plan, and defining how to handle the threshold cryptography should be a part of that.
+* For light client efficiency, there can be a bit included in the header indicating an intra-epoch slash (as in https://github.com/tendermint/spec/issues/199).
+* For fairness of deterministic leader election, applying a slash or jailing within an epoch would break the guarantee we were seeking to provide. This then re-introduces a new (but significantly simpler) problem for trying to provide fairness guarantees. Namely, that validators can adversarially elect to remove themselves from the set of proposers. From a security perspective, this could potentially be handled by two different mechanisms (or prove to still be too difficult to achieve). One is making a security statement acknowledging the ability for an adversary to force an ahead-of-time fixed threshold of users to drop out of the proposer set within an epoch. The second method would be to parameterize such that the cost of a slash within the epoch far outweighs benefits due to being a proposer. However, this latter criterion is quite dubious, since being a proposer can have many advantageous side-effects in chains with complex state machines. (Namely, DeFi games such as Fomo3D)
+* For staking derivative design, there is no issue introduced. This does not increase the state size of staking records, since whether a slash has occurred is fully queryable given the validator address.
+
+### Token lockup
+
+When someone makes a transaction to delegate, even though they are not immediately staked, their tokens should be moved into a pool managed by the staking module, which will then be used at the end of the epoch. This prevents the situation where users stake, then spend those tokens without realizing they were already allocated for staking, and thus have their staking tx fail.
+
+### Pipelining the epochs
+
+For threshold based cryptography in particular, we need a pipeline for epoch changes. This is because when we are in epoch N, we want the epoch N+1 weights to be fixed so that the validator set can do the DKG accordingly. So if we are currently in epoch N, the stake weights for epoch N+1 should already be fixed, and new stake changes should be getting applied to epoch N + 2.
+
+This can be handled by making a parameter for the epoch pipeline length. This parameter should not be alterable except during hard forks, to mitigate implementation complexity of switching the pipeline length.
+
+* With pipeline length 1, if I redelegate during epoch N, then my redelegation is applied prior to the beginning of epoch N+1.
+* With pipeline length 2, if I redelegate during epoch N, then my redelegation is applied prior to the beginning of epoch N+2.
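+
+The pipeline arithmetic can be sketched as a pure function (a hypothetical helper, not SDK code):
+
+```go
+package main
+
+import "fmt"
+
+// applicationEpoch returns the epoch whose boundary applies a staking
+// change submitted during currentEpoch: with pipeline length p, a
+// change made in epoch N takes effect at the start of epoch N+p.
+func applicationEpoch(currentEpoch, pipelineLength uint64) uint64 {
+	return currentEpoch + pipelineLength
+}
+
+func main() {
+	fmt.Println(applicationEpoch(10, 1)) // applied before epoch 11 begins
+	fmt.Println(applicationEpoch(10, 2)) // applied before epoch 12 begins
+}
+```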
+
+### Rewards
+
+Even though all staking updates are applied at epoch boundaries, rewards can still be distributed immediately when they are claimed. This is because they do not affect the current stake weights, as we do not implement auto-bonding of rewards. If such a feature were to be implemented, it would have to be set up so that rewards are auto-bonded at the epoch boundary.
+
+### Parameterizing the epoch length
+
+When choosing the epoch length, there is a trade-off between queued state/computation buildup, and countering the previously discussed limitations of immediate execution if they apply to a given chain.
+
+Until an ABCI mechanism for variable block times is introduced, it is ill-advised to use long epochs due to the computation buildup. This is because when a block's execution time is greater than the expected block time from Tendermint, rounds may increment.
+
+## Decision
+
+**Step-1**: Implement buffering of all staking and slashing messages.
+
+First we create a pool, called the `EpochDelegationPool`, for storing tokens that are being bonded but whose bonding should only be applied at the epoch boundary. Then we have two separate queues, one for staking and one for slashing. We describe below what happens when each message is delivered:
+
+### Staking messages
+
+* **MsgCreateValidator**: Move the user's self-bond to the `EpochDelegationPool` immediately. Queue a message for the epoch boundary to handle the self-bond, taking the funds from the `EpochDelegationPool`. If epoch execution fails, return the funds from the `EpochDelegationPool` to the user's account.
+* **MsgEditValidator**: Validate message and if valid queue the message for execution at the end of the Epoch.
+* **MsgDelegate**: Move the user's funds to the `EpochDelegationPool` immediately. Queue a message for the epoch boundary to handle the delegation, taking the funds from the `EpochDelegationPool`. If epoch execution fails, return the funds from the `EpochDelegationPool` to the user's account.
+* **MsgBeginRedelegate**: Validate message and if valid queue the message for execution at the end of the Epoch.
+* **MsgUndelegate**: Validate message and if valid queue the message for execution at the end of the Epoch.
+
+### Slashing messages
+
+* **MsgUnjail**: Validate message and if valid queue the message for execution at the end of the Epoch.
+* **Slash Event**: Whenever a slash event is created, it gets queued in the slashing module to apply at the end of the epoch. The queues should be set up such that this slash applies immediately.
+
+### Evidence Messages
+
+* **MsgSubmitEvidence**: This gets executed immediately, and the validator gets jailed immediately. However in slashing, the actual slash event gets queued.
+
+Then we add methods to the end blockers, to ensure that at the epoch boundary the queues are cleared and delegation updates are applied.
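+
+The buffering described above can be sketched as follows; all types and names here are illustrative stand-ins, not actual SDK or staking-module APIs:
+
+```go
+package main
+
+import (
+	"errors"
+	"fmt"
+)
+
+// queuedDelegation is a staking message buffered for the epoch boundary.
+type queuedDelegation struct {
+	delegator string
+	amount    int64
+	exec      func() error // the actual staking state change
+}
+
+type epochKeeper struct {
+	accounts map[string]int64 // liquid balances
+	pool     map[string]int64 // EpochDelegationPool
+	bonded   map[string]int64 // stake, updated only at epoch boundaries
+	queue    []queuedDelegation
+}
+
+// handleMsgDelegate moves funds to the pool immediately and queues
+// the delegation itself for the epoch boundary.
+func (k *epochKeeper) handleMsgDelegate(delegator string, amount int64, exec func() error) {
+	k.accounts[delegator] -= amount
+	k.pool[delegator] += amount
+	k.queue = append(k.queue, queuedDelegation{delegator, amount, exec})
+}
+
+// epochEndBlocker drains the queue: successful executions bond the
+// pooled funds; failures refund them to the delegator's account.
+func (k *epochKeeper) epochEndBlocker() {
+	for _, d := range k.queue {
+		k.pool[d.delegator] -= d.amount
+		if err := d.exec(); err != nil {
+			k.accounts[d.delegator] += d.amount // refund on failure
+			continue
+		}
+		k.bonded[d.delegator] += d.amount
+	}
+	k.queue = nil
+}
+
+func main() {
+	k := &epochKeeper{
+		accounts: map[string]int64{"alice": 100, "bob": 100},
+		pool:     map[string]int64{},
+		bonded:   map[string]int64{},
+	}
+	k.handleMsgDelegate("alice", 60, func() error { return nil })
+	k.handleMsgDelegate("bob", 40, func() error { return errors.New("validator removed") })
+	k.epochEndBlocker()
+	fmt.Println(k.bonded["alice"], k.accounts["bob"]) // 60 100
+}
+```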
+
+**Step-2**: Implement querying of queued staking txs.
+
+When querying the staking activity of a given address, the status should return not only the amount of tokens staked, but also whether there are any queued staking events for that address. This will require additional work in the querying logic to trace the queued upcoming staking events.
+
+As an initial implementation, this can be done as a linear search over all queued staking events. However, chains that need long epochs should eventually build additional support so that querying nodes can produce results in constant time. (This is doable by maintaining an auxiliary hashmap that indexes upcoming staking events by address.)
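+
+The auxiliary index mentioned above could look like the following sketch (hypothetical types, not the SDK querier):
+
+```go
+package main
+
+import "fmt"
+
+// queuedEvent is an upcoming staking change (illustrative shape).
+type queuedEvent struct {
+	kind   string // "delegate", "undelegate", ...
+	amount int64
+}
+
+// stakingQueue keeps the boundary-ordered queue plus an auxiliary
+// per-address index, so queries run in constant time instead of a
+// linear scan over all queued events.
+type stakingQueue struct {
+	ordered []queuedEvent            // drained at the epoch boundary
+	byAddr  map[string][]queuedEvent // address -> pending events
+}
+
+func (q *stakingQueue) push(addr string, ev queuedEvent) {
+	q.ordered = append(q.ordered, ev)
+	q.byAddr[addr] = append(q.byAddr[addr], ev)
+}
+
+// pending answers "what stake changes are queued for addr?".
+func (q *stakingQueue) pending(addr string) []queuedEvent {
+	return q.byAddr[addr]
+}
+
+func main() {
+	q := &stakingQueue{byAddr: map[string][]queuedEvent{}}
+	q.push("alice", queuedEvent{"delegate", 100})
+	q.push("alice", queuedEvent{"undelegate", 25})
+	fmt.Println(len(q.pending("alice")), len(q.pending("bob"))) // 2 0
+}
+```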
+
+**Step-3**: Adjust gas
+
+Currently gas represents the cost of executing a transaction when it is executed immediately. (It merges together the costs of p2p overhead, state access overhead, and computational overhead.) However, a transaction can now cause computation in a future block, namely at the epoch boundary.
+
+To handle this, we should initially include parameters for estimating the amount of future computation (denominated in gas), and add that as a flat charge needed for the message.
+We leave it out of scope for how to weight future computation versus current computation in gas pricing, and have it set such that they are weighted equally for now.
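+
+Under these assumptions, the charge can be sketched as (names hypothetical):
+
+```go
+package main
+
+import "fmt"
+
+// totalGas adds a flat, parameterized estimate of the deferred
+// epoch-boundary work to the immediate execution cost, weighting
+// future and current computation equally as described above.
+func totalGas(immediateGas, estimatedFutureGas uint64) uint64 {
+	return immediateGas + estimatedFutureGas
+}
+
+func main() {
+	fmt.Println(totalGas(50000, 20000)) // 70000
+}
+```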
+
+## Consequences
+
+### Positive
+
+* Abstracts the proof-of-stake module in a way that retains the existing functionality
+* Enables new features such as validator-set based threshold cryptography
+
+### Negative
+
+* Increases complexity of integrating more complex gas pricing mechanisms, as they now have to consider future execution costs as well.
+* When the epoch length is greater than 1, validators can no longer leave the network immediately, and must wait until an epoch boundary.
diff --git a/copy-of-sdk-docs/build/architecture/adr-040-storage-and-smt-state-commitments.md b/copy-of-sdk-docs/build/architecture/adr-040-storage-and-smt-state-commitments.md
new file mode 100644
index 00000000..6259e588
--- /dev/null
+++ b/copy-of-sdk-docs/build/architecture/adr-040-storage-and-smt-state-commitments.md
@@ -0,0 +1,289 @@
+# ADR 040: Storage and SMT State Commitments
+
+## Changelog
+
+* 2020-01-15: Draft
+
+## Status
+
+DRAFT Not Implemented
+
+## Abstract
+
+Sparse Merkle Tree ([SMT](https://osf.io/8mcnh/)) is a version of a Merkle Tree with various storage and performance optimizations. This ADR defines a separation of state commitments from data storage and the Cosmos SDK transition from IAVL to SMT.
+
+## Context
+
+Currently, Cosmos SDK uses IAVL for both state [commitments](https://cryptography.fandom.com/wiki/Commitment_scheme) and data storage.
+
+IAVL has effectively become an orphaned project within the Cosmos ecosystem and it has proven to be an inefficient state commitment data structure.
+In the current design, IAVL is used for both data storage and as a Merkle Tree for state commitments. IAVL is meant to be a standalone Merkleized key/value database; however, it uses a KV DB engine to store all tree nodes, so each node is stored in a separate record in the KV DB. This causes many inefficiencies and problems:
+
+* Each object query requires a tree traversal from the root. Subsequent queries for the same object are cached on the Cosmos SDK level.
+* Each edge traversal requires a DB query.
+* Creating snapshots is [expensive](https://github.com/cosmos/cosmos-sdk/issues/7215#issuecomment-684804950). It takes about 30 seconds to export less than 100 MB of state (as of March 2020).
+* Updates in IAVL may trigger tree reorganization and possible O(log(n)) hashes re-computation, which can become a CPU bottleneck.
+* The node structure is pretty expensive - it contains standard tree node elements (key, value, left and right elements) and additional metadata such as height and version (which is not required by the Cosmos SDK). The entire node is hashed, and that hash is used as the key in the underlying database, [ref](https://github.com/cosmos/iavl/blob/master/docs/node/node.md).
+
+Moreover, the IAVL project lacks support and a maintainer and we already see better and well-established alternatives. Instead of optimizing the IAVL, we are looking into other solutions for both storage and state commitments.
+
+## Decision
+
+We propose to separate the concerns of state commitment (**SC**), needed for consensus, and state storage (**SS**), needed for the state machine. Finally, we replace IAVL with [Celestia's SMT](https://github.com/lazyledger/smt). Celestia's SMT is based on the Diem design (called Jellyfish) [*] - it uses a compute-optimized SMT by replacing subtrees that contain only default values with a single node (the same approach is used by Ethereum2) and implements compact proofs.
+
+The storage model presented here doesn't deal with data structure nor serialization. It's a Key-Value database, where both key and value are binaries. The storage user is responsible for data serialization.
+
+### Decouple state commitment from storage
+
+Separation of storage and commitment (by the SMT) will allow the optimization of different components according to their usage and access patterns.
+
+`SC` (SMT) is used to commit to data and compute Merkle proofs. `SS` is used to directly access data. To avoid collisions, `SS` and `SC` will use separate storage namespaces (they could use the same database underneath). `SS` will store each record directly (mapping `(key, value)` as `key → value`).
+
+The SMT is a Merkle tree structure: we don't store keys directly. For every `(key, value)` pair, `hash(key)` is used as the leaf path (we hash the key to uniformly distribute leaves in the tree) and `hash(value)` as the leaf contents. The tree structure is specified in more depth [below](#smt-for-state-commitment).
+
+For data access we propose 2 additional KV buckets (implemented as namespaces for the key-value pairs, sometimes called [column family](https://github.com/facebook/rocksdb/wiki/Terminology)):
+
+1. B1: `key → value`: the principal object storage, used by a state machine, behind the Cosmos SDK `KVStore` interface: provides direct access by key and allows prefix iteration (KV DB backend must support it).
+2. B2: `hash(key) → key`: a reverse index to get a key from an SMT path. Internally the SMT will store `(key, value)` as `prefix || hash(key) || hash(value)`. So, we can get an object value by composing `hash(key) → B2 → B1`.
+
+We could use more buckets to optimize the app usage if needed.
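+
+The composed lookup `hash(key) → B2 → B1` can be sketched with plain maps standing in for the KV DB buckets (illustrative code, not the proposed implementation):
+
+```go
+package main
+
+import (
+	"crypto/sha256"
+	"fmt"
+)
+
+// Two namespaced buckets backing state storage (SS): B1 maps
+// key -> value, and B2 maps hash(key) -> key, a reverse index from
+// an SMT leaf path back to the original key.
+type ssStore struct {
+	b1 map[string][]byte   // key -> value
+	b2 map[[32]byte]string // hash(key) -> key
+}
+
+func (s *ssStore) set(key string, value []byte) {
+	s.b1[key] = value
+	s.b2[sha256.Sum256([]byte(key))] = key
+}
+
+// getByPath resolves an SMT leaf path hash(key) to the stored value
+// by composing hash(key) -> B2 -> B1.
+func (s *ssStore) getByPath(path [32]byte) ([]byte, bool) {
+	key, ok := s.b2[path]
+	if !ok {
+		return nil, false
+	}
+	v, ok := s.b1[key]
+	return v, ok
+}
+
+func main() {
+	s := &ssStore{b1: map[string][]byte{}, b2: map[[32]byte]string{}}
+	s.set("balances/alice", []byte{42})
+	v, ok := s.getByPath(sha256.Sum256([]byte("balances/alice")))
+	fmt.Println(ok, v) // true [42]
+}
+```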
+
+We propose to use a KV database for both `SS` and `SC`. The store interface will allow the same physical DB backend to be used for both `SS` and `SC`, as well as two separate DBs. The latter option allows for the separation of `SS` and `SC` into different hardware units, providing support for more complex setup scenarios and improving overall performance: one can use different backends (eg RocksDB and Badger) as well as independently tune the underlying DB configuration.
+
+### Requirements
+
+State Storage requirements:
+
+* range queries
+* quick (key, value) access
+* creating a snapshot
+* historical versioning
+* pruning (garbage collection)
+
+State Commitment requirements:
+
+* fast updates
+* tree path should be short
+* query historical commitment proofs using ICS-23 standard
+* pruning (garbage collection)
+
+### SMT for State Commitment
+
+A Sparse Merkle tree is based on the idea of a complete Merkle tree of an intractable size. The assumption here is that as the size of the tree is intractable, there would only be a few leaf nodes with valid data blocks relative to the tree size, rendering a sparse tree.
+
+The full specification can be found at [Celestia](https://github.com/celestiaorg/celestia-specs/blob/ec98170398dfc6394423ee79b00b71038879e211/src/specs/data_structures.md#sparse-merkle-tree). In summary:
+
+* The SMT consists of a binary Merkle tree, constructed in the same fashion as described in [Certificate Transparency (RFC-6962)](https://tools.ietf.org/html/rfc6962), but using as the hashing function SHA-2-256 as defined in [FIPS 180-4](https://doi.org/10.6028/NIST.FIPS.180-4).
+* Leaves and internal nodes are hashed differently: the one-byte `0x00` is prepended for leaf nodes while `0x01` is prepended for internal nodes.
+* Default values are given to leaf nodes with empty leaves.
+* While the above rule is sufficient to pre-compute the values of intermediate nodes that are roots of empty subtrees, a further simplification is to extend this default value to all nodes that are roots of empty subtrees. The 32-byte zero is used as the default value. This rule takes precedence over the above one.
+* An internal node that is the root of a subtree that contains exactly one non-empty leaf is replaced by that leaf's leaf node.
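+
+A minimal sketch of the domain-separated hashing rules summarized above (`0x00`/`0x01` prefixes, SHA-2-256, 32-byte zero default for empty subtrees):
+
+```go
+package main
+
+import (
+	"crypto/sha256"
+	"fmt"
+)
+
+// emptyRoot is the 32-byte zero default for roots of empty subtrees.
+var emptyRoot [32]byte
+
+// leafHash hashes leaf data with a one-byte 0x00 domain prefix.
+func leafHash(data []byte) [32]byte {
+	return sha256.Sum256(append([]byte{0x00}, data...))
+}
+
+// innerHash hashes two child hashes with a one-byte 0x01 domain prefix.
+func innerHash(left, right [32]byte) [32]byte {
+	buf := append([]byte{0x01}, left[:]...)
+	return sha256.Sum256(append(buf, right[:]...))
+}
+
+func main() {
+	l := leafHash([]byte("value"))
+	root := innerHash(l, emptyRoot)
+	fmt.Printf("root prefix: %x\n", root[:4])
+	// Domain separation: hashing the same bytes as a leaf vs. raw differs.
+	fmt.Println(leafHash([]byte("x")) != sha256.Sum256([]byte("x")))
+}
+```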
+
+### Snapshots for storage sync and state versioning
+
+Below, by simple _snapshot_ we refer to a database snapshot mechanism, not to an _ABCI snapshot sync_. The latter will be referred to as _snapshot sync_ (which will directly use the DB snapshot as described below).
+
+Database snapshot is a view of DB state at a certain time or transaction. It's not a full copy of a database (it would be too big). Usually a snapshot mechanism is based on a _copy on write_ and it allows DB state to be efficiently delivered at a certain stage.
+Some DB engines support snapshotting. Hence, we propose to reuse that functionality for state sync and versioning (described below). We limit the supported DB engines to those which efficiently implement snapshots. In the final section we discuss the evaluated DBs.
+
+One of the Stargate core features is a _snapshot sync_ delivered in the `/snapshot` package. It provides a way to trustlessly sync a blockchain without repeating all transactions from the genesis. This feature is implemented in Cosmos SDK and requires storage support. Currently IAVL is the only supported backend. It works by streaming to a client a snapshot of a `SS` at a certain version together with a header chain.
+
+A new database snapshot will be created in every `EndBlocker` and identified by a block height. The `root` store keeps track of the available snapshots to offer `SS` at a certain version. The `root` store implements the `RootStore` interface described below. In essence, `RootStore` encapsulates a `Committer` interface. `Committer` has `Commit`, `SetPruning`, and `GetPruning` functions which will be used for creating and removing snapshots. The `rootStore.Commit` function creates a new snapshot and increments the version on each call, and checks if it needs to remove old versions. We will need to update the SMT interface to implement the `Committer` interface.
+NOTE: `Commit` must be called exactly once per block. Otherwise we risk going out of sync for the version number and block height.
+NOTE: For the Cosmos SDK storage, we may consider splitting that interface into `Committer` and `PruningCommitter` - only the multiroot should implement `PruningCommitter` (cache and prefix store don't need pruning).
+
+The number of historical versions for `abci.QueryRequest` and state sync snapshots is part of a node configuration, not a chain configuration (configuration implied by the blockchain consensus). A configuration should allow specifying the number of past blocks and the number of past blocks modulo some number (eg: 100 past blocks and one snapshot every 100 blocks for the past 2000 blocks). Archival nodes can keep all past versions.
+
+Pruning old snapshots is effectively done by the database. Whenever we update a record in `SC`, the SMT won't update existing nodes - instead it creates new nodes on the update path, without removing the old ones. Since we are snapshotting each block, we need to change that mechanism to immediately remove orphaned nodes from the database. This is a safe operation - snapshots will keep track of the records and make them available when accessing past versions.
+
+To manage the active snapshots we will either use a DB _max number of snapshots_ option (if available), or we will remove DB snapshots in the `EndBlocker`. The latter option can be done efficiently by identifying snapshots with block height and calling a store function to remove past versions.
+
+#### Accessing old state versions
+
+One of the functional requirements is to access old state. This is done through `abci.QueryRequest` structure. The version is specified by a block height (so we query for an object by a key `K` at block height `H`). The number of old versions supported for `abci.QueryRequest` is configurable. Accessing an old state is done by using available snapshots.
+`abci.QueryRequest` doesn't need old state of `SC` unless the `prove=true` parameter is set. The SMT merkle proof must be included in the `abci.QueryResponse` only if both `SC` and `SS` have a snapshot for requested version.
+
+Moreover, Cosmos SDK could provide a way to directly access a historical state. However, a state machine shouldn't do that - since the number of snapshots is configurable, it would lead to nondeterministic execution.
+
+We positively [validated](https://github.com/cosmos/cosmos-sdk/discussions/8297) a versioning and snapshot mechanism for querying old state with regard to the databases we evaluated.
+
+### State Proofs
+
+For any object stored in State Store (`SS`), we have a corresponding object in `SC`. A proof for an object `V` identified by a key `K` is a branch of `SC`, where the path corresponds to the key `hash(K)`, and the leaf is `hash(K, V)`.
+
+### Rollbacks
+
+We need to be able to process transactions and roll back state updates if a transaction fails. This can be done in the following way: during transaction processing, we keep all state change requests (writes) in a `CacheWrapper` abstraction (as is done today). Once we finish the block processing, in the `EndBlocker`, we commit the root store - at that time, all changes are written to the SMT and the `SS` and a snapshot is created.
+
+### Committing to an object without saving it
+
+We identified use cases where modules will need to save an object commitment without storing the object itself. Sometimes clients receive complex objects, and they have no way to prove the correctness of such an object without knowing the storage layout. For those use cases it would be easier to commit to the object without storing it directly.
+
+### Refactor MultiStore
+
+The Stargate `/store` implementation (store/v1) adds an additional layer in the SDK store construction - the `MultiStore` structure. The multistore exists to support the modularity of the Cosmos SDK - each module is using its own instance of IAVL, but in the current implementation, all instances share the same database. The latter indicates, however, that the implementation doesn't provide true modularity. Instead it causes problems related to race condition and atomic DB commits (see: [\#6370](https://github.com/cosmos/cosmos-sdk/issues/6370) and [discussion](https://github.com/cosmos/cosmos-sdk/discussions/8297#discussioncomment-757043)).
+
+We propose to remove the multistore concept from the SDK, and to use a single instance of `SC` and `SS` in a `RootStore` object. To avoid confusion, we should rename the `MultiStore` interface to `RootStore`. The `RootStore` will have the following interface; the methods for configuring tracing and listeners are omitted for brevity.
+
+```go
+// Used where read-only access to versions is needed.
+type BasicRootStore interface {
+ Store
+ GetKVStore(StoreKey) KVStore
+ CacheRootStore() CacheRootStore
+}
+
+// Used as the main app state, replacing CommitMultiStore.
+type CommitRootStore interface {
+ BasicRootStore
+ Committer
+ Snapshotter
+
+ GetVersion(uint64) (BasicRootStore, error)
+ SetInitialVersion(uint64) error
+
+ ... // Trace and Listen methods
+}
+
+// Replaces CacheMultiStore for branched state.
+type CacheRootStore interface {
+ BasicRootStore
+ Write()
+
+ ... // Trace and Listen methods
+}
+
+// Example of constructor parameters for the concrete type.
+type RootStoreConfig struct {
+ Upgrades *StoreUpgrades
+ InitialVersion uint64
+
+ ReservePrefix func(StoreKey, StoreType) // reserves a store prefix during construction
+}
+```
+
+In contrast to `MultiStore`, `RootStore` doesn't allow to dynamically mount sub-stores or provide an arbitrary backing DB for individual sub-stores.
+
+NOTE: modules will be able to use a special commitment and their own DBs. For example: a module which will use ZK proofs for state can store and commit this proof in the `RootStore` (usually as a single record) and manage the specialized store privately or using the `SC` low level interface.
+
+#### Compatibility support
+
+To ease the transition to this new interface for users, we can create a shim which wraps a `CommitMultiStore` but provides a `CommitRootStore` interface, and expose functions to safely create and access the underlying `CommitMultiStore`.
+
+The new `RootStore` and supporting types can be implemented in a `store/v2alpha1` package to avoid breaking existing code.
+
+#### Merkle Proofs and IBC
+
+Currently, an IBC (v1.0) Merkle proof path consists of two elements (`["<store-key>", "<record-key>"]`), with each key corresponding to a separate proof. These are each verified according to individual [ICS-23 specs](https://github.com/cosmos/ibc-go/blob/f7051429e1cf833a6f65d51e6c3df1609290a549/modules/core/23-commitment/types/merkle.go#L17), and the result hash of each step is used as the committed value of the next step, until a root commitment hash is obtained.
+The root hash of the proof for the `<record-key>` is hashed with the `<store-key>` to validate against the App Hash.
+
+This is not compatible with the `RootStore`, which stores all records in a single Merkle tree structure, and won't produce separate proofs for the store- and record-key. Ideally, the store-key component of the proof could just be omitted, and updated to use a "no-op" spec, so only the record-key is used. However, because the IBC verification code hardcodes the `"ibc"` prefix and applies it to the SDK proof as a separate element of the proof path, this isn't possible without a breaking change. Breaking this behavior would severely impact the Cosmos ecosystem which already widely adopts the IBC module. Requesting an update of the IBC module across the chains is a time consuming effort and not easily feasible.
+
+As a workaround, the `RootStore` will have to use two separate SMTs (they could use the same underlying DB): one for the IBC state and one for everything else. A simple Merkle map that references these SMTs will act as a Merkle Tree to create the final App hash. The Merkle map is not stored in a DB - it's constructed at runtime. The IBC substore key must be `"ibc"`.
+
+The workaround can still guarantee atomic syncs: the [proposed DB backends](#evaluated-kv-databases) support atomic transactions and efficient rollbacks, which will be used in the commit phase.
+
+The presented workaround can be used until the IBC module is fully upgraded to support single-element commitment proofs.
+
+### Optimization: compress module key prefixes
+
+We consider compressing prefix keys by creating a mapping from module key to an integer, and serializing the integer using varint coding. Varint coding assures that different values don't have a common byte prefix. For Merkle proofs we can't use prefix compression - so it should only apply to the `SS` keys. Moreover, the prefix compression should only be applied to the module namespace. More precisely:
+
+* each module has its own namespace;
+* when accessing a module namespace we create a KVStore with embedded prefix;
+* that prefix will be compressed only when accessing and managing `SS`.
+
+We need to assure that the codes won't change. We can fix the mapping in a static variable (provided by an app) or SS state under a special key.
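+
+A sketch of the proposed prefix compression, assuming a fixed static mapping (the module names and codes here are illustrative):
+
+```go
+package main
+
+import (
+	"encoding/binary"
+	"fmt"
+)
+
+// Fixed module-name -> integer mapping; the ADR leaves open whether
+// this lives in a static variable or under a special SS key.
+var moduleCodes = map[string]uint64{"bank": 0, "staking": 1, "gov": 2}
+
+// compressKey replaces the module-name prefix of an SS key with its
+// varint-encoded code. SC keys are never compressed (they are hashes).
+func compressKey(module string, key []byte) []byte {
+	buf := make([]byte, binary.MaxVarintLen64)
+	n := binary.PutUvarint(buf, moduleCodes[module])
+	return append(buf[:n], key...)
+}
+
+func main() {
+	k := compressKey("staking", []byte("validators/v1"))
+	// "staking" (7 bytes) shrinks to a 1-byte varint code.
+	fmt.Println(len("staking")+len("validators/v1"), len(k)) // 20 14
+}
+```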
+
+TODO: need to make decision about the key compression.
+
+### Optimization: SS key compression
+
+Some objects may be saved with a key which contains a Protobuf message type. Such keys are long. We could save a lot of space by mapping Protobuf message types to varints.
+
+TODO: finalize this or move to another ADR.
+
+## Migration
+
+Using the new store will require a migration. Two migrations are proposed:
+
+1. Genesis export -- it will reset the blockchain history.
+2. In place migration: we can reuse `UpgradeKeeper.SetUpgradeHandler` to provide the migration logic:
+
+```go
+app.UpgradeKeeper.SetUpgradeHandler("adr-40", func(ctx sdk.Context, plan upgradetypes.Plan, vm module.VersionMap) (module.VersionMap, error) {
+
+ storev2.Migrate(iavlstore, v2.store)
+
+ // RunMigrations returns the VersionMap
+ // with the updated module ConsensusVersions
+ return app.mm.RunMigrations(ctx, vm)
+})
+```
+
+The `Migrate` function will read all entries from a store/v1 DB and save them to the ADR-40 combined KV store.
+The cache layer should not be used, and the operation must finish with a single Commit call.
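+
+A sketch of the migration loop under these constraints (the store types are illustrative stand-ins, not the actual store/v2 API):
+
+```go
+package main
+
+import "fmt"
+
+// kv is one entry read from the store/v1 DB.
+type kv struct{ key, value string }
+
+// v2Store stands in for the combined ADR-40 KV store.
+type v2Store struct {
+	data      map[string]string
+	committed bool
+}
+
+func (s *v2Store) Set(k, v string) { s.data[k] = v }
+func (s *v2Store) Commit()         { s.committed = true }
+
+// migrate copies every entry straight into the new store, bypassing
+// any cache layer, and finishes with exactly one Commit call.
+func migrate(src []kv, dst *v2Store) {
+	for _, e := range src {
+		dst.Set(e.key, e.value) // direct write, no cache wrapper
+	}
+	dst.Commit() // single Commit for the whole migration
+}
+
+func main() {
+	src := []kv{{"balances/alice", "100"}, {"params/unbonding", "21d"}}
+	dst := &v2Store{data: map[string]string{}}
+	migrate(src, dst)
+	fmt.Println(len(dst.data), dst.committed) // 2 true
+}
+```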
+
+Inserting records into the `SC` (SMT) component is the bottleneck. Unfortunately, the SMT doesn't support batch transactions.
+Adding batch transactions to the `SC` layer is considered a feature for after the main release.
+
+## Consequences
+
+### Backwards Compatibility
+
+This ADR doesn't introduce any Cosmos SDK level API changes.
+
+Because we change the storage layout of the state machine, a storage hard fork and network upgrade are required to incorporate these changes. The SMT provides Merkle proof functionality; however, it is not compatible with ICS-23. Updating the proofs for ICS-23 compatibility is required.
+
+### Positive
+
+* Decoupling state from state commitment introduces better engineering opportunities for further optimizations and better storage patterns.
+* Performance improvements.
+* Joining the SMT-based camp, which has wider and more proven adoption than IAVL. Example projects which decided on SMT: Ethereum2, Diem (Libra), Trillian, Tezos, Celestia.
+* Multistore removal fixes a longstanding issue with the current MultiStore design.
+* Simplifies Merkle proofs - all modules, except IBC, need only a single pass for a Merkle proof.
+
+### Negative
+
+* Storage migration
+* LL SMT doesn't support pruning - we will need to add and test that functionality.
+* `SS` keys will have an overhead of a key prefix. This doesn't impact `SC` because all keys in `SC` have same size (they are hashed).
+
+### Neutral
+
+* Deprecating IAVL, which is one of the core proposals of the Cosmos Whitepaper.
+
+## Alternative designs
+
+Most of the alternative designs were evaluated in [state commitments and storage report](https://paper.dropbox.com/published/State-commitments-and-storage-review--BDvA1MLwRtOx55KRihJ5xxLbBw-KeEB7eOd11pNrZvVtqUgL3h).
+
+Ethereum research published [Verkle Trie](https://dankradfeist.de/ethereum/2021/06/18/verkle-trie-for-eth1.html) - an idea of combining polynomial commitments with merkle tree in order to reduce the tree height. This concept has a very good potential, but we think it's too early to implement it. The current, SMT based design could be easily updated to the Verkle Trie once other research implement all necessary libraries. The main advantage of the design described in this ADR is the separation of state commitments from the data storage and designing a more powerful interface.
+
+## Further Discussions
+
+### Evaluated KV Databases
+
+We evaluated existing KV databases for snapshot support. The following databases provide an efficient snapshot mechanism: Badger, RocksDB, [Pebble](https://github.com/cockroachdb/pebble). Databases which don't provide such support or are not production ready: boltdb, leveldb, goleveldb, memdb, lmdb.
+
+### RDBMS
+
+We could use an RDBMS instead of a simple KV store for state. Use of an RDBMS will require a Cosmos SDK API breaking change (`KVStore` interface) and will allow better data extraction and indexing solutions. Instead of saving an object as a single blob of bytes, we could save it as a record in a table in the state storage layer, and as `hash(key, protobuf(object))` in the SMT as outlined above. To verify that an object registered in the RDBMS is the same as the one committed to the SMT, one would need to load it from the RDBMS, marshal it using protobuf, hash it, and do an SMT search.
+
+### Off Chain Store
+
+We discussed a use case where modules can use a supporting database which is not automatically committed. The module will be responsible for having a sound storage model and can optionally use the feature discussed in the _Committing to an object without saving it_ section.
+
+## References
+
+* [IAVL What's Next?](https://github.com/cosmos/cosmos-sdk/issues/7100)
+* [IAVL overview](https://docs.google.com/document/d/16Z_hW2rSAmoyMENO-RlAhQjAG3mSNKsQueMnKpmcBv0/edit#heading=h.yd2th7x3o1iv) of its state as of v0.15
+* [State commitments and storage report](https://paper.dropbox.com/published/State-commitments-and-storage-review--BDvA1MLwRtOx55KRihJ5xxLbBw-KeEB7eOd11pNrZvVtqUgL3h)
+* [Celestia (LazyLedger) SMT](https://github.com/lazyledger/smt)
+* Facebook Diem (Libra) SMT [design](https://developers.diem.com/papers/jellyfish-merkle-tree/2021-01-14.pdf)
+* [Trillian Revocation Transparency](https://github.com/google/trillian/blob/master/docs/papers/RevocationTransparency.pdf), [Trillian Verifiable Data Structures](https://github.com/google/trillian/blob/master/docs/papers/VerifiableDataStructures.pdf).
+* Design and implementation [discussion](https://github.com/cosmos/cosmos-sdk/discussions/8297).
+* [How to Upgrade IBC Chains and their Clients](https://ibc.cosmos.network/main/ibc/upgrades/quick-guide/)
+* [ADR-40 Effect on IBC](https://github.com/cosmos/ibc-go/discussions/256)
diff --git a/copy-of-sdk-docs/build/architecture/adr-041-in-place-store-migrations.md b/copy-of-sdk-docs/build/architecture/adr-041-in-place-store-migrations.md
new file mode 100644
index 00000000..15c79589
--- /dev/null
+++ b/copy-of-sdk-docs/build/architecture/adr-041-in-place-store-migrations.md
@@ -0,0 +1,167 @@
+# ADR 041: In-Place Store Migrations
+
+## Changelog
+
+* 17.02.2021: Initial Draft
+
+## Status
+
+Accepted
+
+## Abstract
+
+This ADR introduces a mechanism to perform in-place state store migrations during chain software upgrades.
+
+## Context
+
+When a chain upgrade introduces state-breaking changes inside modules, the current procedure consists of exporting the whole state into a JSON file (via the `simd export` command), running migration scripts on the JSON file (`simd genesis migrate` command), clearing the stores (`simd unsafe-reset-all` command), and starting a new chain with the migrated JSON file as new genesis (optionally with a custom initial block height). An example of such a procedure can be seen [in the Cosmos Hub 3->4 migration guide](https://github.com/cosmos/gaia/blob/v4.0.3/docs/migration/cosmoshub-3.md#upgrade-procedure).
+
+This procedure is cumbersome for multiple reasons:
+
+* The procedure takes time. It can take hours to run the `export` command, plus some additional hours to run `InitChain` on the fresh chain using the migrated JSON.
+* The exported JSON file can be heavy (~100MB-1GB), making it difficult to view, edit and transfer, which in turn introduces additional work to solve these problems (such as [streaming genesis](https://github.com/cosmos/cosmos-sdk/issues/6936)).
+
+## Decision
+
+We propose a migration procedure based on modifying the KV store in-place without involving the JSON export-process-import flow described above.
+
+### Module `ConsensusVersion`
+
+We introduce a new method on the `AppModule` interface:
+
+```go
+type AppModule interface {
+ // --snip--
+ ConsensusVersion() uint64
+}
+```
+
+This method returns a `uint64` that serves as the state-breaking version of the module. It MUST be incremented on each consensus-breaking change introduced by the module. To avoid potential errors with default values, the initial version of a module MUST be set to 1. In the Cosmos SDK, version 1 corresponds to the modules in the v0.41 series.
+
+### Module-Specific Migration Functions
+
+For each consensus-breaking change introduced by the module, a migration script from ConsensusVersion `N` to version `N+1` MUST be registered in the `Configurator` using its newly-added `RegisterMigration` method. All modules receive a reference to the configurator in their `RegisterServices` method on `AppModule`, and this is where the migration functions should be registered. The migration functions should be registered in increasing order.
+
+```go
+func (am AppModule) RegisterServices(cfg module.Configurator) {
+ // --snip--
+ cfg.RegisterMigration(types.ModuleName, 1, func(ctx sdk.Context) error {
+ // Perform in-place store migrations from ConsensusVersion 1 to 2.
+ })
+ cfg.RegisterMigration(types.ModuleName, 2, func(ctx sdk.Context) error {
+ // Perform in-place store migrations from ConsensusVersion 2 to 3.
+ })
+ // etc.
+}
+```
+
+In other words, if the new ConsensusVersion of a module is `N`, then `N-1` migration functions MUST be registered in the configurator.
+
+In the Cosmos SDK, the migration functions are handled by each module's keeper, because the keeper holds the `sdk.StoreKey` used to perform in-place store migrations. To avoid overloading the keeper, each module uses a `Migrator` wrapper to handle the migration functions:
+
+```go
+// Migrator is a struct for handling in-place store migrations.
+type Migrator struct {
+ BaseKeeper
+}
+```
+
+Migration functions should live inside the `migrations/` folder of each module, and be called by the Migrator's methods. We propose the format `Migrate{M}to{N}` for method names.
+
+```go
+// Migrate1to2 migrates from version 1 to 2.
+func (m Migrator) Migrate1to2(ctx sdk.Context) error {
+ return v2bank.MigrateStore(ctx, m.keeper.storeKey) // v2bank is package `x/bank/migrations/v2`.
+}
+```
+
+Each module's migration functions are specific to the module's store evolutions, and are not described in this ADR. An example of x/bank store key migrations after the introduction of ADR-028 length-prefixed addresses can be seen in this [store.go code](https://github.com/cosmos/cosmos-sdk/blob/36f68eb9e041e20a5bb47e216ac5eb8b91f95471/x/bank/legacy/v043/store.go#L41-L62).
+
+### Tracking Module Versions in `x/upgrade`
+
+We introduce a new prefix store in `x/upgrade`'s store. This store tracks each module's current version; it can be modeled as a `map[string]uint64` from module name to module ConsensusVersion, and it is used when running the migrations (see next section for details). The key prefix used is `0x1`, and the key/value format is:
+
+```text
+0x1 | {bytes(module_name)} => BigEndian(module_consensus_version)
+```
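The key/value layout above can be sketched as two standalone Go helpers (illustrative names, not actual SDK code; `0x1` is the prefix stated above):

```go
import "encoding/binary"

// moduleVersionKey builds the store key 0x1 | bytes(module_name).
func moduleVersionKey(moduleName string) []byte {
	return append([]byte{0x1}, moduleName...)
}

// moduleVersionValue encodes a module's ConsensusVersion as a
// big-endian uint64, matching BigEndian(module_consensus_version).
func moduleVersionValue(version uint64) []byte {
	buf := make([]byte, 8)
	binary.BigEndian.PutUint64(buf, version)
	return buf
}
```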
+
+The initial state of the store is set from `app.go`'s `InitChainer` method.
+
+The UpgradeHandler signature needs to be updated to take a `VersionMap`, as well as return an upgraded `VersionMap` and an error:
+
+```diff
+- type UpgradeHandler func(ctx sdk.Context, plan Plan)
++ type UpgradeHandler func(ctx sdk.Context, plan Plan, versionMap VersionMap) (VersionMap, error)
+```
+
+To apply an upgrade, we query the `VersionMap` from the `x/upgrade` store and pass it into the handler. The handler runs the actual migration functions (see next section), and if successful, returns an updated `VersionMap` to be stored in state.
+
+```diff
+func (k UpgradeKeeper) ApplyUpgrade(ctx sdk.Context, plan types.Plan) error {
+ // --snip--
+- handler(ctx, plan)
++ updatedVM, err := handler(ctx, plan, k.GetModuleVersionMap(ctx)) // k.GetModuleVersionMap() fetches the VersionMap stored in state.
++ if err != nil {
++ return err
++ }
++
++ // Set the updated consensus versions to state
++ k.SetModuleVersionMap(ctx, updatedVM)
++ return nil
+}
+```
+
+A gRPC query endpoint to query the `VersionMap` stored in `x/upgrade`'s state will also be added, so that app developers can double-check the `VersionMap` before the upgrade handler runs.
+
+### Running Migrations
+
+Once all the migration handlers are registered inside the configurator (which happens at startup), running migrations can happen by calling the `RunMigrations` method on `module.Manager`. This function will loop through all modules, and for each module:
+
+* Get the old ConsensusVersion of the module from its `VersionMap` argument (let's call it `M`).
+* Fetch the new ConsensusVersion of the module from the `ConsensusVersion()` method on `AppModule` (call it `N`).
+* If `N>M`, run all registered migrations for the module sequentially `M -> M+1 -> M+2...` until `N`.
+ * There is a special case where there is no ConsensusVersion for the module, as this means that the module has been newly added during the upgrade. In this case, no migration function is run, and the module's current ConsensusVersion is saved to `x/upgrade`'s store.
+
+If a required migration is missing (e.g. because it was never registered in the `Configurator`), then the `RunMigrations` function will return an error.
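The loop described above can be sketched in plain Go. This is a simplified stand-in for `module.Manager.RunMigrations`, not the SDK implementation: `VersionMap`, `Manager`, and the handler signatures below are stand-ins, and store access is omitted.

```go
import "fmt"

// VersionMap maps module name to ConsensusVersion (simplified stand-in).
type VersionMap map[string]uint64

type migrationKey struct {
	module string
	from   uint64
}

type Manager struct {
	// consensusVersions holds each module's ConsensusVersion() in the new binary.
	consensusVersions map[string]uint64
	// migrations holds handlers registered via RegisterMigration(name, from, fn).
	migrations map[migrationKey]func() error
}

// RunMigrations runs registered migrations M -> M+1 -> ... -> N for each module.
func (m *Manager) RunMigrations(fromVM VersionMap) (VersionMap, error) {
	updated := VersionMap{}
	for name, toVersion := range m.consensusVersions {
		fromVersion, exists := fromVM[name]
		if !exists {
			// Module added during the upgrade: no migration to run,
			// just record its current ConsensusVersion.
			updated[name] = toVersion
			continue
		}
		for v := fromVersion; v < toVersion; v++ {
			fn, ok := m.migrations[migrationKey{name, v}]
			if !ok {
				return nil, fmt.Errorf("missing migration for %s: %d -> %d", name, v, v+1)
			}
			if err := fn(); err != nil {
				return nil, err
			}
		}
		updated[name] = toVersion
	}
	return updated, nil
}
```

The updated `VersionMap` returned here is what the upgrade handler hands back to `x/upgrade` to be persisted in state.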
+
+In practice, the `RunMigrations` method should be called from inside an `UpgradeHandler`.
+
+```go
+app.UpgradeKeeper.SetUpgradeHandler("my-plan", func(ctx sdk.Context, plan upgradetypes.Plan, vm module.VersionMap) (module.VersionMap, error) {
+ return app.mm.RunMigrations(ctx, vm)
+})
+```
+
+Assuming a chain upgrades at block `N`, the procedure runs as follows:
+
+* the old binary halts in `BeginBlock` when starting block `N`. Its store contains the ConsensusVersions of the old binary's modules.
+* the new binary starts at block `N`. The UpgradeHandler is set in the new binary, so it runs during the new binary's `BeginBlock`. Inside `x/upgrade`'s `ApplyUpgrade`, the `VersionMap` is retrieved from the (old binary's) store and passed into the `RunMigrations` function, migrating all module stores in-place before the modules' own `BeginBlock`s run.
+
+## Consequences
+
+### Backwards Compatibility
+
+This ADR introduces a new method `ConsensusVersion()` on `AppModule`, which all modules need to implement. It also alters the UpgradeHandler function signature. As such, it is not backwards-compatible.
+
+While modules MUST register their migration functions when bumping ConsensusVersions, running those functions via an upgrade handler is optional. An application may well decide not to call `RunMigrations` inside its upgrade handler and continue using the legacy JSON migration path.
+
+### Positive
+
+* Perform chain upgrades without manipulating JSON files.
+* While no benchmark has been made yet, it is probable that in-place store migrations will take less time than JSON migrations. The main reason supporting this claim is that both the `simd export` command on the old binary and the `InitChain` function on the new binary will be skipped.
+
+### Negative
+
+* Module developers MUST correctly track consensus-breaking changes in their modules. If a consensus-breaking change is introduced in a module without its corresponding `ConsensusVersion()` bump, then the `RunMigrations` function won't detect the migration, and the chain upgrade might be unsuccessful. Documentation should clearly reflect this.
+
+### Neutral
+
+* The Cosmos SDK will continue to support JSON migrations via the existing `simd export` and `simd genesis migrate` commands.
+* The current ADR does not allow creating, renaming or deleting stores; it only allows modifying existing store keys and values. The Cosmos SDK already has the `StoreLoader` for those operations.
+
+## Further Discussions
+
+## References
+
+* Initial discussion: https://github.com/cosmos/cosmos-sdk/discussions/8429
+* Implementation of `ConsensusVersion` and `RunMigrations`: https://github.com/cosmos/cosmos-sdk/pull/8485
+* Issue discussing `x/upgrade` design: https://github.com/cosmos/cosmos-sdk/issues/8514
diff --git a/copy-of-sdk-docs/build/architecture/adr-042-group-module.md b/copy-of-sdk-docs/build/architecture/adr-042-group-module.md
new file mode 100644
index 00000000..03fbe34b
--- /dev/null
+++ b/copy-of-sdk-docs/build/architecture/adr-042-group-module.md
@@ -0,0 +1,279 @@
+# ADR 042: Group Module
+
+## Changelog
+
+* 2020/04/09: Initial Draft
+
+## Status
+
+Draft
+
+## Abstract
+
+This ADR defines the `x/group` module which allows the creation and management of on-chain multi-signature accounts and enables voting for message execution based on configurable decision policies.
+
+## Context
+
+The legacy amino multi-signature mechanism of the Cosmos SDK has certain limitations:
+
+* Key rotation is not possible, although this can be solved with [account rekeying](adr-034-account-rekeying.md).
+* Thresholds can't be changed.
+* UX is cumbersome for non-technical users ([#5661](https://github.com/cosmos/cosmos-sdk/issues/5661)).
+* It requires `legacy_amino` sign mode ([#8141](https://github.com/cosmos/cosmos-sdk/issues/8141)).
+
+While the group module is not meant to be a total replacement for the current multi-signature accounts, it provides a solution to the limitations described above, with a more flexible key management system where keys can be added, updated or removed, as well as configurable thresholds.
+It's meant to be used with other access control modules such as [`x/feegrant`](./adr-029-fee-grant-module.md) and [`x/authz`](adr-030-authz-module.md) to simplify key management for individuals and organizations.
+
+The proof of concept of the group module can be found in https://github.com/cosmos/cosmos-sdk/tree/main/proto/cosmos/group/v1 and https://github.com/cosmos/cosmos-sdk/tree/main/x/group.
+
+## Decision
+
+We propose merging the `x/group` module with its supporting [ORM/Table Store package](https://github.com/cosmos/cosmos-sdk/tree/main/x/group/internal/orm) ([#7098](https://github.com/cosmos/cosmos-sdk/issues/7098)) into the Cosmos SDK and continuing development here. There will be a dedicated ADR for the ORM package.
+
+### Group
+
+A group is a composition of accounts with associated weights. It is not
+an account and doesn't have a balance. It doesn't in and of itself have any
+sort of voting or decision weight.
+Group members can create proposals and vote on them through group accounts using different decision policies.
+
+It has an `admin` account which can manage members in the group, update the group
+metadata and set a new admin.
+
+```protobuf
+message GroupInfo {
+
+ // group_id is the unique ID of this group.
+ uint64 group_id = 1;
+
+ // admin is the account address of the group's admin.
+ string admin = 2;
+
+ // metadata is any arbitrary metadata attached to the group.
+ bytes metadata = 3;
+
+ // version is used to track changes to a group's membership structure that
+ // would break existing proposals. Whenever a member weight has changed,
+ // or any member is added or removed, the version is incremented and will
+ // invalidate all proposals from older versions.
+ uint64 version = 4;
+
+ // total_weight is the sum of the group members' weights.
+ string total_weight = 5;
+}
+```
+
+```protobuf
+message GroupMember {
+
+ // group_id is the unique ID of the group.
+ uint64 group_id = 1;
+
+ // member is the member data.
+ Member member = 2;
+}
+
+// Member represents a group member with an account address,
+// non-zero weight and metadata.
+message Member {
+
+ // address is the member's account address.
+ string address = 1;
+
+ // weight is the member's voting weight that should be greater than 0.
+ string weight = 2;
+
+ // metadata is any arbitrary metadata attached to the member.
+ bytes metadata = 3;
+}
+```
+
+### Group Account
+
+A group account is an account associated with a group and a decision policy.
+A group account does have a balance.
+
+Group accounts are abstracted from groups because a single group may have
+multiple decision policies for different types of actions. Managing group
+membership separately from decision policies results in the least overhead
+and keeps membership consistent across different policies. The pattern that
+is recommended is to have a single master group account for a given group,
+and then to create separate group accounts with different decision policies
+and delegate the desired permissions from the master account to
+those "sub-accounts" using the [`x/authz` module](adr-030-authz-module.md).
+
+```protobuf
+message GroupAccountInfo {
+
+ // address is the group account address.
+ string address = 1;
+
+ // group_id is the ID of the Group the GroupAccount belongs to.
+ uint64 group_id = 2;
+
+ // admin is the account address of the group admin.
+ string admin = 3;
+
+ // metadata is any arbitrary metadata of this group account.
+ bytes metadata = 4;
+
+ // version is used to track changes to a group's GroupAccountInfo structure
+ // that invalidate active proposals from old versions.
+ uint64 version = 5;
+
+ // decision_policy specifies the group account's decision policy.
+ google.protobuf.Any decision_policy = 6 [(cosmos_proto.accepts_interface) = "cosmos.group.v1.DecisionPolicy"];
+}
+```
+
+Similarly to a group admin, a group account admin can update its metadata, decision policy or set a new group account admin.
+
+A group account can also be an admin or a member of a group.
+For instance, a group admin could be another group account which could "elect" the members, or it could be the same group that elects itself.
+
+### Decision Policy
+
+A decision policy is the mechanism by which members of a group can vote on
+proposals.
+
+All decision policies should have a minimum and maximum voting window.
+The minimum voting window is the minimum duration that must pass in order
+for a proposal to potentially pass, and it may be set to 0. The maximum voting
+window is the maximum time during which a proposal may be voted on and, if it
+has reached enough support, executed; after that it is closed.
+Both of these values must be less than a chain-wide max voting window parameter.
+
+We define the `DecisionPolicy` interface that all decision policies must implement:
+
+```go
+type DecisionPolicy interface {
+ codec.ProtoMarshaler
+
+ ValidateBasic() error
+ GetTimeout() types.Duration
+ Allow(tally Tally, totalPower string, votingDuration time.Duration) (DecisionPolicyResult, error)
+ Validate(g GroupInfo) error
+}
+
+type DecisionPolicyResult struct {
+ Allow bool
+ Final bool
+}
+```
+
+#### Threshold decision policy
+
+A threshold decision policy defines a minimum number of support votes (_yes_), based on a tally
+of voter weights, required for a proposal to pass. For
+this decision policy, abstain and veto are treated as no support (_no_).
+
+```protobuf
+message ThresholdDecisionPolicy {
+
+ // threshold is the minimum weighted sum of support votes for a proposal to succeed.
+ string threshold = 1;
+
+ // voting_period is the duration from submission of a proposal to the end of voting period
+ // Within this period, votes and exec messages can be submitted.
+ google.protobuf.Duration voting_period = 2 [(gogoproto.nullable) = false];
+}
+```
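The pass/fail logic of a threshold policy can be illustrated with a simplified `Allow`. This is a sketch, not the module's code: weights are `uint64` here (the module uses decimal strings), and `SimpleTally`, `Result`, and `thresholdAllow` are illustrative names.

```go
// SimpleTally and Result are simplified stand-ins for Tally and
// DecisionPolicyResult above.
type SimpleTally struct {
	Yes, No, Abstain, Veto uint64
}

type Result struct {
	Allow bool // proposal currently passes
	Final bool // no future vote can change the outcome
}

// thresholdAllow: only yes votes count as support; abstain and veto are
// treated as no support.
func thresholdAllow(t SimpleTally, threshold, totalPower uint64) Result {
	if t.Yes >= threshold {
		return Result{Allow: true, Final: true}
	}
	undecided := totalPower - (t.Yes + t.No + t.Abstain + t.Veto)
	if t.Yes+undecided < threshold {
		// Even if all remaining weight voted yes, the threshold is
		// unreachable: the rejection is final.
		return Result{Allow: false, Final: true}
	}
	return Result{Allow: false, Final: false}
}
```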
+
+### Proposal
+
+Any member of a group can submit a proposal for a group account to decide upon.
+A proposal consists of a set of `sdk.Msg`s that will be executed if the proposal
+passes, as well as any metadata associated with the proposal. These `sdk.Msg`s are validated as part of the `Msg/CreateProposal` request validation. They should also have their signer set as the group account.
+
+Internally, a proposal also tracks:
+
+* its current `Status`: submitted, closed or aborted
+* its `Result`: unfinalized, accepted or rejected
+* its `VoteState` in the form of a `Tally`, which is calculated on new votes and when executing the proposal.
+
+```protobuf
+// Tally represents the sum of weighted votes.
+message Tally {
+ option (gogoproto.goproto_getters) = false;
+
+ // yes_count is the weighted sum of yes votes.
+ string yes_count = 1;
+
+ // no_count is the weighted sum of no votes.
+ string no_count = 2;
+
+ // abstain_count is the weighted sum of abstainers.
+ string abstain_count = 3;
+
+ // veto_count is the weighted sum of vetoes.
+ string veto_count = 4;
+}
+```
+
+### Voting
+
+Members of a group can vote on proposals. There are four voting choices: yes, no, abstain and veto. Not
+all decision policies will support all of them. Votes can contain optional metadata.
+In the current implementation, the voting window begins as soon as a proposal
+is submitted.
+
+Voting internally updates the proposal `VoteState` as well as `Status` and `Result` if needed.
+
+### Executing Proposals
+
+Proposals will not be automatically executed by the chain in this current design,
+but rather a user must submit a `Msg/Exec` transaction to attempt to execute the
+proposal based on the current votes and decision policy. A future upgrade could
+automate this and have the group account (or a fee granter) pay.
+
+#### Changing Group Membership
+
+In the current implementation, updating a group or a group account after submitting a proposal will make it invalid. It will simply fail if someone calls `Msg/Exec` and will eventually be garbage collected.
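The invalidation mechanism relies on the group `version` field described earlier: a proposal records the version it was created against, and `Msg/Exec` rejects it if the group has changed since. A minimal sketch (stand-in types; the real check happens inside the module's exec handler):

```go
import "errors"

// SimpleGroup and SimpleProposal are stand-ins: a proposal records the
// group version it was submitted against.
type SimpleGroup struct{ Version uint64 }

type SimpleProposal struct{ GroupVersion uint64 }

// tryExec sketches the check performed on Msg/Exec: a proposal from an
// older group version simply fails.
func tryExec(p SimpleProposal, g SimpleGroup) error {
	if p.GroupVersion != g.Version {
		return errors.New("proposal invalidated: group updated after submission")
	}
	// ... tally votes, apply the decision policy, dispatch the messages
	return nil
}
```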
+
+### Notes on current implementation
+
+This section outlines the current implementation used in the proof of concept of the group module, but it is subject to change and further iteration.
+
+#### ORM
+
+The [ORM package](https://github.com/cosmos/cosmos-sdk/discussions/9156) defines tables, sequences and secondary indexes which are used in the group module.
+
+Groups are stored in state as part of a `groupTable`, the `group_id` being an auto-increment integer. Group members are stored in a `groupMemberTable`.
+
+Group accounts are stored in a `groupAccountTable`. The group account address is generated based on an auto-increment integer which is used to derive the group module `RootModuleKey` into a `DerivedModuleKey`, as stated in [ADR-033](adr-033-protobuf-inter-module-comm.md#modulekeys-and-moduleids). The group account is added as a new `ModuleAccount` through `x/auth`.
+
+Proposals are stored as part of the `proposalTable` using the `Proposal` type. The `proposal_id` is an auto-increment integer.
+
+Votes are stored in the `voteTable`. The primary key is based on the vote's `proposal_id` and `voter` account address.
+
+#### ADR-033 to route proposal messages
+
+Inter-module communication introduced by [ADR-033](adr-033-protobuf-inter-module-comm.md) can be used to route a proposal's messages using the `DerivedModuleKey` corresponding to the proposal's group account.
+
+## Consequences
+
+### Positive
+
+* Improved UX for multi-signature accounts allowing key rotation and custom decision policies.
+
+### Negative
+
+### Neutral
+
+* It uses ADR 033 so it will need to be implemented within the Cosmos SDK, but this doesn't imply necessarily any large refactoring of existing Cosmos SDK modules.
+* The current implementation of the group module uses the ORM package.
+
+## Further Discussions
+
+* Convergence of `x/group` and `x/gov`, as both support proposals and voting: https://github.com/cosmos/cosmos-sdk/discussions/9066
+* `x/group` possible future improvements:
+ * Execute proposals on submission (https://github.com/regen-network/regen-ledger/issues/288)
+ * Withdraw a proposal (https://github.com/regen-network/cosmos-modules/issues/41)
+ * Make `Tally` more flexible and support non-binary choices
+
+## References
+
+* Initial specification:
+ * https://gist.github.com/aaronc/b60628017352df5983791cad30babe56#group-module
+ * [#5236](https://github.com/cosmos/cosmos-sdk/pull/5236)
+* Proposal to add `x/group` into the Cosmos SDK: [#7633](https://github.com/cosmos/cosmos-sdk/issues/7633)
diff --git a/copy-of-sdk-docs/build/architecture/adr-043-nft-module.md b/copy-of-sdk-docs/build/architecture/adr-043-nft-module.md
new file mode 100644
index 00000000..7c8dfcd1
--- /dev/null
+++ b/copy-of-sdk-docs/build/architecture/adr-043-nft-module.md
@@ -0,0 +1,349 @@
+# ADR 43: NFT Module
+
+## Changelog
+
+* 2021-05-01: Initial Draft
+* 2021-07-02: Review updates
+* 2022-06-15: Add batch operation
+* 2022-11-11: Remove strict validation of classID and tokenID
+
+## Status
+
+PROPOSED
+
+## Abstract
+
+This ADR defines the `x/nft` module which is a generic implementation of NFTs, roughly "compatible" with ERC721. **Applications using the `x/nft` module must implement the following functions**:
+
+* `MsgNewClass` - Receive the user's request to create a class, and call `NewClass` on the `x/nft` module.
+* `MsgUpdateClass` - Receive the user's request to update a class, and call `UpdateClass` on the `x/nft` module.
+* `MsgMintNFT` - Receive the user's request to mint an NFT, and call `MintNFT` on the `x/nft` module.
+* `MsgBurnNFT` - Receive the user's request to burn an NFT, and call `BurnNFT` on the `x/nft` module.
+* `MsgUpdateNFT` - Receive the user's request to update an NFT, and call `UpdateNFT` on the `x/nft` module.
+
+## Context
+
+NFTs are more than just crypto art and can be very helpful for accruing value in the Cosmos ecosystem. As a result, the Cosmos Hub should implement NFT functions and enable a unified mechanism for storing and transferring the ownership of NFTs, as discussed in https://github.com/cosmos/cosmos-sdk/discussions/9065.
+
+As discussed in [#9065](https://github.com/cosmos/cosmos-sdk/discussions/9065), several potential solutions can be considered:
+
+* irismod/nft and modules/incubator/nft
+* CW721
+* DID NFTs
+* interNFT
+
+Since functions/use cases of NFTs are tightly connected with their logic, it is almost impossible to support all the NFTs' use cases in one Cosmos SDK module by defining and implementing different transaction types.
+
+Considering generic usage and compatibility with interchain protocols including IBC and Gravity Bridge, it is preferable to have a generic NFT module design that handles the generic NFT logic.
+This design enables composability: application-specific functions can be managed by other modules on the Cosmos Hub or on other Zones by importing the NFT module.
+
+The current design is based on the work done by [IRISnet team](https://github.com/irisnet/irismod/tree/master/modules/nft) and an older implementation in the [Cosmos repository](https://github.com/cosmos/modules/tree/master/incubator/nft).
+
+## Decision
+
+We create a `x/nft` module, which contains the following functionality:
+
+* Store NFTs and track their ownership.
+* Expose `Keeper` interface for composing modules to transfer, mint and burn NFTs.
+* Expose external `Message` interface for users to transfer ownership of their NFTs.
+* Query NFTs and their supply information.
+
+The proposed module is a base module for NFT app logic. Its goal is to provide a common layer for storage, basic transfer functionality and IBC. The module should not be used standalone.
+Instead, an app should create a specialized module to handle app-specific logic (e.g. NFT ID construction, royalties), as well as user-level minting and burning. Moreover, an app-specific module should handle auxiliary data to support the app logic (e.g. indexes, ORM, business data).
+
+All data carried over IBC must be part of the `NFT` or `Class` type described below. App-specific NFT data should be encoded in `NFT.data` for cross-chain integrity. Other NFT-related objects that are not important for integrity can live in the app-specific module.
+
+### Types
+
+We propose two main types:
+
+* `Class` -- describes an NFT class. We can think of it as a smart contract address.
+* `NFT` -- an object representing a unique, non-fungible asset. Each NFT is associated with a Class.
+
+#### Class
+
+NFT **Class** is comparable to an ERC-721 smart contract (it provides a description of the contract), under which a collection of NFTs can be created and managed.
+
+```protobuf
+message Class {
+ string id = 1;
+ string name = 2;
+ string symbol = 3;
+ string description = 4;
+ string uri = 5;
+ string uri_hash = 6;
+ google.protobuf.Any data = 7;
+}
+```
+
+* `id` is used as the primary index for storing the class; _required_
+* `name` is a descriptive name of the NFT class; _optional_
+* `symbol` is the symbol usually shown on exchanges for the NFT class; _optional_
+* `description` is a detailed description of the NFT class; _optional_
+* `uri` is a URI for the class metadata stored off chain. It should be a JSON file that contains metadata about the NFT class and NFT data schema ([OpenSea example](https://docs.opensea.io/docs/contract-level-metadata)); _optional_
+* `uri_hash` is a hash of the document pointed to by uri; _optional_
+* `data` is app specific metadata of the class; _optional_
+
+#### NFT
+
+We define a general model for `NFT` as follows.
+
+```protobuf
+message NFT {
+ string class_id = 1;
+ string id = 2;
+ string uri = 3;
+ string uri_hash = 4;
+ google.protobuf.Any data = 10;
+}
+```
+
+* `class_id` is the identifier of the NFT class where the NFT belongs; _required_
+* `id` is an identifier of the NFT, unique within the scope of its class. It is specified by the creator of the NFT and may be expanded to use DID in the future. `class_id` combined with `id` uniquely identifies an NFT and is used as the primary index for storing the NFT; _required_
+
+ ```text
+ {class_id}/{id} --> NFT (bytes)
+ ```
+
+* `uri` is a URI for the NFT metadata stored off chain. Should point to a JSON file that contains metadata about this NFT (Ref: [ERC721 standard and OpenSea extension](https://docs.opensea.io/docs/metadata-standards)); _required_
+* `uri_hash` is a hash of the document pointed to by uri; _optional_
+* `data` is an app specific data of the NFT. CAN be used by composing modules to specify additional properties of the NFT; _optional_
+
+This ADR doesn't specify the values that `data` can take; however, best practice is for upper-level NFT modules to clearly specify its contents. Although the value of this field doesn't provide the additional context required to manage NFT records (meaning the field could technically be removed from the specification), its existence enables basic informational/UI functionality.
+
+### `Keeper` Interface
+
+```go
+type Keeper interface {
+ NewClass(ctx sdk.Context, class Class)
+ UpdateClass(ctx sdk.Context, class Class)
+
+ Mint(ctx sdk.Context, nft NFT, receiver sdk.AccAddress) // updates totalSupply
+ BatchMint(ctx sdk.Context, tokens []NFT, receiver sdk.AccAddress) error
+
+ Burn(ctx sdk.Context, classID string, nftID string) // updates totalSupply
+ BatchBurn(ctx sdk.Context, classID string, nftIDs []string) error
+
+ Update(ctx sdk.Context, nft NFT)
+ BatchUpdate(ctx sdk.Context, tokens []NFT) error
+
+ Transfer(ctx sdk.Context, classID string, nftID string, receiver sdk.AccAddress)
+ BatchTransfer(ctx sdk.Context, classID string, nftIDs []string, receiver sdk.AccAddress) error
+
+ GetClass(ctx sdk.Context, classID string) Class
+ GetClasses(ctx sdk.Context) []Class
+
+ GetNFT(ctx sdk.Context, classID string, nftID string) NFT
+ GetNFTsOfClassByOwner(ctx sdk.Context, classID string, owner sdk.AccAddress) []NFT
+ GetNFTsOfClass(ctx sdk.Context, classID string) []NFT
+
+ GetOwner(ctx sdk.Context, classID string, nftID string) sdk.AccAddress
+ GetBalance(ctx sdk.Context, classID string, owner sdk.AccAddress) uint64
+ GetTotalSupply(ctx sdk.Context, classID string) uint64
+}
+```
+
+Other business logic implementations should be defined in composing modules that import `x/nft` and use its `Keeper`.
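To make the ownership and balance bookkeeping concrete, here is a minimal in-memory sketch of the transfer-related subset of the interface. This is not the SDK implementation: store-backed state is replaced by maps, addresses are plain strings, and `memKeeper`/`nftKey` are illustrative names; the `{class_id}/{id}` key mirrors the primary index described earlier.

```go
import "strings"

// memKeeper is an illustrative in-memory subset of the Keeper interface.
type memKeeper struct {
	owners map[string]string // "{class_id}/{id}" -> owner address
	supply map[string]uint64 // class_id -> total supply
}

func newMemKeeper() *memKeeper {
	return &memKeeper{owners: map[string]string{}, supply: map[string]uint64{}}
}

// nftKey builds the primary index {class_id}/{id}.
func nftKey(classID, id string) string { return classID + "/" + id }

func (k *memKeeper) Mint(classID, id, receiver string) {
	k.owners[nftKey(classID, id)] = receiver
	k.supply[classID]++ // Mint updates totalSupply
}

func (k *memKeeper) Transfer(classID, id, receiver string) {
	k.owners[nftKey(classID, id)] = receiver
}

func (k *memKeeper) GetOwner(classID, id string) string {
	return k.owners[nftKey(classID, id)]
}

func (k *memKeeper) GetBalance(classID, owner string) uint64 {
	var n uint64
	for key, o := range k.owners {
		if o == owner && strings.HasPrefix(key, classID+"/") {
			n++
		}
	}
	return n
}

func (k *memKeeper) GetTotalSupply(classID string) uint64 { return k.supply[classID] }
```

A composing module (e.g. a hypothetical x/cryptokitty) would call `Mint` at creation time and `Transfer` from its own message handlers, leaving all balance bookkeeping to x/nft.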
+
+### `Msg` Service
+
+```protobuf
+service Msg {
+ rpc Send(MsgSend) returns (MsgSendResponse);
+}
+
+message MsgSend {
+ string class_id = 1;
+ string id = 2;
+ string sender = 3;
+ string receiver = 4;
+}
+message MsgSendResponse {}
+```
+
+`MsgSend` can be used to transfer the ownership of an NFT to another address.
+
+The implementation outline of the server is as follows:
+
+```go
+type msgServer struct{
+ k Keeper
+}
+
+func (m msgServer) Send(ctx context.Context, msg *types.MsgSend) (*types.MsgSendResponse, error) {
+ // check current ownership
+ assertEqual(msg.Sender, m.k.GetOwner(msg.ClassId, msg.Id))
+
+ // transfer ownership
+ m.k.Transfer(msg.ClassId, msg.Id, msg.Receiver)
+
+ return &types.MsgSendResponse{}, nil
+}
+```
+
+The query service methods for the `x/nft` module are:
+
+```protobuf
+service Query {
+ // Balance queries the number of NFTs of a given class owned by the owner, same as balanceOf in ERC721
+ rpc Balance(QueryBalanceRequest) returns (QueryBalanceResponse) {
+ option (google.api.http).get = "/cosmos/nft/v1beta1/balance/{owner}/{class_id}";
+ }
+
+ // Owner queries the owner of the NFT based on its class and id, same as ownerOf in ERC721
+ rpc Owner(QueryOwnerRequest) returns (QueryOwnerResponse) {
+ option (google.api.http).get = "/cosmos/nft/v1beta1/owner/{class_id}/{id}";
+ }
+
+ // Supply queries the number of NFTs from the given class, same as totalSupply of ERC721.
+ rpc Supply(QuerySupplyRequest) returns (QuerySupplyResponse) {
+ option (google.api.http).get = "/cosmos/nft/v1beta1/supply/{class_id}";
+ }
+
+ // NFTs queries all NFTs of a given class or owner (at least one of the two must be provided), similar to tokenByIndex in ERC721Enumerable
+ rpc NFTs(QueryNFTsRequest) returns (QueryNFTsResponse) {
+ option (google.api.http).get = "/cosmos/nft/v1beta1/nfts";
+ }
+
+ // NFT queries an NFT based on its class and id.
+ rpc NFT(QueryNFTRequest) returns (QueryNFTResponse) {
+ option (google.api.http).get = "/cosmos/nft/v1beta1/nfts/{class_id}/{id}";
+ }
+
+ // Class queries an NFT class based on its id
+ rpc Class(QueryClassRequest) returns (QueryClassResponse) {
+ option (google.api.http).get = "/cosmos/nft/v1beta1/classes/{class_id}";
+ }
+
+ // Classes queries all NFT classes
+ rpc Classes(QueryClassesRequest) returns (QueryClassesResponse) {
+ option (google.api.http).get = "/cosmos/nft/v1beta1/classes";
+ }
+}
+
+// QueryBalanceRequest is the request type for the Query/Balance RPC method
+message QueryBalanceRequest {
+ string class_id = 1;
+ string owner = 2;
+}
+
+// QueryBalanceResponse is the response type for the Query/Balance RPC method
+message QueryBalanceResponse {
+ uint64 amount = 1;
+}
+
+// QueryOwnerRequest is the request type for the Query/Owner RPC method
+message QueryOwnerRequest {
+ string class_id = 1;
+ string id = 2;
+}
+
+// QueryOwnerResponse is the response type for the Query/Owner RPC method
+message QueryOwnerResponse {
+ string owner = 1;
+}
+
+// QuerySupplyRequest is the request type for the Query/Supply RPC method
+message QuerySupplyRequest {
+ string class_id = 1;
+}
+
+// QuerySupplyResponse is the response type for the Query/Supply RPC method
+message QuerySupplyResponse {
+ uint64 amount = 1;
+}
+
+// QueryNFTsRequest is the request type for the Query/NFTs RPC method
+message QueryNFTsRequest {
+ string class_id = 1;
+ string owner = 2;
+ cosmos.base.query.v1beta1.PageRequest pagination = 3;
+}
+
+// QueryNFTsResponse is the response type for the Query/NFTs RPC methods
+message QueryNFTsResponse {
+ repeated cosmos.nft.v1beta1.NFT nfts = 1;
+ cosmos.base.query.v1beta1.PageResponse pagination = 2;
+}
+
+// QueryNFTRequest is the request type for the Query/NFT RPC method
+message QueryNFTRequest {
+ string class_id = 1;
+ string id = 2;
+}
+
+// QueryNFTResponse is the response type for the Query/NFT RPC method
+message QueryNFTResponse {
+ cosmos.nft.v1beta1.NFT nft = 1;
+}
+
+// QueryClassRequest is the request type for the Query/Class RPC method
+message QueryClassRequest {
+ string class_id = 1;
+}
+
+// QueryClassResponse is the response type for the Query/Class RPC method
+message QueryClassResponse {
+ cosmos.nft.v1beta1.Class class = 1;
+}
+
+// QueryClassesRequest is the request type for the Query/Classes RPC method
+message QueryClassesRequest {
+ // pagination defines an optional pagination for the request.
+ cosmos.base.query.v1beta1.PageRequest pagination = 1;
+}
+
+// QueryClassesResponse is the response type for the Query/Classes RPC method
+message QueryClassesResponse {
+ repeated cosmos.nft.v1beta1.Class classes = 1;
+ cosmos.base.query.v1beta1.PageResponse pagination = 2;
+}
+```
+
+### Interoperability
+
+Interoperability is about reusing assets across modules and chains. The former is achieved by ADR-33 (Protobuf client-server communication); at the time of writing, ADR-33 is not finalized. The latter is achieved by IBC. Here we focus on the IBC side.
+IBC is implemented per module. We agreed that NFTs will be recorded and managed in x/nft. This requires the creation of a new IBC standard and an implementation of it.
+
+For IBC interoperability, NFT custom modules MUST use the NFT object type understood by the IBC client. So, for x/nft interoperability, custom NFT implementations (example: x/cryptokitty) should use the canonical x/nft module and proxy all NFT balance-keeping functionality to x/nft, or else re-implement all functionality using the NFT object type understood by the IBC client. In other words, x/nft becomes the standard NFT registry for all Cosmos NFTs (example: x/cryptokitty will register a kitty NFT in x/nft and use x/nft for bookkeeping). This was [discussed](https://github.com/cosmos/cosmos-sdk/discussions/9065#discussioncomment-873206) in the context of using x/bank as a general asset balance book. Not using x/nft will require implementing another module for IBC.
+
+## Consequences
+
+### Backward Compatibility
+
+No backward incompatibilities.
+
+### Forward Compatibility
+
+This specification conforms to the ERC-721 smart contract specification for NFT identifiers. Note that ERC-721 defines uniqueness based on (contract address, uint256 tokenId), and we conform to this implicitly because a single module is currently aimed to track NFT identifiers. Note: use of the (mutable) data field to determine uniqueness is not safe.
+
+### Positive
+
+* NFT identifiers available on Cosmos Hub.
+* Ability to build different NFT modules for the Cosmos Hub, e.g., ERC-721.
+* NFT module which supports interoperability with IBC and other cross-chain infrastructures like Gravity Bridge
+
+### Negative
+
+* New IBC app is required for x/nft
+* CW721 adapter is required
+
+### Neutral
+
+* Other functions need more modules. For example, a custody module is needed for NFT trading function, a collectible module is needed for defining NFT properties.
+
+## Further Discussions
+
+For other kinds of applications on the Hub, more app-specific modules can be developed in the future:
+
+* `x/nft/custody`: custody of NFTs to support trading functionality.
+* `x/nft/marketplace`: selling and buying NFTs using sdk.Coins.
+* `x/fractional`: a module to split ownership of an asset (NFT or other assets) among multiple stakeholders. `x/group` should work for most of these cases.
+
+Other networks in the Cosmos ecosystem could design and implement their own NFT modules for specific NFT applications and use cases.
+
+## References
+
+* Initial discussion: https://github.com/cosmos/cosmos-sdk/discussions/9065
+* x/nft: initialize module: https://github.com/cosmos/cosmos-sdk/pull/9174
+* [ADR 033](https://github.com/cosmos/cosmos-sdk/blob/main/docs/architecture/adr-033-protobuf-inter-module-comm.md)
diff --git a/copy-of-sdk-docs/build/architecture/adr-044-protobuf-updates-guidelines.md b/copy-of-sdk-docs/build/architecture/adr-044-protobuf-updates-guidelines.md
new file mode 100644
index 00000000..595b16de
--- /dev/null
+++ b/copy-of-sdk-docs/build/architecture/adr-044-protobuf-updates-guidelines.md
@@ -0,0 +1,129 @@
+# ADR 044: Guidelines for Updating Protobuf Definitions
+
+## Changelog
+
+* 28.06.2021: Initial Draft
+* 02.12.2021: Add `Since:` comment for new fields
+* 21.07.2022: Remove the rule of no new `Msg` in the same proto version.
+
+## Status
+
+Draft
+
+## Abstract
+
+This ADR provides guidelines and recommended practices when updating Protobuf definitions. These guidelines are targeting module developers.
+
+## Context
+
+The Cosmos SDK maintains a set of [Protobuf definitions](https://github.com/cosmos/cosmos-sdk/tree/main/proto/cosmos). It is important to correctly design Protobuf definitions to avoid any breaking changes within the same version. The reasons are to not break tooling (including indexers and explorers), wallets and other third-party integrations.
+
+When making changes to these Protobuf definitions, the Cosmos SDK currently only follows [Buf's](https://docs.buf.build/) recommendations. We noticed however that Buf's recommendations might still result in breaking changes in the SDK in some cases. For example:
+
+* Adding fields to `Msg`s. Adding fields is not a Protobuf spec-breaking operation. However, when adding new fields to `Msg`s, the unknown field rejection will throw an error when sending the new `Msg` to an older node.
+* Marking fields as `reserved`. Protobuf proposes the `reserved` keyword for removing fields without the need to bump the package version. However, by doing so, client backwards compatibility is broken as Protobuf doesn't generate anything for `reserved` fields. See [#9446](https://github.com/cosmos/cosmos-sdk/issues/9446) for more details on this issue.
+
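As an illustrative sketch (the message and field names here are hypothetical, and the before/after definitions would live in separate versions of the file), this is what removing a field with `reserved` looks like; the wire encoding stays compatible, but code generators emit nothing for reserved fields:

```protobuf
// Before: a field that we later want to remove.
message MsgExample {
  string payer   = 1;
  string granter = 2;
}

// After: the field is removed, and its number and name are reserved so
// they can never be reused. This is spec-compliant, but any client
// compiled against the old definition loses its `granter` accessors.
message MsgExample {
  string payer = 1;
  reserved 2;
  reserved "granter";
}
```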
+Moreover, module developers often face other questions around Protobuf definitions such as "Can I rename a field?" or "Can I deprecate a field?" This ADR aims to answer all these questions by providing clear guidelines about allowed updates for Protobuf definitions.
+
+## Decision
+
+We decide to keep [Buf's](https://docs.buf.build/) recommendations with the following exceptions:
+
+* `UNARY_RPC`: the Cosmos SDK currently does not support streaming RPCs.
+* `COMMENT_FIELD`: the Cosmos SDK allows fields with no comments.
+* `SERVICE_SUFFIX`: we use the `Query` and `Msg` service naming convention, which doesn't use the `-Service` suffix.
+* `PACKAGE_VERSION_SUFFIX`: some packages, such as `cosmos.crypto.ed25519`, don't use a version suffix.
+* `RPC_REQUEST_STANDARD_NAME`: Requests for the `Msg` service don't have the `-Request` suffix to keep backwards compatibility.
+
+On top of Buf's recommendations we add the following guidelines that are specific to the Cosmos SDK.
+
+### Updating Protobuf Definition Without Bumping Version
+
+#### 1. Module developers MAY add new Protobuf definitions
+
+Module developers MAY add new `message`s, new `Service`s, new `rpc` endpoints, and new fields to existing messages. This recommendation follows the Protobuf specification, but is added in this document for clarity, as the SDK requires one additional change.
+
+The SDK requires the Protobuf comment of the new addition to contain one line with the following format:
+
+```protobuf
+// Since: cosmos-sdk <version>{, <version>...}
+```
+
+Where each `version` denotes a minor ("0.45") or patch ("0.44.5") version from which the field is available. This will greatly help client libraries, which can optionally use reflection or custom code generation to show or hide these fields depending on the targeted node version.
+
+As examples, the following comments are valid:
+
+```protobuf
+// Since: cosmos-sdk 0.44
+
+// Since: cosmos-sdk 0.42.11, 0.44.5
+```
+
+and the following ones are NOT valid:
+
+```protobuf
+// Since cosmos-sdk v0.44
+
+// since: cosmos-sdk 0.44
+
+// Since: cosmos-sdk 0.42.11 0.44.5
+
+// Since: Cosmos SDK 0.42.11, 0.44.5
+```
+
+#### 2. Fields MAY be marked as `deprecated`, and nodes MAY implement a protocol-breaking change for handling these fields
+
+Protobuf supports the [`deprecated` field option](https://developers.google.com/protocol-buffers/docs/proto#options), and this option MAY be used on any field, including `Msg` fields. If a node handles a Protobuf message with a non-empty deprecated field, the node MAY change its behavior upon processing it, even in a protocol-breaking way. When possible, the node MUST handle backwards compatibility without breaking the consensus (unless we increment the proto version).
+
+As an example, the Cosmos SDK v0.42 to v0.43 update contained two Protobuf-breaking changes, listed below. Instead of bumping the package versions from `v1beta1` to `v1`, the SDK team decided to follow this guideline, by reverting the breaking changes, marking those changes as deprecated, and modifying the node implementation when processing messages with deprecated fields. More specifically:
+
+* The Cosmos SDK recently removed support for [time-based software upgrades](https://github.com/cosmos/cosmos-sdk/pull/8849). As such, the `time` field has been marked as deprecated in `cosmos.upgrade.v1beta1.Plan`. Moreover, the node will reject any proposal containing an upgrade Plan whose `time` field is non-empty.
+* The Cosmos SDK now supports [governance split votes](./adr-037-gov-split-vote.md). When querying for votes, the returned `cosmos.gov.v1beta1.Vote` message has its `option` field (used for 1 vote option) deprecated in favor of its `options` field (allowing multiple vote options). Whenever possible, the SDK still populates the deprecated `option` field, that is, if and only if the `len(options) == 1` and `options[0].Weight == 1.0`.
+
+#### 3. Fields MUST NOT be renamed
+
+Whereas the official Protobuf recommendations do not prohibit renaming fields, as doing so does not break the Protobuf binary representation, the SDK explicitly forbids renaming fields in Protobuf structs. The main reason for this choice is to avoid introducing breaking changes for clients, which often rely on hard-coded fields from generated types. Moreover, renaming fields leads to client-breaking JSON representations of Protobuf definitions, used in REST endpoints and in the CLI.
+
+### Incrementing Protobuf Package Version
+
+TODO, needs architecture review. Some topics:
+
+* Bumping versions frequency
+* When bumping versions, should the Cosmos SDK support both versions?
+ * i.e. v1beta1 -> v1, should we have two folders in the Cosmos SDK, and handlers for both versions?
+* mention ADR-023 Protobuf naming
+
+## Consequences
+
+> This section describes the resulting context, after applying the decision. All consequences should be listed here, not just the "positive" ones. A particular decision may have positive, negative, and neutral consequences, but all of them affect the team and project in the future.
+
+### Backwards Compatibility
+
+> All ADRs that introduce backwards incompatibilities must include a section describing these incompatibilities and their severity. The ADR must explain how the author proposes to deal with these incompatibilities. ADR submissions without a sufficient backwards compatibility treatise may be rejected outright.
+
+### Positive
+
+* less pain to tool developers
+* more compatibility in the ecosystem
+* ...
+
+### Negative
+
+{negative consequences}
+
+### Neutral
+
+* more rigor in Protobuf review
+
+## Further Discussions
+
+This ADR is still in the DRAFT stage, and the "Incrementing Protobuf Package Version" will be filled in once we make a decision on how to correctly do it.
+
+## Test Cases [optional]
+
+Test cases for an implementation are mandatory for ADRs that are affecting consensus changes. Other ADRs can choose to include links to test cases if applicable.
+
+## References
+
+* [#9445](https://github.com/cosmos/cosmos-sdk/issues/9445) Release proto definitions v1
+* [#9446](https://github.com/cosmos/cosmos-sdk/issues/9446) Address v1beta1 proto breaking changes
diff --git a/copy-of-sdk-docs/build/architecture/adr-045-check-delivertx-middlewares.md b/copy-of-sdk-docs/build/architecture/adr-045-check-delivertx-middlewares.md
new file mode 100644
index 00000000..f55c2159
--- /dev/null
+++ b/copy-of-sdk-docs/build/architecture/adr-045-check-delivertx-middlewares.md
@@ -0,0 +1,312 @@
+# ADR 045: BaseApp `{Check,Deliver}Tx` as Middlewares
+
+## Changelog
+
+* 20.08.2021: Initial draft.
+* 07.12.2021: Update `tx.Handler` interface ([\#10693](https://github.com/cosmos/cosmos-sdk/pull/10693)).
+* 17.05.2022: ADR is abandoned, as middlewares are deemed too hard to reason about.
+
+## Status
+
+ABANDONED. Replacement is being discussed in [#11955](https://github.com/cosmos/cosmos-sdk/issues/11955).
+
+## Abstract
+
+This ADR replaces the current BaseApp `runTx` and antehandlers design with a middleware-based design.
+
+## Context
+
+BaseApp's implementation of ABCI `{Check,Deliver}Tx()` and its own `Simulate()` method call the `runTx` method under the hood, which first runs antehandlers, then executes `Msg`s. However, the [transaction Tips](https://github.com/cosmos/cosmos-sdk/issues/9406) and [refunding unused gas](https://github.com/cosmos/cosmos-sdk/issues/2150) use cases require custom logic to be run after the `Msg`s execution. There is currently no way to achieve this.
+
+A naive solution would be to add post-`Msg` hooks to BaseApp. However, the Cosmos SDK team is thinking in parallel about the bigger picture of making app wiring simpler ([#9182](https://github.com/cosmos/cosmos-sdk/discussions/9182)), which includes making BaseApp more lightweight and modular.
+
+## Decision
+
+We decide to transform BaseApp's implementation of ABCI `{Check,Deliver}Tx` and its own `Simulate` method to use a middleware-based design.
+
+The two following interfaces are the base of the middleware design, and are defined in `types/tx`:
+
+```go
+type Handler interface {
+ CheckTx(ctx context.Context, req Request, checkReq RequestCheckTx) (Response, ResponseCheckTx, error)
+ DeliverTx(ctx context.Context, req Request) (Response, error)
+ SimulateTx(ctx context.Context, req Request) (Response, error)
+}
+
+type Middleware func(Handler) Handler
+```
+
+where we define the following arguments and return types:
+
+```go
+type Request struct {
+ Tx sdk.Tx
+ TxBytes []byte
+}
+
+type Response struct {
+ GasWanted uint64
+ GasUsed uint64
+ // MsgResponses is an array containing each Msg service handler's response
+ // type, packed in an Any. This will get proto-serialized into the `Data` field
+ // in the ABCI Check/DeliverTx responses.
+ MsgResponses []*codectypes.Any
+ Log string
+ Events []abci.Event
+}
+
+type RequestCheckTx struct {
+ Type abci.CheckTxType
+}
+
+type ResponseCheckTx struct {
+ Priority int64
+}
+```
+
+Please note that because CheckTx handles separate logic related to mempool prioritization, its signature is different from that of DeliverTx and SimulateTx.
+
+BaseApp holds a reference to a `tx.Handler`:
+
+```go
+type BaseApp struct {
+ // other fields
+ txHandler tx.Handler
+}
+```
+
+BaseApp's ABCI `{Check,Deliver}Tx()` and `Simulate()` methods simply call `app.txHandler.{Check,Deliver,Simulate}Tx()` with the relevant arguments. For example, for `DeliverTx`:
+
+```go
+func (app *BaseApp) DeliverTx(req abci.RequestDeliverTx) abci.ResponseDeliverTx {
+ var abciRes abci.ResponseDeliverTx
+ ctx := app.getContextForTx(runTxModeDeliver, req.Tx)
+ res, err := app.txHandler.DeliverTx(ctx, tx.Request{TxBytes: req.Tx})
+ if err != nil {
+ abciRes = sdkerrors.ResponseDeliverTx(err, uint64(res.GasUsed), uint64(res.GasWanted), app.trace)
+ return abciRes
+ }
+
+ abciRes, err = convertTxResponseToDeliverTx(res)
+ if err != nil {
+ return sdkerrors.ResponseDeliverTx(err, uint64(res.GasUsed), uint64(res.GasWanted), app.trace)
+ }
+
+ return abciRes
+}
+
+// convertTxResponseToDeliverTx converts a tx.Response into a abci.ResponseDeliverTx.
+func convertTxResponseToDeliverTx(txRes tx.Response) (abci.ResponseDeliverTx, error) {
+ data, err := makeABCIData(txRes)
+ if err != nil {
+ return abci.ResponseDeliverTx{}, err
+ }
+
+ return abci.ResponseDeliverTx{
+ Data: data,
+ Log: txRes.Log,
+ Events: txRes.Events,
+ }, nil
+}
+
+// makeABCIData generates the Data field to be sent to ABCI Check/DeliverTx.
+func makeABCIData(txRes tx.Response) ([]byte, error) {
+ return proto.Marshal(&sdk.TxMsgData{MsgResponses: txRes.MsgResponses})
+}
+```
+
+The implementations are similar for `BaseApp.CheckTx` and `BaseApp.Simulate`.
+
+`baseapp.txHandler`'s three methods' implementations can obviously be monolithic functions, but for modularity we propose a middleware composition design, where a middleware is simply a function that takes a `tx.Handler`, and returns another `tx.Handler` wrapped around the previous one.
+
+### Implementing a Middleware
+
+In practice, middlewares are created by a Go function that takes as arguments the parameters needed by the middleware, and returns a `tx.Middleware`.
+
+For example, for creating an arbitrary `MyMiddleware`, we can implement:
+
+```go
+// myTxHandler is the tx.Handler of this middleware. Note that it holds a
+// reference to the next tx.Handler in the stack.
+type myTxHandler struct {
+ // next is the next tx.Handler in the middleware stack.
+ next tx.Handler
+ // some other fields that are relevant to the middleware can be added here
+}
+
+// NewMyMiddleware returns a middleware that does this and that.
+func NewMyMiddleware(arg1, arg2) tx.Middleware {
+ return func (txh tx.Handler) tx.Handler {
+ return myTxHandler{
+ next: txh,
+ // optionally, set arg1, arg2... if they are needed in the middleware
+ }
+ }
+}
+
+// Assert myTxHandler is a tx.Handler.
+var _ tx.Handler = myTxHandler{}
+
+func (h myTxHandler) CheckTx(ctx context.Context, req Request, checkReq RequestCheckTx) (Response, ResponseCheckTx, error) {
+ // CheckTx specific pre-processing logic
+
+ // run the next middleware
+ res, checkRes, err := h.next.CheckTx(ctx, req, checkReq)
+
+ // CheckTx specific post-processing logic
+
+ return res, checkRes, err
+}
+
+func (h myTxHandler) DeliverTx(ctx context.Context, req Request) (Response, error) {
+ // DeliverTx specific pre-processing logic
+
+ // run the next middleware
+ res, err := h.next.DeliverTx(ctx, req)
+
+ // DeliverTx specific post-processing logic
+
+ return res, err
+}
+
+func (h myTxHandler) SimulateTx(ctx context.Context, req Request) (Response, error) {
+ // SimulateTx specific pre-processing logic
+
+ // run the next middleware
+ res, err := h.next.SimulateTx(ctx, req)
+
+ // SimulateTx specific post-processing logic
+
+ return res, err
+}
+```
+
+### Composing Middlewares
+
+While BaseApp simply holds a reference to a `tx.Handler`, this `tx.Handler` itself is defined using a middleware stack. The Cosmos SDK exposes a base (i.e. innermost) `tx.Handler` called `RunMsgsTxHandler`, which executes messages.
+
+Then, the app developer can compose multiple middlewares on top of the base `tx.Handler`. Each middleware can run pre- and post-processing logic around its next middleware, as described in the section above. Conceptually, as an example, given the middlewares `A`, `B`, and `C` and the base `tx.Handler` `H`, the stack looks like:
+
+```text
+A.pre
+ B.pre
+ C.pre
+ H # The base tx.handler, for example `RunMsgsTxHandler`
+ C.post
+ B.post
+A.post
+```
+
+We define a `ComposeMiddlewares` function for composing middlewares. It takes the base handler as first argument, and middlewares in the "outer to inner" order. For the above stack, the final `tx.Handler` is:
+
+```go
+txHandler := middleware.ComposeMiddlewares(H, A, B, C)
+```
+
+The middleware is set in BaseApp via its `SetTxHandler` setter:
+
+```go
+// simapp/app.go
+
+txHandler := middleware.ComposeMiddlewares(...)
+app.SetTxHandler(txHandler)
+```
+
+The app developer can define their own middlewares, or use the Cosmos SDK's pre-defined middlewares from `middleware.NewDefaultTxHandler()`.
+
+### Middlewares Maintained by the Cosmos SDK
+
+While the app developer can define and compose the middlewares of their choice, the Cosmos SDK provides a set of middlewares that caters for the ecosystem's most common use cases. These middlewares are:
+
+| Middleware | Description |
+| ----------------------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
+| RunMsgsTxHandler | This is the base `tx.Handler`. It replaces the old baseapp's `runMsgs`, and executes a transaction's `Msg`s. |
+| TxDecoderMiddleware | This middleware takes in transaction raw bytes, and decodes them into a `sdk.Tx`. It replaces the `baseapp.txDecoder` field, so that BaseApp stays as thin as possible. Since most middlewares read the contents of the `sdk.Tx`, the TxDecoderMiddleware should be run first in the middleware stack. |
+| {Antehandlers} | Each antehandler is converted to its own middleware. These middlewares perform signature verification, fee deductions and other validations on the incoming transaction. |
+| IndexEventsTxMiddleware | This is a simple middleware that chooses which events to index in Tendermint. It replaces `baseapp.indexEvents` (which unfortunately still exists in baseapp too, because it's used to index Begin/EndBlock events). |
+| RecoveryTxMiddleware | This middleware recovers from panics. It replaces baseapp.runTx's panic recovery described in [ADR-022](./adr-022-custom-panic-handling.md). |
+| GasTxMiddleware | This replaces the [`Setup`](https://github.com/cosmos/cosmos-sdk/blob/v0.43.0/x/auth/ante/setup.go) Antehandler. It sets a GasMeter on sdk.Context. Note that before, GasMeter was set on sdk.Context inside the antehandlers, and there was some mess around the fact that antehandlers had their own panic recovery system so that the GasMeter could be read by baseapp's recovery system. Now, this mess is all removed: one middleware sets GasMeter, another one handles recovery. |
+
+### Similarities and Differences between Antehandlers and Middlewares
+
+The middleware-based design builds upon the existing antehandlers design described in [ADR-010](./adr-010-modular-antehandler.md). Even though the final decision of ADR-010 was to go with the "Simple Decorators" approach, the middleware design is actually very similar to the other [Decorator Pattern](./adr-010-modular-antehandler.md#decorator-pattern) proposal, also used in [weave](https://github.com/iov-one/weave).
+
+#### Similarities with Antehandlers
+
+* Designed as chaining/composing small modular pieces.
+* Allow code reuse for `{Check,Deliver}Tx` and for `Simulate`.
+* Set up in `app.go`, and easily customizable by app developers.
+* Order is important.
+
+#### Differences with Antehandlers
+
+* Antehandlers run only before `Msg` execution, whereas middlewares can run both before and after.
+* The middleware approach uses separate methods for `{Check,Deliver,Simulate}Tx`, whereas the antehandlers take a `simulate bool` flag and use the `sdkCtx.Is{Check,Recheck}Tx()` flags to determine which transaction mode we are in.
+* The middleware design lets each middleware hold a reference to the next middleware, whereas the antehandlers pass a `next` argument in the `AnteHandle` method.
+* The middleware design uses Go's standard `context.Context`, whereas the antehandlers use `sdk.Context`.
+
+## Consequences
+
+### Backwards Compatibility
+
+Since this refactor moves some logic out of BaseApp and into middlewares, it introduces API-breaking changes for app developers. Most notably, instead of creating an antehandler chain in `app.go`, app developers need to create a middleware stack:
+
+```diff
+- anteHandler, err := ante.NewAnteHandler(
+- ante.HandlerOptions{
+- AccountKeeper: app.AccountKeeper,
+- BankKeeper: app.BankKeeper,
+- SignModeHandler: encodingConfig.TxConfig.SignModeHandler(),
+- FeegrantKeeper: app.FeeGrantKeeper,
+- SigGasConsumer: ante.DefaultSigVerificationGasConsumer,
+- },
+-)
++txHandler, err := authmiddleware.NewDefaultTxHandler(authmiddleware.TxHandlerOptions{
++ Debug: app.Trace(),
++ IndexEvents: indexEvents,
++ LegacyRouter: app.legacyRouter,
++ MsgServiceRouter: app.msgSvcRouter,
++ LegacyAnteHandler: anteHandler,
++ TxDecoder: encodingConfig.TxConfig.TxDecoder,
++})
+if err != nil {
+ panic(err)
+}
+- app.SetAnteHandler(anteHandler)
++ app.SetTxHandler(txHandler)
+```
+
+Other, more minor API-breaking changes will also be documented in the CHANGELOG. As usual, the Cosmos SDK will provide a release migration document for app developers.
+
+This ADR does not introduce any state-machine-, client- or CLI-breaking changes.
+
+### Positive
+
+* Allow custom logic to be run before and after `Msg` execution. This enables the [tips](https://github.com/cosmos/cosmos-sdk/issues/9406) and [gas refund](https://github.com/cosmos/cosmos-sdk/issues/2150) use cases, and possibly others.
+* Make BaseApp more lightweight, and defer complex logic to small modular components.
+* Separate paths for `{Check,Deliver,Simulate}Tx` with different return types. This allows for improved readability (replace `if sdkCtx.IsRecheckTx() && !simulate {...}` with separate methods) and more flexibility (e.g. returning a `priority` in `ResponseCheckTx`).
+
+### Negative
+
+* It is hard to understand at first glance the state updates that would occur after a middleware runs, given the `sdk.Context` and `tx`. A middleware can have an arbitrary number of nested middlewares being called within its function body, each possibly doing some pre- and post-processing before calling the next middleware on the chain. Thus, to understand what a middleware is doing, one must also understand what every middleware further along the chain does, and the order of middlewares matters. This can get quite complicated to understand.
+* API-breaking changes for app developers.
+
+### Neutral
+
+No neutral consequences.
+
+## Further Discussions
+
+* [#9934](https://github.com/cosmos/cosmos-sdk/discussions/9934) Decomposing BaseApp's other ABCI methods into middlewares.
+* Replace `sdk.Tx` interface with the concrete protobuf Tx type in the `tx.Handler` methods signature.
+
+## Test Cases
+
+We update the existing baseapp and antehandlers tests to use the new middleware API, but keep the same test cases and logic, to avoid introducing regressions. Existing CLI tests will also be left untouched.
+
+For new middlewares, we introduce unit tests. Since middlewares are purposefully small, unit tests suit well.
+
+## References
+
+* Initial discussion: https://github.com/cosmos/cosmos-sdk/issues/9585
+* Implementation: [#9920 BaseApp refactor](https://github.com/cosmos/cosmos-sdk/pull/9920) and [#10028 Antehandlers migration](https://github.com/cosmos/cosmos-sdk/pull/10028)
diff --git a/copy-of-sdk-docs/build/architecture/adr-046-module-params.md b/copy-of-sdk-docs/build/architecture/adr-046-module-params.md
new file mode 100644
index 00000000..10bb65cd
--- /dev/null
+++ b/copy-of-sdk-docs/build/architecture/adr-046-module-params.md
@@ -0,0 +1,184 @@
+# ADR 046: Module Params
+
+## Changelog
+
+* Sep 22, 2021: Initial Draft
+
+## Status
+
+Proposed
+
+## Abstract
+
+This ADR describes an alternative approach to how Cosmos SDK modules use,
+interact with, and store their respective parameters.
+
+## Context
+
+Currently, in the Cosmos SDK, modules that require the use of parameters use the
+`x/params` module. The `x/params` module works by having modules define parameters,
+typically via a simple `Params` structure, and registering that structure in
+the `x/params` module via a unique `Subspace` that belongs to the respective
+registering module. The registering module then has unique access to its respective
+`Subspace`. Through this `Subspace`, the module can get and set its `Params`
+structure.
+
+In addition, the Cosmos SDK's `x/gov` module has direct support for changing
+parameters on-chain via a `ParamChangeProposal` governance proposal type, where
+stakeholders can vote on suggested parameter changes.
+
+There are various tradeoffs to using the `x/params` module to manage individual
+module parameters. Namely, managing parameters essentially comes for "free" in
+that developers only need to define the `Params` struct, the `Subspace`, and the
+various auxiliary functions, e.g. `ParamSetPairs`, on the `Params` type. However,
+there are some notable drawbacks. These drawbacks include the fact that parameters
+are serialized in state via JSON, which is extremely slow. In addition, parameter
+changes via `ParamChangeProposal` governance proposals have no way of reading from
+or writing to state. In other words, it is currently not possible to have any
+state transitions in the application during an attempt to change param(s).
+
+## Decision
+
+We will build off of the alignment of `x/gov` and `x/authz` work per
+[#9810](https://github.com/cosmos/cosmos-sdk/pull/9810). Namely, module developers
+will create one or more unique parameter data structures that must be serialized
+to state. The Param data structures must implement `sdk.Msg` interface with respective
+Protobuf Msg service method which will validate and update the parameters with all
+necessary changes. The `x/gov` module via the work done in
+[#9810](https://github.com/cosmos/cosmos-sdk/pull/9810), will dispatch Param
+messages, which will be handled by Protobuf Msg services.
+
+Note, it is up to developers to decide how to structure their parameters and
+the respective `sdk.Msg` messages. Consider the parameters currently defined in
+`x/auth` using the `x/params` module for parameter management:
+
+```protobuf
+message Params {
+ uint64 max_memo_characters = 1;
+ uint64 tx_sig_limit = 2;
+ uint64 tx_size_cost_per_byte = 3;
+ uint64 sig_verify_cost_ed25519 = 4;
+ uint64 sig_verify_cost_secp256k1 = 5;
+}
+```
+
+Developers can choose to either create a unique data structure for every field in
+`Params` or they can create a single `Params` structure as outlined above in the
+case of `x/auth`.
+
+In the former approach, a `sdk.Msg` would need to be created for every single
+field along with a handler. This can become burdensome if there are a lot of
+parameter fields. In the latter case, there is only a single data structure and
+thus only a single message handler; however, that handler might need to be
+more sophisticated, as it must understand which parameters are being
+changed and which are untouched.
+
+Params change proposals are made using the `x/gov` module. Execution is done through
+`x/authz` authorization to the root `x/gov` module's account.
+
+Continuing to use `x/auth`, we demonstrate a more complete example:
+
+```go
+type Params struct {
+ MaxMemoCharacters uint64
+ TxSigLimit uint64
+ TxSizeCostPerByte uint64
+ SigVerifyCostED25519 uint64
+ SigVerifyCostSecp256k1 uint64
+}
+
+type MsgUpdateParams struct {
+ MaxMemoCharacters uint64
+ TxSigLimit uint64
+ TxSizeCostPerByte uint64
+ SigVerifyCostED25519 uint64
+ SigVerifyCostSecp256k1 uint64
+}
+
+type MsgUpdateParamsResponse struct {}
+
+func (ms msgServer) UpdateParams(goCtx context.Context, msg *types.MsgUpdateParams) (*types.MsgUpdateParamsResponse, error) {
+ ctx := sdk.UnwrapSDKContext(goCtx)
+
+ // verification logic...
+
+ // persist params
+ params := ParamsFromMsg(msg)
+ ms.SaveParams(ctx, params)
+
+ return &types.MsgUpdateParamsResponse{}, nil
+}
+
+func ParamsFromMsg(msg *types.MsgUpdateParams) Params {
+	return Params{
+		MaxMemoCharacters:      msg.MaxMemoCharacters,
+		TxSigLimit:             msg.TxSigLimit,
+		TxSizeCostPerByte:      msg.TxSizeCostPerByte,
+		SigVerifyCostED25519:   msg.SigVerifyCostED25519,
+		SigVerifyCostSecp256k1: msg.SigVerifyCostSecp256k1,
+	}
+}
+```
+
+A gRPC `Service` query should also be provided, for example:
+
+```protobuf
+service Query {
+  // ...
+
+  rpc Params(QueryParamsRequest) returns (QueryParamsResponse) {
+    option (google.api.http).get = "/cosmos/auth/v1beta1/params";
+  }
+}
+
+message QueryParamsResponse {
+ Params params = 1 [(gogoproto.nullable) = false];
+}
+```
+
+## Consequences
+
+As a result of implementing the module parameter methodology, we gain the ability
+for module parameter changes to be stateful and extensible to fit nearly every
+application's use case. We will be able to emit events (and trigger hooks registered
+to those events using the work proposed in [event hooks](https://github.com/cosmos/cosmos-sdk/discussions/9656)),
+call other Msg service methods, or perform migrations.
+In addition, there will be significant gains in performance when it comes to reading
+and writing parameters from and to state, especially if a specific set of parameters
+is read on a consistent basis.
+
+However, this methodology will require developers to implement more types and
+Msg service methods, which can become burdensome if many parameters exist. In addition,
+developers are required to implement the persistence logic of module parameters.
+However, this should be trivial.
+
+### Backwards Compatibility
+
+The new method for working with module parameters is naturally not backwards
+compatible with the existing `x/params` module. However, the `x/params` module will
+remain in the Cosmos SDK and will be marked as deprecated with no additional
+functionality being added apart from potential bug fixes. Note, the `x/params`
+module may be removed entirely in a future release.
+
+### Positive
+
+* Module parameters are serialized more efficiently
+* Modules are able to react to parameter changes and perform additional actions.
+* Special events can be emitted, allowing hooks to be triggered.
+
+### Negative
+
+* Module parameters become slightly more burdensome for module developers:
+ * Modules are now responsible for persisting and retrieving parameter state
+ * Modules are now required to have unique message handlers to handle parameter
+ changes per unique parameter data structure.
+
+### Neutral
+
+* Requires [#9810](https://github.com/cosmos/cosmos-sdk/pull/9810) to be reviewed
+ and merged.
+
+
+
+## References
+
+* https://github.com/cosmos/cosmos-sdk/pull/9810
+* https://github.com/cosmos/cosmos-sdk/issues/9438
+* https://github.com/cosmos/cosmos-sdk/discussions/9913
diff --git a/copy-of-sdk-docs/build/architecture/adr-047-extend-upgrade-plan.md b/copy-of-sdk-docs/build/architecture/adr-047-extend-upgrade-plan.md
new file mode 100644
index 00000000..610feccc
--- /dev/null
+++ b/copy-of-sdk-docs/build/architecture/adr-047-extend-upgrade-plan.md
@@ -0,0 +1,254 @@
+# ADR 047: Extend Upgrade Plan
+
+## Changelog
+
+* Nov 23, 2021: Initial Draft
+* May 16, 2023: Proposal ABANDONED. `pre_run` and `post_run` are no longer necessary, and adding the `artifacts` brings only minor benefits.
+
+## Status
+
+ABANDONED
+
+## Abstract
+
+This ADR expands the existing x/upgrade `Plan` proto message to include new fields for defining pre-run and post-run processes within upgrade tooling.
+It also defines a structure for providing downloadable artifacts involved in an upgrade.
+
+## Context
+
+The `upgrade` module, in conjunction with Cosmovisor, is designed to facilitate and automate a blockchain's transition from one version to another.
+
+Users submit a software upgrade governance proposal containing an upgrade `Plan`.
+The [Plan](https://github.com/cosmos/cosmos-sdk/blob/v0.44.5/proto/cosmos/upgrade/v1beta1/upgrade.proto#L12) currently contains the following fields:
+
+* `name`: A short string identifying the new version.
+* `height`: The chain height at which the upgrade is to be performed.
+* `info`: A string containing information about the upgrade.
+
+The `info` string can be anything.
+However, Cosmovisor will try to use the `info` field to automatically download a new version of the blockchain executable.
+For the auto-download to work, Cosmovisor expects it to be either a stringified JSON object (with a specific structure defined through documentation), or a URL that will return such JSON.
+The JSON object identifies URLs used to download the new blockchain executable for different platforms (OS and Architecture, e.g. "linux/amd64").
+Such a URL can either return the executable file directly or can return an archive containing the executable and possibly other assets.
+
+If the URL returns an archive, it is decompressed into `{DAEMON_HOME}/cosmovisor/{upgrade name}`.
+Then, if `{DAEMON_HOME}/cosmovisor/{upgrade name}/bin/{DAEMON_NAME}` does not exist, but `{DAEMON_HOME}/cosmovisor/{upgrade name}/{DAEMON_NAME}` does, the latter is copied to the former.
+If the URL returns something other than an archive, it is downloaded to `{DAEMON_HOME}/cosmovisor/{upgrade name}/bin/{DAEMON_NAME}`.
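
The directory layout described above can be made concrete with a small helper. This is an illustrative sketch only; the helper names are not Cosmovisor API, and only the path layout comes from the text above:

```go
package main

import (
	"fmt"
	"path/filepath"
)

// upgradeDir is the directory an archive returned by the URL is decompressed into.
func upgradeDir(daemonHome, upgradeName string) string {
	return filepath.Join(daemonHome, "cosmovisor", upgradeName)
}

// upgradeBinPath is where Cosmovisor ultimately expects the new executable.
func upgradeBinPath(daemonHome, upgradeName, daemonName string) string {
	return filepath.Join(upgradeDir(daemonHome, upgradeName), "bin", daemonName)
}

func main() {
	fmt.Println(upgradeDir("/home/val/.simapp", "v2"))
	fmt.Println(upgradeBinPath("/home/val/.simapp", "v2", "simd"))
}
```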
+
+If an upgrade height is reached and the new version of the executable isn't available, Cosmovisor will stop running.
+
+Both `DAEMON_HOME` and `DAEMON_NAME` are [environment variables used to configure Cosmovisor](https://github.com/cosmos/cosmos-sdk/blob/cosmovisor/v1.0.0/cosmovisor/README.md#command-line-arguments-and-environment-variables).
+
+Currently, there is no mechanism that makes Cosmovisor run a command after the upgraded chain has been restarted.
+
+The current upgrade process has this timeline:
+
+1. An upgrade governance proposal is submitted and approved.
+1. The upgrade height is reached.
+1. The `x/upgrade` module writes the `upgrade_info.json` file.
+1. The chain halts.
+1. Cosmovisor backs up the data directory (if set up to do so).
+1. Cosmovisor downloads the new executable (if not already in place).
+1. Cosmovisor executes the `${DAEMON_NAME} pre-upgrade`.
+1. Cosmovisor restarts the app using the new version and same args originally provided.
+
+## Decision
+
+### Protobuf Updates
+
+We will update the `x/upgrade.Plan` message to provide upgrade instructions.
+The upgrade instructions will contain a list of artifacts available for each platform,
+and allow for the definition of pre-run and post-run commands.
+These commands are not consensus guaranteed; they will be executed by Cosmovisor (or another tool) during its upgrade handling.
+
+```protobuf
+message Plan {
+ // ... (existing fields)
+
+ UpgradeInstructions instructions = 6;
+}
+```
+
+The new `UpgradeInstructions instructions` field MUST be optional.
+
+```protobuf
+message UpgradeInstructions {
+ string pre_run = 1;
+ string post_run = 2;
+ repeated Artifact artifacts = 3;
+ string description = 4;
+}
+```
+
+All fields in the `UpgradeInstructions` are optional.
+
+* `pre_run` is a command to run prior to the upgraded chain restarting.
+ If defined, it will be executed after halting and downloading the new artifact but before restarting the upgraded chain.
+ The working directory this command runs from MUST be `{DAEMON_HOME}/cosmovisor/{upgrade name}`.
+ This command MUST behave the same as the current [pre-upgrade](https://github.com/cosmos/cosmos-sdk/blob/v0.44.5/docs/migrations/pre-upgrade.md) command.
+ It does not take in any command-line arguments and is expected to terminate with the following exit codes:
+
+ | Exit status code | How it is handled in Cosmovisor |
+ |------------------|---------------------------------------------------------------------------------------------------------------------|
+ | `0` | Assumes `pre-upgrade` command executed successfully and continues the upgrade. |
+ | `1` | Default exit code when `pre-upgrade` command has not been implemented. |
+ | `30` | `pre-upgrade` command was executed but failed. This fails the entire upgrade. |
+ | `31` | `pre-upgrade` command was executed but failed. But the command is retried until exit code `1` or `30` are returned. |
+
+  If defined, then the app supervisors (e.g. Cosmovisor) MUST NOT run the app's `pre-upgrade` command.
+
+* `post_run` is a command to run after the upgraded chain has been started. If defined, this command MUST be only executed at most once by an upgrading node.
+ The output and exit code SHOULD be logged but SHOULD NOT affect the running of the upgraded chain.
+ The working directory this command runs from MUST be `{DAEMON_HOME}/cosmovisor/{upgrade name}`.
+* `artifacts` define items to be downloaded.
+ It SHOULD have only one entry per platform.
+* `description` contains human-readable information about the upgrade and might contain references to external resources.
+ It SHOULD NOT be used for structured processing information.
+
+```protobuf
+message Artifact {
+ string platform = 1;
+ string url = 2;
+ string checksum = 3;
+ string checksum_algo = 4;
+}
+```
+
+* `platform` is a required string that SHOULD be in the format `{OS}/{CPU}`, e.g. `"linux/amd64"`.
+ The string `"any"` SHOULD also be allowed.
+ An `Artifact` with a `platform` of `"any"` SHOULD be used as a fallback when a specific `{OS}/{CPU}` entry is not found.
+ That is, if an `Artifact` exists with a `platform` that matches the system's OS and CPU, that should be used;
+ otherwise, if an `Artifact` exists with a `platform` of `any`, that should be used;
+ otherwise no artifact should be downloaded.
+* `url` is a required URL string that MUST conform to [RFC 1738: Uniform Resource Locators](https://www.ietf.org/rfc/rfc1738.txt).
+ A request to this `url` MUST return either an executable file or an archive containing either `bin/{DAEMON_NAME}` or `{DAEMON_NAME}`.
+  The URL should not contain a checksum; it should be specified by the `checksum` attribute instead.
+* `checksum` is a checksum of the expected result of a request to the `url`.
+ It is not required, but is recommended.
+ If provided, it MUST be a hex encoded checksum string.
+ Tools utilizing these `UpgradeInstructions` MUST fail if a `checksum` is provided but is different from the checksum of the result returned by the `url`.
+* `checksum_algo` is a string identifying the algorithm used to generate the `checksum`.
+ Recommended algorithms: `sha256`, `sha512`.
+ Algorithms also supported (but not recommended): `sha1`, `md5`.
+ If a `checksum` is provided, a `checksum_algo` MUST also be provided.
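
The `platform` fallback rule above can be sketched as follows; the `Artifact` struct and `selectArtifact` helper are illustrative, not SDK API:

```go
package main

import "fmt"

// Artifact mirrors the proposed proto message (trimmed to the fields needed here).
type Artifact struct {
	Platform string
	URL      string
}

// selectArtifact returns the artifact matching the host platform, falling back
// to an entry with platform "any", or nil when neither is present.
func selectArtifact(artifacts []Artifact, platform string) *Artifact {
	var fallback *Artifact
	for i := range artifacts {
		switch artifacts[i].Platform {
		case platform:
			return &artifacts[i]
		case "any":
			fallback = &artifacts[i]
		}
	}
	return fallback
}

func main() {
	arts := []Artifact{
		{Platform: "any", URL: "https://example.com/app.zip"},
		{Platform: "linux/amd64", URL: "https://example.com/app-linux-amd64.zip"},
	}
	fmt.Println(selectArtifact(arts, "linux/amd64").URL)  // exact match wins
	fmt.Println(selectArtifact(arts, "darwin/arm64").URL) // falls back to "any"
	fmt.Println(selectArtifact(nil, "linux/amd64"))       // nothing to download
}
```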
+
+A `url` is not required to contain a `checksum` query parameter.
+If the `url` does contain a `checksum` query parameter, the `checksum` and `checksum_algo` fields MUST also be populated, and their values MUST match the value of the query parameter.
+For example, if the `url` is `"https://example.com?checksum=md5:d41d8cd98f00b204e9800998ecf8427e"`, then the `checksum` field must be `"d41d8cd98f00b204e9800998ecf8427e"` and the `checksum_algo` field must be `"md5"`.
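
This cross-check between the query parameter and the `checksum`/`checksum_algo` fields can be sketched as follows (the helper name is illustrative):

```go
package main

import (
	"fmt"
	"net/url"
	"strings"
)

// checkChecksumParam verifies that a url's checksum query parameter, if
// present, matches the Artifact's checksum and checksum_algo fields.
func checkChecksumParam(rawURL, checksum, checksumAlgo string) error {
	u, err := url.Parse(rawURL)
	if err != nil {
		return err
	}
	param := u.Query().Get("checksum")
	if param == "" {
		return nil // no query parameter: nothing to cross-check
	}
	algo, sum, ok := strings.Cut(param, ":")
	if !ok {
		return fmt.Errorf("checksum query parameter must look like {algo}:{hash}")
	}
	if algo != checksumAlgo || sum != checksum {
		return fmt.Errorf("checksum fields do not match the url query parameter")
	}
	return nil
}

func main() {
	fmt.Println(checkChecksumParam(
		"https://example.com?checksum=md5:d41d8cd98f00b204e9800998ecf8427e",
		"d41d8cd98f00b204e9800998ecf8427e", "md5")) // <nil>
}
```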
+
+### Upgrade Module Updates
+
+If an upgrade `Plan` does not use the new `UpgradeInstructions` field, existing functionality will be maintained.
+The parsing of the `info` field as either a URL or `binaries` JSON will be deprecated.
+During validation, if the `info` field is used as such, a warning will be issued, but not an error.
+
+We will update the creation of the `upgrade-info.json` file to include the `UpgradeInstructions`.
+
+We will update the optional validation available via CLI to account for the new `Plan` structure.
+We will add the following validation:
+
+1. If `UpgradeInstructions` are provided:
+ 1. There MUST be at least one entry in `artifacts`.
+ 1. All of the `artifacts` MUST have a unique `platform`.
+ 1. For each `Artifact`, if the `url` contains a `checksum` query parameter:
+ 1. The `checksum` query parameter value MUST be in the format of `{checksum_algo}:{checksum}`.
+ 1. The `{checksum}` from the query parameter MUST equal the `checksum` provided in the `Artifact`.
+ 1. The `{checksum_algo}` from the query parameter MUST equal the `checksum_algo` provided in the `Artifact`.
+1. The following validation is currently done using the `info` field. We will apply similar validation to the `UpgradeInstructions`.
+ For each `Artifact`:
+ 1. The `platform` MUST have the format `{OS}/{CPU}` or be `"any"`.
+ 1. The `url` field MUST NOT be empty.
+ 1. The `url` field MUST be a proper URL.
+ 1. A `checksum` MUST be provided either in the `checksum` field or as a query parameter in the `url`.
+ 1. If the `checksum` field has a value and the `url` also has a `checksum` query parameter, the two values MUST be equal.
+ 1. The `url` MUST return either a file or an archive containing either `bin/{DAEMON_NAME}` or `{DAEMON_NAME}`.
+ 1. If a `checksum` is provided (in the field or as a query param), the checksum of the result of the `url` MUST equal the provided checksum.
+
+Downloading of an `Artifact` will happen the same way that URLs from `info` are currently downloaded.
+
+### Cosmovisor Updates
+
+If the `upgrade-info.json` file does not contain any `UpgradeInstructions`, existing functionality will be maintained.
+
+We will update Cosmovisor to look for and handle the new `UpgradeInstructions` in `upgrade-info.json`.
+If the `UpgradeInstructions` are provided, we will do the following:
+
+1. The `info` field will be ignored.
+1. The `artifacts` field will be used to identify the artifact to download based on the `platform` that Cosmovisor is running in.
+1. If a `checksum` is provided (either in the field or as a query param in the `url`), and the downloaded artifact has a different checksum, the upgrade process will be interrupted and Cosmovisor will exit with an error.
+1. If a `pre_run` command is defined, it will be executed at the same point in the process where the `app pre-upgrade` command would have been executed.
+ It will be executed using the same environment as other commands run by Cosmovisor.
+1. If a `post_run` command is defined, it will be executed after executing the command that restarts the chain.
+ It will be executed in a background process using the same environment as the other commands.
+ Any output generated by the command will be logged.
+ Once complete, the exit code will be logged.
+
+We will deprecate the use of the `info` field for anything other than human-readable information.
+A warning will be logged if the `info` field is used to define the assets (either by URL or JSON).
+
+The new upgrade timeline is very similar to the current one. Changes are in bold:
+
+1. An upgrade governance proposal is submitted and approved.
+1. The upgrade height is reached.
+1. The `x/upgrade` module writes the `upgrade_info.json` file **(now possibly with `UpgradeInstructions`)**.
+1. The chain halts.
+1. Cosmovisor backs up the data directory (if set up to do so).
+1. Cosmovisor downloads the new executable (if not already in place).
+1. Cosmovisor executes **the `pre_run` command if provided**, or else the `${DAEMON_NAME} pre-upgrade` command.
+1. Cosmovisor restarts the app using the new version and same args originally provided.
+1. **Cosmovisor immediately runs the `post_run` command in a detached process.**
+
+## Consequences
+
+### Backwards Compatibility
+
+Since the only change to existing definitions is the addition of the `instructions` field to the `Plan` message, and that field is optional, there are no backwards incompatibilities with respect to the proto messages.
+Additionally, current behavior will be maintained when no `UpgradeInstructions` are provided, so there are no backwards incompatibilities with respect to either the upgrade module or Cosmovisor.
+
+### Forwards Compatibility
+
+In order to utilize the `UpgradeInstructions` as part of a software upgrade, both of the following must be true:
+
+1. The chain must already be using a sufficiently advanced version of the Cosmos SDK.
+1. The chain's nodes must be using a sufficiently advanced version of Cosmovisor.
+
+### Positive
+
+1. The structure for defining artifacts is clearer since it is now defined in the proto instead of in documentation.
+1. Availability of a pre-run command becomes more obvious.
+1. A post-run command becomes possible.
+
+### Negative
+
+1. The `Plan` message becomes larger. This is negligible because A) the `x/upgrade` module only stores at most one upgrade plan, and B) upgrades are rare enough that the increased gas cost isn't a concern.
+1. There is no option for providing a URL that will return the `UpgradeInstructions`.
+1. The only way to provide multiple assets (executables and other files) for a platform is to use an archive as the platform's artifact.
+
+### Neutral
+
+1. Existing functionality of the `info` field is maintained when the `UpgradeInstructions` aren't provided.
+
+## Further Discussions
+
+1. [Draft PR #10032 Comment](https://github.com/cosmos/cosmos-sdk/pull/10032/files?authenticity_token=pLtzpnXJJB%2Fif2UWiTp9Td3MvRrBF04DvjSuEjf1azoWdLF%2BSNymVYw9Ic7VkqHgNLhNj6iq9bHQYnVLzMXd4g%3D%3D&file-filters%5B%5D=.go&file-filters%5B%5D=.proto#r698708349):
+ Consider different names for `UpgradeInstructions instructions` (either the message type or field name).
+1. [Draft PR #10032 Comment](https://github.com/cosmos/cosmos-sdk/pull/10032/files?authenticity_token=pLtzpnXJJB%2Fif2UWiTp9Td3MvRrBF04DvjSuEjf1azoWdLF%2BSNymVYw9Ic7VkqHgNLhNj6iq9bHQYnVLzMXd4g%3D%3D&file-filters%5B%5D=.go&file-filters%5B%5D=.proto#r754655072):
+ 1. Consider putting the `string platform` field inside `UpgradeInstructions` and make `UpgradeInstructions` a repeated field in `Plan`.
+ 1. Consider using a `oneof` field in the `Plan` which could either be `UpgradeInstructions` or else a URL that should return the `UpgradeInstructions`.
+ 1. Consider allowing `info` to either be a JSON serialized version of `UpgradeInstructions` or else a URL that returns that.
+1. [Draft PR #10032 Comment](https://github.com/cosmos/cosmos-sdk/pull/10032/files?authenticity_token=pLtzpnXJJB%2Fif2UWiTp9Td3MvRrBF04DvjSuEjf1azoWdLF%2BSNymVYw9Ic7VkqHgNLhNj6iq9bHQYnVLzMXd4g%3D%3D&file-filters%5B%5D=.go&file-filters%5B%5D=.proto#r755462876):
+ Consider not including the `UpgradeInstructions.description` field, using the `info` field for that purpose instead.
+1. [Draft PR #10032 Comment](https://github.com/cosmos/cosmos-sdk/pull/10032/files?authenticity_token=pLtzpnXJJB%2Fif2UWiTp9Td3MvRrBF04DvjSuEjf1azoWdLF%2BSNymVYw9Ic7VkqHgNLhNj6iq9bHQYnVLzMXd4g%3D%3D&file-filters%5B%5D=.go&file-filters%5B%5D=.proto#r754643691):
+ Consider allowing multiple artifacts to be downloaded for any given `platform` by adding a `name` field to the `Artifact` message.
+1. [PR #10602 Comment](https://github.com/cosmos/cosmos-sdk/pull/10602#discussion_r781438288)
+ Allow the new `UpgradeInstructions` to be provided via URL.
+1. [PR #10602 Comment](https://github.com/cosmos/cosmos-sdk/pull/10602#discussion_r781438288)
+ Allow definition of a `signer` for assets (as an alternative to using a `checksum`).
+
+## References
+
+* [Current upgrade.proto](https://github.com/cosmos/cosmos-sdk/blob/v0.44.5/proto/cosmos/upgrade/v1beta1/upgrade.proto)
+* [Upgrade Module README](https://github.com/cosmos/cosmos-sdk/blob/v0.44.5/x/upgrade/spec/README.md)
+* [Cosmovisor README](https://github.com/cosmos/cosmos-sdk/blob/cosmovisor/v1.0.0/cosmovisor/README.md)
+* [Pre-upgrade README](https://github.com/cosmos/cosmos-sdk/blob/v0.44.5/docs/migrations/pre-upgrade.md)
+* [Draft/POC PR #10032](https://github.com/cosmos/cosmos-sdk/pull/10032)
+* [RFC 1738: Uniform Resource Locators](https://www.ietf.org/rfc/rfc1738.txt)
diff --git a/copy-of-sdk-docs/build/architecture/adr-048-consensus-fees.md b/copy-of-sdk-docs/build/architecture/adr-048-consensus-fees.md
new file mode 100644
index 00000000..6fbaeef6
--- /dev/null
+++ b/copy-of-sdk-docs/build/architecture/adr-048-consensus-fees.md
@@ -0,0 +1,204 @@
+# ADR 048: Multi Tier Gas Price System
+
+## Changelog
+
+* Dec 1, 2021: Initial Draft
+
+## Status
+
+Rejected
+
+## Abstract
+
+This ADR describes a flexible mechanism to maintain consensus-level gas prices, in which one can choose a multi-tier gas price system or an EIP-1559-like one through configuration.
+
+## Context
+
+Currently, each validator configures its own `minimal-gas-prices` in `app.toml`. But setting a proper minimal gas price is critical to protect the network from DoS attacks, and it's hard for all the validators to pick a sensible value, so we propose to maintain a gas price at the consensus level.
+
+Since Tendermint 0.34.20 supports mempool prioritization, we can take advantage of that to implement a more sophisticated gas fee system.
+
+## Multi-Tier Price System
+
+We propose a multi-tier price system on consensus to provide maximum flexibility:
+
+* Tier 1: a constant gas price, which can only be modified occasionally through a governance proposal.
+* Tier 2: a dynamic gas price which is adjusted according to the previous block load.
+* Tier 3: a dynamic gas price which is adjusted according to the previous block load, at a higher speed.
+
+The gas price of a higher tier should be greater than that of a lower tier.
+
+Transaction fees are charged at the exact gas price calculated at the consensus level.
+
+The parameter schema is like this:
+
+```protobuf
+message TierParams {
+  uint32 priority = 1;           // priority in the tendermint mempool
+  Coin initial_gas_price = 2;
+  uint32 parent_gas_target = 3;  // the target saturation of the block
+  uint32 change_denominator = 4; // decides the change speed
+  Coin min_gas_price = 5;        // optional lower bound of the price adjustment
+  Coin max_gas_price = 6;        // optional upper bound of the price adjustment
+}
+
+message Params {
+ repeated TierParams tiers = 1;
+}
+```
+
+### Extension Options
+
+We need to allow users to specify the tier of service for a transaction. To support this in an extensible way, we add an extension option in `AuthInfo`:
+
+```protobuf
+message ExtensionOptionsTieredTx {
+  uint32 fee_tier = 1;
+}
+```
+
+The value of `fee_tier` is simply an index into the `tiers` parameter list.
+
+We also change the semantics of the existing `fee` field of `Tx`: instead of charging the user the exact `fee` amount, we treat it as a fee cap, while the actual amount charged is decided dynamically. If the `fee` is smaller than the dynamic one, the transaction won't be included in the current block and ideally should stay in the mempool until the consensus gas price drops. The mempool can eventually prune old transactions.
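
Under these fee-cap semantics, the acceptance check and the charged amount could be sketched as follows (function and parameter names are illustrative, not SDK code):

```go
package main

import "fmt"

// feeAccepted sketches the fee-cap rule: the tx's fee is a cap, and the
// consensus gas price decides the amount actually charged.
func feeAccepted(feeCap, gasLimit, consensusGasPrice uint64) (charged uint64, ok bool) {
	required := gasLimit * consensusGasPrice
	if feeCap < required {
		return 0, false // stays in the mempool until the consensus price drops
	}
	return required, true // charged the dynamic amount, not the cap
}

func main() {
	charged, ok := feeAccepted(500, 100, 4)
	fmt.Println(charged, ok) // 400 true
}
```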
+
+### Tx Prioritization
+
+Transactions are prioritized based on the tier: the higher the tier, the higher the priority.
+
+Within the same tier, the default Tendermint order (currently FIFO) is followed. Be aware that the mempool tx ordering logic is not part of consensus and can be modified by a malicious validator.
+
+This mechanism can easily be composed with other prioritization mechanisms:
+
+* we can add extra tiers outside of the user's control:
+  * Example 1: users can set tier 0, 10, or 20, but the protocol will create tiers 0, 1, 2 ... 29. For example, IBC transactions will go to tier `user_tier + 5`: if the user selected tier 10, then the transaction will go to tier 15.
+  * Example 2: we can reserve tiers 4, 5, ... only for special transaction types. For example, tier 5 is reserved for evidence transactions. So if a user submits a `bank.Send` transaction and sets tier 5, it will be demoted to tier 3 (the max tier level available for any transaction).
+  * Example 3: we can enforce that all transactions of a specific type go to a specific tier. For example, tier 100 will be reserved for evidence transactions and all evidence transactions will always go to that tier.
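
Such tier-composition rules might be sketched as follows. All tier numbers come from the examples above, and the demotion rule is one assumed way Example 2 could be enforced:

```go
package main

import "fmt"

// effectiveTier sketches the composition rules: IBC transactions are bumped
// to user_tier + 5 (Example 1), and the reserved evidence tier 5 is demoted
// to tier 3 for ordinary transactions (Example 2).
func effectiveTier(userTier int, isIBC bool) int {
	if isIBC {
		return userTier + 5
	}
	if userTier == 5 { // reserved for evidence transactions
		return 3 // max tier available to ordinary transactions
	}
	return userTier
}

func main() {
	fmt.Println(effectiveTier(10, true)) // 15
	fmt.Println(effectiveTier(5, false)) // 3
}
```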
+
+### `min-gas-prices`
+
+Deprecate the current per-validator `min-gas-prices` configuration, since it would be confusing for it to work together with the consensus gas price.
+
+### Adjust For Block Load
+
+For tier 2 and tier 3 transactions, the gas price is adjusted according to the previous block load. The logic could be similar to EIP-1559:
+
+```python
+def adjust_gas_price(gas_price, parent_gas_used, tier):
+    if parent_gas_used == tier.parent_gas_target:
+        return gas_price
+    elif parent_gas_used > tier.parent_gas_target:
+        gas_used_delta = parent_gas_used - tier.parent_gas_target
+        gas_price_delta = max(gas_price * gas_used_delta // tier.parent_gas_target // tier.change_speed, 1)
+        return gas_price + gas_price_delta
+    else:
+        gas_used_delta = tier.parent_gas_target - parent_gas_used
+        gas_price_delta = gas_price * gas_used_delta // tier.parent_gas_target // tier.change_speed
+        return gas_price - gas_price_delta
+```
+
+### Block Segment Reservation
+
+Ideally we should reserve block segments for each tier, so that lower-tier transactions won't be completely squeezed out by higher-tier transactions, which would force users to use a higher tier and degrade the system to a single tier.
+
+We need help from tendermint to implement this.
+
+## Implementation
+
+We can make each tier's gas price strategy fully configurable in protocol parameters, while providing a sensible default one.
+
+Pseudocode in python-like syntax:
+
+```python
+interface TieredTx:
+ def tier(self) -> int:
+ pass
+
+def tx_tier(tx):
+ if isinstance(tx, TieredTx):
+ return tx.tier()
+ else:
+ # default tier for custom transactions
+ return 0
+ # NOTE: we can add more rules here per "Tx Prioritization" section
+
+class TierParams:
+ 'gas price strategy parameters of one tier'
+ priority: int # priority in tendermint mempool
+ initial_gas_price: Coin
+ parent_gas_target: int
+ change_speed: Decimal # 0 means don't adjust for block load.
+
+class Params:
+ 'protocol parameters'
+ tiers: List[TierParams]
+
+class State:
+ 'consensus state'
+ # total gas used in last block, None when it's the first block
+ parent_gas_used: Optional[int]
+ # gas prices of last block for all tiers
+ gas_prices: List[Coin]
+
+def begin_block():
+ 'Adjust gas prices'
+ for i, tier in enumerate(Params.tiers):
+ if State.parent_gas_used is None:
+ # initialized gas price for the first block
+ State.gas_prices[i] = tier.initial_gas_price
+ else:
+ # adjust gas price according to gas used in previous block
+ State.gas_prices[i] = adjust_gas_price(State.gas_prices[i], State.parent_gas_used, tier)
+
+def mempoolFeeTxHandler_checkTx(ctx, tx):
+ # the minimal-gas-price configured by validator, zero in deliver_tx context
+ validator_price = ctx.MinGasPrice()
+ consensus_price = State.gas_prices[tx_tier(tx)]
+ min_price = max(validator_price, consensus_price)
+
+ # zero means infinity for gas price cap
+ if tx.gas_price() > 0 and tx.gas_price() < min_price:
+ return 'insufficient fees'
+ return next_CheckTx(ctx, tx)
+
+def txPriorityHandler_checkTx(ctx, tx):
+    res, err = next_CheckTx(ctx, tx)
+    # pass priority to tendermint
+    res.Priority = Params.tiers[tx_tier(tx)].priority
+    return res, err
+
+def end_block():
+ 'Update block gas used'
+ State.parent_gas_used = block_gas_meter.consumed()
+```
+
+### DoS attack protection
+
+To fully saturate the blocks and prevent other transactions from executing, an attacker would need to use transactions of the highest tier, at a cost significantly higher than the default tier.
+
+If an attacker spams with lower-tier transactions, users can mitigate this by sending higher-tier transactions.
+
+## Consequences
+
+### Backwards Compatibility
+
+* New protocol parameters.
+* New consensus states.
+* New/changed fields in transaction body.
+
+### Positive
+
+* The default tier keeps the same predictable gas price experience for clients.
+* The higher tiers' gas prices can adapt to block load.
+* No priority conflicts with custom priority based on transaction types, since this proposal only occupies three priority levels.
+* Possibility to compose different priority rules with tiers.
+
+### Negative
+
+* Wallets and tools need to be updated to support the new `tier` parameter, and the semantics of the `fee` field have changed.
+
+### Neutral
+
+## References
+
+* https://eips.ethereum.org/EIPS/eip-1559
+* https://iohk.io/en/blog/posts/2021/11/26/network-traffic-and-tiered-pricing/
diff --git a/copy-of-sdk-docs/build/architecture/adr-049-state-sync-hooks.md b/copy-of-sdk-docs/build/architecture/adr-049-state-sync-hooks.md
new file mode 100644
index 00000000..8b039d66
--- /dev/null
+++ b/copy-of-sdk-docs/build/architecture/adr-049-state-sync-hooks.md
@@ -0,0 +1,174 @@
+# ADR 049: State Sync Hooks
+
+## Changelog
+
+* Jan 19, 2022: Initial Draft
+* Apr 29, 2022: Safer extension snapshotter interface
+
+## Status
+
+Implemented
+
+## Abstract
+
+This ADR outlines a hooks-based mechanism for application modules to provide additional state (outside of the IAVL tree) to be used
+during state sync.
+
+## Context
+
+New clients use state sync to download snapshots of module state from peers. Currently, the snapshot consists of a
+stream of `SnapshotStoreItem` and `SnapshotIAVLItem`, which means that application modules that define their state outside of the IAVL
+tree cannot include their state as part of the state sync process.
+
+Note that even though the module state data is outside of the tree, for determinism we require that the hash of the external data
+be posted in the IAVL tree.
+
+## Decision
+
+A simple proposal based on our existing implementation is to add two new message types, `SnapshotExtensionMeta`
+and `SnapshotExtensionPayload`, which are appended to the existing multi-store stream, with `SnapshotExtensionMeta`
+acting as a delimiter between extensions. As the chunk hashes should be able to ensure data integrity, we don't need
+a delimiter to mark the end of the snapshot stream.
+
+In addition, we provide the `Snapshotter` and `ExtensionSnapshotter` interfaces for modules to implement snapshotters, which will handle both taking
+snapshots and restoration. Each module could have multiple snapshotters, and modules with additional state should
+implement `ExtensionSnapshotter` as extension snapshotters. When setting up the application, the snapshot `Manager` should call
+`RegisterExtensions([]ExtensionSnapshotter…)` to register all the extension snapshotters.
+
+```protobuf
+// SnapshotItem is an item contained in a rootmulti.Store snapshot.
+// On top of the existing SnapshotStoreItem and SnapshotIAVLItem, we add two new options for the item.
+message SnapshotItem {
+ // item is the specific type of snapshot item.
+ oneof item {
+ SnapshotStoreItem store = 1;
+ SnapshotIAVLItem iavl = 2 [(gogoproto.customname) = "IAVL"];
+ SnapshotExtensionMeta extension = 3;
+ SnapshotExtensionPayload extension_payload = 4;
+ }
+}
+
+// SnapshotExtensionMeta contains metadata about an external snapshotter.
+// One module may need multiple snapshotters, so each module may have multiple SnapshotExtensionMeta.
+message SnapshotExtensionMeta {
+  // name of the ExtensionSnapshotter; it is registered with the snapshot manager when setting up the application.
+  // name should be unique for each ExtensionSnapshotter, as we need to order their snapshots alphabetically to get a
+  // deterministic snapshot stream.
+  string name = 1;
+  // format is used by each ExtensionSnapshotter to decide the encoding of payloads in SnapshotExtensionPayload messages.
+  // it is scoped within the snapshotter/namespace, not global across all modules.
+  uint32 format = 2;
+}
+
+// SnapshotExtensionPayload contains payloads of an external snapshotter.
+message SnapshotExtensionPayload {
+ bytes payload = 1;
+}
+```
+
+When we create a snapshot stream, the `multistore` snapshot is always placed at the beginning of the binary stream, and other extension snapshots are ordered alphabetically by the name of the corresponding `ExtensionSnapshotter`.
+
+The snapshot stream would look as follows:
+
+```go
+// multi-store snapshot
+{SnapshotStoreItem | SnapshotIAVLItem, ...}
+// extension1 snapshot
+SnapshotExtensionMeta
+{SnapshotExtensionPayload, ...}
+// extension2 snapshot
+SnapshotExtensionMeta
+{SnapshotExtensionPayload, ...}
+```
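
The deterministic, alphabetical ordering of extension snapshots could be sketched as follows (the helper name is illustrative):

```go
package main

import (
	"fmt"
	"sort"
)

// sortedExtensionNames returns extension snapshotter names in the
// alphabetical order used when writing the snapshot stream.
func sortedExtensionNames(extensions map[string]struct{}) []string {
	names := make([]string, 0, len(extensions))
	for name := range extensions {
		names = append(names, name)
	}
	sort.Strings(names)
	return names
}

func main() {
	fmt.Println(sortedExtensionNames(map[string]struct{}{"wasm": {}, "evm": {}})) // [evm wasm]
}
```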
+
+We add an `extensions` field to the snapshot `Manager` for extension snapshotters. The `multistore` snapshotter is a special one: it doesn't need a name because it is always placed at the beginning of the binary stream.
+
+```go
+type Manager struct {
+ store *Store
+ multistore types.Snapshotter
+ extensions map[string]types.ExtensionSnapshotter
+ mtx sync.Mutex
+ operation operation
+ chRestore chan<- io.ReadCloser
+ chRestoreDone <-chan restoreDone
+ restoreChunkHashes [][]byte
+ restoreChunkIndex uint32
+}
+```
+
+Extension snapshotters that implement the `ExtensionSnapshotter` interface must be registered with the snapshot `Manager` by
+calling `RegisterExtensions` when setting up the application. The snapshotters handle both snapshot creation and restoration.
+
+```go
+// RegisterExtensions registers extension snapshotters with the manager
+func (m *Manager) RegisterExtensions(extensions ...types.ExtensionSnapshotter) error
+```
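+
+As a minimal sketch of how registration and alphabetical ordering might fit together (using pared-down stand-in types and the illustrative helpers `named` and `sortedExtensionNames`, not the actual SDK definitions):
+
+```go
+package main
+
+import (
+    "fmt"
+    "sort"
+)
+
+// ExtensionSnapshotter is a pared-down stand-in for the SDK interface.
+type ExtensionSnapshotter interface {
+    SnapshotName() string
+}
+
+type Manager struct {
+    extensions map[string]ExtensionSnapshotter
+}
+
+// RegisterExtensions rejects duplicate names, since names identify
+// extension payloads within the snapshot stream.
+func (m *Manager) RegisterExtensions(extensions ...ExtensionSnapshotter) error {
+    for _, ext := range extensions {
+        name := ext.SnapshotName()
+        if _, exists := m.extensions[name]; exists {
+            return fmt.Errorf("duplicate snapshotter name: %s", name)
+        }
+        m.extensions[name] = ext
+    }
+    return nil
+}
+
+// sortedExtensionNames returns the alphabetical order used when writing
+// extension snapshots after the multistore snapshot.
+func (m *Manager) sortedExtensionNames() []string {
+    names := make([]string, 0, len(m.extensions))
+    for name := range m.extensions {
+        names = append(names, name)
+    }
+    sort.Strings(names)
+    return names
+}
+
+// named is a trivial snapshotter used only for this demonstration.
+type named string
+
+func (n named) SnapshotName() string { return string(n) }
+
+func main() {
+    m := &Manager{extensions: map[string]ExtensionSnapshotter{}}
+    if err := m.RegisterExtensions(named("wasm"), named("feegrant")); err != nil {
+        panic(err)
+    }
+    fmt.Println(m.sortedExtensionNames())
+}
+```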
+
+On top of the existing `Snapshotter` interface for the `multistore`, we add an `ExtensionSnapshotter` interface for the extension snapshotters. Three function signatures are added to `ExtensionSnapshotter`: `SnapshotFormat()`, `SupportedFormats()` and `SnapshotName()`.
+
+```go
+// ExtensionPayloadReader reads extension payloads;
+// it returns io.EOF when it reaches either the end of the stream or the extension's boundary.
+type ExtensionPayloadReader = func() ([]byte, error)
+
+// ExtensionPayloadWriter is a helper to write extension payloads to underlying stream.
+type ExtensionPayloadWriter = func([]byte) error
+
+// ExtensionSnapshotter is an extension Snapshotter that is appended to the snapshot stream.
+// ExtensionSnapshotter has a unique name and manages its own internal formats.
+type ExtensionSnapshotter interface {
+ // SnapshotName returns the name of the snapshotter; it must be unique within the manager.
+ SnapshotName() string
+
+ // SnapshotFormat returns the default format used to take a snapshot.
+ SnapshotFormat() uint32
+
+ // SupportedFormats returns a list of formats it can restore from.
+ SupportedFormats() []uint32
+
+ // SnapshotExtension writes extension payloads into the underlying protobuf stream.
+ SnapshotExtension(height uint64, payloadWriter ExtensionPayloadWriter) error
+
+ // RestoreExtension restores an extension state snapshot;
+ // the payload reader returns `io.EOF` when it reaches the extension's boundary.
+ RestoreExtension(height uint64, format uint32, payloadReader ExtensionPayloadReader) error
+}
+```
+
+## Consequences
+
+As a result of this implementation, state maintained outside of the IAVL tree (for example, CosmWasm blobs) can be included in the binary snapshot chunk stream, and new clients can fetch snapshots of state for all modules that implement the corresponding interface from peer nodes.
+
+
+### Backwards Compatibility
+
+This ADR introduces new proto message types, adds an `extensions` field to the snapshot `Manager`, and adds a new `ExtensionSnapshotter` interface, so the snapshot stream is not backwards compatible for applications that use extensions.
+
+For applications that keep no module state outside of the IAVL tree, the snapshot stream remains backwards compatible.
+
+### Positive
+
+* State maintained outside of the IAVL tree, such as CosmWasm blobs, can be snapshotted by implementing extension snapshotters and fetched by new clients via state-sync.
+
+### Negative
+
+### Neutral
+
+* Every module that maintains state outside of the IAVL tree needs to implement `ExtensionSnapshotter`, and the application must call `RegisterExtensions` on the snapshot `Manager` during setup.
+
+## Further Discussions
+
+While an ADR is in the DRAFT or PROPOSED stage, this section should contain a summary of issues to be solved in future iterations (usually referencing comments from a pull-request discussion).
+Later, this section can optionally list ideas or improvements the author or reviewers found during the analysis of this ADR.
+
+## Test Cases [optional]
+
+Test cases for an implementation are mandatory for ADRs that are affecting consensus changes. Other ADRs can choose to include links to test cases if applicable.
+
+## References
+
+* https://github.com/cosmos/cosmos-sdk/pull/10961
+* https://github.com/cosmos/cosmos-sdk/issues/7340
+* https://hackmd.io/gJoyev6DSmqqkO667WQlGw
diff --git a/copy-of-sdk-docs/build/architecture/adr-050-sign-mode-textual-annex1.md b/copy-of-sdk-docs/build/architecture/adr-050-sign-mode-textual-annex1.md
new file mode 100644
index 00000000..96e0d094
--- /dev/null
+++ b/copy-of-sdk-docs/build/architecture/adr-050-sign-mode-textual-annex1.md
@@ -0,0 +1,361 @@
+# ADR 050: SIGN_MODE_TEXTUAL: Annex 1 Value Renderers
+
+## Changelog
+
+* Dec 06, 2021: Initial Draft
+* Feb 07, 2022: Draft read and concept-ACKed by the Ledger team.
+* Dec 01, 2022: Remove `Object: ` prefix on Any header screen.
+* Dec 13, 2022: Sign over bytes hash when bytes length > 32.
+* Mar 27, 2023: Update `Any` value renderer to omit message header screen.
+
+## Status
+
+Accepted. Implementation started. Some small value-renderer details still need to be polished.
+
+## Abstract
+
+This Annex describes value renderers, which are used for displaying Protobuf values in a human-friendly way using a string array.
+
+## Value Renderers
+
+Value Renderers describe how values of different Protobuf types should be encoded as a string array. Value renderers can be formalized as a set of bijective functions `func renderT(value T) []string`, where `T` is one of the below Protobuf types for which this spec is defined.
+
+### Protobuf `number`
+
+* Applies to:
+ * protobuf numeric integer types (`int{32,64}`, `uint{32,64}`, `sint{32,64}`, `fixed{32,64}`, `sfixed{32,64}`)
+ * strings whose `customtype` is `github.com/cosmos/cosmos-sdk/types.Int` or `github.com/cosmos/cosmos-sdk/types.Dec`
+ * bytes whose `customtype` is `github.com/cosmos/cosmos-sdk/types.Int` or `github.com/cosmos/cosmos-sdk/types.Dec`
+* Trailing decimal zeroes are always removed
+* `'` is used as a thousands separator, between every three integral digits.
+* `.` is used as the decimal separator.
+
+#### Examples
+
+* `1000` (uint64) -> `1'000`
+* `"1000000.00"` (string representing a Dec) -> `1'000'000`
+* `"1000000.10"` (string representing a Dec) -> `1'000'000.1`
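+
+As a sketch, the rules above could be implemented as follows; `renderNumber` is a hypothetical helper that assumes a well-formed, non-negative decimal string, not the SDK's actual implementation:
+
+```go
+package main
+
+import (
+    "fmt"
+    "strings"
+)
+
+// renderNumber renders a decimal string per the value-renderer spec:
+// "'" between every three integral digits, "." as the decimal
+// separator, and trailing fractional zeroes removed.
+func renderNumber(s string) string {
+    intPart, fracPart, _ := strings.Cut(s, ".")
+    // Group the integral part in threes, right to left.
+    var groups []string
+    for len(intPart) > 3 {
+        groups = append([]string{intPart[len(intPart)-3:]}, groups...)
+        intPart = intPart[:len(intPart)-3]
+    }
+    groups = append([]string{intPart}, groups...)
+    out := strings.Join(groups, "'")
+    fracPart = strings.TrimRight(fracPart, "0")
+    if fracPart != "" {
+        out += "." + fracPart
+    }
+    return out
+}
+
+func main() {
+    fmt.Println(renderNumber("1000"))       // 1'000
+    fmt.Println(renderNumber("1000000.00")) // 1'000'000
+    fmt.Println(renderNumber("1000000.10")) // 1'000'000.1
+}
+```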
+
+### `coin`
+
+* Applies to `cosmos.base.v1beta1.Coin`.
+* Denoms are converted to `display` denoms using `Metadata` (if available). **This requires a state query**. The definition of `Metadata` can be found in the [bank protobuf definition](https://buf.build/cosmos/cosmos-sdk/docs/main:cosmos.bank.v1beta1#cosmos.bank.v1beta1.Metadata). If the `display` field is empty or nil, then we do not perform any denom conversion.
+* Amounts are converted to `display` denom amounts and rendered as `number`s above
+ * We do not change the capitalization of the denom. In practice, `display` denoms are stored in lowercase in state (e.g. `10 atom`), but they are often shown in UPPERCASE in everyday life (e.g. `10 ATOM`). Value renderers keep the case used in state, but we may recommend that chains change their denom metadata to uppercase for better user display.
+* One space between the denom and amount (e.g. `10 atom`).
+* In the future, IBC denoms could be converted to DIDs/IIDs if we can find a robust way of doing so (e.g. `cosmos:cosmos:hub:bank:denom:atom`)
+
+#### Examples
+
+* `1000000000uatom` -> `["1'000 atom"]`, because atom is the metadata's display denom.
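+
+A hedged sketch of the display-denom conversion. The `Metadata` struct below is a simplified stand-in (the real bank `Metadata` carries denom units; the single `Exponent` field is an assumption for illustration), and thousands separators are omitted here since they are covered by the number renderer:
+
+```go
+package main
+
+import (
+    "fmt"
+    "strings"
+)
+
+// Metadata is a pared-down stand-in for the bank module's Metadata:
+// it maps a base denom to a display denom and the exponent between them.
+type Metadata struct {
+    Base     string
+    Display  string
+    Exponent uint32 // display = base / 10^Exponent
+}
+
+// renderCoin converts a base-denom amount into the display denom.
+func renderCoin(amount uint64, denom string, md Metadata) string {
+    if md.Display == "" || denom != md.Base {
+        return fmt.Sprintf("%d %s", amount, denom) // no conversion
+    }
+    div := uint64(1)
+    for i := uint32(0); i < md.Exponent; i++ {
+        div *= 10
+    }
+    whole, rem := amount/div, amount%div
+    if rem == 0 {
+        return fmt.Sprintf("%d %s", whole, md.Display)
+    }
+    frac := strings.TrimRight(fmt.Sprintf("%0*d", int(md.Exponent), rem), "0")
+    return fmt.Sprintf("%d.%s %s", whole, frac, md.Display)
+}
+
+func main() {
+    md := Metadata{Base: "uatom", Display: "atom", Exponent: 6}
+    fmt.Println(renderCoin(1000000000, "uatom", md)) // 1000 atom
+    fmt.Println(renderCoin(1500000, "uatom", md))    // 1.5 atom
+}
+```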
+
+### `coins`
+
+* an array of `coin` is displayed as the concatenation of each `coin` encoded per the specification above, joined with the delimiter `", "` (a comma and a space, no quotes around).
+* the list of coins is ordered by unicode code point of the display denom: `A-Z` < `a-z`. For example, the string `aAbBcC` would be sorted `ABCabc`.
+ * if the coins list is empty, it is rendered as `zero`
+
+#### Examples
+
+* `["3cosm", "2000000uatom"]` -> `3 COSM, 2 atom` (assuming the display denoms are `atom` and `COSM`; `COSM` sorts first since uppercase code points come before lowercase)
+* `["10atom", "20Acoin"]` -> `20 Acoin, 10 atom` (assuming the display denoms are `atom` and `Acoin`)
+* `[]` -> `zero`
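+
+The ordering rule can be sketched with Go's bytewise string comparison, which matches code-point order for ASCII denoms (the `rendered` map and denom list below are illustrative inputs, already individually value-rendered):
+
+```go
+package main
+
+import (
+    "fmt"
+    "sort"
+    "strings"
+)
+
+func main() {
+    // Display-denom coins, already value-rendered individually.
+    rendered := map[string]string{"Acoin": "20 Acoin", "atom": "10 atom"}
+    denoms := []string{"atom", "Acoin"}
+    // Go's string comparison is bytewise, which for ASCII matches
+    // unicode code-point order: 'A'-'Z' sort before 'a'-'z'.
+    sort.Strings(denoms)
+    parts := make([]string, 0, len(denoms))
+    for _, d := range denoms {
+        parts = append(parts, rendered[d])
+    }
+    if len(parts) == 0 {
+        fmt.Println("zero") // an empty coins list renders as "zero"
+        return
+    }
+    fmt.Println(strings.Join(parts, ", "))
+}
+```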
+
+### `repeated`
+
+* Applies to all `repeated` fields, except `cosmos.tx.v1beta1.TxBody#Messages`, which has a particular encoding (see [ADR-050](./adr-050-sign-mode-textual.md)).
+* A repeated type has the following template:
+
+```
+<field_name>: <int> <field_kind>
+<field_name> (<index>/<int>): <value rendered 1st line>
+<optional continuation lines of the value>
+<field_name> (<index>/<int>): <value rendered 1st line>
+<optional continuation lines of the value>
+End of <field_name>.
+```
+
+where:
+
+* `field_name` is the Protobuf field name of the repeated field
+* `field_kind`:
+ * if the type of the repeated field is a message, `field_kind` is the message name
+ * if the type of the repeated field is an enum, `field_kind` is the enum name
+ * in any other case, `field_kind` is the protobuf primitive type (e.g. "string" or "bytes")
+* `int` is the length of the array
+* `index` is the one-based index of the item in the repeated field
+
+#### Examples
+
+Given the proto definition:
+
+```protobuf
+message AllowedMsgAllowance {
+ repeated string allowed_messages = 1;
+}
+```
+
+and initializing with:
+
+```go
+x := AllowedMsgAllowance{
+ AllowedMessages: []string{"cosmos.bank.v1beta1.MsgSend", "cosmos.gov.v1.MsgVote"},
+}
+```
+
+we have the following value-rendered encoding:
+
+```
+Allowed messages: 2 strings
+Allowed messages (1/2): cosmos.bank.v1beta1.MsgSend
+Allowed messages (2/2): cosmos.gov.v1.MsgVote
+End of Allowed messages
+```
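+
+The template applied to this example can be sketched as follows; `renderRepeatedStrings` is a hypothetical helper covering only the primitive `string` kind:
+
+```go
+package main
+
+import "fmt"
+
+// renderRepeatedStrings applies the repeated-field template for a
+// field of primitive kind "string".
+func renderRepeatedStrings(fieldName string, values []string) []string {
+    screens := []string{fmt.Sprintf("%s: %d strings", fieldName, len(values))}
+    for i, v := range values {
+        screens = append(screens, fmt.Sprintf("%s (%d/%d): %s", fieldName, i+1, len(values), v))
+    }
+    return append(screens, "End of "+fieldName)
+}
+
+func main() {
+    for _, s := range renderRepeatedStrings("Allowed messages",
+        []string{"cosmos.bank.v1beta1.MsgSend", "cosmos.gov.v1.MsgVote"}) {
+        fmt.Println(s)
+    }
+}
+```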
+
+### `message`
+
+* Applies to all Protobuf messages that do not have a custom encoding.
+* Field names follow [sentence case](https://en.wiktionary.org/wiki/sentence_case)
+ * replace each `_` with a space
+ * capitalize first letter of the sentence
+* Field names are ordered by their Protobuf field number
+* Screen title is the field name, and screen content is the value.
+* Nesting:
+ * if a field contains a nested message, we value-render the underlying message using the template:
+
+ ```
+ <field_name>: <1st line of value-rendered message>
+ > <lines 2-n of value-rendered message> // Notice the `>` prefix.
+ ```
+
+ * `>` character is used to denote nesting. For each additional level of nesting, add `>`.
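+
+The two sentence-case steps can be sketched as follows (`toSentenceCase` is a hypothetical helper name):
+
+```go
+package main
+
+import (
+    "fmt"
+    "strings"
+    "unicode"
+)
+
+// toSentenceCase maps a protobuf field name like "proposal_id" to the
+// screen title "Proposal id": underscores become spaces, and only the
+// first letter is capitalized.
+func toSentenceCase(field string) string {
+    s := strings.ReplaceAll(field, "_", " ")
+    r := []rune(s)
+    if len(r) > 0 {
+        r[0] = unicode.ToUpper(r[0])
+    }
+    return string(r)
+}
+
+func main() {
+    fmt.Println(toSentenceCase("proposal_id")) // Proposal id
+    fmt.Println(toSentenceCase("voter"))       // Voter
+}
+```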
+
+#### Examples
+
+Given the following Protobuf messages:
+
+```protobuf
+enum VoteOption {
+ VOTE_OPTION_UNSPECIFIED = 0;
+ VOTE_OPTION_YES = 1;
+ VOTE_OPTION_ABSTAIN = 2;
+ VOTE_OPTION_NO = 3;
+ VOTE_OPTION_NO_WITH_VETO = 4;
+}
+
+message WeightedVoteOption {
+ VoteOption option = 1;
+ string weight = 2 [(cosmos_proto.scalar) = "cosmos.Dec"];
+}
+
+message Vote {
+ uint64 proposal_id = 1;
+ string voter = 2 [(cosmos_proto.scalar) = "cosmos.AddressString"];
+ reserved 3;
+ repeated WeightedVoteOption options = 4;
+}
+```
+
+we get the following encoding for the `Vote` message:
+
+```
+Vote object
+> Proposal id: 4
+> Voter: cosmos1abc...def
+> Options: 2 WeightedVoteOptions
+> Options (1/2): WeightedVoteOption object
+>> Option: VOTE_OPTION_YES
+>> Weight: 0.7
+> Options (2/2): WeightedVoteOption object
+>> Option: VOTE_OPTION_NO
+>> Weight: 0.3
+> End of Options
+```
+
+### Enums
+
+* Show the enum variant name as string.
+
+#### Examples
+
+See example above with `message Vote{}`.
+
+### `google.protobuf.Any`
+
+* Applies to `google.protobuf.Any`
+* Rendered as:
+
+```
+<type_url>
+> <value-rendered underlying message>
+```
+
+There is however one exception: when the underlying message is a Protobuf message that does not have a custom encoding, then the message header screen is omitted, and one level of indentation is removed.
+
+Messages that have a custom encoding, including `google.protobuf.Timestamp`, `google.protobuf.Duration`, `google.protobuf.Any`, `cosmos.base.v1beta1.Coin`, and messages that have an app-defined custom encoding, will preserve their header and indentation level.
+
+#### Examples
+
+Message header screen is stripped, one-level of indentation removed:
+
+```
+/cosmos.gov.v1.Vote
+> Proposal id: 4
+> Voter: cosmos1abc...def
+> Options: 2 WeightedVoteOptions
+> Options (1/2): WeightedVoteOption object
+>> Option: Yes
+>> Weight: 0.7
+> Options (2/2): WeightedVoteOption object
+>> Option: No
+>> Weight: 0.3
+> End of Options
+```
+
+Message with custom encoding:
+
+```
+/cosmos.base.v1beta1.Coin
+> 10uatom
+```
+
+### `google.protobuf.Timestamp`
+
+Rendered using [RFC 3339](https://www.rfc-editor.org/rfc/rfc3339) (a
+simplification of ISO 8601), which is the current recommendation for portable
+time values. The rendering always uses "Z" (UTC) as the timezone. It uses only
+the necessary fractional digits of a second, omitting the fractional part
+entirely if the timestamp has no fractional seconds. (The resulting timestamps
+are not automatically sortable by standard lexicographic order, but we favor
+the legibility of the shorter string.)
+
+#### Examples
+
+The timestamp with 1136214245 seconds and 700000000 nanoseconds is rendered
+as `2006-01-02T15:04:05.7Z`.
+The timestamp with 1136214245 seconds and zero nanoseconds is rendered
+as `2006-01-02T15:04:05Z`.
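+
+A sketch of this rendering using Go's standard library; the `.999999999` layout fraction conveniently trims trailing zeroes and drops an all-zero fraction, matching the two examples above:
+
+```go
+package main
+
+import (
+    "fmt"
+    "time"
+)
+
+// renderTimestamp renders seconds+nanos as RFC 3339 in UTC, with only
+// the necessary fractional digits of a second.
+func renderTimestamp(seconds int64, nanos int64) string {
+    t := time.Unix(seconds, nanos).UTC()
+    return t.Format("2006-01-02T15:04:05.999999999Z07:00")
+}
+
+func main() {
+    fmt.Println(renderTimestamp(1136214245, 700000000)) // 2006-01-02T15:04:05.7Z
+    fmt.Println(renderTimestamp(1136214245, 0))         // 2006-01-02T15:04:05Z
+}
+```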
+
+### `google.protobuf.Duration`
+
+The duration proto expresses a raw number of seconds and nanoseconds.
+This will be rendered as longer time units of days, hours, and minutes,
+plus any remaining seconds, in that order.
+Leading and trailing zero-quantity units will be omitted, but all
+units in between nonzero units will be shown, e.g. `3 days, 0 hours, 0 minutes, 5 seconds`.
+
+Even longer time units such as months or years are imprecise.
+Weeks are precise, but not commonly used - `91 days` is more immediately
+legible than `13 weeks`. Although `days` can be problematic,
+e.g. noon to noon on subsequent days can be 23 or 25 hours depending on
+daylight savings transitions, there is significant advantage in using
+strict 24-hour days over using only hours (e.g. `91 days` vs `2184 hours`).
+
+When nanoseconds are nonzero, they will be shown as fractional seconds,
+with only the minimum number of digits, e.g. `0.5 seconds`.
+
+A duration of exactly zero is shown as `0 seconds`.
+
+Units will be given as singular (no trailing `s`) when the quantity is exactly one,
+and will be shown in plural otherwise.
+
+Negative durations will be indicated with a leading minus sign (`-`).
+
+Examples:
+
+* `1 day`
+* `30 days`
+* `-1 day, 12 hours`
+* `3 hours, 0 minutes, 53.025 seconds`
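+
+A sketch of the rules above; `renderDuration` is a hypothetical helper that takes raw seconds and nanoseconds of the same sign, as in the protobuf `Duration` convention:
+
+```go
+package main
+
+import (
+    "fmt"
+    "strings"
+)
+
+// renderDuration renders days/hours/minutes/seconds: leading and
+// trailing zero units are dropped, interior zeros are kept, and units
+// are pluralized unless the quantity is exactly one.
+func renderDuration(seconds int64, nanos int64) string {
+    neg := seconds < 0 || (seconds == 0 && nanos < 0)
+    if neg {
+        seconds, nanos = -seconds, -nanos
+    }
+    days := seconds / 86400
+    hours := (seconds % 86400) / 3600
+    minutes := (seconds % 3600) / 60
+    secs := seconds % 60
+
+    unit := func(n int64, name string) string {
+        if n == 1 {
+            return fmt.Sprintf("1 %s", name)
+        }
+        return fmt.Sprintf("%d %ss", n, name)
+    }
+    secStr := unit(secs, "second")
+    if nanos > 0 {
+        // Fractional seconds with the minimum number of digits.
+        frac := strings.TrimRight(fmt.Sprintf("%09d", nanos), "0")
+        secStr = fmt.Sprintf("%d.%s seconds", secs, frac)
+    }
+
+    parts := []string{unit(days, "day"), unit(hours, "hour"), unit(minutes, "minute"), secStr}
+    nonzero := []bool{days != 0, hours != 0, minutes != 0, secs != 0 || nanos != 0}
+    first, last := -1, -1
+    for i, nz := range nonzero {
+        if nz {
+            if first == -1 {
+                first = i
+            }
+            last = i
+        }
+    }
+    if first == -1 {
+        return "0 seconds"
+    }
+    out := strings.Join(parts[first:last+1], ", ")
+    if neg {
+        out = "-" + out
+    }
+    return out
+}
+
+func main() {
+    fmt.Println(renderDuration(86400, 0))        // 1 day
+    fmt.Println(renderDuration(-129600, 0))      // -1 day, 12 hours
+    fmt.Println(renderDuration(10853, 25000000)) // 3 hours, 0 minutes, 53.025 seconds
+    fmt.Println(renderDuration(0, 0))            // 0 seconds
+}
+```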
+
+### bytes
+
+* Bytes of length 35 or shorter are rendered in hexadecimal, all capital letters, without the `0x` prefix.
+* Bytes of length greater than 35 are hashed using SHA256. The rendered text is `SHA-256=`, followed by the 32-byte hash, in hexadecimal, all capital letters, without the `0x` prefix.
+* The hexadecimal string is finally separated into groups of 4 digits, with a space `' '` as separator. If the bytes length is odd, the 2 remaining hexadecimal characters are at the end.
+
+The number 35 was chosen because it is the longest length where the hashed-and-prefixed representation is longer than the original data directly formatted, using the 3 rules above. More specifically:
+
+* a 35-byte array will have 70 hex characters, plus 17 space characters, resulting in 87 characters.
+* byte arrays starting from length 36 will be hashed to 32 bytes, which is 64 hex characters plus 15 spaces, and with the `SHA-256=` prefix, it takes 87 characters.
+
+Also, secp256k1 public keys are 33 bytes long, so they fall under the direct hexadecimal rendering; hashing public keys is something we would like to avoid.
+
+Note: Data longer than 35 bytes are not rendered in a way that can be inverted. See ADR-050's [section about invertibility](./adr-050-sign-mode-textual.md#invertible-rendering) for a discussion.
+
+#### Examples
+
+Inputs are displayed as byte arrays.
+
+* `[0]`: `00`
+* `[0,1,2]`: `0001 02`
+* `[0,1,2,..,34]`: `0001 0203 0405 0607 0809 0A0B 0C0D 0E0F 1011 1213 1415 1617 1819 1A1B 1C1D 1E1F 2021 22`
+* `[0,1,2,..,35]`: `SHA-256=5D7E 2D9B 1DCB C85E 7C89 0036 A2CF 2F9F E7B6 6554 F2DF 08CE C6AA 9C0A 25C9 9C21`
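+
+The three rules can be sketched as follows (`renderBytes` is a hypothetical helper name):
+
+```go
+package main
+
+import (
+    "crypto/sha256"
+    "fmt"
+    "strings"
+)
+
+// renderBytes hex-encodes in uppercase, hashing with SHA-256 first when
+// the input is longer than 35 bytes, then groups hex digits by four.
+func renderBytes(b []byte) string {
+    prefix := ""
+    if len(b) > 35 {
+        h := sha256.Sum256(b)
+        b = h[:]
+        prefix = "SHA-256="
+    }
+    hex := strings.ToUpper(fmt.Sprintf("%x", b))
+    var groups []string
+    for len(hex) > 4 {
+        groups = append(groups, hex[:4])
+        hex = hex[4:]
+    }
+    groups = append(groups, hex) // odd byte count leaves 2 digits at the end
+    return prefix + strings.Join(groups, " ")
+}
+
+func main() {
+    fmt.Println(renderBytes([]byte{0}))       // 00
+    fmt.Println(renderBytes([]byte{0, 1, 2})) // 0001 02
+    long := make([]byte, 36)                  // 36 > 35, so hashed
+    fmt.Println(strings.HasPrefix(renderBytes(long), "SHA-256="))
+}
+```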
+
+### address bytes
+
+We currently use `string` types in protobuf for addresses, so this may not be needed. However, if any address bytes are used in SIGN_MODE_TEXTUAL, they should be rendered with bech32 formatting.
+
+### strings
+
+Strings are rendered as-is.
+
+### Default Values
+
+* Default Protobuf values for each field are skipped.
+
+#### Example
+
+```protobuf
+message TestData {
+ string signer = 1;
+ string metadata = 2;
+}
+```
+
+```go
+myTestData := TestData{
+ Signer: "cosmos1abc",
+}
+```
+
+We get the following encoding for the `TestData` message:
+
+```
+TestData object
+> Signer: cosmos1abc
+```
+
+### bool
+
+Boolean values are rendered as `True` or `False`.
+
+### [ABANDONED] Custom `msg_title` instead of Msg `type_url`
+
+_This paragraph is in the Annex for informational purposes only, and will be removed in a next update of the ADR._
+
+
+
+* all protobuf messages to be used with `SIGN_MODE_TEXTUAL` CAN have a short title associated with them, declared via the `cosmos.msg.v1.textual.msg_title` Protobuf message option, which is used in format strings wherever the type URL would otherwise be shown.
+* if this option is not specified for a Msg, then the Protobuf fully qualified name will be used.
+
+```protobuf
+message MsgSend {
+ option (cosmos.msg.v1.textual.msg_title) = "bank send coins";
+}
+```
+
+* they MUST be unique per message, per chain
+
+#### Examples
+
+* `cosmos.gov.v1.MsgVote` -> `governance v1 vote`
+
+#### Best Practices
+
+We recommend using this option only for `Msg`s whose Protobuf fully qualified name can be hard to understand. As such, the two examples above (`MsgSend` and `MsgVote`) are not good candidates for `msg_title`. We still allow `msg_title` for chains that might have `Msg`s with complex or non-obvious names.
+
+In those cases, we recommend dropping the version (e.g. `v1`) in the string if there's only one version of the module on chain. This way, the bijective mapping can figure out which message each string corresponds to. If multiple Protobuf versions of the same module exist on the same chain, we recommend keeping the first `msg_title` without a version, and including the version (e.g. `v2`) in subsequent ones:
+
+* `mychain.mymodule.v1.MsgDo` -> `mymodule do something`
+* `mychain.mymodule.v2.MsgDo` -> `mymodule v2 do something`
+
+
diff --git a/copy-of-sdk-docs/build/architecture/adr-050-sign-mode-textual-annex2.md b/copy-of-sdk-docs/build/architecture/adr-050-sign-mode-textual-annex2.md
new file mode 100644
index 00000000..3a44001f
--- /dev/null
+++ b/copy-of-sdk-docs/build/architecture/adr-050-sign-mode-textual-annex2.md
@@ -0,0 +1,122 @@
+# ADR 050: SIGN_MODE_TEXTUAL: Annex 2 XXX
+
+## Changelog
+
+* Oct 3, 2022: Initial Draft
+
+## Status
+
+DRAFT
+
+## Abstract
+
+This annex provides normative guidance on how devices should render a
+`SIGN_MODE_TEXTUAL` document.
+
+## Context
+
+`SIGN_MODE_TEXTUAL` allows a legible version of a transaction to be signed
+on a hardware security device, such as a Ledger. Early versions of the
+design rendered transactions directly to lines of ASCII text, but this
+proved awkward due to its in-band signaling and the need to display
+Unicode text within the transaction.
+
+## Decision
+
+`SIGN_MODE_TEXTUAL` renders to an abstract representation, leaving it
+up to device-specific software how to present this representation given the
+capabilities, limitations, and conventions of the device.
+
+We offer the following normative guidance:
+
+1. The presentation should be as legible as possible to the user, given
+the capabilities of the device. If legibility could be sacrificed for other
+properties, we would recommend just using some other signing mode.
+Legibility should focus on the common case - it is okay for unusual cases
+to be less legible.
+
+2. The presentation should be invertible if possible without substantial
+sacrifice of legibility. Any change to the rendered data should result
+in a visible change to the presentation. This extends the integrity of the
+signing to user-visible presentation.
+
+3. The presentation should follow normal conventions of the device,
+without sacrificing legibility or invertibility.
+
+As an illustration of these principles, here is an example algorithm
+for presentation on a device which can display a single 80-character
+line of printable ASCII characters:
+
+* The presentation is broken into lines, and each line is presented in
+sequence, with user controls for going forward or backward a line.
+
+* Expert mode screens are only presented if the device is in expert mode.
+
+* Each line of the screen starts with a number of `>` characters equal
+to the screen's indentation level, followed by a `+` character if this
+isn't the first line of the screen, followed by a space if either a
+`>` or a `+` has been emitted, or if the line's text itself begins with
+a `>`, `+`, or space.
+
+* If the line ends with whitespace or an `@` character, an additional `@`
+character is appended to the line.
+
+* The following ASCII control characters or backslash (`\`) are converted
+to a backslash followed by a letter code, in the manner of string literals
+in many languages:
+
+ * a: U+0007 alert or bell
+ * b: U+0008 backspace
+ * f: U+000C form feed
+ * n: U+000A line feed
+ * r: U+000D carriage return
+ * t: U+0009 horizontal tab
+ * v: U+000B vertical tab
+ * `\`: U+005C backslash
+
+* All other ASCII control characters, plus non-ASCII Unicode code points,
+are shown as either:
+
+ * `\u` followed by 4 uppercase hex characters for code points
+ in the basic multilingual plane (BMP).
+
+ * `\U` followed by 8 uppercase hex characters for other code points.
+
+* The screen will be broken into multiple lines to fit the 80-character
+limit, considering the above transformations in a way that attempts to
+minimize the number of lines generated. Expanded control or Unicode characters
+are never split across lines.
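+
+The character transformations above can be sketched as follows (`escape` is a hypothetical helper name):
+
+```go
+package main
+
+import "fmt"
+
+// escape applies the character rules: named C-style escapes for common
+// control characters and backslash, \uXXXX for other code points in the
+// BMP outside printable ASCII, and \UXXXXXXXX beyond the BMP.
+func escape(s string) string {
+    named := map[rune]string{
+        '\a': `\a`, '\b': `\b`, '\f': `\f`, '\n': `\n`,
+        '\r': `\r`, '\t': `\t`, '\v': `\v`, '\\': `\\`,
+    }
+    out := ""
+    for _, r := range s {
+        switch {
+        case named[r] != "":
+            out += named[r]
+        case r >= 0x20 && r < 0x7F: // printable ASCII passes through
+            out += string(r)
+        case r <= 0xFFFF:
+            out += fmt.Sprintf(`\u%04X`, r)
+        default:
+            out += fmt.Sprintf(`\U%08X`, r)
+        }
+    }
+    return out
+}
+
+func main() {
+    fmt.Println(escape("coöperate")) // co\u00F6perate
+    fmt.Println(escape("a\tb"))      // a\tb
+}
+```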
+
+Example output:
+
+```
+An introductory line.
+key1: 123456
+key2: a string that ends in whitespace @
+key3: a string that ends in a single ampersand - @@
+ >tricky key4<: note the leading space in the presentation
+introducing an aggregate
+> key5: false
+> key6: a very long line of text, please co\u00F6perate and break into
+>+ multiple lines.
+> Can we do further nesting?
+>> You bet we can!
+```
+
+The inverse mapping gives us the only input which could have
+generated this output (JSON notation for string data):
+
+```
+Indent Text
+------ ----
+0 "An introductory line."
+0 "key1: 123456"
+0 "key2: a string that ends in whitespace "
+0 "key3: a string that ends in a single ampersand - @"
+0 ">tricky key4<: note the leading space in the presentation"
+0 "introducing an aggregate"
+1 "key5: false"
+1 "key6: a very long line of text, please coöperate and break into multiple lines."
+1 "Can we do further nesting?"
+2 "You bet we can!"
+```
diff --git a/copy-of-sdk-docs/build/architecture/adr-050-sign-mode-textual.md b/copy-of-sdk-docs/build/architecture/adr-050-sign-mode-textual.md
new file mode 100644
index 00000000..53185968
--- /dev/null
+++ b/copy-of-sdk-docs/build/architecture/adr-050-sign-mode-textual.md
@@ -0,0 +1,370 @@
+# ADR 050: SIGN_MODE_TEXTUAL
+
+## Changelog
+
+* Dec 06, 2021: Initial Draft.
+* Feb 07, 2022: Draft read and concept-ACKed by the Ledger team.
+* May 16, 2022: Change status to Accepted.
+* Aug 11, 2022: Require signing over tx raw bytes.
+* Sep 07, 2022: Add custom `Msg`-renderers.
+* Sep 18, 2022: Structured format instead of lines of text
+* Nov 23, 2022: Specify CBOR encoding.
+* Dec 01, 2022: Link to examples in separate JSON file.
+* Dec 06, 2022: Re-ordering of envelope screens.
+* Dec 14, 2022: Mention exceptions for invertibility.
+* Jan 23, 2023: Switch Screen.Text to Title+Content.
+* Mar 07, 2023: Change SignDoc from array to struct containing array.
+* Mar 20, 2023: Introduce a spec version initialized to 0.
+
+## Status
+
+Accepted. Implementation started. Some small value-renderer details still need to be polished.
+
+Spec version: 0.
+
+## Abstract
+
+This ADR specifies SIGN_MODE_TEXTUAL, a new string-based sign mode that is targeted at signing with hardware devices.
+
+## Context
+
+Protobuf-based SIGN_MODE_DIRECT was introduced in [ADR-020](./adr-020-protobuf-transaction-encoding.md) and is intended to replace SIGN_MODE_LEGACY_AMINO_JSON in most situations, such as mobile wallets and CLI keyrings. However, the [Ledger](https://www.ledger.com/) hardware wallet is still using SIGN_MODE_LEGACY_AMINO_JSON for displaying the sign bytes to the user. Hardware wallets cannot transition to SIGN_MODE_DIRECT as:
+
+* SIGN_MODE_DIRECT is binary-based and thus not suitable for display to end-users. Technically, hardware wallets could simply display the sign bytes to the user, but this would be considered blind signing, which is a security concern.
+* hardware wallets cannot decode the protobuf sign bytes due to memory constraints, as the Protobuf definitions would need to be embedded on the hardware device.
+
+In an effort to remove Amino from the SDK, a new sign mode needs to be created for hardware devices. [Initial discussions](https://github.com/cosmos/cosmos-sdk/issues/6513) propose a text-based sign mode, which this ADR formally specifies.
+
+## Decision
+
+In SIGN_MODE_TEXTUAL, a transaction is rendered into a textual representation,
+which is then sent to a secure device or subsystem for the user to review and sign.
+Unlike `SIGN_MODE_DIRECT`, the transmitted data can be simply decoded into legible text
+even on devices with limited processing and display.
+
+The textual representation is a sequence of _screens_.
+Each screen is meant to be displayed in its entirety (if possible) even on a small device like a Ledger.
+A screen is roughly equivalent to a short line of text.
+Large screens can be displayed in several pieces,
+much as long lines of text are wrapped,
+so no hard guidance is given, though 40 characters is a good target.
+A screen is used to display a single key/value pair for scalar values
+(or composite values with a compact notation, such as `Coins`)
+or to introduce or conclude a larger grouping.
+
+The text can contain the full range of Unicode code points, including control characters and nul.
+The device is responsible for deciding how to display characters it cannot render natively.
+See [annex 2](./adr-050-sign-mode-textual-annex2.md) for guidance.
+
+Screens have a non-negative indentation level to signal composite or nested structures.
+Indentation level zero is the top level.
+Indentation is displayed via some device-specific mechanism.
+Message quotation notation is an appropriate model, such as
+leading `>` characters or vertical bars on more capable displays.
+
+Some screens are marked as _expert_ screens,
+meant to be displayed only if the viewer chooses to opt in for the extra detail.
+Expert screens are meant for information that is rarely useful,
+or needs to be present only for signature integrity (see below).
+
+### Invertible Rendering
+
+We require that the rendering of the transaction be invertible:
+there must be a parsing function such that for every transaction,
+when rendered to the textual representation,
+parsing that representation yields a proto message equivalent
+to the original under proto equality.
+
+Note that this inverse function does not need to perform correct
+parsing or error signaling for the whole domain of textual data.
+Merely that the range of valid transactions be invertible under
+the composition of rendering and parsing.
+
+Note that the existence of an inverse function ensures that the
+rendered text contains the full information of the original transaction,
+not a hash or subset.
+
+We make an exception for invertibility for data which are too large to
+meaningfully display, such as byte strings longer than 32 bytes. We may then
+selectively render them with a cryptographically-strong hash. In these cases,
+it is still computationally infeasible to find a different transaction which
+has the same rendering. However, we must ensure that the hash computation is
+simple enough to be reliably executed independently, so at least the hash is
+itself reasonably verifiable when the raw byte string is not.
+
+### Chain State
+
+The rendering function (and parsing function) may depend on the current chain state.
+This is useful for reading parameters, such as coin display metadata,
+or for reading user-specific preferences such as language or address aliases.
+Note that if the observed state changes between signature generation
+and the transaction's inclusion in a block, the delivery-time rendering
+might differ. If so, the signature will be invalid and the transaction
+will be rejected.
+
+### Signature and Security
+
+For security, transaction signatures should have three properties:
+
+1. Given the transaction, signatures, and chain state, it must be possible to validate that the signatures match the transaction,
+verifying that the signers knew their respective secret keys.
+
+2. It must be computationally infeasible to find a substantially different transaction for which the given signatures are valid, given the same chain state.
+
+3. The user should be able to give informed consent to the signed data via a simple, secure device with limited display capabilities.
+
+The correctness and security of `SIGN_MODE_TEXTUAL` is guaranteed by demonstrating an inverse function from the rendering to transaction protos.
+This means that it is impossible for a different protocol buffer message to render to the same text.
+
+### Transaction Hash Malleability
+
+When client software forms a transaction, the "raw" transaction (`TxRaw`) is serialized as a proto
+and a hash of the resulting byte sequence is computed.
+This is the `TxHash`, and is used by various services to track the submitted transaction through its lifecycle.
+Various misbehavior is possible if one can generate a modified transaction with a different TxHash
+but for which the signature still checks out.
+
+SIGN_MODE_TEXTUAL prevents this transaction malleability by including the TxHash as an expert screen
+in the rendering.
+
+### SignDoc
+
+The SignDoc for `SIGN_MODE_TEXTUAL` is formed from a data structure like:
+
+```go
+type Screen struct {
+ Title string // size limit advised: 64 characters
+ Content string // size limit advised: 255 characters
+ Indent uint8 // size limited to something small like 16 or 32
+ Expert bool
+}
+
+type SignDocTextual struct {
+ Screens []Screen
+}
+```
+
+We do not plan to use protobuf serialization to form the sequence of bytes
+that will be transmitted and signed, in order to keep the decoder simple.
+We will use [CBOR](https://cbor.io) ([RFC 8949](https://www.rfc-editor.org/rfc/rfc8949.html)) instead.
+The encoding is defined by the following CDDL ([RFC 8610](https://www.rfc-editor.org/rfc/rfc8610)):
+
+```
+;;; CDDL (RFC 8610) Specification of SignDoc for SIGN_MODE_TEXTUAL.
+;;; Must be encoded using CBOR deterministic encoding (RFC 8949, section 4.2.1).
+
+;; A Textual document is a struct containing one field: an array of screens.
+sign_doc = {
+ screens_key: [* screen],
+}
+
+;; The key is an integer to keep the encoding small.
+screens_key = 1
+
+;; A screen consists of a text string, an indentation, and the expert flag,
+;; represented as an integer-keyed map. All entries are optional
+;; and MUST be omitted from the encoding if empty, zero, or false.
+;; Text defaults to the empty string, indent defaults to zero,
+;; and expert defaults to false.
+screen = {
+ ? title_key: tstr,
+ ? content_key: tstr,
+ ? indent_key: uint,
+ ? expert_key: bool,
+}
+
+;; Keys are small integers to keep the encoding small.
+title_key = 1
+content_key = 2
+indent_key = 3
+expert_key = 4
+```
+
+Defining the sign_doc as directly an array of screens has also been considered. However, given the possibility of future iterations of this specification, using a single-keyed struct has been chosen over the former proposal, as structs allow for easier backwards-compatibility.
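+
+To make the layout concrete, here is a hand-rolled sketch of the encoding for a single-screen document. It supports only lengths and values below 24 (one-byte CBOR heads) and only the title and content keys (indent and expert are omitted when zero/false, per the CDDL); a real implementation would use a full CBOR library with deterministic encoding:
+
+```go
+package main
+
+import "fmt"
+
+// hdr builds a one-byte CBOR head, valid only for n < 24.
+// Major types used below: 0 = uint, 3 = text string, 4 = array, 5 = map.
+func hdr(major, n int) byte { return byte(major<<5 | n) }
+
+func encodeScreen(title, content string) []byte {
+    out := []byte{hdr(5, 2), hdr(0, 1)} // map(2), key 1 = title
+    out = append(out, hdr(3, len(title)))
+    out = append(out, title...)
+    out = append(out, hdr(0, 2)) // key 2 = content
+    out = append(out, hdr(3, len(content)))
+    out = append(out, content...)
+    return out
+}
+
+func encodeSignDoc(screens ...[]byte) []byte {
+    out := []byte{hdr(5, 1), hdr(0, 1), hdr(4, len(screens))} // {1: [...]}
+    for _, s := range screens {
+        out = append(out, s...)
+    }
+    return out
+}
+
+func main() {
+    doc := encodeSignDoc(encodeScreen("Chain id", "my-chain"))
+    fmt.Printf("%x\n", doc)
+}
+```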
+
+## Details
+
+In the examples that follow, screens will be shown as lines of text,
+indentation is indicated with a leading '>',
+and expert screens are marked with a leading `*`.
+
+### Encoding of the Transaction Envelope
+
+We define the "transaction envelope" as all data in a transaction that is not in the `TxBody.Messages` field. The envelope includes the fee, signer infos, and memo, but not the `Msg`s. `//` denotes comments, which are not shown on the Ledger device.
+
+```
+Chain ID:
+Account number:
+Sequence:
+Address:
+*Public Key:
+This transaction has Message(s) // Pluralize "Message" only when int>1
+> Message (/): // See value renderers for Any rendering.
+End of Message
+Memo: // Skipped if no memo set.
+Fee: // See value renderers for coins rendering.
+*Fee payer: // Skipped if no fee_payer set.
+*Fee granter: // Skipped if no fee_granter set.
+Tip: // Skipped if no tip.
+Tipper:
+*Gas Limit:
+*Timeout Height: // Skipped if no timeout_height set.
+*Other signer: SignerInfo // Skipped if the transaction only has 1 signer.
+*> Other signer (/):
+*End of other signers
+*Extension options: Any: // Skipped if no body extension options
+*> Extension options (/):
+*End of extension options
+*Non critical extension options: Any: // Skipped if no body non critical extension options
+*> Non critical extension options (/):
+*End of Non critical extension options
+*Hash of raw bytes: // Hex encoding of bytes defined, to prevent tx hash malleability.
+```
+
+### Encoding of the Transaction Body
+
+The Transaction Body is the `Tx.TxBody.Messages` field, which is an array of `Any`s, where each `Any` packs an `sdk.Msg`. Since `sdk.Msg`s are widely used, they have a slightly different encoding than the usual array-of-`Any`s rendering (Protobuf: `repeated google.protobuf.Any`) described in Annex 1.
+
+```
+This transaction has message: // Add an 's' to "message" if there is more than one sdk.Msg.
+// For each Msg, print the following 2 lines:
+Msg (/): // E.g. Msg (1/2): bank v1beta1 send coins
+
+End of transaction messages
+```
+
+#### Example
+
+Given the following Protobuf message:
+
+```protobuf
+message Grant {
+ google.protobuf.Any authorization = 1 [(cosmos_proto.accepts_interface) = "cosmos.authz.v1beta1.Authorization"];
+ google.protobuf.Timestamp expiration = 2 [(gogoproto.stdtime) = true, (gogoproto.nullable) = false];
+}
+
+message MsgGrant {
+ option (cosmos.msg.v1.signer) = "granter";
+
+ string granter = 1 [(cosmos_proto.scalar) = "cosmos.AddressString"];
+ string grantee = 2 [(cosmos_proto.scalar) = "cosmos.AddressString"];
+}
+```
+
+and a transaction containing 1 such `sdk.Msg`, we get the following encoding:
+
+```
+This transaction has 1 message:
+Msg (1/1): authz v1beta1 grant
+Granter: cosmos1abc...def
+Grantee: cosmos1ghi...jkl
+End of transaction messages
+```
+
+### Custom `Msg` Renderers
+
+Application developers may choose not to follow the default value-renderer output for their own `Msg`s. In this case, they can implement their own custom `Msg` renderer. This is similar to [EIP4430](https://github.com/ethereum/EIPs/blob/master/EIPS/eip-4430.md), where the smart contract developer chooses the description string shown to the end user.
+
+This is done by setting the `cosmos.msg.textual.v1.expert_custom_renderer` Protobuf option to a non-empty string. This option CAN ONLY be set on a Protobuf message representing a transaction message object (i.e. one implementing the `sdk.Msg` interface).
+
+```protobuf
+message MsgFooBar {
+ // Optional comments to describe in human-readable language the formatting
+ // rules of the custom renderer.
+ option (cosmos.msg.textual.v1.expert_custom_renderer) = "";
+
+ // proto fields
+}
+```
+
+When this option is set on a `Msg`, a registered function will transform the `Msg` into an array of one or more strings, which MAY use the key/value format (described in point #3) with the expert field prefix (described in point #5) and arbitrary indentation (point #6). These strings MAY be rendered from a `Msg` field using a default value renderer, or they may be generated from several fields using custom logic.
+
+The option's string value is a convention chosen by the application developer and is used to identify the custom `Msg` renderer. For example, the documentation or specification of this custom algorithm can reference this identifier. This identifier CAN have a versioned suffix (e.g. `_v1`) to accommodate future changes (which would be consensus-breaking). We also recommend adding Protobuf comments that describe the custom logic in human language.
+
+Moreover, the renderer must provide 2 functions: one for formatting the Protobuf message into strings, and one for parsing strings back into the Protobuf message. Both functions are provided by the application developer. To satisfy point #1, the parse function MUST be the inverse of the formatting function. This property is not checked by the SDK at runtime, so we strongly recommend that application developers include a comprehensive test suite in their app repo to verify invertibility, so as not to introduce security bugs.
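+
+As a sketch of such a renderer pair (hypothetical `MsgFooBar`; plain strings stand in for screens), note that the parse function must reject any input that the format function could not have produced, which is what makes the round-trip property meaningful:
+
+```go
+package main
+
+import (
+	"fmt"
+	"strconv"
+	"strings"
+)
+
+// MsgFooBar is a hypothetical message using a custom renderer.
+type MsgFooBar struct {
+	Owner string
+	Count uint64
+}
+
+// format renders the message into screens (plain strings here).
+func format(m MsgFooBar) []string {
+	return []string{
+		"Owner: " + m.Owner,
+		"Count: " + strconv.FormatUint(m.Count, 10),
+	}
+}
+
+// parse is the inverse of format. It rejects anything format could
+// not have produced, so no two messages render to the same screens.
+func parse(screens []string) (MsgFooBar, error) {
+	var m MsgFooBar
+	if len(screens) != 2 ||
+		!strings.HasPrefix(screens[0], "Owner: ") ||
+		!strings.HasPrefix(screens[1], "Count: ") {
+		return m, fmt.Errorf("malformed screens")
+	}
+	m.Owner = strings.TrimPrefix(screens[0], "Owner: ")
+	count, err := strconv.ParseUint(strings.TrimPrefix(screens[1], "Count: "), 10, 64)
+	if err != nil {
+		return m, err
+	}
+	m.Count = count
+	return m, nil
+}
+
+func main() {
+	orig := MsgFooBar{Owner: "alice", Count: 7}
+	back, err := parse(format(orig))
+	fmt.Println(err == nil && back == orig) // round trip holds
+}
+```
+
+A real test suite would exercise this round trip over many generated messages, as recommended above.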
+
+### Require signing over the `TxBody` and `AuthInfo` raw bytes
+
+Recall that the transaction bytes merkleized on chain are the Protobuf binary serialization of [TxRaw](https://buf.build/cosmos/cosmos-sdk/docs/main:cosmos.tx.v1beta1#cosmos.tx.v1beta1.TxRaw), which contains the `body_bytes` and `auth_info_bytes`. Moreover, the transaction hash is defined as the SHA256 hash of the `TxRaw` bytes. We require that the user signs over these bytes in SIGN_MODE_TEXTUAL, more specifically over the following string:
+
+```
+*Hash of raw bytes: HEX(sha256(len(body_bytes) ++ body_bytes ++ len(auth_info_bytes) ++ auth_info_bytes))
+```
+
+where:
+
+* `++` denotes concatenation,
+* `HEX` is the hexadecimal representation of the bytes, all in capital letters, no `0x` prefix,
+* and `len()` is encoded as a Big-Endian uint64.
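+
+A sketch of this computation in standalone Go (the function name is hypothetical; the SDK's actual implementation may differ):
+
+```go
+package main
+
+import (
+	"crypto/sha256"
+	"encoding/binary"
+	"fmt"
+	"strings"
+)
+
+// rawBytesHash computes HEX(sha256(len(body_bytes) ++ body_bytes ++
+// len(auth_info_bytes) ++ auth_info_bytes)), with len() encoded as a
+// big-endian uint64 and HEX in capital letters without a 0x prefix.
+func rawBytesHash(bodyBytes, authInfoBytes []byte) string {
+	h := sha256.New()
+	var l [8]byte
+	binary.BigEndian.PutUint64(l[:], uint64(len(bodyBytes)))
+	h.Write(l[:])
+	h.Write(bodyBytes)
+	binary.BigEndian.PutUint64(l[:], uint64(len(authInfoBytes)))
+	h.Write(l[:])
+	h.Write(authInfoBytes)
+	return strings.ToUpper(fmt.Sprintf("%x", h.Sum(nil)))
+}
+
+func main() {
+	fmt.Println("*Hash of raw bytes: " + rawBytesHash([]byte{0x0a}, []byte{0x12}))
+}
+```
+
+The length prefixes ensure that each `(body_bytes, auth_info_bytes)` pair maps to a unique preimage, so the two fields cannot be confused with each other.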
+
+This is to prevent transaction hash malleability. The point #1 about invertibility assures that transaction `body` and `auth_info` values are not malleable, but the transaction hash still might be malleable with point #1 only, because the SIGN_MODE_TEXTUAL strings don't follow the byte ordering defined in `body_bytes` and `auth_info_bytes`. Without this hash, a malicious validator or exchange could intercept a transaction, modify its transaction hash _after_ the user signed it using SIGN_MODE_TEXTUAL (by tweaking the byte ordering inside `body_bytes` or `auth_info_bytes`), and then submit it to Tendermint.
+
+By including this hash in the SIGN_MODE_TEXTUAL signing payload, we keep the same level of guarantees as [SIGN_MODE_DIRECT](./adr-020-protobuf-transaction-encoding.md).
+
+These bytes are only shown in expert mode, hence the leading `*`.
+
+## Updates to the current specification
+
+The current specification is not set in stone, and future iterations are to be expected. We distinguish two categories of updates to this specification:
+
+1. Updates that require changes of the hardware device embedded application.
+2. Updates that only modify the envelope and the value renderers.
+
+Updates in the 1st category include changes to the `Screen` struct or its corresponding CBOR encoding. This type of update requires a modification of the hardware signer application, so that it can decode and parse the new types. Backwards compatibility must also be guaranteed, so that the new hardware application works with existing versions of the SDK. These updates require the coordination of multiple parties: SDK developers, hardware application developers (currently: Zondax), and client-side developers (e.g. CosmJS). Furthermore, a new submission of the hardware device application may be necessary, which, depending on the vendor, can take some time. As such, we recommend avoiding this type of update as much as possible.
+
+Updates in the 2nd category include changes to any of the value renderers or to the transaction envelope. For example, the ordering of fields in the envelope can be swapped, or the timestamp formatting can be modified. Since SIGN_MODE_TEXTUAL sends `Screen`s to the hardware device, this type of change does not need a hardware wallet application update. They are however state-machine-breaking, and must be documented as such. They require the coordination of SDK developers with client-side developers (e.g. CosmJS), so that the updates are released on both sides close to each other in time.
+
+We define a spec version, which is an integer that must be incremented on each update of either category. This spec version will be exposed by the SDK's implementation, and can be communicated to clients. For example, SDK v0.50 might use the spec version 1, and SDK v0.51 might use 2; thanks to this versioning, clients can know how to craft SIGN_MODE_TEXTUAL transactions based on the target SDK version.
+
+The current spec version is defined in the "Status" section, on the top of this document. It is initialized to `0` to allow flexibility in choosing how to define future versions, as it would allow adding a field either in the SignDoc Go struct or in Protobuf in a backwards-compatible way.
+
+## Additional Formatting by the Hardware Device
+
+See [annex 2](./adr-050-sign-mode-textual-annex2.md).
+
+## Examples
+
+1. A minimal MsgSend: [see transaction](https://github.com/cosmos/cosmos-sdk/blob/094abcd393379acbbd043996024d66cd65246fb1/tx/textual/internal/testdata/e2e.json#L2-L70).
+2. A transaction with a bit of everything: [see transaction](https://github.com/cosmos/cosmos-sdk/blob/094abcd393379acbbd043996024d66cd65246fb1/tx/textual/internal/testdata/e2e.json#L71-L270).
+
+The examples above are stored in a JSON file with the following fields:
+
+* `proto`: the representation of the transaction in ProtoJSON,
+* `screens`: the transaction rendered into SIGN_MODE_TEXTUAL screens,
+* `cbor`: the sign bytes of the transaction, which is the CBOR encoding of the screens.
+
+## Consequences
+
+### Backwards Compatibility
+
+SIGN_MODE_TEXTUAL is purely additive, and doesn't break any backwards compatibility with other sign modes.
+
+### Positive
+
+* Human-friendly way of signing in hardware devices.
+* Once SIGN_MODE_TEXTUAL is shipped, SIGN_MODE_LEGACY_AMINO_JSON can be deprecated and removed. On the longer term, once the ecosystem has totally migrated, Amino can be totally removed.
+
+### Negative
+
+* Some fields are still encoded in non-human-readable ways, such as public keys in hexadecimal.
+* A new Ledger app needs to be released; the timeline for this is still unclear.
+
+### Neutral
+
+* If the transaction is complex, the string array can be arbitrarily long, and some users might just skip some screens and blind sign.
+
+## Further Discussions
+
+* Some details on value renderers need to be polished, see [Annex 1](./adr-050-sign-mode-textual-annex1.md).
+* Are ledger apps able to support both SIGN_MODE_LEGACY_AMINO_JSON and SIGN_MODE_TEXTUAL at the same time?
+* Open question: should we add a Protobuf field option to allow app developers to overwrite the textual representation of certain Protobuf fields and message? This would be similar to Ethereum's [EIP4430](https://github.com/ethereum/EIPs/pull/4430), where the contract developer decides on the textual representation.
+* Internationalization.
+
+## References
+
+* [Annex 1](./adr-050-sign-mode-textual-annex1.md)
+
+* Initial discussion: https://github.com/cosmos/cosmos-sdk/issues/6513
+* Living document used in the working group: https://hackmd.io/fsZAO-TfT0CKmLDtfMcKeA?both
+* Working group meeting notes: https://hackmd.io/7RkGfv_rQAaZzEigUYhcXw
+* Ethereum's "Described Transactions" https://github.com/ethereum/EIPs/pull/4430
diff --git a/copy-of-sdk-docs/build/architecture/adr-053-go-module-refactoring.md b/copy-of-sdk-docs/build/architecture/adr-053-go-module-refactoring.md
new file mode 100644
index 00000000..a6a87ab2
--- /dev/null
+++ b/copy-of-sdk-docs/build/architecture/adr-053-go-module-refactoring.md
@@ -0,0 +1,110 @@
+# ADR 053: Go Module Refactoring
+
+## Changelog
+
+* 2022-04-27: First Draft
+
+## Status
+
+PROPOSED
+
+## Abstract
+
+The current SDK is built as a single monolithic go module. This ADR describes
+how we refactor the SDK into smaller independently versioned go modules
+for ease of maintenance.
+
+## Context
+
+Go modules impose certain requirements on software projects with respect to
+stable version numbers (anything above 0.x) in that [any API breaking changes
+necessitate a major version](https://go.dev/doc/modules/release-workflow#breaking)
+increase which technically creates a new go module
+(with a v2, v3, etc. suffix).
+
+[Keeping modules API compatible](https://go.dev/blog/module-compatibility) in
+this way requires a fair amount of thought and discipline.
+
+The Cosmos SDK is a fairly large project which originated before go modules
+came into existence and has always been under a v0.x release even though
+it has been used in production for years now, not because it isn't production
+quality software, but rather because the API compatibility guarantees required
+by go modules are fairly complex to adhere to with such a large project.
+Up to now, it has generally been deemed more important to be able to break the
+API if needed rather than require all users to update all package import paths
+to accommodate breaking changes causing v2, v3, etc. releases. This is in
+addition to the other complexities related to protobuf generated code that will
+be addressed in a separate ADR.
+
+Nevertheless, the desire for semantic versioning has been [strong in the
+community](https://github.com/cosmos/cosmos-sdk/discussions/10162) and the
+single go module release process has made it very hard to
+release small changes to isolated features in a timely manner. Release cycles
+often exceed six months which means small improvements done in a day or
+two get bottle-necked by everything else in the monolithic release cycle.
+
+## Decision
+
+To improve the current situation, the SDK is being refactored into multiple
+go modules within the current repository. There has been a [fair amount of
+debate](https://github.com/cosmos/cosmos-sdk/discussions/10582#discussioncomment-1813377)
+as to how to do this, with some developers arguing for larger vs smaller
+module scopes. There are pros and cons to both approaches (which will be
+discussed below in the [Consequences](#consequences) section), but the
+approach being adopted is the following:
+
+* a go module should generally be scoped to a specific coherent set of
+functionality (such as math, errors, store, etc.)
+* when code is removed from the core SDK and moved to a new module path, every
+effort should be made to avoid API breaking changes in the existing code using
+aliases and wrapper types (as done in https://github.com/cosmos/cosmos-sdk/pull/10779
+and https://github.com/cosmos/cosmos-sdk/pull/11788)
+* new go modules should be moved to a standalone domain (`cosmossdk.io`) before
+being tagged as `v1.0.0` to accommodate the possibility that they may be
+better served by a standalone repository in the future
+* all go modules should follow the guidelines in https://go.dev/blog/module-compatibility
+before `v1.0.0` is tagged and should make use of `internal` packages to limit
+the exposed API surface
+* the new go module's API may deviate from the existing code where there are
+clear improvements to be made or to remove legacy dependencies (for instance on
+amino or gogo proto), as long as the old package attempts
+to avoid API breakage with aliases and wrappers
+* care should be taken when simply trying to turn an existing package into a
+new go module: https://github.com/golang/go/wiki/Modules#is-it-possible-to-add-a-module-to-a-multi-module-repository.
+In general, it seems safer to just create a new module path (appending v2, v3, etc.
+if necessary), rather than trying to make an old package a new module.
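+
+To illustrate the alias guideline above, here is a minimal, self-contained sketch (hypothetical types; real refactors alias into the new `cosmossdk.io` module path) showing why a type alias avoids an API break while a defined type does not:
+
+```go
+package main
+
+import "fmt"
+
+// Hypothetical: movedCoin stands in for a type that has moved to a
+// new go module.
+type movedCoin struct {
+	Denom  string
+	Amount int64
+}
+
+// The old package keeps an alias so existing code compiles unchanged:
+// an alias is the *same* type, so no conversions or wrapper shims are
+// needed at call sites.
+type Coin = movedCoin
+
+// A defined type (without `=`) would instead be a distinct type, and
+// existing code would need explicit conversions — an API break.
+type distinctCoin movedCoin
+
+func main() {
+	var c Coin = movedCoin{Denom: "atom", Amount: 1} // compiles: same type
+	d := distinctCoin(c)                             // requires an explicit conversion
+	fmt.Println(c.Denom, d.Amount)
+}
+```
+
+This is the mechanism used in the pull requests linked above to keep old import paths working while the code lives in a new module.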
+
+## Consequences
+
+### Backwards Compatibility
+
+If the above guidelines are followed, using aliases or wrapper types in
+existing APIs that point back to the new go modules, there should be no or
+very limited breaking changes to existing APIs.
+
+### Positive
+
+* standalone pieces of software will reach `v1.0.0` sooner
+* new features to specific functionality will be released sooner
+
+### Negative
+
+* there will be more go module versions to update in the SDK itself and
+per-project, although most of these will hopefully be indirect
+
+### Neutral
+
+## Further Discussions
+
+Further discussions are occurring primarily in
+https://github.com/cosmos/cosmos-sdk/discussions/10582 and within
+the Cosmos SDK Framework Working Group.
+
+## References
+
+* https://go.dev/doc/modules/release-workflow
+* https://go.dev/blog/module-compatibility
+* https://github.com/cosmos/cosmos-sdk/discussions/10162
+* https://github.com/cosmos/cosmos-sdk/discussions/10582
+* https://github.com/cosmos/cosmos-sdk/pull/10779
+* https://github.com/cosmos/cosmos-sdk/pull/11788
diff --git a/copy-of-sdk-docs/build/architecture/adr-054-semver-compatible-modules.md b/copy-of-sdk-docs/build/architecture/adr-054-semver-compatible-modules.md
new file mode 100644
index 00000000..2152e1a9
--- /dev/null
+++ b/copy-of-sdk-docs/build/architecture/adr-054-semver-compatible-modules.md
@@ -0,0 +1,731 @@
+# ADR 054: Semver Compatible SDK Modules
+
+## Changelog
+
+* 2022-04-27: First draft
+
+## Status
+
+DRAFT
+
+## Abstract
+
+In order to move the Cosmos SDK to a system of decoupled semantically versioned
+modules which can be composed in different combinations (ex. staking v3 with
+bank v1 and distribution v2), we need to reassess how we organize the API surface
+of modules to avoid problems with go semantic import versioning and
+circular dependencies. This ADR explores various approaches we can take to
+address these issues.
+
+## Context
+
+There has been [a fair amount of desire](https://github.com/cosmos/cosmos-sdk/discussions/10162)
+in the community for semantic versioning in the SDK and there has been significant
+movement to splitting SDK modules into [standalone go modules](https://github.com/cosmos/cosmos-sdk/issues/11899).
+Both of these will ideally allow the ecosystem to move faster because we won't
+be waiting for all dependencies to update synchronously. For instance, we could
+have 3 versions of the core SDK compatible with the latest 2 releases of
+CosmWasm as well as 4 different versions of staking. This sort of setup would
+allow early adopters to aggressively integrate new versions, while allowing
+more conservative users to be selective about which versions they're ready for.
+
+In order to achieve this, we need to solve the following problems:
+
+1. because of the way [go semantic import versioning](https://research.swtch.com/vgo-import) (SIV)
+ works, moving to SIV naively will actually make it harder to achieve these goals
+2. circular dependencies between modules need to be broken to actually release
+ many modules in the SDK independently
+3. pernicious minor version incompatibilities introduced through correctly
+ [evolving protobuf schemas](https://developers.google.com/protocol-buffers/docs/proto3#updating)
+ without correct [unknown field filtering](./adr-020-protobuf-transaction-encoding.md#unknown-field-filtering)
+
+Note that all the following discussion assumes that the proto file versioning and state machine versioning of a module
+are distinct in that:
+
+* proto files are maintained in a non-breaking way (using something
+ like [buf breaking](https://docs.buf.build/breaking/overview)
+ to ensure all changes are backwards compatible)
+* proto file versions get bumped much less frequently, i.e. we might maintain `cosmos.bank.v1` through many versions
+ of the bank module state machine
+* state machine breaking changes are more common and ideally this is what we'd want to semantically version with
+ go modules, ex. `x/bank/v2`, `x/bank/v3`, etc.
+
+### Problem 1: Semantic Import Versioning Compatibility
+
+Consider we have a module `foo` which defines the following `MsgDoSomething` and that we've released its state
+machine in go module `example.com/foo`:
+
+```protobuf
+package foo.v1;
+
+message MsgDoSomething {
+ string sender = 1;
+ uint64 amount = 2;
+}
+
+service Msg {
+  rpc DoSomething(MsgDoSomething) returns (MsgDoSomethingResponse);
+}
+```
+
+Now consider that we make a revision to this module and add a new `condition` field to `MsgDoSomething` and also
+add a new validation rule on `amount` requiring it to be non-zero, and that following go semantic versioning we
+release the next state machine version of `foo` as `example.com/foo/v2`.
+
+```protobuf
+// Revision 1
+package foo.v1;
+
+message MsgDoSomething {
+ string sender = 1;
+
+ // amount must be a non-zero integer.
+ uint64 amount = 2;
+
+ // condition is an optional condition on doing the thing.
+ //
+ // Since: Revision 1
+ Condition condition = 3;
+}
+```
+
+Approaching this naively, we would generate the protobuf types for the initial
+version of `foo` in `example.com/foo/types` and we would generate the protobuf
+types for the second version in `example.com/foo/v2/types`.
+
+Now let's say we have a module `bar` which talks to `foo` using this keeper
+interface which `foo` provides:
+
+```go
+type FooKeeper interface {
+ DoSomething(MsgDoSomething) error
+}
+```
+
+#### Scenario A: Backward Compatibility: Newer Foo, Older Bar
+
+Imagine we have a chain which uses both `foo` and `bar` and wants to upgrade to
+`foo/v2`, but the `bar` module has not upgraded to `foo/v2`.
+
+In this case, the chain will not be able to upgrade to `foo/v2` until `bar`
+has upgraded its references to `example.com/foo/types.MsgDoSomething` to
+`example.com/foo/v2/types.MsgDoSomething`.
+
+Even if `bar`'s usage of `MsgDoSomething` has not changed at all, the upgrade
+will be impossible without this change because `example.com/foo/types.MsgDoSomething`
+and `example.com/foo/v2/types.MsgDoSomething` are fundamentally different
+incompatible structs in the go type system.
+
+#### Scenario B: Forward Compatibility: Older Foo, Newer Bar
+
+Now let's consider the reverse scenario, where `bar` upgrades to `foo/v2`
+by changing the `MsgDoSomething` reference to `example.com/foo/v2/types.MsgDoSomething`
+and releases that as `bar/v2` with some other changes that a chain wants.
+The chain, however, has decided that it thinks the changes in `foo/v2` are too
+risky and that it'd prefer to stay on the initial version of `foo`.
+
+In this scenario, it is impossible to upgrade to `bar/v2` without upgrading
+to `foo/v2` even if `bar/v2` would have worked 100% fine with `foo` other
+than changing the import path to `MsgDoSomething` (meaning that `bar/v2`
+doesn't actually use any new features of `foo/v2`).
+
+Now because of the way go semantic import versioning works, we are locked
+into either using `foo` and `bar` OR `foo/v2` and `bar/v2`. We cannot have
+`foo` + `bar/v2` OR `foo/v2` + `bar`. The go type system doesn't allow this
+even if both versions of these modules are otherwise compatible with each
+other.
+
+#### Naive Mitigation
+
+A naive approach to fixing this would be to not regenerate the protobuf types
+in `example.com/foo/v2/types` but instead just update `example.com/foo/types`
+to reflect the changes needed for `v2` (adding `condition` and requiring
+`amount` to be non-zero). Then we could release a patch of `example.com/foo/types`
+with this update and use that for `foo/v2`. But this change is state machine
+breaking for `v1`. It requires changing the `ValidateBasic` method to reject
+the case where `amount` is zero, and it adds the `condition` field which
+should be rejected based
+on [ADR 020 unknown field filtering](./adr-020-protobuf-transaction-encoding.md#unknown-field-filtering).
+So adding these changes as a patch on `v1` is actually incorrect based on semantic
+versioning. Chains that want to stay on `v1` of `foo` should not
+be importing these changes because they are incorrect for `v1`.
+
+### Problem 2: Circular dependencies
+
+None of the above approaches allow `foo` and `bar` to be separate modules
+if for some reason `foo` and `bar` depend on each other in different ways.
+For instance, we can't have `foo` import `bar/types` while `bar` imports
+`foo/types`.
+
+We have several cases of circular module dependencies in the SDK
+(ex. staking, distribution and slashing) that are legitimate from a state machine
+perspective. Without separating the API types out somehow, there would be
+no way to independently semantically version these modules without some other
+mitigation.
+
+### Problem 3: Handling Minor Version Incompatibilities
+
+Imagine that we solve the first two problems but now have a scenario where
+`bar/v2` wants the option to use `MsgDoSomething.condition` which only `foo/v2`
+supports. If `bar/v2` works with `foo` `v1` and sets `condition` to some non-nil
+value, then `foo` will silently ignore this field, resulting in a silent and
+possibly dangerous logic error. If `bar/v2` were able to dynamically check
+whether `foo` was on `v1` or `v2`, it could choose to only use `condition` when
+`foo/v2` is available. Even if `bar/v2` were able to perform this check, however,
+how do we know that it is always performing the check properly? Without
+some sort of
+framework-level [unknown field filtering](./adr-020-protobuf-transaction-encoding.md#unknown-field-filtering),
+it is hard to know whether these pernicious, hard-to-detect bugs are getting into
+our app and a client-server layer such as [ADR 033: Inter-Module Communication](./adr-033-protobuf-inter-module-comm.md)
+may be needed to do this.
+
+## Solutions
+
+### Approach A) Separate API and State Machine Modules
+
+One solution (first proposed in https://github.com/cosmos/cosmos-sdk/discussions/10582) is to isolate all protobuf
+generated code into a separate module
+from the state machine module. This would mean that we could have state machine
+go modules `foo` and `foo/v2` which could use a types or API go module say
+`foo/api`. This `foo/api` go module would be perpetually on `v1.x` and only
+accept non-breaking changes. This would then allow other modules to be
+compatible with either `foo` or `foo/v2` as long as the inter-module API only
+depends on the types in `foo/api`. It would also allow modules `foo` and `bar`
+to depend on each other in that both of them could depend on `foo/api` and
+`bar/api` without `foo` directly depending on `bar` and vice versa.
+
+This is similar to the naive mitigation described above except that it separates
+the types into separate go modules which in and of itself could be used to
+break circular module dependencies. It has the same problems as the naive solution,
+otherwise, which we could rectify by:
+
+1. removing all state machine breaking code from the API module (ex. `ValidateBasic` and any other interface methods)
+2. embedding the correct file descriptors for unknown field filtering in the binary
+
+#### Migrate all interface methods on API types to handlers
+
+To solve 1), we need to remove all interface implementations from generated
+types and instead use a handler approach which essentially means that given
+a type `X`, we have some sort of resolver which allows us to resolve interface
+implementations for that type (ex. `sdk.Msg` or `authz.Authorization`). For
+example:
+
+```go
+func (k Keeper) DoSomething(msg MsgDoSomething) error {
+ var validateBasicHandler ValidateBasicHandler
+ err := k.resolver.Resolve(&validateBasicHandler, msg)
+ if err != nil {
+ return err
+ }
+
+ err = validateBasicHandler.ValidateBasic()
+ ...
+}
+```
+
+In the case of some methods on `sdk.Msg`, we could replace them with declarative
+annotations. For instance, `GetSigners` can already be replaced by the protobuf
+annotation `cosmos.msg.v1.signer`. In the future, we may consider some sort
+of protobuf validation framework (like https://github.com/bufbuild/protoc-gen-validate
+but more Cosmos-specific) to replace `ValidateBasic`.
+
+#### Pinned FileDescriptors
+
+To solve 2), state machine modules must be able to specify what the version of
+the protobuf files was that they were built against. For instance if the API
+module for `foo` upgrades to `foo/v2`, the original `foo` module still needs
+a copy of the original protobuf files it was built with so that ADR 020
+unknown field filtering will reject `MsgDoSomething` when `condition` is
+set.
+
+The simplest way to do this may be to embed the protobuf `FileDescriptor`s into
+the module itself so that these `FileDescriptor`s are used at runtime rather
+than the ones that are built into the `foo/api` which may be different. Using
+[buf build](https://docs.buf.build/build/usage#output-format), [go embed](https://pkg.go.dev/embed),
+and a build script we can probably come up with a solution for embedding
+`FileDescriptor`s into modules that is fairly straightforward.
+
+#### Potential limitations to generated code
+
+One challenge with this approach is that it places heavy restrictions on what
+can go in API modules and requires that none of it is state machine breaking.
+All or most of the code in the API module would be generated from protobuf
+files, so we can probably control this with how code generation is done, but
+it is a risk to be aware of.
+
+For instance, we do code generation for the ORM that in the future could
+contain optimizations that are state machine breaking. We
+would either need to ensure very carefully that the optimizations aren't
+actually state machine breaking in generated code or separate this generated code
+out from the API module into the state machine module. Both of these mitigations
+are potentially viable but the API module approach does require an extra level
+of care to avoid these sorts of issues.
+
+#### Minor Version Incompatibilities
+
+This approach in and of itself does little to address any potential minor
+version incompatibilities and the
+requisite [unknown field filtering](./adr-020-protobuf-transaction-encoding.md#unknown-field-filtering).
+Likely some sort of client-server routing layer which does this check such as
+[ADR 033: Inter-Module communication](./adr-033-protobuf-inter-module-comm.md)
+is required to make sure that this is done properly. We could then allow
+modules to perform a runtime check given a `MsgClient`, ex:
+
+```go
+func (k Keeper) CallFoo() error {
+ if k.interModuleClient.MinorRevision(k.fooMsgClient) >= 2 {
+ k.fooMsgClient.DoSomething(&MsgDoSomething{Condition: ...})
+ } else {
+ ...
+ }
+}
+```
+
+To do the unknown field filtering itself, the ADR 033 router would need to use
+the [protoreflect API](https://pkg.go.dev/google.golang.org/protobuf/reflect/protoreflect)
+to ensure that no fields unknown to the receiving module are set. This could
+result in an undesirable performance hit depending on how complex this logic is.
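+
+For illustration, unknown field detection can be sketched at the wire-format level in standalone Go (hypothetical helper names; the real check would work on `FileDescriptor`s via the protoreflect API and must handle truncated input gracefully):
+
+```go
+package main
+
+import "fmt"
+
+// readUvarint decodes a protobuf varint, returning the value and the
+// number of bytes consumed.
+func readUvarint(b []byte) (uint64, int) {
+	var v uint64
+	for i, c := range b {
+		v |= uint64(c&0x7f) << (7 * uint(i))
+		if c < 0x80 {
+			return v, i + 1
+		}
+	}
+	panic("truncated varint")
+}
+
+// hasUnknownFields reports whether msg (raw protobuf wire bytes) sets
+// any field number outside the known set. Truncated input panics; a
+// production implementation must validate lengths instead.
+func hasUnknownFields(msg []byte, known map[uint64]bool) bool {
+	for len(msg) > 0 {
+		key, n := readUvarint(msg)
+		msg = msg[n:]
+		field, wire := key>>3, key&7
+		if !known[field] {
+			return true
+		}
+		switch wire {
+		case 0: // varint
+			_, n := readUvarint(msg)
+			msg = msg[n:]
+		case 1: // fixed64
+			msg = msg[8:]
+		case 2: // length-delimited
+			l, n := readUvarint(msg)
+			msg = msg[n+int(l):]
+		case 5: // fixed32
+			msg = msg[4:]
+		default:
+			panic("unsupported wire type")
+		}
+	}
+	return false
+}
+
+func main() {
+	// Revision 0 of MsgDoSomething knows fields 1 (sender) and 2 (amount).
+	known := map[uint64]bool{1: true, 2: true}
+	fmt.Println(hasUnknownFields([]byte{0x10, 0x05}, known))       // false: field 2 is known
+	fmt.Println(hasUnknownFields([]byte{0x1a, 0x01, 0x00}, known)) // true: field 3 is unknown
+}
+```
+
+A router performing this check would reject the message rather than silently drop the `condition` field, which is exactly the failure mode described above.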
+
+### Approach B) Changes to Generated Code
+
+An alternate approach to solving the versioning problem is to change how protobuf code is generated and move modules
+mostly or completely in the direction of inter-module communication as described
+in [ADR 033](./adr-033-protobuf-inter-module-comm.md).
+In this paradigm, a module could generate all the types it needs internally - including the API types of other modules -
+and talk to other modules via a client-server boundary. For instance, if `bar` needs to talk to `foo`, it could
+generate its own version of `MsgDoSomething` as `bar/internal/foo/v1.MsgDoSomething` and just pass this to the
+inter-module router which would somehow convert it to the version which foo needs (ex. `foo/internal.MsgDoSomething`).
+
+Currently, two generated structs for the same protobuf type cannot exist in the same go binary without special
+build flags (see https://developers.google.com/protocol-buffers/docs/reference/go/faq#fix-namespace-conflict).
+A relatively simple mitigation to this issue would be to set up the protobuf code to not register protobuf types
+globally if they are generated in an `internal/` package. This will require modules to register their types manually
+with the app-level protobuf registry; this is similar to what modules already do with the `InterfaceRegistry`
+and amino codec.
+
+If modules _only_ do ADR 033 message passing then a naive and non-performant solution for
+converting `bar/internal/foo/v1.MsgDoSomething`
+to `foo/internal.MsgDoSomething` would be marshaling and unmarshaling in the ADR 033 router. This would break down if
+we needed to expose protobuf types in `Keeper` interfaces because the whole point is to try to keep these types
+`internal/` so that we don't end up with all the import version incompatibilities we've described above. However,
+because of the issue with minor version incompatibilities and the need
+for [unknown field filtering](./adr-020-protobuf-transaction-encoding.md#unknown-field-filtering),
+sticking with the `Keeper` paradigm instead of ADR 033 may be unviable to begin with.
+
+A more performant solution (that could maybe be adapted to work with `Keeper` interfaces) would be to only expose
+getters and setters for generated types and internally store data in memory buffers which could be passed from
+one implementation to another in a zero-copy way.
+
+For example, imagine this protobuf API with only getters and setters is exposed for `MsgSend`:
+
+```go
+type MsgSend interface {
+ proto.Message
+ GetFromAddress() string
+ GetToAddress() string
+ GetAmount() []v1beta1.Coin
+ SetFromAddress(string)
+ SetToAddress(string)
+ SetAmount([]v1beta1.Coin)
+}
+
+func NewMsgSend() MsgSend { return &msgSendImpl{memoryBuffers: ...} }
+```
+
+Under the hood, `MsgSend` could be implemented on top of a raw memory buffer in the same way
+that [Cap'n Proto](https://capnproto.org)
+and [FlatBuffers](https://google.github.io/flatbuffers/) are, so that we could convert between one version of `MsgSend`
+and another without serialization (i.e. zero-copy). This approach would have the added benefit of allowing zero-copy
+message passing to modules written in other languages such as Rust and accessed through a VM or FFI. It could also make
+unknown field filtering in inter-module communication simpler if we require that all new fields are added in sequential
+order, ex. just checking that no field `> 5` is set.
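+
+The sequential-field check could be sketched by scanning the raw protobuf wire-format tags for the highest set field
+number, without fully decoding the message. The following is a stdlib-only toy illustration (the cutoff `> 5` above is
+just an example, and none of these names are a proposed API):
+
+```go
+package main
+
+import (
+	"encoding/binary"
+	"fmt"
+)
+
+// maxFieldNumber scans protobuf wire-format bytes and returns the highest
+// field number that is set, without fully decoding the message.
+func maxFieldNumber(b []byte) (int, error) {
+	max := 0
+	for len(b) > 0 {
+		tag, n := binary.Uvarint(b)
+		if n <= 0 {
+			return 0, fmt.Errorf("invalid tag varint")
+		}
+		b = b[n:]
+		field, wire := int(tag>>3), int(tag&7)
+		if field > max {
+			max = field
+		}
+		// skip over the field value according to its wire type
+		switch wire {
+		case 0: // varint
+			_, m := binary.Uvarint(b)
+			if m <= 0 {
+				return 0, fmt.Errorf("invalid varint value")
+			}
+			b = b[m:]
+		case 1: // fixed64
+			if len(b) < 8 {
+				return 0, fmt.Errorf("truncated fixed64")
+			}
+			b = b[8:]
+		case 2: // length-delimited
+			l, m := binary.Uvarint(b)
+			if m <= 0 || uint64(len(b)-m) < l {
+				return 0, fmt.Errorf("truncated bytes field")
+			}
+			b = b[m+int(l):]
+		case 5: // fixed32
+			if len(b) < 4 {
+				return 0, fmt.Errorf("truncated fixed32")
+			}
+			b = b[4:]
+		default:
+			return 0, fmt.Errorf("unsupported wire type %d", wire)
+		}
+	}
+	return max, nil
+}
+
+func main() {
+	// field 1 (varint 42) plus field 6 (2-byte string): the highest field is 6,
+	// so a server whose revision only knows fields 1-5 would reject this message
+	msg := []byte{0x08, 0x2a, 0x32, 0x02, 'h', 'i'}
+	max, err := maxFieldNumber(msg)
+	fmt.Println(max, err)
+}
+```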
+
+Also, we wouldn't have any issues with state machine breaking code on generated types because all the generated
+code used in the state machine would actually live in the state machine module itself. Depending on how interface
+types and protobuf `Any`s are used in other languages, however, it may still be desirable to take the handler
+approach described in approach A. Either way, types implementing interfaces would still need to be registered
+with an `InterfaceRegistry` as they are now because there would be no way to retrieve them via the global registry.
+
+In order to simplify access to other modules using ADR 033, a public API module (maybe even one
+[remotely generated by Buf](https://buf.build/docs/bsr/generated-sdks/go/)) could be used by client modules instead
+of requiring them to generate all client types internally.
+
+The big downsides of this approach are that it requires big changes to how people use protobuf types and would be a
+substantial rewrite of the protobuf code generator. This new generated code, however, could still be made compatible
+with
+the [`google.golang.org/protobuf/reflect/protoreflect`](https://pkg.go.dev/google.golang.org/protobuf/reflect/protoreflect)
+API in order to work with all standard golang protobuf tooling.
+
+It is possible that the naive approach of marshaling/unmarshaling in the ADR 033 router is an acceptable intermediate
+solution if the changes to the code generator are seen as too complex. However, since all modules would likely need
+to migrate to ADR 033 anyway with this approach, it might be better to do this all at once.
+
+### Approach C) Don't address these issues
+
+If the above solutions are seen as too complex, we can also decide not to do anything explicit to enable better module
+version compatibility or to break circular dependencies.
+
+In this case, when developers are confronted with the issues described above they can require dependencies to update in
+sync (what we do now) or attempt some ad-hoc potentially hacky solution.
+
+One approach is to ditch go semantic import versioning (SIV) altogether. Some people have commented that go's SIV
+(i.e. changing the import path to `foo/v2`, `foo/v3`, etc.) is too restrictive and that it should be optional. The
+golang maintainers disagree and only officially support semantic import versioning. We could, however, take the
+contrarian perspective and get more flexibility by using 0.x-based versioning basically forever.
+
+Module version compatibility could then be achieved using go.mod replace directives to pin dependencies to specific
+compatible 0.x versions. For instance if we knew `foo` 0.2 and 0.3 were both compatible with `bar` 0.3 and 0.4, we
+could use replace directives in our go.mod to stick to the versions of `foo` and `bar` we want. This would work as
+long as the authors of `foo` and `bar` avoid incompatible breaking changes between these modules.
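+
+Concretely, such pinning might look like this in an app's `go.mod` (module paths and versions here are purely
+illustrative):
+
+```text
+module example.com/app
+
+require (
+    example.com/bar v0.4.0
+    example.com/foo v0.3.0
+)
+
+// pin foo to a 0.2.x release known to be compatible with bar 0.4
+replace example.com/foo => example.com/foo v0.2.5
+```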
+
+Or, if developers choose to use semantic import versioning, they can attempt the naive solution described above
+and would also need to use special tags and replace directives to make sure that modules are pinned to the correct
+versions.
+
+Note, however, that all of these ad-hoc approaches would be vulnerable to the minor version compatibility issues
+described above unless [unknown field filtering](./adr-020-protobuf-transaction-encoding.md#unknown-field-filtering)
+is properly addressed.
+
+### Approach D) Avoid protobuf generated code in public APIs
+
+An alternative approach would be to avoid protobuf generated code in public module APIs. This would help avoid the
+discrepancy between state machine versions and client API versions at the module to module boundaries. It would mean
+that we wouldn't do inter-module message passing based on ADR 033, but rather stick to the existing keeper approach
+and take it one step further by avoiding any protobuf generated code in the keeper interface methods.
+
+Using this approach, our `foo.Keeper.DoSomething` method wouldn't have the generated `MsgDoSomething` struct (which
+comes from the protobuf API), but instead positional parameters. Then in order for `foo/v2` to support the `foo/v1`
+keeper it would simply need to implement both the v1 and v2 keeper APIs. The `DoSomething` method in v2 could have the
+additional `condition` parameter, but this wouldn't be present in v1 at all so there would be no danger of a client
+accidentally setting this when it isn't available.
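+
+Sketched in code, the hypothetical `foo` keeper APIs might look like this (all names are illustrative, not an existing
+SDK API; the v1 adapter forwarding with a default value is one possible way for `foo/v2` to serve v1 clients):
+
+```go
+package main
+
+import (
+	"context"
+	"fmt"
+)
+
+// FooKeeperV1 is the hypothetical v1 keeper API: positional parameters only,
+// no generated protobuf structs.
+type FooKeeperV1 interface {
+	DoSomething(ctx context.Context, amount uint64) error
+}
+
+// FooKeeperV2 adds the new condition parameter; a v1 caller cannot
+// accidentally set it because it simply does not exist in FooKeeperV1.
+type FooKeeperV2 interface {
+	DoSomething(ctx context.Context, amount uint64, condition string) error
+}
+
+// fooKeeper is foo/v2's implementation of the v2 API.
+type fooKeeper struct{}
+
+func (k fooKeeper) DoSomething(ctx context.Context, amount uint64, condition string) error {
+	fmt.Printf("doing something: amount=%d condition=%q\n", amount, condition)
+	return nil
+}
+
+// fooKeeperV1Adapter lets foo/v2 also serve v1 clients by forwarding with
+// the v1 default behavior for the missing parameter.
+type fooKeeperV1Adapter struct{ k fooKeeper }
+
+func (a fooKeeperV1Adapter) DoSomething(ctx context.Context, amount uint64) error {
+	return a.k.DoSomething(ctx, amount, "") // empty condition = v1 semantics
+}
+
+func main() {
+	var v2 FooKeeperV2 = fooKeeper{}
+	var v1 FooKeeperV1 = fooKeeperV1Adapter{k: fooKeeper{}}
+	_ = v2.DoSomething(context.Background(), 10, "only-on-tuesdays")
+	_ = v1.DoSomething(context.Background(), 10)
+}
+```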
+
+So this approach would avoid the challenge around minor version incompatibilities because the existing module keeper
+API would not get new fields when they are added to protobuf files.
+
+Taking this approach, however, would likely require making all protobuf generated code internal in order to prevent
+it from leaking into the keeper API. This means we would still need to modify the protobuf code generator to not
+register `internal/` code with the global registry, and we would still need to manually register protobuf
+`FileDescriptor`s (this is probably true in all scenarios). It may, however, be possible to avoid needing to refactor
+interface methods on generated types to handlers.
+
+Also, this approach doesn't address what would be done in scenarios where modules still want to use the message router.
+Either way, we probably still want a way to safely pass messages from one module to another via the router, even if it's just for
+use cases like `x/gov`, `x/authz`, CosmWasm, etc. That would still require most of the things outlined in approach (B),
+although we could advise modules to prefer keepers for communicating with other modules.
+
+The biggest downside of this approach is probably that it requires a strict refactoring of keeper interfaces to avoid
+generated code leaking into the API. This may result in cases where we need to duplicate types that are already defined
+in proto files and then write methods for converting between the golang and protobuf version. This may end up in a lot
+of unnecessary boilerplate and that may discourage modules from actually adopting it and achieving effective version
+compatibility. Approaches (A) and (B), although heavy handed initially, aim to provide a system which once adopted
+more or less gives the developer version compatibility for free with minimal boilerplate. Approach (D) may not be able
+to provide such a straightforward system since it requires a golang API to be defined alongside a protobuf API in a
+way that requires duplication and differing sets of design principles (protobuf APIs encourage additive changes
+while golang APIs would forbid it).
+
+Other downsides to this approach are:
+
+* no clear roadmap to supporting modules in other languages like Rust
+* doesn't get us any closer to proper object capability security (one of the goals of ADR 033)
+* ADR 033 needs to be done properly anyway for the set of use cases which do need it
+
+## Decision
+
+The latest **DRAFT** proposal is:
+
+1. we are aligned on adopting [ADR 033](./adr-033-protobuf-inter-module-comm.md) not just as an addition to the
+   framework, but as a core replacement for the keeper paradigm entirely.
+2. the ADR 033 inter-module router will accommodate any variation of approach (A) or (B) given the following rules:
+ a. if the client type is the same as the server type then pass it directly through,
+ b. if both client and server use the zero-copy generated code wrappers (which still need to be defined), then pass
+ the memory buffers from one wrapper to the other, or
+ c. marshal/unmarshal types between client and server.
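+
+Roughly, the router's conversion step could be sketched as follows. This is a self-contained toy: `Marshaler` stands
+in for `proto.Message`, JSON stands in for protobuf encoding, and the `zeroCopy` interface is hypothetical since the
+zero-copy wrappers are still to be defined:
+
+```go
+package main
+
+import (
+	"encoding/json"
+	"fmt"
+	"reflect"
+)
+
+// Marshaler is a stand-in for the minimal behavior the router needs from a message.
+type Marshaler interface {
+	Marshal() ([]byte, error)
+	Unmarshal([]byte) error
+}
+
+// zeroCopy marks messages backed by a shareable memory buffer (rule b).
+type zeroCopy interface {
+	Buffer() []byte
+	SetBuffer([]byte)
+}
+
+// convert bridges a client message to the type the server expects.
+func convert(client, server Marshaler) (Marshaler, error) {
+	// rule (a): client and server share the same generated type - pass through
+	if reflect.TypeOf(client) == reflect.TypeOf(server) {
+		return client, nil
+	}
+	// rule (b): both sides use zero-copy wrappers - hand the buffer across
+	if cz, ok := client.(zeroCopy); ok {
+		if sz, ok := server.(zeroCopy); ok {
+			sz.SetBuffer(cz.Buffer())
+			return server, nil
+		}
+	}
+	// rule (c): fall back to marshal/unmarshal between the two versions
+	bz, err := client.Marshal()
+	if err != nil {
+		return nil, err
+	}
+	if err := server.Unmarshal(bz); err != nil {
+		return nil, err
+	}
+	return server, nil
+}
+
+// two "generated versions" of the same message, as a client and server might hold
+type msgV1 struct {
+	From   string `json:"from"`
+	Amount uint64 `json:"amount"`
+}
+
+func (m *msgV1) Marshal() ([]byte, error) { return json.Marshal(m) }
+func (m *msgV1) Unmarshal(b []byte) error { return json.Unmarshal(b, m) }
+
+type msgV2 struct {
+	From      string `json:"from"`
+	Amount    uint64 `json:"amount"`
+	Condition string `json:"condition,omitempty"` // added in a later revision
+}
+
+func (m *msgV2) Marshal() ([]byte, error) { return json.Marshal(m) }
+func (m *msgV2) Unmarshal(b []byte) error { return json.Unmarshal(b, m) }
+
+func main() {
+	out, err := convert(&msgV1{From: "alice", Amount: 5}, &msgV2{})
+	fmt.Println(out.(*msgV2).From, out.(*msgV2).Amount, err)
+}
+```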
+
+This approach will allow for maximal correctness and enable a clear path to supporting modules written in other
+languages, possibly executed within a WASM VM.
+
+### Minor API Revisions
+
+To declare minor API revisions of proto files, we propose the following guidelines (which were already documented
+in [cosmos.app.v1alpha module options](../proto/cosmos/app/v1alpha1/module.proto)):
+
+* proto packages which are revised from their initial version (considered revision `0`) should include a `package`
+comment in some .proto file containing the text `Revision N` at the start of a comment line, where `N` is the current
+revision number.
+* all fields, messages, etc. added in a version beyond the initial revision should add a comment at the start of a
+comment line of the form `Since: Revision N`, where `N` is the non-zero revision in which they were added.
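+
+Following these guidelines, a hypothetical `foo` proto file at revision 1 might look like this (the module and fields
+are illustrative, shown only to demonstrate the comment format):
+
+```protobuf
+syntax = "proto3";
+
+// Revision 1
+package foo.v1;
+
+message MsgDoSomething {
+  string from_address = 1;
+  uint64 amount = 2;
+
+  // Since: Revision 1
+  string condition = 3;
+}
+```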
+
+It is advised that there is a 1:1 correspondence between a state machine module and a versioned set of proto files
+which are versioned either as a buf module, a go API module, or both. If the buf schema registry is used, the version of
+this buf module should always be `1.N` where `N` corresponds to the package revision. Patch releases should be used when
+only documentation comments are updated. It is okay to include proto packages named `v2`, `v3`, etc. in this same
+`1.N` versioned buf module (ex. `cosmos.bank.v2`) as long as all these proto packages consist of a single API intended
+to be served by a single SDK module.
+
+### Introspecting Minor API Revisions
+
+In order for modules to introspect the minor API revision of peer modules, we propose adding the following method
+to `cosmossdk.io/core/intermodule.Client`:
+
+```go
+ServiceRevision(ctx context.Context, serviceName string) uint64
+```
+
+Modules could call this using the service name statically generated by the go grpc code generator:
+
+```go
+intermoduleClient.ServiceRevision(ctx, bankv1beta1.Msg_ServiceDesc.ServiceName)
+```
+
+In the future, we may decide to extend the code generator used for protobuf services to add a field
+to client types which does this check more concisely, ex:
+
+```go
+package bankv1beta1
+
+type MsgClient interface {
+ Send(context.Context, MsgSend) (MsgSendResponse, error)
+ ServiceRevision(context.Context) uint64
+}
+```
+
+### Unknown Field Filtering
+
+To correctly perform [unknown field filtering](./adr-020-protobuf-transaction-encoding.md#unknown-field-filtering),
+the inter-module router can do one of the following:
+
+* use the `protoreflect` API for messages which support that
+* for gogo proto messages, marshal and use the existing `codec/unknownproto` code
+* for zero-copy messages, do a simple check on the highest set field number (assuming we can require that fields are
+  added consecutively in increasing order)
+
+### `FileDescriptor` Registration
+
+Because a single go binary may contain different versions of the same generated protobuf code, we cannot rely on the
+global protobuf registry to contain the correct `FileDescriptor`s. Because `appconfig` module configuration is itself
+written in protobuf, we would like to load the `FileDescriptor`s for a module before loading a module itself. So we
+will provide ways to register `FileDescriptor`s at module registration time before instantiation. We propose the
+following `cosmossdk.io/core/appmodule.Option` constructors for the various cases of how `FileDescriptor`s may be
+packaged:
+
+```go
+package appmodule
+
+// this can be used when we are using google.golang.org/protobuf compatible generated code
+// Ex:
+// ProtoFiles(bankv1beta1.File_cosmos_bank_v1beta1_module_proto)
+func ProtoFiles(file []protoreflect.FileDescriptor) Option {}
+
+// this can be used when we are using gogo proto generated code.
+func GzippedProtoFiles(file [][]byte) Option {}
+
+// this can be used when we are using buf build to generate a pinned file descriptor
+func ProtoImage(protoImage []byte) Option {}
+```
+
+This approach allows us to support several ways protobuf files might be generated:
+
+* proto files generated internally to a module (use `ProtoFiles`)
+* the API module approach with pinned file descriptors (use `ProtoImage`)
+* gogo proto (use `GzippedProtoFiles`)
+
+### Module Dependency Declaration
+
+One risk of ADR 033 is that dependencies are called at runtime which are not present in the loaded set of SDK modules.
+Also we want modules to have a way to define a minimum dependency API revision that they require. Therefore, all
+modules should declare their set of dependencies upfront. These dependencies could be defined when a module is
+instantiated, but ideally we know what the dependencies are before instantiation and can statically look at an app
+config and determine whether the set of modules is compatible. For example, if `bar` requires `foo` revision `>= 1`, then we
+should be able to know this when creating an app config with two versions of `bar` and `foo`.
+
+We propose defining these dependencies in the proto options of the module config object itself.
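+
+For example, loosely following the shape of the existing `cosmos.app.v1alpha1` module options (the exact option
+fields for revision requirements are still up for discussion):
+
+```protobuf
+message Module {
+  option (cosmos.app.v1alpha1.module) = {
+    go_import: "example.com/bar"
+    // bar requires at least revision 1 of the foo.v1 API
+    use_package: {name: "foo.v1", revision: 1}
+  };
+}
+```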
+
+### Interface Registration
+
+We will also need to define how interface methods are defined on types that are serialized as `google.protobuf.Any`'s.
+In light of the desire to support modules in other languages, we may want to think of solutions that will accommodate
+other languages such as plugins described briefly in [ADR 033](./adr-033-protobuf-inter-module-comm.md#internal-methods).
+
+### Testing
+
+In order to ensure that modules are indeed compatible with multiple versions of their dependencies, we plan to provide specialized
+unit and integration testing infrastructure that automatically tests multiple versions of dependencies.
+
+#### Unit Testing
+
+Unit tests should be conducted inside SDK modules by mocking their dependencies. In a full ADR 033 scenario,
+this means that all interaction with other modules is done via the inter-module router, so mocking of dependencies
+means mocking their msg and query server implementations. We will provide both a test runner and fixture to make this
+streamlined. The key thing that the test runner should do to test compatibility is to test all combinations of
+dependency API revisions. This can be done by taking the file descriptors for the dependencies, parsing their comments
+to determine the revisions in which various elements were added, and then creating synthetic file descriptors for each revision
+by subtracting elements that were added later.
+
+Here is a proposed API for the unit test runner and fixture:
+
+```go
+package moduletesting
+
+import (
+ "context"
+ "testing"
+
+ "cosmossdk.io/core/intermodule"
+ "cosmossdk.io/depinject"
+ "google.golang.org/grpc"
+ "google.golang.org/protobuf/proto"
+ "google.golang.org/protobuf/reflect/protodesc"
+)
+
+type TestFixture interface {
+ context.Context
+ intermodule.Client // for making calls to the module we're testing
+ BeginBlock()
+ EndBlock()
+}
+
+type UnitTestFixture interface {
+ TestFixture
+ grpc.ServiceRegistrar // for registering mock service implementations
+}
+
+type UnitTestConfig struct {
+ ModuleConfig proto.Message // the module's config object
+ DepinjectConfig depinject.Config // optional additional depinject config options
+ DependencyFileDescriptors []protodesc.FileDescriptorProto // optional dependency file descriptors to use instead of the global registry
+}
+
+// Run runs the test function for all combinations of dependency API revisions.
+func (cfg UnitTestConfig) Run(t *testing.T, f func(t *testing.T, f UnitTestFixture)) {
+ // ...
+}
+```
+
+Here is an example for testing bar calling foo which takes advantage of conditional service revisions in the expected
+mock arguments:
+
+```go
+func TestBar(t *testing.T) {
+ UnitTestConfig{ModuleConfig: &foomodulev1.Module{}}.Run(t, func (t *testing.T, f moduletesting.UnitTestFixture) {
+ ctrl := gomock.NewController(t)
+ mockFooMsgServer := footestutil.NewMockMsgServer()
+ foov1.RegisterMsgServer(f, mockFooMsgServer)
+ barMsgClient := barv1.NewMsgClient(f)
+ if f.ServiceRevision(foov1.Msg_ServiceDesc.ServiceName) >= 1 {
+ mockFooMsgServer.EXPECT().DoSomething(gomock.Any(), &foov1.MsgDoSomething{
+ ...,
+ Condition: ..., // condition is expected in revision >= 1
+ }).Return(&foov1.MsgDoSomethingResponse{}, nil)
+ } else {
+ mockFooMsgServer.EXPECT().DoSomething(gomock.Any(), &foov1.MsgDoSomething{...}).Return(&foov1.MsgDoSomethingResponse{}, nil)
+ }
+ res, err := barMsgClient.CallFoo(f, &MsgCallFoo{})
+ ...
+ })
+}
+```
+
+The unit test runner would make sure that no dependency mocks return arguments which are invalid for the service
+revision being tested to ensure that modules don't incorrectly depend on functionality not present in a given revision.
+
+#### Integration Testing
+
+An integration test runner and fixture would also be provided which instead of using mocks would test actual module
+dependencies in various combinations. Here is the proposed API:
+
+```go
+type IntegrationTestFixture interface {
+ TestFixture
+}
+
+type IntegrationTestConfig struct {
+ ModuleConfig proto.Message // the module's config object
+ DependencyMatrix map[string][]proto.Message // all the dependent module configs
+}
+
+// Run runs the test function for all combinations of dependency modules.
+func (cfg IntegrationTestConfig) Run(t *testing.T, f func (t *testing.T, f IntegrationTestFixture)) {
+ // ...
+}
+```
+
+And here is an example with foo and bar:
+
+```go
+func TestBarIntegration(t *testing.T) {
+ IntegrationTestConfig{
+ ModuleConfig: &barmodulev1.Module{},
+ DependencyMatrix: map[string][]proto.Message{
+ "runtime": []proto.Message{ // test against two versions of runtime
+ &runtimev1.Module{},
+ &runtimev2.Module{},
+ },
+ "foo": []proto.Message{ // test against three versions of foo
+ &foomodulev1.Module{},
+ &foomodulev2.Module{},
+ &foomodulev3.Module{},
+ }
+ }
+ }.Run(t, func (t *testing.T, f moduletesting.IntegrationTestFixture) {
+ barMsgClient := barv1.NewMsgClient(f)
+ res, err := barMsgClient.CallFoo(f, &MsgCallFoo{})
+ ...
+ })
+}
+```
+
+Unlike unit tests, integration tests actually pull in other module dependencies. So that modules can be written
+without direct dependencies on other modules and because golang has no concept of development dependencies, integration
+tests should be written in separate go modules, ex. `example.com/bar/v2/test`. Because this paradigm uses go semantic
+versioning, it is possible to build a single go module which imports three versions of foo and two versions of runtime and
+can test them all together in the six possible combinations of dependencies.
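+
+The `go.mod` of such a test module might look roughly like this (paths and versions are illustrative); semantic
+import versioning makes `foo`, `foo/v2` and `foo/v3` distinct module paths that can coexist in one build:
+
+```text
+module example.com/bar/v2/test
+
+require (
+    example.com/bar/v2 v2.1.0
+    example.com/foo v1.4.0
+    example.com/foo/v2 v2.2.0
+    example.com/foo/v3 v3.0.1
+    example.com/runtime v1.3.0
+    example.com/runtime/v2 v2.0.0
+)
+```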
+
+## Consequences
+
+### Backwards Compatibility
+
+Modules which migrate fully to ADR 033 will not be compatible with existing modules which use the keeper paradigm.
+As a temporary workaround we may create some wrapper types that emulate the current keeper interface to minimize
+the migration overhead.
+
+### Positive
+
+* we will be able to deliver interoperable semantically versioned modules which should dramatically increase the
+ ability of the Cosmos SDK ecosystem to iterate on new features
+* it will be possible to write Cosmos SDK modules in other languages in the near future
+
+### Negative
+
+* all modules will need to be refactored somewhat dramatically
+
+### Neutral
+
+* the `cosmossdk.io/core/appconfig` framework will play a more central role in terms of how modules are defined; this
+  is likely a good thing generally, but does mean additional changes for users wanting to stick to the pre-depinject way
+  of wiring up modules
+* `depinject` is somewhat less needed or maybe even obviated because of the full ADR 033 approach. If we adopt the
+ core API proposed in https://github.com/cosmos/cosmos-sdk/pull/12239, then a module would probably always instantiate
+ itself with a method `ProvideModule(appmodule.Service) (appmodule.AppModule, error)`. There is no complex wiring of
+ keeper dependencies in this scenario and dependency injection may not have as much of (or any) use case.
+
+## Further Discussions
+
+The decision described above is considered in draft mode and is pending final buy-in from the team and key stakeholders.
+Key outstanding discussions if we do adopt that direction are:
+
+* how do module clients introspect dependency module API revisions
+* how do modules determine a minor dependency module API revision requirement
+* how do modules appropriately test compatibility with different dependency versions
+* how to register and resolve interface implementations
+* how do modules register their protobuf file descriptors depending on the approach they take to generated code (the
+ API module approach may still be viable as a supported strategy and would need pinned file descriptors)
+
+## References
+
+* https://github.com/cosmos/cosmos-sdk/discussions/10162
+* https://github.com/cosmos/cosmos-sdk/discussions/10582
+* https://github.com/cosmos/cosmos-sdk/discussions/10368
+* https://github.com/cosmos/cosmos-sdk/pull/11340
+* https://github.com/cosmos/cosmos-sdk/issues/11899
+* [ADR 020](./adr-020-protobuf-transaction-encoding.md)
+* [ADR 033](./adr-033-protobuf-inter-module-comm.md)
diff --git a/copy-of-sdk-docs/build/architecture/adr-055-orm.md b/copy-of-sdk-docs/build/architecture/adr-055-orm.md
new file mode 100644
index 00000000..6d5974e5
--- /dev/null
+++ b/copy-of-sdk-docs/build/architecture/adr-055-orm.md
@@ -0,0 +1,114 @@
+# ADR 055: ORM
+
+## Changelog
+
+* 2022-04-27: First draft
+
+## Status
+
+ACCEPTED Implemented
+
+## Abstract
+
+In order to make it easier for developers to build Cosmos SDK modules and for clients to query, index and verify proofs
+against state data, we have implemented an ORM (object-relational mapping) layer for the Cosmos SDK.
+
+## Context
+
+Historically modules in the Cosmos SDK have always used the key-value store directly and created various handwritten
+functions for managing key format as well as constructing secondary indexes. This consumes a significant amount of
+time when building a module and is error-prone. Because key formats are non-standard, sometimes poorly documented,
+and subject to change, it is hard for clients to generically index, query and verify merkle proofs against state data.
+
+The first known instance of an "ORM" in the Cosmos ecosystem was in [weave](https://github.com/iov-one/weave/tree/master/orm).
+A later version was built for [regen-ledger](https://github.com/regen-network/regen-ledger/tree/157181f955823149e1825263a317ad8e16096da4/orm) for
+use in the group module and later [ported to the SDK](https://github.com/cosmos/cosmos-sdk/tree/35d3312c3be306591fcba39892223f1244c8d108/x/group/internal/orm)
+just for that purpose.
+
+While these earlier designs made it significantly easier to write state machines, they still required a lot of manual
+configuration, didn't expose state format directly to clients, and were limited in their support of different types
+of index keys, composite keys, and range queries.
+
+Discussions about the design continued in https://github.com/cosmos/cosmos-sdk/discussions/9156 and more
+sophisticated proofs of concept were created in https://github.com/allinbits/cosmos-sdk-poc/tree/master/runtime/orm
+and https://github.com/cosmos/cosmos-sdk/pull/10454.
+
+## Decision
+
+These prior efforts culminated in the creation of the Cosmos SDK `orm` go module which uses protobuf annotations
+for specifying ORM table definitions. This ORM is based on the new `google.golang.org/protobuf/reflect/protoreflect`
+API and supports:
+
+* sorted indexes for all simple protobuf types (except `bytes`, `enum`, `float`, `double`) as well as `Timestamp` and `Duration`
+* unsorted `bytes` and `enum` indexes
+* composite primary and secondary keys
+* unique indexes
+* auto-incrementing `uint64` primary keys
+* complex prefix and range queries
+* paginated queries
+* complete logical decoding of KV-store data
+
+Almost all the information needed to decode state directly is specified in .proto files. Each table definition specifies
+an ID which is unique per .proto file and each index within a table is unique within that table. Clients then only need
+to know the name of a module and the prefix of the ORM data for a specific .proto file within that module in order to decode
+state data directly. This additional information will be exposed directly through app configs which will be explained
+in a future ADR related to app wiring.
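+
+For instance, a table definition with a composite primary key and a secondary index might look like this
+(schematically modeled on the ORM's test protos; see the links at the end of this section for the canonical syntax):
+
+```protobuf
+message Balance {
+  option (cosmos.orm.v1.table) = {
+    id: 1
+    primary_key: {fields: "address,denom"}
+    index: {id: 1, fields: "denom"}
+  };
+
+  string address = 1;
+  string denom = 2;
+  uint64 amount = 3;
+}
+```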
+
+The ORM optimizes storage space by not repeating the primary key values in the stored value
+when storing primary key records. For example, if the object `{"a":0,"b":1}` has the primary key `a`, it will
+be stored in the key-value store as `Key: '0', Value: {"b":1}` (with the more efficient protobuf binary encoding).
+Also, the generated code from https://github.com/cosmos/cosmos-proto does optimizations around the
+`google.golang.org/protobuf/reflect/protoreflect` API to improve performance.
+
+A code generator is included with the ORM which creates type safe wrappers around the ORM's dynamic `Table`
+implementation and is the recommended way for modules to use the ORM.
+
+The ORM tests provide a simplified bank module demonstration which illustrates:
+
+* [ORM proto options](https://github.com/cosmos/cosmos-sdk/blob/0d846ae2f0424b2eb640f6679a703b52d407813d/orm/internal/testpb/bank.proto)
+* [Generated Code](https://github.com/cosmos/cosmos-sdk/blob/0d846ae2f0424b2eb640f6679a703b52d407813d/orm/internal/testpb/bank.cosmos_orm.go)
+* [Example Usage in a Module Keeper](https://github.com/cosmos/cosmos-sdk/blob/0d846ae2f0424b2eb640f6679a703b52d407813d/orm/model/ormdb/module_test.go)
+
+## Consequences
+
+### Backwards Compatibility
+
+State machine code that adopts the ORM will need migrations as the state layout is generally backwards incompatible.
+These state machines will also need to migrate to https://github.com/cosmos/cosmos-proto at least for state data.
+
+### Positive
+
+* easier to build modules
+* easier to add secondary indexes to state
+* possible to write a generic indexer for ORM state
+* easier to write clients that do state proofs
+* possible to automatically write query layers rather than needing to manually implement gRPC queries
+
+### Negative
+
+* worse performance than handwritten keys (for now). See [Further Discussions](#further-discussions)
+for potential improvements
+
+### Neutral
+
+## Further Discussions
+
+Further discussions will happen within the Cosmos SDK Framework Working Group. Current planned and ongoing work includes:
+
+* automatically generate client-facing query layer
+* client-side query libraries that transparently verify light client proofs
+* index ORM data to SQL databases
+* improve performance by:
+ * optimizing existing reflection based code to avoid unnecessary gets when doing deletes & updates of simple tables
+ * more sophisticated code generation such as making fast path reflection even faster (avoiding `switch` statements),
+ or even fully generating code that equals handwritten performance
+
+
+## References
+
+* https://github.com/iov-one/weave/tree/master/orm
+* https://github.com/regen-network/regen-ledger/tree/157181f955823149e1825263a317ad8e16096da4/orm
+* https://github.com/cosmos/cosmos-sdk/tree/35d3312c3be306591fcba39892223f1244c8d108/x/group/internal/orm
+* https://github.com/cosmos/cosmos-sdk/discussions/9156
+* https://github.com/allinbits/cosmos-sdk-poc/tree/master/runtime/orm
+* https://github.com/cosmos/cosmos-sdk/pull/10454
diff --git a/copy-of-sdk-docs/build/architecture/adr-057-app-wiring.md b/copy-of-sdk-docs/build/architecture/adr-057-app-wiring.md
new file mode 100644
index 00000000..824403fb
--- /dev/null
+++ b/copy-of-sdk-docs/build/architecture/adr-057-app-wiring.md
@@ -0,0 +1,369 @@
+# ADR 057: App Wiring
+
+## Changelog
+
+* 2022-05-04: Initial Draft
+* 2022-08-19: Updates
+
+## Status
+
+PROPOSED Implemented
+
+## Abstract
+
+In order to make it easier to build Cosmos SDK modules and apps, we propose a new app wiring system based on
+dependency injection and declarative app configurations to replace the current `app.go` code.
+
+## Context
+
+A number of factors have made the SDK and SDK apps in their current state hard to maintain. A symptom of the current
+state of complexity is [`simapp/app.go`](https://github.com/cosmos/cosmos-sdk/blob/c3edbb22cab8678c35e21fe0253919996b780c01/simapp/app.go)
+which contains almost 100 lines of imports and is otherwise over 600 lines of mostly boilerplate code that is
+generally copied to each new project. (Not to mention the additional boilerplate which gets copied in `simapp/simd`.)
+
+The large amount of boilerplate needed to bootstrap an app has made it hard to release independently versioned go
+modules for Cosmos SDK modules as described in [ADR 053: Go Module Refactoring](./adr-053-go-module-refactoring.md).
+
+In addition to being very verbose and repetitive, `app.go` also exposes a large surface area for breaking changes
+as most modules instantiate themselves with positional parameters which forces breaking changes anytime a new parameter
+(even an optional one) is needed.
+
+Several attempts were made to improve the current situation including [ADR 033: Internal-Module Communication](./adr-033-protobuf-inter-module-comm.md)
+and [a proof-of-concept of a new SDK](https://github.com/allinbits/cosmos-sdk-poc). The discussions around these
+designs led to the current solution described here.
+
+## Decision
+
+In order to improve the current situation, a new "app wiring" paradigm has been designed to replace `app.go` which
+involves:
+
+* a declarative configuration of the modules in an app which can be serialized to JSON or YAML
+* a dependency-injection (DI) framework for instantiating apps from the configuration
+
+### Dependency Injection
+
+When examining the code in `app.go`, one finds that most of it simply instantiates modules with dependencies provided either
+by the framework (such as store keys) or by other modules (such as keepers). It is generally pretty obvious given
+the context what the correct dependencies actually should be, so dependency-injection is an obvious solution. Rather
+than making developers manually resolve dependencies, a module will tell the DI container what dependency it needs
+and the container will figure out how to provide it.
+
+We explored several existing DI solutions in golang and felt that the reflection-based approach in [uber/dig](https://github.com/uber-go/dig)
+was closest to what we needed but not quite there. Assessing what we needed for the SDK, we designed and built
+the Cosmos SDK [depinject module](https://pkg.go.dev/github.com/cosmos/cosmos-sdk/depinject), which has the following
+features:
+
+* dependency resolution and provision through functional constructors, ex: `func(need SomeDep) (AnotherDep, error)`
+* dependency injection `In` and `Out` structs which support `optional` dependencies
+* grouped-dependencies (many-per-container) through the `ManyPerContainerType` tag interface
+* module-scoped dependencies via `ModuleKey`s (where each module gets a unique dependency)
+* one-per-module dependencies through the `OnePerModuleType` tag interface
+* sophisticated debugging information and container visualization via GraphViz
+
+Here are some examples of how these would be used in an SDK module:
+
+* `StoreKey` could be a module-scoped dependency which is unique per module
+* a module's `AppModule` instance (or the equivalent) could be a `OnePerModuleType`
+* CLI commands could be provided with `ManyPerContainerType`s
+
+Note that even though dependency resolution is dynamic and based on reflection, which could be considered a pitfall
+of this approach, the entire dependency graph should be resolved immediately on app startup and only gets resolved
+once (except in the case of dynamic config reloading which is a separate topic). This means that if there are any
+errors in the dependency graph, they will get reported immediately on startup so this approach is only slightly worse
+than fully static resolution in terms of error reporting and much better in terms of code complexity.
+
+### Declarative App Config
+
+In order to compose modules into an app, a declarative app configuration will be used. This configuration is based off
+of protobuf and its basic structure is very simple:
+
+```protobuf
+package cosmos.app.v1;
+
+message Config {
+ repeated ModuleConfig modules = 1;
+}
+
+message ModuleConfig {
+ string name = 1;
+ google.protobuf.Any config = 2;
+}
+```
+
+(See also https://github.com/cosmos/cosmos-sdk/blob/6e18f582bf69e3926a1e22a6de3c35ea327aadce/proto/cosmos/app/v1alpha1/config.proto)
+
+The configuration for every module is itself a protobuf message and modules will be identified and loaded based
+on the protobuf type URL of their config object (ex. `cosmos.bank.module.v1.Module`). Modules are given a unique short `name`
+to share resources across different versions of the same module which might have different protobuf package
+versions (ex. `cosmos.bank.module.v2.Module`). All module config objects should define the `cosmos.app.v1alpha1.module`
+descriptor option which will provide additional useful metadata for the framework and which can also be indexed
+in module registries.
+
+An example app config in YAML might look like this:
+
+```yaml
+modules:
+ - name: baseapp
+ config:
+ "@type": cosmos.baseapp.module.v1.Module
+ begin_blockers: [staking, auth, bank]
+ end_blockers: [bank, auth, staking]
+ init_genesis: [bank, auth, staking]
+ - name: auth
+ config:
+ "@type": cosmos.auth.module.v1.Module
+ bech32_prefix: "foo"
+ - name: bank
+ config:
+ "@type": cosmos.bank.module.v1.Module
+ - name: staking
+ config:
+ "@type": cosmos.staking.module.v1.Module
+```
+
+In the above example, there is a hypothetical `baseapp` module which contains the information around ordering of
+begin blockers, end blockers, and init genesis. Rather than lifting these concerns up to the module config layer,
+they are themselves handled by a module which could allow a convenient way of swapping out different versions of
+baseapp (for instance to target different versions of tendermint), without needing to change the rest of the config.
+The `baseapp` module would then provide to the server framework (which sort of sits outside the ABCI app) an instance
+of `abci.Application`.
+
+In this model, an app is *modules all the way down* and the dependency injection/app config layer is very much
+protocol-agnostic and can adapt to even major breaking changes at the protocol layer.
+
+### Module & Protobuf Registration
+
+In order for the two components of dependency injection and declarative configuration to work together as described,
+we need a way for modules to actually register themselves and provide dependencies to the container.
+
+One additional complexity that needs to be handled at this layer is protobuf registry initialization. Recall that
+in both the current SDK `codec` and the proposed [ADR 054: Protobuf Semver Compatible Codegen](https://github.com/cosmos/cosmos-sdk/pull/11802),
+protobuf types need to be explicitly registered. Given that the app config itself is based on protobuf and
+uses protobuf `Any` types, protobuf registration needs to happen before the app config itself can be decoded. Because
+we don't know which protobuf `Any` types will be needed a priori and modules themselves define those types, we need
+to decode the app config in separate phases:
+
+1. parse app config JSON/YAML as raw JSON and collect required module type URLs (without doing proto JSON decoding)
+2. build a [protobuf type registry](https://pkg.go.dev/google.golang.org/protobuf@v1.28.0/reflect/protoregistry) based
+ on file descriptors and types provided by each required module
+3. decode the app config as proto JSON using the protobuf type registry
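+
+Phase 1 can be sketched with the standard library alone. This snippet is illustrative, not the actual SDK implementation; its field names simply mirror the `Config`/`ModuleConfig` messages above:
+
+```go
+package main
+
+import (
+    "encoding/json"
+    "fmt"
+)
+
+// collectTypeURLs parses the app config as raw JSON and collects the module
+// config type URLs, without doing any proto JSON decoding.
+func collectTypeURLs(raw []byte) ([]string, error) {
+    var cfg struct {
+        Modules []struct {
+            Name   string          `json:"name"`
+            Config json.RawMessage `json:"config"` // left undecoded for now
+        } `json:"modules"`
+    }
+    if err := json.Unmarshal(raw, &cfg); err != nil {
+        return nil, err
+    }
+    var urls []string
+    for _, m := range cfg.Modules {
+        // Only the Any type URL is extracted in this phase.
+        var anyMsg struct {
+            Type string `json:"@type"`
+        }
+        if err := json.Unmarshal(m.Config, &anyMsg); err != nil {
+            return nil, err
+        }
+        urls = append(urls, anyMsg.Type)
+    }
+    return urls, nil
+}
+
+func main() {
+    raw := []byte(`{"modules":[
+        {"name":"auth","config":{"@type":"cosmos.auth.module.v1.Module"}},
+        {"name":"bank","config":{"@type":"cosmos.bank.module.v1.Module"}}
+    ]}`)
+    urls, err := collectTypeURLs(raw)
+    if err != nil {
+        panic(err)
+    }
+    fmt.Println(urls)
+}
+```
+
+The collected URLs then drive phase 2: each URL identifies a module whose file descriptors and types are added to the registry before the full proto JSON decode in phase 3.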
+
+Because in [ADR 054: Protobuf Semver Compatible Codegen](https://github.com/cosmos/cosmos-sdk/pull/11802), each module
+might use `internal` generated code which is not registered with the global protobuf registry, this code should provide
+an alternate way to register protobuf types with a type registry. In the same way that `.pb.go` files currently have a
+`var File_foo_proto protoreflect.FileDescriptor` for the file `foo.proto`, generated code should have a new member
+`var Types_foo_proto TypeInfo` where `TypeInfo` is an interface or struct with all the necessary info to register both
+the protobuf generated types and file descriptor.
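+
+As a rough sketch, such a `TypeInfo` member and a local registry might look like the following. All names here are illustrative assumptions, not the actual generated code or the `protoregistry` API:
+
+```go
+package main
+
+import "fmt"
+
+// TypeInfo is a stand-in for the proposed generated-code member; the real
+// one would also carry the file descriptor and the generated Go types.
+type TypeInfo struct {
+    FileName     string   // e.g. the .proto file this info describes
+    MessageNames []string // fully-qualified message names in the file
+}
+
+// TypeRegistry is a toy stand-in for a protoregistry-based type registry.
+type TypeRegistry struct {
+    messages map[string]bool
+}
+
+// RegisterTypes records every message name carried by the given TypeInfos,
+// keeping registration local instead of using the global protobuf registry.
+func (r *TypeRegistry) RegisterTypes(infos ...TypeInfo) {
+    for _, info := range infos {
+        for _, name := range info.MessageNames {
+            r.messages[name] = true
+        }
+    }
+}
+
+func main() {
+    reg := &TypeRegistry{messages: map[string]bool{}}
+    reg.RegisterTypes(TypeInfo{
+        FileName:     "cosmos/bank/module/v1/module.proto",
+        MessageNames: []string{"cosmos.bank.module.v1.Module"},
+    })
+    fmt.Println(reg.messages["cosmos.bank.module.v1.Module"])
+}
+```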
+
+So a module must provide dependency injection providers and protobuf types, and take as input its module
+config object, which uniquely identifies the module based on its type URL.
+
+With this in mind, we define a global module registry which allows module implementations to register themselves
+with the following API:
+
+```go
+// Register registers a module with the provided type name (ex. cosmos.bank.module.v1.Module)
+// and the provided options.
+func Register(configTypeName protoreflect.FullName, option ...Option) { ... }
+
+type Option interface { /* private methods */ }
+
+// Provide registers dependency injection provider functions which work with the
+// cosmos-sdk container module. These functions can also accept an additional
+// parameter for the module's config object.
+func Provide(providers ...interface{}) Option { ... }
+
+// Types registers protobuf TypeInfos with the protobuf registry.
+func Types(types ...TypeInfo) Option { ... }
+```
+
+Ex:
+
+```go
+func init() {
+    appmodule.Register("cosmos.bank.module.v1.Module",
+        appmodule.Types(
+            types.Types_tx_proto,
+            types.Types_query_proto,
+            types.Types_types_proto,
+        ),
+        appmodule.Provide(
+            ProvideBankModule,
+        ),
+    )
+}
+
+type Inputs struct {
+    container.In
+
+    AuthKeeper auth.Keeper
+    DB         ormdb.ModuleDB
+}
+
+type Outputs struct {
+    container.Out
+
+    Keeper    bank.Keeper
+    AppModule appmodule.AppModule
+}
+
+func ProvideBankModule(config *bankmodulev1.Module, inputs Inputs) (Outputs, error) { ... }
+```
+
+Note that in this model, a module configuration object *cannot* register different dependency providers at runtime
+based on the configuration. This is intentional because it allows us to know globally which modules provide which
+dependencies, and it will also allow us to do code generation of the whole app initialization. This
+can help us figure out issues with missing dependencies in an app config if the needed modules are loaded at runtime.
+In cases where required modules are not loaded at runtime, it may be possible to guide users to the correct module
+through a global Cosmos SDK module registry.
+
+The `*appmodule.Handler` type referenced above is a replacement for the legacy `AppModule` framework and is
+described in [ADR 063: Core Module API](./adr-063-core-module-api.md).
+
+### New `app.go`
+
+With this setup, `app.go` might now look something like this:
+
+```go
+package main
+
+import (
+ // Each go package which registers a module must be imported just for side-effects
+ // so that module implementations are registered.
+ _ "github.com/cosmos/cosmos-sdk/x/auth/module"
+ _ "github.com/cosmos/cosmos-sdk/x/bank/module"
+ _ "github.com/cosmos/cosmos-sdk/x/staking/module"
+ "github.com/cosmos/cosmos-sdk/core/app"
+)
+
+//go:embed app.yaml
+var appConfigYAML []byte
+
+func main() {
+ app.Run(app.LoadYAML(appConfigYAML))
+}
+```
+
+### Application to existing SDK modules
+
+So far we have described a system which is largely agnostic to the specifics of the SDK such as store keys, `AppModule`,
+`BaseApp`, etc. Improvements to these parts of the framework that integrate with the general app wiring framework
+defined here are described in [ADR 063: Core Module API](./adr-063-core-module-api.md).
+
+### Registration of Inter-Module Hooks
+
+Some modules define a hooks interface (ex. `StakingHooks`) which allows one module to call back into another module
+when certain events happen.
+
+With the app wiring framework, these hooks interfaces can be defined as `OnePerModuleType`s and then the module
+which consumes these hooks can collect them as a map of module name to hook type (ex. `map[string]FooHooks`). Ex:
+
+```go
+func init() {
+    appmodule.Register(
+        &foomodulev1.Module{},
+        appmodule.Invoke(InvokeSetFooHooks),
+        ...
+    )
+}
+
+func InvokeSetFooHooks(
+    keeper *keeper.Keeper,
+    fooHooks map[string]FooHooks,
+) error {
+    keys := maps.Keys(fooHooks)
+    sort.Strings(keys)
+    for _, k := range keys {
+        keeper.AddFooHooks(fooHooks[k])
+    }
+    return nil
+}
+```
+
+Optionally, the module consuming hooks can allow apps to define an order for calling these hooks based on module
+name in its config object.
+
+An alternative way for registering hooks via reflection was considered where all keeper types are inspected to see if
+they implement the hook interface by the modules exposing hooks. This has the downsides of:
+
+* needing to expose all the keepers of all modules to the module providing hooks,
+* not allowing for encapsulating hooks on a different type which doesn't expose all keeper methods,
+* making it harder to know statically which modules expose hooks or check for them.
+
+With the approach proposed here, hook registration will be clearly observable in `app.go` if `depinject` codegen
+(described below) is used.
+
+### Code Generation
+
+The `depinject` framework will optionally allow the app configuration and dependency injection wiring to be code
+generated. This will allow:
+
+* dependency injection wiring to be inspected as regular go code just like the existing `app.go`,
+* dependency injection to be opt-in with manual wiring 100% still possible.
+
+Code generation requires that all providers and invokers and their parameters are exported and in non-internal packages.
+
+### Module Semantic Versioning
+
+When we start creating semantically versioned SDK modules that are in standalone go modules, a state machine breaking
+change to a module should be handled as follows:
+
+* the semantic major version should be incremented, and
+* a new semantically versioned module config protobuf type should be created.
+
+For instance, if we have the SDK module for bank in the go module `github.com/cosmos/cosmos-sdk/x/bank` with the module config type
+`cosmos.bank.module.v1.Module`, and we want to make a state machine breaking change to the module, we would:
+
+* create a new go module `github.com/cosmos/cosmos-sdk/x/bank/v2`,
+* with the module config protobuf type `cosmos.bank.module.v2.Module`.
+
+This *does not* mean that we need to increment the protobuf API version for bank. Both modules can support
+`cosmos.bank.v1`, but `github.com/cosmos/cosmos-sdk/x/bank/v2` will be a separate go module with a separate module config type.
+
+This practice will eventually allow us to use appconfig to load new versions of a module via a configuration change.
+
+Effectively, there should be a 1:1 correspondence between a semantically versioned go module and a
+versioned module config protobuf type, and major versioning bumps should occur whenever state machine breaking changes
+are made to a module.
+
+NOTE: SDK modules that are standalone go modules *should not* adopt semantic versioning until the concerns described in
+[ADR 054: Module Semantic Versioning](./adr-054-semver-compatible-modules.md) are
+addressed. The short-term solution for this issue was left somewhat unresolved. However, the easiest tactic is
+likely to use a standalone API go module and follow the guidelines described in this comment: https://github.com/cosmos/cosmos-sdk/pull/11802#issuecomment-1406815181. For the time being, it is recommended that
+Cosmos SDK modules continue to follow tried and true [0-based versioning](https://0ver.org) until an officially
+recommended solution is provided. This section of the ADR will be updated when that happens and for now, this section
+should be considered as a design recommendation for future adoption of semantic versioning.
+
+## Consequences
+
+### Backwards Compatibility
+
+Modules which work with the new app wiring system do not need to drop their existing `AppModule` and `NewKeeper`
+registration paradigms. These two methods can live side-by-side for as long as is needed.
+
+### Positive
+
+* wiring up new apps will be simpler, more succinct and less error-prone
+* it will be easier to develop and test standalone SDK modules without needing to replicate all of simapp
+* it may be possible to dynamically load modules and upgrade chains without needing to do a coordinated stop and binary
+ upgrade using this mechanism
+* easier plugin integration
+* dependency injection framework provides more automated reasoning about dependencies in the project, with a graph visualization.
+
+### Negative
+
+* it may be confusing when a dependency is missing, although error messages, the GraphViz visualization, and global
+  module registration may help with that
+
+### Neutral
+
+* it will require work and education
+
+## Further Discussions
+
+The protobuf type registration system described in this ADR has not been implemented and may need to be reconsidered in
+light of code generation. It may be better to do this type registration with a DI provider.
+
+## References
+
+* https://github.com/cosmos/cosmos-sdk/blob/c3edbb22cab8678c35e21fe0253919996b780c01/simapp/app.go
+* https://github.com/allinbits/cosmos-sdk-poc
+* https://github.com/uber-go/dig
+* https://github.com/google/wire
+* https://pkg.go.dev/github.com/cosmos/cosmos-sdk/container
+* https://github.com/cosmos/cosmos-sdk/pull/11802
+* [ADR 063: Core Module API](./adr-063-core-module-api.md)
diff --git a/copy-of-sdk-docs/build/architecture/adr-058-auto-generated-cli.md b/copy-of-sdk-docs/build/architecture/adr-058-auto-generated-cli.md
new file mode 100644
index 00000000..8dc78920
--- /dev/null
+++ b/copy-of-sdk-docs/build/architecture/adr-058-auto-generated-cli.md
@@ -0,0 +1,98 @@
+# ADR 058: Auto-Generated CLI
+
+## Changelog
+
+* 2022-05-04: Initial Draft
+
+## Status
+
+ACCEPTED Partially Implemented
+
+## Abstract
+
+In order to make it easier for developers to write Cosmos SDK modules, we provide infrastructure which automatically
+generates CLI commands based on protobuf definitions.
+
+## Context
+
+Current Cosmos SDK modules generally implement a CLI command for every transaction and every query supported by the
+module. These are handwritten for each command and essentially amount to providing some CLI flags or positional
+arguments for specific fields in protobuf messages.
+
+In order to make sure CLI commands are correctly implemented as well as to make sure that the application works
+in end-to-end scenarios, we do integration tests using CLI commands. While these tests are valuable on some level,
+they can be hard to write and maintain, and run slowly. [Some teams have contemplated](https://github.com/regen-network/regen-ledger/issues/1041)
+moving away from CLI-style integration tests (which are really end-to-end tests) towards narrower integration tests
+which exercise `MsgClient` and `QueryClient` directly. This might involve replacing the current end-to-end CLI
+tests with unit tests as there still needs to be some way to test these CLI commands for full quality assurance.
+
+## Decision
+
+To make module development simpler, we provide infrastructure - in the new [`client/v2`](https://github.com/cosmos/cosmos-sdk/tree/main/client/v2)
+go module - for automatically generating CLI commands based on protobuf definitions to either replace or complement
+handwritten CLI commands. This will mean that when developing a module, it will be possible to skip both writing and
+testing CLI commands as that can all be taken care of by the framework.
+
+The basic design for automatically generating CLI commands is to:
+
+* create one CLI command for each `rpc` method in a protobuf `Query` or `Msg` service
+* create a CLI flag for each field in the `rpc` request type
+* for `query` commands call gRPC and print the response as protobuf JSON or YAML (via the `-o`/`--output` flag)
+* for `tx` commands, create a transaction and apply common transaction flags
+
+In order to make the auto-generated CLI as easy to use as (or easier than) handwritten CLI, we need to do custom handling
+of specific protobuf field types so that the input format is easy for humans:
+
+* `Coin`, `Coins`, `DecCoin`, and `DecCoins` should be input using the existing format (i.e. `1000uatom`)
+* it should be possible to specify an address using either the bech32 address string or a named key in the keyring
+* `Timestamp` and `Duration` should accept strings like `2001-01-01T00:00:00Z` and `1h3m` respectively
+* pagination should be handled with flags like `--page-limit`, `--page-offset`, etc.
+* it should be possible to customize any other protobuf type either via its message name or a `cosmos_proto.scalar` annotation
+
+At a basic level it should be possible to generate a command for a single `rpc` method as well as all the commands for
+a whole protobuf `service` definition. It should be possible to mix and match auto-generated and handwritten commands.
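+
+The core idea of flag generation can be sketched with the standard library `flag` and `reflect` packages. This is illustrative only: the real `client/v2` implementation works from protobuf descriptors, and `QueryBalanceRequest` here is a plain struct standing in for a generated request message.
+
+```go
+package main
+
+import (
+    "flag"
+    "fmt"
+    "reflect"
+    "strings"
+)
+
+// bindFlags creates one CLI flag per field of a request struct, converting
+// field names to kebab-case flag names.
+func bindFlags(fs *flag.FlagSet, req interface{}) {
+    v := reflect.ValueOf(req).Elem()
+    t := v.Type()
+    for i := 0; i < t.NumField(); i++ {
+        name := toKebab(t.Field(i).Name)
+        switch f := v.Field(i); f.Kind() {
+        case reflect.String:
+            fs.StringVar(f.Addr().Interface().(*string), name, "", t.Field(i).Name)
+        case reflect.Uint64:
+            fs.Uint64Var(f.Addr().Interface().(*uint64), name, 0, t.Field(i).Name)
+        }
+    }
+}
+
+// toKebab turns "PageLimit" into "page-limit".
+func toKebab(s string) string {
+    var b strings.Builder
+    for i, r := range s {
+        if r >= 'A' && r <= 'Z' {
+            if i > 0 {
+                b.WriteByte('-')
+            }
+            r += 'a' - 'A'
+        }
+        b.WriteRune(r)
+    }
+    return b.String()
+}
+
+// QueryBalanceRequest is a hypothetical stand-in for a generated request type.
+type QueryBalanceRequest struct {
+    Address string
+    Denom   string
+}
+
+func main() {
+    var req QueryBalanceRequest
+    fs := flag.NewFlagSet("balance", flag.ExitOnError)
+    bindFlags(fs, &req)
+    fs.Parse([]string{"--address", "cosmos1example", "--denom", "uatom"})
+    fmt.Println(req.Address, req.Denom)
+}
+```
+
+A descriptor-driven implementation follows the same shape, but enumerates protobuf fields instead of Go struct fields, which is what allows the custom handling of `Coin`, `Timestamp`, and the other types listed above.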
+
+## Consequences
+
+### Backwards Compatibility
+
+Existing modules can mix and match auto-generated and handwritten CLI commands so it is up to them as to whether they
+make breaking changes by replacing handwritten commands with slightly different auto-generated ones.
+
+For now the SDK will maintain the existing set of CLI commands for backwards compatibility but new commands will use
+this functionality.
+
+### Positive
+
+* module developers will not need to write CLI commands
+* module developers will not need to test CLI commands
+* [lens](https://github.com/strangelove-ventures/lens) may benefit from this
+
+### Negative
+
+### Neutral
+
+## Further Discussions
+
+We would like to be able to customize:
+
+* short and long usage strings for commands
+* aliases for flags (ex. `-a` for `--amount`)
+* which fields are positional parameters rather than flags
+
+It is an [open discussion](https://github.com/cosmos/cosmos-sdk/pull/11725#issuecomment-1108676129)
+as to whether these customizations options should lie in:
+
+* the .proto files themselves,
+* separate config files (ex. YAML), or
+* directly in code
+
+Providing the options in .proto files would allow a dynamic client to automatically generate
+CLI commands on the fly. However, that may pollute the .proto files themselves with information that is only relevant
+for a small subset of users.
+
+## References
+
+* https://github.com/regen-network/regen-ledger/issues/1041
+* https://github.com/cosmos/cosmos-sdk/tree/main/client/v2
+* https://github.com/cosmos/cosmos-sdk/pull/11725#issuecomment-1108676129
diff --git a/copy-of-sdk-docs/build/architecture/adr-059-test-scopes.md b/copy-of-sdk-docs/build/architecture/adr-059-test-scopes.md
new file mode 100644
index 00000000..6fa387c2
--- /dev/null
+++ b/copy-of-sdk-docs/build/architecture/adr-059-test-scopes.md
@@ -0,0 +1,254 @@
+# ADR 059: Test Scopes
+
+## Changelog
+
+* 2022-08-02: Initial Draft
+* 2023-03-02: Add precision for integration tests
+* 2023-03-23: Add precision for E2E tests
+
+## Status
+
+PROPOSED Partially Implemented
+
+## Abstract
+
+Recent work in the SDK aimed at breaking apart the monolithic root go module has highlighted
+shortcomings and inconsistencies in our testing paradigm. This ADR clarifies a common
+language for talking about test scopes and proposes an ideal state of tests at each scope.
+
+## Context
+
+[ADR-053: Go Module Refactoring](https://github.com/cosmos/cosmos-sdk/blob/main/docs/architecture/adr-053-go-module-refactoring.md) expresses our desire for an SDK composed of many
+independently versioned Go modules, and [ADR-057: App Wiring](https://github.com/cosmos/cosmos-sdk/blob/main/docs/architecture/adr-057-app-wiring.md) offers a methodology
+for breaking apart inter-module dependencies through the use of dependency injection. As
+described in [EPIC: Separate all SDK modules into standalone go modules](https://github.com/cosmos/cosmos-sdk/issues/11899), module
+dependencies are particularly complected in the test phase, where simapp is used as
+the key test fixture in setting up and running tests. It is clear that the successful
+completion of Phases 3 and 4 in that EPIC requires the resolution of this dependency problem.
+
+In [EPIC: Unit Testing of Modules via Mocks](https://github.com/cosmos/cosmos-sdk/issues/12398) it was thought this Gordian knot could be
+unwound by mocking all dependencies in the test phase for each module, but seeing how these
+refactors were complete rewrites of test suites, discussions began around the fate of the
+existing integration tests. One perspective is that they ought to be thrown out, another is
+that integration tests have some utility of their own and a place in the SDK's testing story.
+
+Another point of confusion has been the current state of CLI test suites, [x/auth](https://github.com/cosmos/cosmos-sdk/blob/0f7e56c6f9102cda0ca9aba5b6f091dbca976b5a/x/auth/client/testutil/suite.go#L44-L49) for
+example. In code these are called integration tests, but in reality function as end to end
+tests by starting up a tendermint node and full application. [EPIC: Rewrite and simplify
+CLI tests](https://github.com/cosmos/cosmos-sdk/issues/12696) identifies the ideal state of CLI tests using mocks, but does not address the
+place end to end tests may have in the SDK.
+
+From here we identify three scopes of testing, **unit**, **integration**, **e2e** (end to
+end), seek to define the boundaries of each, their shortcomings (real and imposed), and their
+ideal state in the SDK.
+
+### Unit tests
+
+Unit tests exercise the code contained in a single module (e.g. `/x/bank`) or package
+(e.g. `/client`) in isolation from the rest of the code base. Within this we identify two
+levels of unit tests, *illustrative* and *journey*. The definitions below lean heavily on
+[The BDD Books - Formulation](https://leanpub.com/bddbooks-formulation) section 1.3.
+
+*Illustrative* tests exercise an atomic part of a module in isolation - in this case we
+might do fixture setup/mocking of other parts of the module.
+
+Tests which exercise a whole module's function with dependencies mocked are *journeys*.
+These are almost like integration tests in that they exercise many things together but still
+use mocks.
+
+Example 1 journey vs illustrative tests - [depinject's BDD style tests](https://github.com/cosmos/cosmos-sdk/blob/main/depinject/binding_test.go) show how we can
+rapidly build up many illustrative cases demonstrating behavioral rules without [very much code](https://github.com/cosmos/cosmos-sdk/blob/main/depinject/binding_test.go) while maintaining high level readability.
+
+Example 2 [depinject table driven tests](https://github.com/cosmos/cosmos-sdk/blob/main/depinject/provider_desc_test.go)
+
+Example 3 [Bank keeper tests](https://github.com/cosmos/cosmos-sdk/blob/2bec9d2021918650d3938c3ab242f84289daef80/x/bank/keeper/keeper_test.go#L94-L105) - A mock implementation of `AccountKeeper` is supplied to the keeper constructor.
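+
+The pattern in Example 3 can be sketched as follows. The types here are simplified, hypothetical stand-ins; in the SDK the mock is generated with gomock and the keeper depends on a much richer interface:
+
+```go
+package main
+
+import "fmt"
+
+// AccountKeeper is the narrow interface the keeper under test needs.
+// Depending on a small interface rather than a concrete keeper from another
+// module is what makes mock-based unit tests possible.
+type AccountKeeper interface {
+    HasAccount(addr string) bool
+}
+
+type BankKeeper struct {
+    ak AccountKeeper
+}
+
+func NewBankKeeper(ak AccountKeeper) BankKeeper { return BankKeeper{ak: ak} }
+
+func (k BankKeeper) CanSend(addr string) bool { return k.ak.HasAccount(addr) }
+
+// mockAccountKeeper is a hand-rolled mock supplied to the constructor in tests.
+type mockAccountKeeper struct{ accounts map[string]bool }
+
+func (m mockAccountKeeper) HasAccount(addr string) bool { return m.accounts[addr] }
+
+func main() {
+    mock := mockAccountKeeper{accounts: map[string]bool{"alice": true}}
+    k := NewBankKeeper(mock)
+    fmt.Println(k.CanSend("alice"), k.CanSend("bob"))
+}
+```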
+
+#### Limitations
+
+Certain modules are tightly coupled beyond the test phase. A recent dependency report for
+`bank -> auth` found 274 total usages of `auth` in `bank`, 50 of which are in
+production code and 224 in test. This tight coupling may suggest that either the modules
+should be merged, or refactoring is required to abstract references to the core types tying
+the modules together. It could also indicate that these modules should be tested together
+in integration tests beyond mocked unit tests.
+
+In some cases setting up a test case for a module with many mocked dependencies can be quite
+cumbersome and the resulting test may only show that the mocking framework works as expected
+rather than working as a functional test of interdependent module behavior.
+
+### Integration tests
+
+Integration tests define and exercise relationships between an arbitrary number of modules
+and/or application subsystems.
+
+Wiring for integration tests is provided by `depinject` and some [helper code](https://github.com/cosmos/cosmos-sdk/blob/2bec9d2021918650d3938c3ab242f84289daef80/testutil/sims/app_helpers.go#L95) starts up
+a running application. A section of the running application may then be tested. Certain
+inputs during different phases of the application life cycle are expected to produce
+invariant outputs without too much concern for component internals. This type of black box
+testing has a larger scope than unit testing.
+
+Example 1 [client/grpc_query_test/TestGRPCQuery](https://github.com/cosmos/cosmos-sdk/blob/2bec9d2021918650d3938c3ab242f84289daef80/client/grpc_query_test.go#L111-L129) - This test is misplaced in `/client`,
+but tests the life cycle of (at least) `runtime` and `bank` as they progress through
+startup, genesis and query time. It also exercises the fitness of the client and query
+server without putting bytes on the wire through the use of [QueryServiceTestHelper](https://github.com/cosmos/cosmos-sdk/blob/2bec9d2021918650d3938c3ab242f84289daef80/baseapp/grpcrouter_helpers.go#L31).
+
+Example 2 `x/evidence` Keeper integration tests - Starts up an application composed of [8
+modules](https://github.com/cosmos/cosmos-sdk/blob/2bec9d2021918650d3938c3ab242f84289daef80/x/evidence/testutil/app.yaml#L1) with [5 keepers](https://github.com/cosmos/cosmos-sdk/blob/2bec9d2021918650d3938c3ab242f84289daef80/x/evidence/keeper/keeper_test.go#L101-L106) used in the integration test suite. One test in the suite
+exercises [HandleEquivocationEvidence](https://github.com/cosmos/cosmos-sdk/blob/2bec9d2021918650d3938c3ab242f84289daef80/x/evidence/keeper/infraction_test.go#L42) which contains many interactions with the staking
+keeper.
+
+Example 3 - Integration suite app configurations may also be specified via golang (not
+YAML as above) [statically](https://github.com/cosmos/cosmos-sdk/blob/main/x/nft/testutil/app_config.go) or [dynamically](https://github.com/cosmos/cosmos-sdk/blob/8c23f6f957d1c0bedd314806d1ac65bea59b084c/tests/integration/bank/keeper/keeper_test.go#L129-L134).
+
+#### Limitations
+
+Setting up a particular input state may be more challenging since the application is
+starting from a zero state. Some of this may be addressed by good test fixture
+abstractions with testing of their own. Tests may also be more brittle, and larger
+refactors could impact application initialization in unexpected ways with harder to
+understand errors. This could also be seen as a benefit, and indeed the SDK's current
+integration tests were helpful in tracking down logic errors during earlier stages
+of app-wiring refactors.
+
+### Simulations
+
+Simulations (also called generative testing) are a special case of integration tests where
+deterministically random module operations are executed against a running simapp, building
+blocks on the chain until a specified height is reached. No *specific* assertions are
+made for the state transitions resulting from module operations but any error will halt and
+fail the simulation. Since `crisis` is included in simapp and the simulation runs
+EndBlockers at the end of each block, any module invariant violations will also fail
+the simulation.
+
+Modules must implement [AppModuleSimulation.WeightedOperations](https://github.com/cosmos/cosmos-sdk/blob/2bec9d2021918650d3938c3ab242f84289daef80/types/module/simulation.go#L31) to define their
+simulation operations. Note that not all modules implement this, which may indicate a
+gap in current simulation test coverage.
+
+Modules not returning simulation operations:
+
+* `auth`
+* `evidence`
+* `mint`
+* `params`
+
+A separate binary, [runsim](https://github.com/cosmos/tools/tree/master/cmd/runsim), is responsible for kicking off some of these tests and
+managing their life cycle.
+
+#### Limitations
+
+* [A success](https://github.com/cosmos/cosmos-sdk/runs/7606931983?check_suite_focus=true) may take a long time to run, 7-10 minutes per simulation in CI.
+* [Timeouts](https://github.com/cosmos/cosmos-sdk/runs/7606932295?check_suite_focus=true) sometimes occur on apparent successes without any indication why.
+* Useful error messages are not provided on [failure](https://github.com/cosmos/cosmos-sdk/runs/7606932548?check_suite_focus=true) from CI, requiring a developer to run
+  the simulation locally to reproduce.
+
+### E2E tests
+
+End to end tests exercise the entire system as we understand it in as close an approximation
+to a production environment as is practical. Presently these tests are located at
+[tests/e2e](https://github.com/cosmos/cosmos-sdk/tree/main/tests/e2e) and rely on [testutil/network](https://github.com/cosmos/cosmos-sdk/tree/main/testutil/network) to start up an in-process Tendermint node.
+
+An application should be built as minimally as possible to exercise the desired functionality.
+The SDK uses an application with only the modules required for the tests. Application developers are advised to use their own application for e2e tests.
+
+#### Limitations
+
+In general the limitations of end to end tests are orchestration and compute cost.
+Scaffolding is required to start up and run a prod-like environment and this
+process takes much longer to start and run than unit or integration tests.
+
+Global locks present in Tendermint code cause stateful starting/stopping to sometimes hang
+or fail intermittently when run in a CI environment.
+
+The scope of e2e tests has been complected with command line interface testing.
+
+## Decision
+
+We accept these test scopes and identify the following decisions points for each.
+
+| Scope | App Type | Mocks? |
+| ----------- | ------------------- | ------ |
+| Unit | None | Yes |
+| Integration | integration helpers | Some |
+| Simulation | minimal app | No |
+| E2E | minimal app | No |
+
+The decision above is valid for the SDK. An application developer should test their application with their full application instead of the minimal app.
+
+### Unit Tests
+
+All modules must have mocked unit test coverage.
+
+Illustrative tests should outnumber journeys in unit tests.
+
+Unit tests should outnumber integration tests.
+
+Unit tests must not introduce additional dependencies beyond those already present in
+production code.
+
+When module unit test introduction as per [EPIC: Unit testing of modules via mocks](https://github.com/cosmos/cosmos-sdk/issues/12398)
+results in a near complete rewrite of an integration test suite the test suite should be
+retained and moved to `/tests/integration`. We accept the resulting test logic
+duplication but recommend improving the unit test suite through the addition of
+illustrative tests.
+
+### Integration Tests
+
+All integration tests shall be located in `/tests/integration`, even those which do not
+introduce extra module dependencies.
+
+To help limit scope and complexity, it is recommended to use the smallest possible number of
+modules in application startup, i.e. don't depend on simapp.
+
+Integration tests should outnumber e2e tests.
+
+### Simulations
+
+Simulations shall use a minimal application (usually via app wiring). They are located under `/x/{moduleName}/simulation`.
+
+### E2E Tests
+
+Existing e2e tests shall be migrated to integration tests by removing the dependency on the
+test network and in-process Tendermint node to ensure we do not lose test coverage.
+
+The e2e test runner shall transition from in-process Tendermint to a runner powered by
+Docker via [dockertest](https://github.com/ory/dockertest).
+
+E2E tests exercising a full network upgrade shall be written.
+
+The CLI testing aspect of existing e2e tests shall be rewritten using the network mocking
+demonstrated in [PR#12706](https://github.com/cosmos/cosmos-sdk/pull/12706).
+
+## Consequences
+
+### Positive
+
+* test coverage is increased
+* test organization is improved
+* reduced dependency graph size in modules
+* simapp removed as a dependency from modules
+* inter-module dependencies introduced in test code are removed
+* reduced CI run time after transitioning away from in process Tendermint
+
+### Negative
+
+* some test logic duplication between unit and integration tests during transition
+* developer experience (DX) for tests written using dockertest may be a bit worse
+
+### Neutral
+
+* some discovery required for e2e transition to dockertest
+
+## Further Discussions
+
+It may be useful if test suites could be run in integration mode (with mocked tendermint) or
+with e2e fixtures (with real tendermint and many nodes). Integration fixtures could be used
+for quicker runs, e2e fixtures could be used for more battle hardening.
+
+A PoC of `x/gov` unit tests demonstrating BDD was completed in PR [#12847](https://github.com/cosmos/cosmos-sdk/pull/12847) [Rejected].
+Observing that a strength of BDD specifications is their readability, and a con is the
+cognitive load while writing and maintaining, current consensus is to reserve BDD use
+for places in the SDK where complex rules and module interactions are demonstrated.
+More straightforward or low level test cases will continue to rely on go table tests.
+
+Levels of network mocking in integration and e2e tests are still being worked out and formalized.
diff --git a/copy-of-sdk-docs/build/architecture/adr-060-abci-1.0.md b/copy-of-sdk-docs/build/architecture/adr-060-abci-1.0.md
new file mode 100644
index 00000000..41e2230b
--- /dev/null
+++ b/copy-of-sdk-docs/build/architecture/adr-060-abci-1.0.md
@@ -0,0 +1,238 @@
+# ADR 60: ABCI 1.0 Integration (Phase I)
+
+## Changelog
+
+* 2022-08-10: Initial Draft (@alexanderbez, @tac0turtle)
+* 2022-11-12: Update `PrepareProposal` and `ProcessProposal` semantics per the
+  initial implementation [PR](https://github.com/cosmos/cosmos-sdk/pull/13453) (@alexanderbez)
+
+## Status
+
+ACCEPTED
+
+## Abstract
+
+This ADR describes the initial adoption of [ABCI 1.0](https://github.com/tendermint/tendermint/blob/master/spec/abci%2B%2B/README.md),
+the next evolution of ABCI, within the Cosmos SDK. ABCI 1.0 aims to provide
+application developers with more flexibility and control over application and
+consensus semantics, e.g. in-application mempools, in-process oracles, and
+order-book style matching engines.
+
+## Context
+
+Tendermint will release ABCI 1.0. Notably, at the time of this writing,
+Tendermint is releasing v0.37.0 which will include `PrepareProposal` and `ProcessProposal`.
+
+The `PrepareProposal` ABCI method is concerned with a block proposer requesting
+the application to evaluate a series of transactions to be included in the next
+block, defined as a slice of `TxRecord` objects. The application can either
+accept, reject, or completely ignore some or all of these transactions. This is
+an important consideration because the application can essentially define and
+control its own mempool, allowing it to define sophisticated transaction priority
+and filtering mechanisms by completely ignoring the `TxRecords` Tendermint
+sends it in favor of its own transactions. This essentially means that the Tendermint
+mempool acts more like a gossip data structure.
+
+The second ABCI method, `ProcessProposal`, is used to process the block proposer's
+proposal as defined by `PrepareProposal`. It is important to note the following
+with respect to `ProcessProposal`:
+
+* Execution of `ProcessProposal` must be deterministic.
+* There must be coherence between `PrepareProposal` and `ProcessProposal`. In
+  other words, for any two correct processes *p* and *q*, if *q*'s Tendermint
+  calls `RequestProcessProposal` on *u_p*, *q*'s Application returns
+  ACCEPT in `ResponseProcessProposal`.
+
+It is important to note that in ABCI 1.0 integration, the application
+is NOT responsible for locking semantics -- Tendermint will still be responsible
+for that. In the future, however, the application will be responsible for locking,
+which allows for parallel execution possibilities.
+
+## Decision
+
+We will integrate ABCI 1.0, which will be introduced in Tendermint
+v0.37.0, in the next major release of the Cosmos SDK. We will integrate ABCI 1.0
+methods on the `BaseApp` type. We describe the implementations of the two methods
+individually below.
+
+Prior to describing the implementation of the two new methods, it is important to
+note that the existing ABCI methods, `CheckTx`, `DeliverTx`, etc, still exist and
+serve the same functions as they do now.
+
+### `PrepareProposal`
+
+Prior to evaluating the decision for how to implement `PrepareProposal`, it is
+important to note that `CheckTx` will still be executed and will be responsible
+for evaluating transaction validity as it does now, with one very important
+*additive* distinction.
+
+When executing transactions in `CheckTx`, the application will now add valid
+transactions, i.e. passing the AnteHandler, to its own mempool data structure.
+In order to provide a flexible approach to meet the varying needs of application
+developers, we will define both a mempool interface and a data structure utilizing
+Golang generics, allowing developers to focus only on transaction
+ordering. Developers requiring absolute full control can implement their own
+custom mempool implementation.
+
+We define the general mempool interface as follows (subject to change):
+
+```go
+type Mempool interface {
+ // Insert attempts to insert a Tx into the app-side mempool returning
+ // an error upon failure.
+ Insert(sdk.Context, sdk.Tx) error
+
+ // Select returns an Iterator over the app-side mempool. If txs are specified,
+ // then they shall be incorporated into the Iterator. The Iterator must
+ // be closed by the caller.
+ Select(sdk.Context, [][]byte) Iterator
+
+ // CountTx returns the number of transactions currently in the mempool.
+ CountTx() int
+
+ // Remove attempts to remove a transaction from the mempool, returning an error
+ // upon failure.
+ Remove(sdk.Tx) error
+}
+
+// Iterator defines an app-side mempool iterator interface that is as minimal as
+// possible. The order of iteration is determined by the app-side mempool
+// implementation.
+type Iterator interface {
+ // Next returns the next transaction from the mempool. If there are no more
+ // transactions, it returns nil.
+ Next() Iterator
+
+ // Tx returns the transaction at the current position of the iterator.
+ Tx() sdk.Tx
+}
+```
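+
+To illustrate the iterator contract, here is a toy, slice-backed implementation; `sdk.Tx` is replaced with a local placeholder type, and the exhaustion convention (a nil iterator) is an assumption of this sketch rather than the SDK's implementation:
+
+```go
+package main
+
+import "fmt"
+
+// Tx stands in for sdk.Tx in this sketch.
+type Tx struct {
+	Sender string
+	Nonce  uint64
+}
+
+// Iterator mirrors the proposed interface: Next advances, Tx reads.
+type Iterator interface {
+	Next() Iterator
+	Tx() Tx
+}
+
+// sliceIterator walks a slice of txs; a nil Iterator signals exhaustion.
+type sliceIterator struct {
+	txs []Tx
+	pos int
+}
+
+func (it *sliceIterator) Next() Iterator {
+	if it.pos+1 >= len(it.txs) {
+		return nil
+	}
+	return &sliceIterator{txs: it.txs, pos: it.pos + 1}
+}
+
+func (it *sliceIterator) Tx() Tx { return it.txs[it.pos] }
+
+func main() {
+	var it Iterator = &sliceIterator{txs: []Tx{{"alice", 1}, {"bob", 1}}}
+	for it != nil {
+		fmt.Println(it.Tx().Sender)
+		it = it.Next()
+	}
+}
+```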
+
+We will define an implementation of `Mempool`, named `nonceMempool`, that
+will cover most basic application use cases. Namely, it will prioritize transactions
+by sender nonce, allowing for multiple transactions from the same sender.
+
+The default app-side mempool implementation, `nonceMempool`, will operate on a
+single skip list data structure. Specifically, transactions with the lowest nonce
+globally are prioritized. Transactions with the same nonce are prioritized by
+sender address.
+
+```go
+type nonceMempool struct {
+ txQueue *huandu.SkipList
+}
+```
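+
+The ordering the skip list maintains can be sketched without the `huandu` dependency by sorting a plain slice; the `tx` type and comparison function below are illustrative only:
+
+```go
+package main
+
+import (
+	"fmt"
+	"sort"
+)
+
+type tx struct {
+	sender string
+	nonce  uint64
+}
+
+// lessTx reproduces the ordering the skip list would maintain:
+// ascending nonce globally, ties broken by sender address.
+func lessTx(a, b tx) bool {
+	if a.nonce != b.nonce {
+		return a.nonce < b.nonce
+	}
+	return a.sender < b.sender
+}
+
+func main() {
+	txs := []tx{{"bob", 2}, {"alice", 1}, {"alice", 2}}
+	sort.Slice(txs, func(i, j int) bool { return lessTx(txs[i], txs[j]) })
+	fmt.Println(txs) // [{alice 1} {alice 2} {bob 2}]
+}
+```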
+
+Previous discussions [1] have come to the agreement that Tendermint will
+perform a request to the application, via `RequestPrepareProposal`, with a certain
+amount of transactions reaped from Tendermint's local mempool. The exact amount
+of transactions reaped will be determined by a local operator configuration.
+This is referred to as the "one-shot approach" seen in discussions.
+
+When Tendermint reaps transactions from the local mempool and sends them to the
+application via `RequestPrepareProposal`, the application will have to evaluate
+the transactions. Specifically, it will need to inform Tendermint if it should
+reject and or include each transaction. Note, the application can even *replace*
+transactions entirely with other transactions.
+
+When evaluating transactions from `RequestPrepareProposal`, the application will
+ignore *ALL* transactions sent to it in the request and instead reap up to
+`RequestPrepareProposal.max_tx_bytes` from its own mempool.
+
+Since an application can technically insert or inject transactions on `Insert`
+during `CheckTx` execution, it is recommended that applications ensure transaction
+validity when reaping transactions during `PrepareProposal`. However, what validity
+exactly means is entirely determined by the application.
+
+The Cosmos SDK will provide a default `PrepareProposal` implementation that simply
+selects up to `MaxBytes` *valid* transactions.
+
+However, applications can override this default implementation with their own
+implementation and set that on `BaseApp` via `SetPrepareProposal`.
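+
+The default selection behavior can be sketched as a greedy loop over the (already priority-ordered) app-side mempool; `selectTxs` and `isValid` are hypothetical names for this sketch, not SDK API:
+
+```go
+package main
+
+import "fmt"
+
+// selectTxs sketches the default PrepareProposal behavior: greedily take
+// valid txs from the app-side mempool (already priority-ordered) until
+// adding the next one would exceed maxBytes.
+func selectTxs(mempoolTxs [][]byte, maxBytes int64, isValid func([]byte) bool) [][]byte {
+	var selected [][]byte
+	var total int64
+	for _, tx := range mempoolTxs {
+		if !isValid(tx) {
+			continue // skip invalid txs; validity is application-defined
+		}
+		if total+int64(len(tx)) > maxBytes {
+			break
+		}
+		selected = append(selected, tx)
+		total += int64(len(tx))
+	}
+	return selected
+}
+
+func main() {
+	txs := [][]byte{[]byte("aaaa"), []byte("bbbb"), []byte("cccc")}
+	out := selectTxs(txs, 8, func([]byte) bool { return true })
+	fmt.Println(len(out)) // 2: the third tx would exceed the byte budget
+}
+```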
+
+
+### `ProcessProposal`
+
+The `ProcessProposal` ABCI method is relatively straightforward. It is responsible
+for ensuring validity of the proposed block containing transactions that were
+selected from the `PrepareProposal` step. However, how an application determines
+validity of a proposed block depends on the application and its varying use cases.
+For most applications, simply calling the `AnteHandler` chain would suffice, but
+there could easily be other applications that need more control over the validation
+process of the proposed block, such as ensuring txs are in a certain order or
+that certain transactions are included. While this theoretically could be achieved
+with a custom `AnteHandler` implementation, it's not the cleanest UX or the most
+efficient solution.
+
+Instead, we will define an additional ABCI interface method on the existing
+`Application` interface, similar to the existing ABCI methods such as `BeginBlock`
+or `EndBlock`. This new interface method will be defined as follows:
+
+```go
+ProcessProposal(sdk.Context, abci.ProcessProposalRequest) error
+```
+
+Note, we must call `ProcessProposal` with a new internal branched state on the
+`Context` argument, as we cannot simply use the existing `checkState` because
+`BaseApp` already has a modified `checkState` at this point. So when executing
+`ProcessProposal`, we create a similar branched state, `processProposalState`,
+off of `deliverState`. Note, the `processProposalState` is never committed and
+is completely discarded after `ProcessProposal` finishes execution.
+
+The Cosmos SDK will provide a default implementation of `ProcessProposal` in which
+all transactions are validated using the CheckTx flow, i.e. the AnteHandler, and
+will always return ACCEPT unless any transaction cannot be decoded.
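+
+That default behavior can be sketched as a loop that rejects the whole proposal on the first undecodable transaction; `processProposal` and `decode` here are illustrative stand-ins for the real CheckTx/AnteHandler flow:
+
+```go
+package main
+
+import (
+	"errors"
+	"fmt"
+)
+
+// processProposal sketches the default behavior: run every tx in the
+// proposal through a CheckTx-style validation and reject the whole
+// proposal if any tx cannot be decoded/validated; nil means ACCEPT.
+func processProposal(txs [][]byte, decode func([]byte) error) error {
+	for i, tx := range txs {
+		if err := decode(tx); err != nil {
+			return fmt.Errorf("tx %d rejected: %w", i, err)
+		}
+	}
+	return nil // ACCEPT
+}
+
+func main() {
+	decode := func(b []byte) error {
+		if len(b) == 0 {
+			return errors.New("empty tx")
+		}
+		return nil
+	}
+	fmt.Println(processProposal([][]byte{[]byte("ok")}, decode) == nil) // true
+	fmt.Println(processProposal([][]byte{{}}, decode) == nil)           // false
+}
+```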
+
+### `DeliverTx`
+
+Since transactions are not truly removed from the app-side mempool during
+`PrepareProposal` (because `ProcessProposal` can fail or take multiple rounds and
+we do not want to lose transactions), we finally remove each transaction
+from the app-side mempool during `DeliverTx`, since at that phase the
+transactions are being included in the proposed block.
+
+Alternatively, we could truly remove transactions during the reaping phase in
+`PrepareProposal` and add them back to the app-side mempool in
+case `ProcessProposal` fails.
+
+## Consequences
+
+### Backwards Compatibility
+
+ABCI 1.0 is naturally not backwards compatible with prior versions of the Cosmos SDK
+and Tendermint. For example, sending `RequestPrepareProposal` to an application
+that does not speak ABCI 1.0 will naturally fail.
+
+However, in the first phase of the integration, the existing ABCI methods as we
+know them today will still exist and function as they currently do.
+
+### Positive
+
+* Applications now have full control over transaction ordering and priority.
+* Lays the groundwork for the full integration of ABCI 1.0, which will unlock more
+ app-side use cases around block construction and integration with the Tendermint
+ consensus engine.
+
+### Negative
+
+* Requires that the "mempool", as a general data structure that collects and stores
+  uncommitted transactions, will be duplicated between both Tendermint and the
+  Cosmos SDK.
+* Additional requests between Tendermint and the Cosmos SDK in the context of
+ block execution. Albeit, the overhead should be negligible.
+* Not backwards compatible with previous versions of Tendermint and the Cosmos SDK.
+
+## Further Discussions
+
+It is possible to design the app-side implementation of the `Mempool[T MempoolTx]`
+in many different ways using different data structures and implementations. All
+of which have different tradeoffs. The proposed solution keeps things simple
+and covers cases that would be required for most basic applications. There are
+tradeoffs that can be made to improve performance of reaping and inserting into
+the provided mempool implementation.
+
+## References
+
+* https://github.com/tendermint/tendermint/blob/master/spec/abci%2B%2B/README.md
+* [1] https://github.com/tendermint/tendermint/issues/7750#issuecomment-1076806155
+* [2] https://github.com/tendermint/tendermint/issues/7750#issuecomment-1075717151
diff --git a/copy-of-sdk-docs/build/architecture/adr-061-liquid-staking.md b/copy-of-sdk-docs/build/architecture/adr-061-liquid-staking.md
new file mode 100644
index 00000000..a1be7e76
--- /dev/null
+++ b/copy-of-sdk-docs/build/architecture/adr-061-liquid-staking.md
@@ -0,0 +1,82 @@
+# ADR-061: Liquid Staking
+
+## Changelog
+
+* 2022-09-10: Initial Draft (@zmanian)
+
+## Status
+
+ACCEPTED
+
+## Abstract
+
+Add a semi-fungible liquid staking primitive to the default Cosmos SDK staking module. This upgrades proof of stake to enable safe designs with lower overall monetary issuance and integration with numerous liquid staking protocols like Stride, Persistence, Quicksilver, Lido etc.
+
+## Context
+
+The original release of the Cosmos Hub featured the implementation of a ground breaking proof of stake mechanism featuring delegation, slashing, in protocol reward distribution and adaptive issuance. This design was state of the art for 2016 and has been deployed without major changes by many L1 blockchains.
+
+As both Proof of Stake and blockchain use cases have matured, this design has aged poorly and should no longer be considered a good baseline Proof of Stake issuance. In the world of application specific blockchains, there cannot be a one size fits all blockchain but the Cosmos SDK does endeavour to provide a good baseline implementation and one that is suitable for the Cosmos Hub.
+
+The most important deficiency of the legacy staking design is that it composes poorly with on-chain protocols for trading, lending, and derivatives, which are referred to collectively as DeFi. The legacy staking implementation starves these applications of liquidity by increasing the risk-free rate adaptively. It basically makes DeFi and staking security somewhat incompatible.
+
+The Osmosis team has adopted the idea of Superfluid and Interfluid staking where assets that are participating in DeFi applications can also be used in proof of stake. This requires tight integration with an enshrined set of DeFi applications and thus is unsuitable for the Cosmos SDK.
+
+It's also important to note that Interchain Accounts are available in the default IBC implementation and can be used to [rehypothecate](https://www.investopedia.com/terms/h/hypothecation.asp#toc-what-is-rehypothecation) delegations. Thus liquid staking is already possible and these changes merely improve the UX of liquid staking. Centralized exchanges also rehypothecate staked assets, posing challenges for decentralization. This ADR takes the position that adoption of in-protocol liquid staking is the preferable outcome and provides new levers to incentivize decentralization of stake.
+
+These changes to the staking module have been in development for more than a year and have seen substantial adoption interest from industry teams who plan to build liquid staking UX. The internal economics team at Informal has also reviewed the impacts of these changes, and this review led to the development of the exempt delegation system. This system provides governance with a tunable parameter, called the exemption factor, for modulating the risks of the principal-agent problem.
+
+## Decision
+
+We implement the semi-fungible liquid staking system and exemption factor system within the Cosmos SDK. Though registered as fungible assets, these tokenized shares have extremely limited fungibility: they are fungible only among holders of the specific delegation record that was created when the shares were tokenized. These assets can be used for OTC trades, but composability with DeFi is limited. The primary expected use case is improving the user experience of liquid staking providers.
+
+A new governance parameter is introduced that defines the ratio of exempt to issued tokenized shares. This is called the exemption factor. A larger exemption factor allows more tokenized shares to be issued for a smaller amount of exempt delegations. If governance is comfortable with how the liquid staking market is evolving, it makes sense to increase this value.
+
+Min self delegation is removed from the staking system with the expectation that it will be replaced by the exempt delegations system. The exempt delegation system allows multiple accounts to demonstrate economic alignment with the validator operator as team members, partners etc. without co-mingling funds. Delegation exemption will likely be required to grow the validators' business under widespread adoption of liquid staking once governance has adjusted the exemption factor.
+
+When shares are tokenized, the underlying shares are transferred to a module account and rewards go to the module account for the TokenizedShareRecord.
+
+There is no longer a mechanism to override the validators vote for TokenizedShares.
+
+
+### `MsgTokenizeShares`
+
+The MsgTokenizeShares message is used to tokenize delegated tokens. This message can be executed by any delegator who has a positive amount of delegation; after execution, the specified amount of delegation is removed from the account and share tokens are issued. Share tokens are denominated by the validator and record id of the underlying delegation.
+
+A user may tokenize some or all of their delegation.
+
+They will receive shares with the denom of `cosmosvaloper1xxxx/5` where 5 is the record id for the validator operator.
+
+MsgTokenizeShares fails if the account is a VestingAccount. Users will have to move vested tokens to a new account and endure the unbonding period. We view this as an acceptable tradeoff vs. the complex bookkeeping required to track vested tokens.
+
+The total amount of outstanding tokenized shares for the validator is checked against the sum of exempt delegations multiplied by the exemption factor. If the tokenized shares exceed this limit, execution fails.
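+
+The limit check described above amounts to a simple inequality; this sketch uses floats for brevity where the SDK would use decimal types, and the function name is hypothetical:
+
+```go
+package main
+
+import "fmt"
+
+// withinExemptionCap sketches the check: outstanding tokenized shares,
+// including the new issuance, must not exceed exempt delegations
+// multiplied by the exemption factor.
+func withinExemptionCap(outstanding, newShares, exemptDelegations, exemptionFactor float64) bool {
+	return outstanding+newShares <= exemptDelegations*exemptionFactor
+}
+
+func main() {
+	// 100 exempt shares with factor 3 cap tokenized shares at 300.
+	fmt.Println(withinExemptionCap(250, 40, 100, 3)) // true: 290 <= 300
+	fmt.Println(withinExemptionCap(250, 60, 100, 3)) // false: 310 > 300
+}
+```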
+
+MsgTokenizeSharesResponse provides the number of tokens generated and their denom.
+
+
+### `MsgRedeemTokensforShares`
+
+The MsgRedeemTokensforShares message is used to redeem the delegation from share tokens. This message can be executed by any user who owns share tokens. After execution, the delegation is restored to the user.
+
+### `MsgTransferTokenizeShareRecord`
+
+The MsgTransferTokenizeShareRecord message is used to transfer ownership of the rewards generated from the tokenized amount of delegation. The tokenize share record is created when a user tokenizes their delegation and deleted when the full amount of share tokens is redeemed.
+
+This is designed to work with liquid staking designs that do not redeem the tokenized shares and may instead want to keep the shares tokenized.
+
+
+### `MsgExemptDelegation`
+
+The MsgExemptDelegation message is used to exempt a delegation to a validator. If the exemption factor is greater than 0, this will allow more delegation shares to be issued from the validator.
+
+This design allows the chain to force an amount of self-delegation by validators participating in liquid staking schemes.
+
+## Consequences
+
+### Backwards Compatibility
+
+By setting the exemption factor to zero, this module works like legacy staking. The only substantial change is the removal of min self delegation, and without any tokenized shares there is no incentive to exempt delegations.
+
+### Positive
+
+This approach should enable integration with liquid staking providers and improved user experience. It provides a pathway to security under non-exponential issuance policies in the baseline staking module.
diff --git a/copy-of-sdk-docs/build/architecture/adr-062-collections-state-layer.md b/copy-of-sdk-docs/build/architecture/adr-062-collections-state-layer.md
new file mode 100644
index 00000000..db71cef0
--- /dev/null
+++ b/copy-of-sdk-docs/build/architecture/adr-062-collections-state-layer.md
@@ -0,0 +1,120 @@
+# ADR 062: Collections, a simplified storage layer for cosmos-sdk modules
+
+## Changelog
+
+* 30/11/2022: PROPOSED
+
+## Status
+
+PROPOSED - Implemented
+
+## Abstract
+
+We propose a simplified module storage layer which leverages golang generics to allow module developers to handle module
+storage in a simple and straightforward manner, whilst offering safety, extensibility and standardization.
+
+## Context
+
+Module developers are forced to manually implement storage functionality in their modules. This functionality includes
+but is not limited to:
+
+* Defining key to bytes formats.
+* Defining value to bytes formats.
+* Defining secondary indexes.
+* Defining query methods that expose storage to the outside world.
+* Defining local methods to deal with storage writing.
+* Dealing with genesis imports and exports.
+* Writing tests for all the above.
+
+
+This brings in a lot of problems:
+
+* It blocks developers from focusing on the most important part: writing business logic.
+* Key to bytes formats are complex and their definition is error-prone, for example:
+ * how do I format time to bytes in such a way that bytes are sorted?
+ * how do I ensure I don't have namespace collisions when dealing with secondary indexes?
+* The lack of standardization makes life hard for clients, and the problem is exacerbated when it comes to providing proofs for objects present in state. Clients are forced to maintain a list of object paths to gather proofs.
+
+### Current Solution: ORM
+
+The current SDK proposed solution to this problem is [ORM](https://github.com/cosmos/cosmos-sdk/blob/main/docs/architecture/adr-055-orm.md).
+Whilst ORM offers a lot of good functionality aimed at solving these specific problems, it has some downsides:
+
+* It requires migrations.
+* It uses the newest protobuf golang API, whilst the SDK still mainly uses gogoproto.
+* Integrating ORM into a module would require the developer to deal with two different golang frameworks (golang protobuf + gogoproto) representing the same API objects.
+* It has a high learning curve, even for simple storage layers as it requires developers to have knowledge around protobuf options, custom cosmos-sdk storage extensions, and tooling download. Then after this they still need to learn the code-generated API.
+
+### CosmWasm Solution: cw-storage-plus
+
+The collections API takes inspiration from [cw-storage-plus](https://docs.cosmwasm.com/docs/1.0/smart-contracts/state/cw-plus/),
+which has demonstrated to be a powerful tool for dealing with storage in CosmWasm contracts.
+It's simple, does not require extra tooling, and makes it easy to deal with complex storage structures (indexes, snapshots, etc.).
+The API is straightforward and explicit.
+
+## Decision
+
+We propose to port the `collections` API, whose implementation lives in [NibiruChain/collections](https://github.com/NibiruChain/collections) to cosmos-sdk.
+
+Collections implements five different storage handler types:
+
+* `Map`: which deals with simple `key => object` mappings.
+* `KeySet`: which acts as a `Set` and only retains keys and no object (use case: allow-lists).
+* `Item`: which always contains only one object (use case: params).
+* `Sequence`: which implements a simple always-increasing number (use case: nonces).
+* `IndexedMap`: which builds on top of `Map` and `KeySet` and allows creating relationships between `Objects` and their secondary keys.
+
+All the collection APIs build on top of the simple `Map` type.
+
+Collections is fully generic, meaning that anything can be used as `Key` and `Value`. It can be a protobuf object or not.
+
+Collections types, in fact, delegate the duty of serialization of keys and values to a secondary collections API component called `ValueEncoders` and `KeyEncoders`.
+
+`ValueEncoders` take care of converting a value to bytes (relevant only for `Map`), and offer a plug-and-play layer which allows us to change how we encode objects,
+which is relevant for swapping serialization frameworks and enhancing performance.
+`Collections` already comes with default `ValueEncoders`, specifically for: protobuf objects and special SDK types (sdk.Int, sdk.Dec).
+
+`KeyEncoders` take care of converting keys to bytes. `collections` already comes with some default `KeyEncoders` for some primitive golang types
+(uint64, string, time.Time, ...) and some widely used sdk types (sdk.Acc/Val/ConsAddress, sdk.Int/Dec, ...).
+These default implementations also offer safety around proper lexicographic ordering and namespace-collision.
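+
+Why proper lexicographic ordering matters can be shown with a small sketch: big-endian encoding of integers preserves numeric order under byte-wise comparison, while naive decimal strings do not (illustrative code, not the actual `KeyEncoder` API):
+
+```go
+package main
+
+import (
+	"bytes"
+	"encoding/binary"
+	"fmt"
+)
+
+// encodeUint64 shows why key encoders matter: big-endian encoding makes
+// byte-wise (lexicographic) order match numeric order.
+func encodeUint64(v uint64) []byte {
+	b := make([]byte, 8)
+	binary.BigEndian.PutUint64(b, v)
+	return b
+}
+
+func main() {
+	// 2 < 10 numerically, and big-endian bytes preserve that order...
+	fmt.Println(bytes.Compare(encodeUint64(2), encodeUint64(10)) < 0) // true
+	// ...whereas "10" sorts before "2" when compared as strings.
+	fmt.Println("10" < "2") // true
+}
+```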
+
+Examples of the collections API can be found here:
+
+* introduction: https://github.com/NibiruChain/collections/tree/main/examples
+* usage in nibiru: [x/oracle](https://github.com/NibiruChain/nibiru/blob/master/x/oracle/keeper/keeper.go#L32), [x/perp](https://github.com/NibiruChain/nibiru/blob/master/x/perp/keeper/keeper.go#L31)
+* cosmos-sdk's x/staking migrated: https://github.com/testinginprod/cosmos-sdk/pull/22
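+
+The core idea can be sketched with Go generics; this toy `Map` is illustrative only and elides the encoder layer of the real collections API:
+
+```go
+package main
+
+import "fmt"
+
+// Map is a toy generic key => value store sketching the collections idea:
+// module code works with typed keys and values, while encoders (elided
+// here) would handle the byte-level representation in a real store.
+type Map[K comparable, V any] struct {
+	data map[K]V
+}
+
+func NewMap[K comparable, V any]() Map[K, V] {
+	return Map[K, V]{data: map[K]V{}}
+}
+
+func (m Map[K, V]) Set(k K, v V) { m.data[k] = v }
+
+func (m Map[K, V]) Get(k K) (V, bool) {
+	v, ok := m.data[k]
+	return v, ok
+}
+
+// Balance is an illustrative value type.
+type Balance struct{ Amount int64 }
+
+func main() {
+	balances := NewMap[string, Balance]()
+	balances.Set("alice", Balance{Amount: 100})
+	b, found := balances.Get("alice")
+	fmt.Println(found, b.Amount) // true 100
+}
+```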
+
+
+## Consequences
+
+### Backwards Compatibility
+
+The design of `ValueEncoders` and `KeyEncoders` allows modules to retain the same `byte(key)=>byte(value)` mappings, making
+the upgrade to the new storage layer non-state breaking.
+
+
+### Positive
+
+* The ADR aims at removing code from the SDK rather than adding it. Migrating just `x/staking` to collections would yield a net decrease in LOC (even considering the addition of collections itself).
+* Simplifies and standardizes storage layers across modules in the SDK.
+* Does not require dealing with protobuf.
+* It's pure golang code.
+* Generalization over `KeyEncoders` and `ValueEncoders` allows us not to tie ourselves to the data serialization framework.
+* `KeyEncoders` and `ValueEncoders` can be extended to provide schema reflection.
+
+### Negative
+
+* Golang generics are not as battle-tested as other Golang features, despite being used in production right now.
+* Collection types instantiation needs to be improved.
+
+
+## Further Discussions
+
+* Automatic genesis import/export (not implemented because of API breakage)
+* Schema reflection
+
+
+## References
diff --git a/copy-of-sdk-docs/build/architecture/adr-063-core-module-api.md b/copy-of-sdk-docs/build/architecture/adr-063-core-module-api.md
new file mode 100644
index 00000000..57f92d4d
--- /dev/null
+++ b/copy-of-sdk-docs/build/architecture/adr-063-core-module-api.md
@@ -0,0 +1,562 @@
+# ADR 063: Core Module API
+
+## Changelog
+
+* 2022-08-18 First Draft
+* 2022-12-08 First Draft
+* 2023-01-24 Updates
+
+## Status
+
+ACCEPTED Partially Implemented
+
+## Abstract
+
+A new core API is proposed as a way to develop cosmos-sdk applications that will eventually replace the existing
+`AppModule` and `sdk.Context` frameworks with a set of core services and extension interfaces. This core API aims to:
+
+* be simpler,
+* be more extensible,
+* be more stable than the current framework,
+* enable deterministic events and queries,
+* support event listeners, and
+* support [ADR 033: Protobuf-based Inter-Module Communication](./adr-033-protobuf-inter-module-comm.md) clients.
+
+## Context
+
+Historically modules have exposed their functionality to the framework via the `AppModule` and `AppModuleBasic`
+interfaces which have the following shortcomings:
+
+* both `AppModule` and `AppModuleBasic` need to be defined and registered, which is counter-intuitive,
+* modules need to implement the full interfaces, even parts they don't need (although there are workarounds for this),
+* interface methods depend heavily on unstable third-party dependencies, in particular Comet,
+* legacy required methods have littered these interfaces for far too long.
+
+In order to interact with the state machine, modules have needed to do a combination of these things:
+
+* get store keys from the app
+* call methods on `sdk.Context` which contains more or less the full set of capabilities available to modules.
+
+By isolating all the state machine functionality into `sdk.Context`, the set of functionalities available to
+modules is tightly coupled to this type. If there are changes to upstream dependencies (such as Comet)
+or new functionalities are desired (such as alternate store types), the changes necessarily impact `sdk.Context` and all
+consumers of it (basically all modules). Also, all modules now receive `context.Context` and need to convert these
+to `sdk.Context`'s with a non-ergonomic unwrapping function.
+
+Any breaking changes to these interfaces, such as ones imposed by third-party dependencies like Comet, have the
+side effect of forcing all modules in the ecosystem to update in lock-step. This means it is almost impossible to have
+a version of the module which can be run with 2 or 3 different versions of the SDK or 2 or 3 different versions of
+another module. This lock-step coupling slows down overall development within the ecosystem and causes updates to
+components to be delayed longer than they would if things were more stable and loosely coupled.
+
+## Decision
+
+The `core` API proposes a set of core APIs that modules can rely on to interact with the state machine and expose their
+functionalities to it that are designed in a principled way such that:
+
+* tight coupling of dependencies and unrelated functionalities is minimized or eliminated
+* APIs can have long-term stability guarantees
+* the SDK framework is extensible in a safe and straightforward way
+
+The design principles of the core API are as follows:
+
+* everything that a module wants to interact with in the state machine is a service
+* all services coordinate state via `context.Context` and don't try to recreate the "bag of variables" approach of `sdk.Context`
+* all independent services are isolated in independent packages with minimal APIs and minimal dependencies
+* the core API should be minimalistic and designed for long-term support (LTS)
+* a "runtime" module will implement all the "core services" defined by the core API and can handle all module
+ functionalities exposed by core extension interfaces
+* other non-core and/or non-LTS services can be exposed by specific versions of runtime modules or other modules
+following the same design principles, this includes functionality that interacts with specific non-stable versions of
+third party dependencies such as Comet
+* the core API doesn't implement *any* functionality, it just defines types
+* go stable API compatibility guidelines are followed: https://go.dev/blog/module-compatibility
+
+A "runtime" module is any module which implements the core functionality of composing an ABCI app, which is currently
+handled by `BaseApp` and the `ModuleManager`. Runtime modules which implement the core API are *intentionally* separate
+from the core API in order to enable more parallel versions and forks of the runtime module than is possible with the
+SDK's current tightly coupled `BaseApp` design while still allowing for a high degree of composability and
+compatibility.
+
+Modules which are built only against the core API don't need to know anything about which version of runtime,
+`BaseApp` or Comet is in use in order to be compatible. Modules from the core mainline SDK could easily be composed
+with a forked version of runtime using this pattern.
+
+This design is intended to enable matrices of compatible dependency versions. Ideally a given version of any module
+is compatible with multiple versions of the runtime module and other compatible modules. This will allow dependencies
+to be selectively updated based on battle-testing. More conservative projects may want to update some dependencies
+slower than more fast moving projects.
+
+### Core Services
+
+The following "core services" are defined by the core API. All valid runtime module implementations should provide
+implementations of these services to modules via both [dependency injection](./adr-057-app-wiring.md) and
+manual wiring. The individual services described below are all bundled in a convenient `appmodule.Service`
+"bundle service" so that for simplicity modules can declare a dependency on a single service.
+
+#### Store Services
+
+Store services will be defined in the `cosmossdk.io/core/store` package.
+
+The generic `store.KVStore` interface is the same as current SDK `KVStore` interface. Store keys have been refactored
+into store services which, instead of expecting the context to know about stores, invert the pattern and allow
+retrieving a store from a generic context. There are three store services for the three types of currently supported
+stores - regular kv-store, memory, and transient:
+
+```go
+type KVStoreService interface {
+ OpenKVStore(context.Context) KVStore
+}
+
+type MemoryStoreService interface {
+ OpenMemoryStore(context.Context) KVStore
+}
+
+type TransientStoreService interface {
+ OpenTransientStore(context.Context) KVStore
+}
+```
+
+Modules can use these services like this:
+
+```go
+func (k msgServer) Send(ctx context.Context, msg *types.MsgSend) (*types.MsgSendResponse, error) {
+ store := k.kvStoreSvc.OpenKVStore(ctx)
+ // ... read and write account state using store ...
+ return &types.MsgSendResponse{}, nil
+}
+```
+
+Just as with the current runtime module implementation, modules will not need to explicitly name these store keys,
+but rather the runtime module will choose an appropriate name for them and modules just need to request the
+type of store they need in their dependency injection (or manual) constructors.
+
+#### Event Service
+
+The event `Service` will be defined in the `cosmossdk.io/core/event` package.
+
+The event `Service` allows modules to emit typed and legacy untyped events:
+
+```go
+package event
+
+type Service interface {
+ // EmitProtoEvent emits events represented as a protobuf message (as described in ADR 032).
+ //
+ // Callers SHOULD assume that these events may be included in consensus. These events
+ // MUST be emitted deterministically and adding, removing or changing these events SHOULD
+ // be considered state-machine breaking.
+ EmitProtoEvent(ctx context.Context, event protoiface.MessageV1) error
+
+ // EmitKVEvent emits an event based on an event and kv-pair attributes.
+ //
+ // These events will not be part of consensus and adding, removing or changing these events is
+ // not a state-machine breaking change.
+ EmitKVEvent(ctx context.Context, eventType string, attrs ...KVEventAttribute) error
+
+ // EmitProtoEventNonConsensus emits events represented as a protobuf message (as described in ADR 032), without
+ // including it in blockchain consensus.
+ //
+ // These events will not be part of consensus and adding, removing or changing events is
+ // not a state-machine breaking change.
+ EmitProtoEventNonConsensus(ctx context.Context, event protoiface.MessageV1) error
+}
+```
+
+Typed events emitted with `EmitProtoEvent` should be assumed to be part of blockchain consensus (whether they are part of
+the block or app hash is left to the runtime to specify).
+
+Events emitted by `EmitKVEvent` and `EmitProtoEventNonConsensus` are not considered to be part of consensus and cannot be observed
+by other modules. If there is a client-side need to add events in patch releases, these methods can be used.
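+
+Since these service interfaces are small, test doubles are straightforward to write. The following sketch (with hypothetical `KVPair` and `recordingEventService` types, simplified from the actual attribute type) records emitted KV events so a unit test can assert on them:
+
+```go
+package main
+
+import (
+ "context"
+ "fmt"
+)
+
+// KVPair mirrors the shape of the kv-pair attributes passed to EmitKVEvent.
+type KVPair struct{ Key, Value string }
+
+// recordedEvent is what the test double stores for each emitted event.
+type recordedEvent struct {
+ Type  string
+ Attrs []KVPair
+}
+
+// recordingEventService is a test double for the event service: instead of
+// routing events anywhere, it appends them to a slice the test can inspect.
+type recordingEventService struct {
+ events []recordedEvent
+}
+
+func (s *recordingEventService) EmitKVEvent(ctx context.Context, eventType string, attrs ...KVPair) error {
+ s.events = append(s.events, recordedEvent{Type: eventType, Attrs: attrs})
+ return nil
+}
+
+func main() {
+ svc := &recordingEventService{}
+ _ = svc.EmitKVEvent(context.Background(), "transfer", KVPair{"sender", "alice"}, KVPair{"amount", "10"})
+ fmt.Println(len(svc.events), svc.events[0].Type) // prints: 1 transfer
+}
+```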
+
+#### Logger
+
+A logger (`cosmossdk.io/log`) must be supplied using `depinject`, and will
+be made available for modules to use via `depinject.In`.
+Modules using it should follow the current SDK pattern of attaching the module name to the logger before use.
+
+```go
+type ModuleInputs struct {
+ depinject.In
+
+ Logger log.Logger
+}
+
+func ProvideModule(in ModuleInputs) ModuleOutputs {
+ keeper := keeper.NewKeeper(
+ in.Logger,
+ )
+}
+
+func NewKeeper(logger log.Logger) Keeper {
+ return Keeper{
+ logger: logger.With(log.ModuleKey, "x/"+types.ModuleName),
+ }
+}
+```
+
+### Core `AppModule` extension interfaces
+
+Modules will provide their core services to the runtime module via extension interfaces built on top of the
+`cosmossdk.io/core/appmodule.AppModule` tag interface. This tag interface requires only two empty methods which
+allow `depinject` to identify implementers as `depinject.OnePerModule` types and as app module implementations:
+
+```go
+type AppModule interface {
+ depinject.OnePerModuleType
+
+ // IsAppModule is a dummy method to tag a struct as implementing an AppModule.
+ IsAppModule()
+}
+```
+
+Other core extension interfaces will be defined in `cosmossdk.io/core` and should be supported by valid runtime
+implementations.
+
+#### `MsgServer` and `QueryServer` registration
+
+`MsgServer` and `QueryServer` registration is done by implementing the `HasServices` extension interface:
+
+```go
+type HasServices interface {
+ AppModule
+
+ RegisterServices(grpc.ServiceRegistrar)
+}
+```
+
+Because of the `cosmos.msg.v1.service` protobuf option, required for `Msg` services, the same `ServiceRegistrar` can be
+used to register both `Msg` and query services.
+
+#### Genesis
+
+The genesis `Handler` functions - `DefaultGenesis`, `ValidateGenesis`, `InitGenesis` and `ExportGenesis` - are specified
+against the `GenesisSource` and `GenesisTarget` types, which abstract over genesis data that may be a single
+JSON object or a collection of JSON objects that can be efficiently streamed.
+
+```go
+// GenesisSource is a source for genesis data in JSON format. It may abstract over a
+// single JSON object or separate files for each field in a JSON object that can
+// be streamed over. Modules should open a separate io.ReadCloser for each field that
+// is required. When fields represent arrays they can efficiently be streamed
+// over. If there is no data for a field, this function should return nil, nil. It is
+// important that the caller closes the reader when done with it.
+type GenesisSource = func(field string) (io.ReadCloser, error)
+
+// GenesisTarget is a target for writing genesis data in JSON format. It may
+// abstract over a single JSON object or JSON in separate files that can be
+// streamed over. Modules should open a separate io.WriteCloser for each field
+// and should prefer writing fields as arrays when possible to support efficient
+// iteration. It is important that the caller closes the writer AND checks the error
+// when done with it. It is expected that a stream of JSON data is written
+// to the writer.
+type GenesisTarget = func(field string) (io.WriteCloser, error)
+```
+
+All genesis objects for a given module are expected to conform to the semantics of a JSON object.
+Each field in the JSON object should be read and written separately to support streaming genesis.
+The [ORM](./adr-055-orm.md) and [collections](./adr-062-collections-state-layer.md) both support
+streaming genesis and modules using these frameworks generally do not need to write any manual
+genesis code.
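+
+To illustrate the field-per-stream contract, here is a minimal in-memory pairing of a `GenesisTarget` and a `GenesisSource` (the `memGenesis` helper and `nopWriteCloser` type are hypothetical, for illustration only):
+
+```go
+package main
+
+import (
+ "bytes"
+ "fmt"
+ "io"
+)
+
+type GenesisSource = func(field string) (io.ReadCloser, error)
+type GenesisTarget = func(field string) (io.WriteCloser, error)
+
+// nopWriteCloser adds a no-op Close to a bytes.Buffer so it satisfies io.WriteCloser.
+type nopWriteCloser struct{ *bytes.Buffer }
+
+func (nopWriteCloser) Close() error { return nil }
+
+// memGenesis returns a target/source pair backed by a map keyed by field name,
+// mimicking the "one stream per field" semantics described above.
+func memGenesis() (GenesisTarget, GenesisSource) {
+ fields := map[string]*bytes.Buffer{}
+ target := func(field string) (io.WriteCloser, error) {
+  buf := &bytes.Buffer{}
+  fields[field] = buf
+  return nopWriteCloser{buf}, nil
+ }
+ source := func(field string) (io.ReadCloser, error) {
+  buf, ok := fields[field]
+  if !ok {
+   return nil, nil // no data for this field, per the contract above
+  }
+  return io.NopCloser(bytes.NewReader(buf.Bytes())), nil
+ }
+ return target, source
+}
+
+func main() {
+ target, source := memGenesis()
+ w, _ := target("params")
+ w.Write([]byte(`{"max_supply":"1000"}`))
+ w.Close()
+
+ r, _ := source("params")
+ data, _ := io.ReadAll(r)
+ fmt.Println(string(data)) // prints {"max_supply":"1000"}
+}
+```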
+
+To support genesis, modules should implement the `HasGenesis` extension interface:
+
+```go
+type HasGenesis interface {
+ AppModule
+
+ // DefaultGenesis writes the default genesis for this module to the target.
+ DefaultGenesis(GenesisTarget) error
+
+ // ValidateGenesis validates the genesis data read from the source.
+ ValidateGenesis(GenesisSource) error
+
+ // InitGenesis initializes module state from the genesis source.
+ InitGenesis(context.Context, GenesisSource) error
+
+ // ExportGenesis exports module state to the genesis target.
+ ExportGenesis(context.Context, GenesisTarget) error
+}
+```
+
+#### Pre Blockers
+
+Modules that have functionality that runs before `BeginBlock` should implement the `HasPreBlocker` interface:
+
+```go
+type HasPreBlocker interface {
+ AppModule
+ PreBlock(context.Context) error
+}
+```
+
+#### Begin and End Blockers
+
+Modules that have functionality that runs before transactions (begin blockers) or after transactions
+(end blockers) should implement the `HasBeginBlocker` and/or `HasEndBlocker` interfaces:
+
+```go
+type HasBeginBlocker interface {
+ AppModule
+ BeginBlock(context.Context) error
+}
+
+type HasEndBlocker interface {
+ AppModule
+ EndBlock(context.Context) error
+}
+```
+
+The `BeginBlock` and `EndBlock` methods will take a `context.Context`, because:
+
+* most modules don't need Comet information other than `BlockInfo` so we can eliminate dependencies on specific
+Comet versions
+* for the few modules that need Comet block headers and/or return validator updates, specific versions of the
+runtime module will provide specific functionality for interacting with the specific version(s) of Comet
+supported
+
+In order for `BeginBlock`, `EndBlock` and `InitGenesis` to send back validator updates and retrieve full Comet
+block headers, the runtime module for a specific version of Comet could provide services like this:
+
+```go
+type ValidatorUpdateService interface {
+ SetValidatorUpdates(context.Context, []abci.ValidatorUpdate)
+}
+```
+
+Header Service defines a way to get header information about a block. This information is generalized for all implementations:
+
+```go
+type Service interface {
+ GetHeaderInfo(context.Context) Info
+}
+
+type Info struct {
+ Height int64 // Height returns the height of the block
+ Hash []byte // Hash returns the hash of the block header
+ Time time.Time // Time returns the time of the block
+ ChainID string // ChainID returns the chain ID of the block
+}
+```
+
+The Comet service provides a way to get Comet-specific information:
+
+```go
+type Service interface {
+ GetCometInfo(context.Context) CometInfo
+}
+
+type CometInfo struct {
+ Evidence []abci.Misbehavior // Misbehavior returns the misbehavior of the block
+ // ValidatorsHash returns the hash of the validators
+ // For Comet, it is the hash of the next validators
+ ValidatorsHash []byte
+ ProposerAddress []byte // ProposerAddress returns the address of the block proposer
+ DecidedLastCommit abci.CommitInfo // DecidedLastCommit returns the last commit info
+}
+```
+
+If a user would like to provide other information to a module, they would need to implement another service, for example:
+
+```go
+type RollKitService interface {
+ ...
+}
+```
+
+We know these types will change at the Comet level, and only a very limited set of modules actually needs this
+functionality, so they are intentionally kept out of core in order to keep core limited to the necessary, minimal set
+of stable APIs.
+
+#### Remaining Parts of AppModule
+
+The current `AppModule` framework handles a number of additional concerns which aren't addressed by this core API.
+These include:
+
+* gas
+* block headers
+* upgrades
+* registration of gogo proto and amino interface types
+* cobra query and tx commands
+* gRPC gateway
+* crisis module invariants
+* simulations
+
+Additional `AppModule` extension interfaces either inside or outside of core will need to be specified to handle
+these concerns.
+
+In the case of gogo proto and amino interfaces, their registration should generally happen as early as possible
+during initialization, and in [ADR 057: App Wiring](./adr-057-app-wiring.md) protobuf type registration happens
+before dependency injection (although this could alternatively be done with dedicated DI providers).
+
+gRPC gateway registration should probably be handled by the runtime module, but the core API shouldn't depend on gRPC
+gateway types as 1) we are already using an older version and 2) it's possible the framework can do this registration
+automatically in the future. So for now, the runtime module should probably provide a dedicated type for doing
+this registration, for example:
+
+```go
+type GrpcGatewayInfo struct {
+ Handlers []GrpcGatewayHandler
+}
+
+type GrpcGatewayHandler func(ctx context.Context, mux *runtime.ServeMux, client QueryClient) error
+```
+
+which modules can return in a provider:
+
+```go
+func ProvideGrpcGateway() GrpcGatewayInfo {
+ return GrpcGatewayInfo{
+  Handlers: []GrpcGatewayHandler{types.RegisterQueryHandlerClient},
+ }
+}
+```
+
+Crisis module invariants and simulations are subject to potential redesign and should be managed with types
+defined in the crisis and simulation modules respectively.
+
+Extension interface for CLI commands will be provided via the `cosmossdk.io/client/v2` module and its
+[autocli](./adr-058-auto-generated-cli.md) framework.
+
+#### Example Usage
+
+Here is an example of setting up a hypothetical `foo` v2 module which uses the [ORM](./adr-055-orm.md) for its state
+management and genesis.
+
+```go
+type Keeper struct {
+ db     orm.ModuleDB
+ evtSvc event.Service
+}
+
+func (k Keeper) RegisterServices(r grpc.ServiceRegistrar) {
+ foov1.RegisterMsgServer(r, k)
+ foov1.RegisterQueryServer(r, k)
+}
+
+func (k Keeper) BeginBlock(context.Context) error {
+ return nil
+}
+
+func ProvideApp(config *foomodulev2.Module, evtSvc event.Service, db orm.ModuleDB) (Keeper, appmodule.AppModule) {
+ k := Keeper{db: db, evtSvc: evtSvc}
+ return k, k
+}
+```
+
+### Runtime Compatibility Version
+
+The `core` module will define a static integer var, `cosmossdk.io/core.RuntimeCompatibilityVersion`, which is
+a minor version indicator of the core module that is accessible at runtime. Correct runtime module implementations
+should check this compatibility version and return an error if the current `RuntimeCompatibilityVersion` is higher
+than the version of the core API that this runtime version can support. When new features are added to the `core`
+module API that runtime modules are required to support, this version should be incremented.
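+
+A minimal sketch of the startup check a runtime module could perform (the constant's value and the `checkCompatibility` helper are hypothetical illustrations, not the actual API):
+
+```go
+package main
+
+import "fmt"
+
+// RuntimeCompatibilityVersion stands in for the static minor-version var
+// that the core module would export.
+const RuntimeCompatibilityVersion = 3
+
+// checkCompatibility is what a runtime module could run at startup: it errors
+// if the core API in the build is newer than what this runtime supports.
+func checkCompatibility(maxSupported int) error {
+ if RuntimeCompatibilityVersion > maxSupported {
+  return fmt.Errorf(
+   "core API compatibility version %d is newer than the maximum version %d this runtime supports",
+   RuntimeCompatibilityVersion, maxSupported,
+  )
+ }
+ return nil
+}
+
+func main() {
+ fmt.Println(checkCompatibility(3)) // <nil>
+ fmt.Println(checkCompatibility(2) != nil) // true
+}
+```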
+
+### Runtime Modules
+
+The initial `runtime` module will simply be created within the existing `github.com/cosmos/cosmos-sdk` go module
+under the `runtime` package. This module will be a small wrapper around the existing `BaseApp`, `sdk.Context` and
+module manager and follow the Cosmos SDK's existing [0-based versioning](https://0ver.org). To move to semantic
+versioning as well as runtime modularity, new officially supported runtime modules will be created under the
+`cosmossdk.io/runtime` prefix. For each supported consensus engine a semantically-versioned go module should be created
+with a runtime implementation for that consensus engine. For example:
+
+* `cosmossdk.io/runtime/comet`
+* `cosmossdk.io/runtime/comet/v2`
+* `cosmossdk.io/runtime/rollkit`
+* etc.
+
+These runtime modules should attempt to be semantically versioned even if the underlying consensus engine is not. Also,
+because a runtime module is also a first class Cosmos SDK module, it should have a protobuf module config type.
+A new semantically versioned module config type should be created for each of these runtime modules so that there is a
+1:1 correspondence between the go module and module config type. The same practice should be followed for every
+semantically versioned Cosmos SDK module, as described in [ADR 057: App Wiring](./adr-057-app-wiring.md).
+
+Currently, `github.com/cosmos/cosmos-sdk/runtime` uses the protobuf config type `cosmos.app.runtime.v1alpha1.Module`.
+When we have a standalone v1 comet runtime, we should use a dedicated protobuf module config type such as
+`cosmos.runtime.comet.v1.Module`. When we release v2 of the comet runtime (`cosmossdk.io/runtime/comet/v2`) we should
+have a corresponding `cosmos.runtime.comet.v2.Module` protobuf type.
+
+In order to make it easier to support different consensus engines that support the same core module functionality as
+described in this ADR, a common go module should be created with shared runtime components. The easiest runtime components
+to share initially are probably the message/query router, inter-module client, service registrar, and event router.
+This common runtime module should be created initially as the `cosmossdk.io/runtime/common` go module.
+
+When this new architecture has been implemented, the main dependency for a Cosmos SDK module would be
+`cosmossdk.io/core` and that module should be able to be used with any supported consensus engine (to the extent
+that it does not explicitly depend on consensus engine specific functionality such as Comet's block headers). An
+app developer would then be able to choose which consensus engine they want to use by importing the corresponding
+runtime module. The current `BaseApp` would be refactored into the `cosmossdk.io/runtime/comet` module, the router
+infrastructure in `baseapp/` would be refactored into `cosmossdk.io/runtime/common` and support ADR 033, and eventually
+a dependency on `github.com/cosmos/cosmos-sdk` would no longer be required.
+
+In short, modules would depend primarily on `cosmossdk.io/core`, and each `cosmossdk.io/runtime/{consensus-engine}`
+would implement the `cosmossdk.io/core` functionality for that consensus engine.
+
+One additional piece that would need to be resolved as part of this architecture is how runtimes relate to the server.
+Likely it would make sense to modularize the current server architecture so that it can be used with any runtime even
+if that is based on a consensus engine besides Comet. This means that eventually the Comet runtime would need to
+encapsulate the logic for starting Comet and the ABCI app.
+
+### Testing
+
+A mock implementation of all services should be provided in core to allow for unit testing of modules
+without needing to depend on any particular version of runtime. Mock services should
+allow tests to observe service behavior or provide a non-production implementation - for instance memory
+stores can be used to mock stores.
+
+For integration testing, a mock runtime implementation should be provided that allows composing different app modules
+together for testing without a dependency on runtime or Comet.
+
+## Consequences
+
+### Backwards Compatibility
+
+Early versions of runtime modules should aim to support as many modules as possible built with the existing
+`AppModule`/`sdk.Context` framework. As the core API is more widely adopted, later runtime versions may choose to
+drop support and only support the core API plus any runtime module specific APIs (like specific versions of Comet).
+
+The core module itself should strive to remain at the go semantic version `v1` as long as possible and follow design
+principles that allow for strong long-term support (LTS).
+
+Older versions of the SDK can support modules built against core via adaptors that wrap core `AppModule`
+implementations in implementations of `AppModule` that conform to that version of the SDK's semantics, as well
+as by providing service implementations by wrapping `sdk.Context`.
+
+### Positive
+
+* better API encapsulation and separation of concerns
+* more stable APIs
+* more framework extensibility
+* deterministic events and queries
+* event listeners
+* inter-module msg and query execution support
+* more explicit support for forking and merging of module versions (including runtime)
+
+### Negative
+
+### Neutral
+
+* modules will need to be refactored to use this API
+* some replacements for `AppModule` functionality still need to be defined in follow-ups
+ (type registration, commands, invariants, simulations) and this will take additional design work
+
+## Further Discussions
+
+* gas
+* block headers
+* upgrades
+* registration of gogo proto and amino interface types
+* cobra query and tx commands
+* gRPC gateway
+* crisis module invariants
+* simulations
+
+## References
+
+* [ADR 033: Protobuf-based Inter-Module Communication](./adr-033-protobuf-inter-module-comm.md)
+* [ADR 057: App Wiring](./adr-057-app-wiring.md)
+* [ADR 055: ORM](./adr-055-orm.md)
+* [ADR 028: Public Key Addresses](./adr-028-public-key-addresses.md)
+* [Keeping Your Modules Compatible](https://go.dev/blog/module-compatibility)
diff --git a/copy-of-sdk-docs/build/architecture/adr-064-abci-2.0.md b/copy-of-sdk-docs/build/architecture/adr-064-abci-2.0.md
new file mode 100644
index 00000000..47689627
--- /dev/null
+++ b/copy-of-sdk-docs/build/architecture/adr-064-abci-2.0.md
@@ -0,0 +1,473 @@
+# ADR 64: ABCI 2.0 Integration (Phase II)
+
+## Changelog
+
+* 2023-01-17: Initial Draft (@alexanderbez)
+* 2023-04-06: Add upgrading section (@alexanderbez)
+* 2023-04-10: Simplify vote extension state persistence (@alexanderbez)
+* 2023-07-07: Revise vote extension state persistence (@alexanderbez)
+* 2023-08-24: Revise vote extension power calculations and staking interface (@davidterpay)
+
+## Status
+
+ACCEPTED
+
+## Abstract
+
+This ADR outlines the continuation of the efforts to implement ABCI++ in the Cosmos
+SDK outlined in [ADR 060: ABCI 1.0 (Phase I)](adr-060-abci-1.0.md).
+
+Specifically, this ADR outlines the design and implementation of ABCI 2.0, which
+includes `ExtendVote`, `VerifyVoteExtension` and `FinalizeBlock`.
+
+## Context
+
+ABCI 2.0 continues the promised updates from ABCI++, specifically three additional
+ABCI methods that the application can implement in order to gain further control,
+insight and customization of the consensus process, unlocking many novel use-cases
+that were previously not possible. We describe these three new methods below:
+
+### `ExtendVote`
+
+This method allows each validator process to extend the pre-commit phase of the
+CometBFT consensus process. Specifically, it allows the application to perform
+custom business logic that extends the pre-commit vote and supply additional data
+as part of the vote, although they are signed separately by the same key.
+
+The data, called vote extension, will be broadcast and received together with the
+vote it is extending, and will be made available to the application in the next
+height. Specifically, the proposer of the next block will receive the vote extensions
+in `RequestPrepareProposal.local_last_commit.votes`.
+
+If the application does not have vote extension information to provide, it
+returns a 0-length byte array as its vote extension.
+
+**NOTE**:
+
+* Although each validator process submits its own vote extension, ONLY the *proposer*
+ of the *next* block will receive all the vote extensions included as part of the
+ pre-commit phase of the previous block. This means only the proposer will
+ implicitly have access to all the vote extensions, via `RequestPrepareProposal`,
+ and that not all vote extensions may be included, since a validator does not
+ have to wait for all pre-commits, only 2/3.
+* The pre-commit vote is signed independently from the vote extension.
+
+### `VerifyVoteExtension`
+
+This method allows validators to validate the vote extension data attached to
+each pre-commit message it receives. If the validation fails, the whole pre-commit
+message will be deemed invalid and ignored by CometBFT.
+
+CometBFT uses `VerifyVoteExtension` when validating a pre-commit vote. Specifically,
+for a pre-commit, CometBFT will:
+
+* Reject the message if it doesn't contain a signed vote AND a signed vote extension
+* Reject the message if the vote's signature OR the vote extension's signature fails to verify
+* Reject the message if `VerifyVoteExtension` was rejected by the app
+
+Otherwise, CometBFT will accept the pre-commit message.
+
+Note, this has important consequences for liveness, i.e., if vote extensions repeatedly
+cannot be verified by correct validators, CometBFT may not be able to finalize
+a block even if sufficiently many (+2/3) validators send pre-commit votes for
+that block. Thus, `VerifyVoteExtension` should be used with special care.
+
+CometBFT recommends that an application that detects an invalid vote extension
+SHOULD accept it in `ResponseVerifyVoteExtension` and ignore it in its own logic.
+
+### `FinalizeBlock`
+
+This method delivers a decided block to the application. The application must
+execute the transactions in the block deterministically and update its state
+accordingly. Cryptographic commitments to the block and transaction results,
+returned via the corresponding parameters in `ResponseFinalizeBlock`, are
+included in the header of the next block. CometBFT calls it when a new block
+is decided.
+
+In other words, `FinalizeBlock` encapsulates the current ABCI execution flow of
+`BeginBlock`, one or more `DeliverTx`, and `EndBlock` into a single ABCI method.
+CometBFT will no longer execute requests for these legacy methods and instead
+will simply call `FinalizeBlock`.
+
+## Decision
+
+We will discuss changes to the Cosmos SDK to implement ABCI 2.0 in two distinct
+phases, `VoteExtensions` and `FinalizeBlock`.
+
+### `VoteExtensions`
+
+As with `PrepareProposal` and `ProcessProposal`, we propose to introduce
+two new handlers that an application can implement in order to provide and verify
+vote extensions.
+
+We propose the following new handlers for applications to implement:
+
+```go
+type ExtendVoteHandler func(sdk.Context, abci.ExtendVoteRequest) abci.ExtendVoteResponse
+type VerifyVoteExtensionHandler func(sdk.Context, abci.VerifyVoteExtensionRequest) abci.VerifyVoteExtensionResponse
+```
+
+An ephemeral context and state will be supplied to both handlers. The
+context will contain relevant metadata such as the block height and block hash.
+The state will be a cached version of the committed state of the application and
+will be discarded after the execution of the handler; this means that both handlers
+get a fresh state view, and no changes made to it will be written.
+
+If an application decides to implement `ExtendVoteHandler`, it must return a
+non-nil `ResponseExtendVote.VoteExtension`.
+
+Recall that an implementation of `ExtendVoteHandler` does NOT need to be deterministic;
+however, given a set of vote extensions, `VerifyVoteExtensionHandler` must be
+deterministic, otherwise the chain may suffer from liveness faults. In addition,
+recall CometBFT proceeds in rounds for each height, so if a decision cannot be
+made about a block proposal at a given height, CometBFT will proceed to the
+next round and thus will execute `ExtendVote` and `VerifyVoteExtension` again for
+the new round for each validator until 2/3 valid pre-commits can be obtained.
+
+Given the broad scope of potential implementations and use-cases of vote extensions,
+and how to verify them, most applications should choose to implement the handlers
+through a single handler type, which can have any number of dependencies injected
+such as keepers. In addition, this handler type could contain some notion of
+volatile vote extension state management which would assist in vote extension
+verification. This state management could be ephemeral or could be some form of
+on-disk persistence.
+
+Example:
+
+```go
+// VoteExtensionHandler implements an Oracle vote extension handler.
+type VoteExtensionHandler struct {
+ cdc Codec
+ mk MyKeeper
+ state VoteExtState // This could be a map or a DB connection object
+}
+
+// ExtendVoteHandler can do something with h.mk and possibly h.state to create
+// a vote extension, such as fetching a series of prices for supported assets.
+func (h VoteExtensionHandler) ExtendVoteHandler(ctx sdk.Context, req abci.ExtendVoteRequest) abci.ExtendVoteResponse {
+ prices := GetPrices(ctx, h.mk.Assets())
+ bz, err := EncodePrices(h.cdc, prices)
+ if err != nil {
+ panic(fmt.Errorf("failed to encode prices for vote extension: %w", err))
+ }
+
+ // store our vote extension at the given height
+ //
+ // NOTE: Vote extensions can be overridden since we can timeout in a round.
+ SetPrices(h.state, req, bz)
+
+ return abci.ExtendVoteResponse{VoteExtension: bz}
+}
+
+// VerifyVoteExtensionHandler can do something with h.state and req to verify
+// the req.VoteExtension field, such as ensuring the provided oracle prices are
+// within some valid range of our prices.
+func (h VoteExtensionHandler) VerifyVoteExtensionHandler(ctx sdk.Context, req abci.VerifyVoteExtensionRequest) abci.VerifyVoteExtensionResponse {
+ prices, err := DecodePrices(h.cdc, req.VoteExtension)
+ if err != nil {
+ log("failed to decode vote extension", "err", err)
+ return abci.VerifyVoteExtensionResponse{Status: REJECT}
+ }
+
+ if err := ValidatePrices(h.state, req, prices); err != nil {
+ log("failed to validate vote extension", "prices", prices, "err", err)
+ return abci.VerifyVoteExtensionResponse{Status: REJECT}
+ }
+
+ // store updated vote extensions at the given height
+ //
+ // NOTE: Vote extensions can be overridden since we can timeout in a round.
+ SetPrices(h.state, req, req.VoteExtension)
+
+ return abci.VerifyVoteExtensionResponse{Status: ACCEPT}
+}
+```
+
+#### Vote Extension Propagation & Verification
+
+As mentioned previously, vote extensions for height `H` are only made available
+to the proposer at height `H+1` during `PrepareProposal`. However, in order to
+make vote extensions useful, all validators should have access to the agreed upon
+vote extensions at height `H` during `H+1`.
+
+Since CometBFT includes all the vote extension signatures in `RequestPrepareProposal`,
+we propose that the proposing validator manually "inject" the vote extensions
+along with their respective signatures via a special transaction, `VoteExtsTx`,
+into the block proposal during `PrepareProposal`. The `VoteExtsTx` will be
+populated with a single `ExtendedCommitInfo` object which is received directly
+from `RequestPrepareProposal`.
+
+By convention, the `VoteExtsTx` transaction should be the first transaction in
+the block proposal, although chains can implement their own preferences. For
+safety purposes, we also propose that the proposer itself verify all the vote
+extension signatures it receives in `RequestPrepareProposal`.
+
+A validator, upon a `RequestProcessProposal`, will receive the injected `VoteExtsTx`
+which includes the vote extensions along with their signatures. If no such transaction
+exists, the validator MUST REJECT the proposal.
+
+When a validator inspects a `VoteExtsTx`, it will evaluate each `SignedVoteExtension`.
+For each signed vote extension, the validator will generate the signed bytes and
+verify the signature. At least 2/3 valid signatures, based on voting power, must
+be received in order for the block proposal to be valid, otherwise the validator
+MUST REJECT the proposal.
+
+In order to have the ability to validate signatures, `BaseApp` must have access
+to the `x/staking` module, since this module stores an index from consensus
+address to public key. However, we will avoid a direct dependency on `x/staking`
+and instead rely on an interface. In addition, the Cosmos SDK will expose
+a default signature verification method which applications can use:
+
+```go
+type ValidatorStore interface {
+ GetPubKeyByConsAddr(context.Context, sdk.ConsAddress) (cmtprotocrypto.PublicKey, error)
+}
+
+// ValidateVoteExtensions is a function that an application can execute in
+// ProcessProposal to verify vote extension signatures.
+func (app *BaseApp) ValidateVoteExtensions(ctx sdk.Context, currentHeight int64, extCommit abci.ExtendedCommitInfo) error {
+	var votingPower, totalVotingPower int64
+
+ for _, vote := range extCommit.Votes {
+ totalVotingPower += vote.Validator.Power
+
+ if !vote.SignedLastBlock || len(vote.VoteExtension) == 0 {
+ continue
+ }
+
+ valConsAddr := sdk.ConsAddress(vote.Validator.Address)
+		// assumes BaseApp holds a ValidatorStore as app.valStore
+		pubKeyProto, err := app.valStore.GetPubKeyByConsAddr(ctx, valConsAddr)
+ if err != nil {
+ return fmt.Errorf("failed to get public key for validator %s: %w", valConsAddr, err)
+ }
+
+ if len(vote.ExtensionSignature) == 0 {
+ return fmt.Errorf("received a non-empty vote extension with empty signature for validator %s", valConsAddr)
+ }
+
+ cmtPubKey, err := cryptoenc.PubKeyFromProto(pubKeyProto)
+ if err != nil {
+ return fmt.Errorf("failed to convert validator %X public key: %w", valConsAddr, err)
+ }
+
+ cve := cmtproto.CanonicalVoteExtension{
+ Extension: vote.VoteExtension,
+ Height: currentHeight - 1, // the vote extension was signed in the previous height
+ Round: int64(extCommit.Round),
+ ChainId: app.GetChainID(),
+ }
+
+ extSignBytes, err := cosmosio.MarshalDelimited(&cve)
+ if err != nil {
+ return fmt.Errorf("failed to encode CanonicalVoteExtension: %w", err)
+ }
+
+ if !cmtPubKey.VerifySignature(extSignBytes, vote.ExtensionSignature) {
+ return errors.New("received vote with invalid signature")
+ }
+
+ votingPower += vote.Validator.Power
+ }
+
+	// Require at least 2/3 of the total voting power, using integer
+	// arithmetic to avoid truncation in the division.
+	if totalVotingPower <= 0 || 3*votingPower < 2*totalVotingPower {
+		return errors.New("not enough voting power for the vote extensions")
+	}
+
+ return nil
+}
+```
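+
+Note that a naive `votingPower / totalVotingPower` comparison silently truncates
+to zero under integer division. The 2/3-of-voting-power check can instead be
+expressed with integer arithmetic; a minimal standalone sketch, where
+`meetsThreshold` is an illustrative helper and not an SDK API:
+
+```go
+package main
+
+import "fmt"
+
+// meetsThreshold reports whether votingPower is at least 2/3 of
+// totalVotingPower, using integer arithmetic so that fractional shares
+// are not truncated away by integer division.
+func meetsThreshold(votingPower, totalVotingPower int64) bool {
+	if totalVotingPower <= 0 {
+		return false
+	}
+	return 3*votingPower >= 2*totalVotingPower
+}
+
+func main() {
+	fmt.Println(meetsThreshold(67, 100)) // true: 67% >= 2/3
+	fmt.Println(meetsThreshold(66, 100)) // false: 66% < 2/3
+}
+```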
+
+Once at least 2/3 signatures, by voting power, are received and verified, the
+validator can use the vote extensions to derive additional data or come to some
+decision based on the vote extensions.
+
+> NOTE: It is very important to state that neither the vote propagation technique
+> nor the vote extension verification mechanism described above is required for
+> applications to implement. In other words, a proposer is not required to verify
+> and propagate vote extensions along with their signatures, nor are proposers
+> required to verify those signatures. An application can implement its own
+> PKI mechanism and use that to sign and verify vote extensions.
+
+#### Vote Extension Persistence
+
+In certain contexts, it may be useful or necessary for applications to persist
+data derived from vote extensions. In order to facilitate this use case, we propose
+to allow app developers to define a `PreBlocker` hook which will be called
+at the very beginning of `FinalizeBlock`, i.e. before `BeginBlock` (see below).
+
+Note, we cannot allow applications to directly write to the application state
+during `ProcessProposal` because during replay, CometBFT will NOT call `ProcessProposal`,
+which would result in an incomplete state view.
+
+```go
+func (a MyApp) PreBlocker(ctx sdk.Context, req *abci.FinalizeBlockRequest) error {
+ voteExts := GetVoteExtensions(ctx, req.Txs)
+
+ // Process and perform some compute on vote extensions, storing any resulting
+ // state.
+	if err := a.processVoteExtensions(ctx, voteExts); err != nil {
+		return err
+	}
+
+	return nil
+}
+```
+
+### `FinalizeBlock`
+
+The existing ABCI methods `BeginBlock`, `DeliverTx`, and `EndBlock` have existed
+since the dawn of ABCI-based applications. Thus, applications, tooling, and developers
+have grown used to these methods and their use-cases. Specifically, `BeginBlock`
+and `EndBlock` have grown to be pretty integral and powerful within ABCI-based
+applications. E.g. an application might want to run distribution and inflation
+related operations prior to executing transactions and then have staking related
+changes happen after executing all transactions.
+
+We propose to keep `BeginBlock` and `EndBlock` within the SDK's core module
+interfaces only so application developers can continue to build against existing
+execution flows. However, we will remove `BeginBlock`, `DeliverTx` and `EndBlock`
+from the SDK's `BaseApp` implementation and thus the ABCI surface area.
+
+What will then exist is a single `FinalizeBlock` execution flow. Specifically, in
+`FinalizeBlock` we will execute the application's `BeginBlock`, followed by
+execution of all the transactions, finally followed by execution of the application's
+`EndBlock`.
+
+Note, we will still keep the existing transaction execution mechanics within
+`BaseApp`, but all notions of `DeliverTx` will be removed, i.e. `deliverState`
+will be replaced with `finalizeState`, which will be committed on `Commit`.
+
+However, there are current parameters and fields that exist in the existing
+`BeginBlock` and `EndBlock` ABCI types, such as votes that are used in distribution
+and byzantine validators used in evidence handling. These parameters exist in the
+`FinalizeBlock` request type, and will need to be passed to the application's
+implementations of `BeginBlock` and `EndBlock`.
+
+This means the Cosmos SDK's core module interfaces will need to be updated to
+reflect these parameters. The easiest and most straightforward way to achieve
+this is to just pass `RequestFinalizeBlock` to `BeginBlock` and `EndBlock`.
+Alternatively, we can create dedicated proxy types in the SDK that reflect these
+legacy ABCI types, e.g. `LegacyBeginBlockRequest` and `LegacyEndBlockRequest`. Or,
+we can come up with new types and names altogether.
+
+```go
+func (app *BaseApp) FinalizeBlock(req abci.FinalizeBlockRequest) (*abci.FinalizeBlockResponse, error) {
+ ctx := ...
+
+ if app.preBlocker != nil {
+ ctx := app.finalizeBlockState.ctx
+ rsp, err := app.preBlocker(ctx, req)
+ if err != nil {
+ return nil, err
+ }
+ if rsp.ConsensusParamsChanged {
+ app.finalizeBlockState.ctx = ctx.WithConsensusParams(app.GetConsensusParams(ctx))
+ }
+ }
+	beginBlockResp, err := app.beginBlock(req)
+	if err != nil {
+		return nil, err
+	}
+	appendBlockEventAttr(beginBlockResp.Events, "begin_block")
+
+	txExecResults := make([]abci.ExecTxResult, 0, len(req.Txs))
+	for _, tx := range req.Txs {
+		result := app.runTx(runTxModeFinalize, tx)
+		txExecResults = append(txExecResults, result)
+	}
+
+	endBlockResp, err := app.endBlock(app.finalizeBlockState.ctx)
+	if err != nil {
+		return nil, err
+	}
+	appendBlockEventAttr(endBlockResp.Events, "end_block")
+
+	return &abci.FinalizeBlockResponse{
+		TxResults:             txExecResults,
+		Events:                joinEvents(beginBlockResp.Events, endBlockResp.Events),
+		ValidatorUpdates:      endBlockResp.ValidatorUpdates,
+		ConsensusParamUpdates: endBlockResp.ConsensusParamUpdates,
+		AppHash:               nil,
+	}, nil
+}
+```
+
+#### Events
+
+Many tools, indexers and ecosystem libraries rely on the existence of `BeginBlock`
+and `EndBlock` events. Since CometBFT now only exposes `FinalizeBlockEvents`, it
+will still be useful for these clients and tools to be able to query for and rely
+on the existing events, especially since applications will still define
+`BeginBlock` and `EndBlock` implementations.
+
+In order to facilitate existing event functionality, we propose that all `BeginBlock`
+and `EndBlock` events have a dedicated `EventAttribute` with `key=block` and
+`value=begin_block|end_block`. The `EventAttribute` will be appended to each event
+in both `BeginBlock` and `EndBlock` events.
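+
+As a sketch of the proposed tagging, assuming simplified stand-ins for the ABCI
+event types (the real ones live in the CometBFT ABCI package):
+
+```go
+package main
+
+import "fmt"
+
+// EventAttribute and Event are simplified stand-ins for the ABCI event
+// types, for illustration only.
+type EventAttribute struct {
+	Key, Value string
+}
+
+type Event struct {
+	Type       string
+	Attributes []EventAttribute
+}
+
+// tagBlockStage appends the proposed key=block attribute to every event,
+// marking whether it was emitted during BeginBlock or EndBlock.
+func tagBlockStage(events []Event, stage string) []Event {
+	for i := range events {
+		events[i].Attributes = append(events[i].Attributes, EventAttribute{
+			Key:   "block",
+			Value: stage, // "begin_block" or "end_block"
+		})
+	}
+	return events
+}
+
+func main() {
+	evts := tagBlockStage([]Event{{Type: "mint"}}, "begin_block")
+	fmt.Println(evts[0].Attributes[0].Key, evts[0].Attributes[0].Value)
+}
+```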
+
+
+### Upgrading
+
+CometBFT defines a consensus parameter, [`VoteExtensionsEnableHeight`](https://github.com/cometbft/cometbft/blob/v0.38.0-alpha.1/spec/abci/abci%2B%2B_app_requirements.md#abciparamsvoteextensionsenableheight),
+which specifies the height at which vote extensions are enabled and **required**.
+If the value is set to zero, which is the default, then vote extensions are
+disabled and an application is not required to implement and use vote extensions.
+
+However, if the value `H` is positive, at all heights greater than the configured
+height `H` vote extensions must be present (even if empty). When the configured
+height `H` is reached, `PrepareProposal` will not include vote extensions yet,
+but `ExtendVote` and `VerifyVoteExtension` will be called. Then, when reaching
+height `H+1`, `PrepareProposal` will include the vote extensions from height `H`.
+
+It is very important to note that, for all heights after `H`:
+
+* Vote extensions CANNOT be disabled
+* They are mandatory, i.e. all pre-commit messages sent MUST have an extension
+ attached (even if empty)
+
+When an application updates to the Cosmos SDK version with CometBFT v0.38 support,
+in the upgrade handler it must ensure to set the consensus parameter
+`VoteExtensionsEnableHeight` to the correct value. E.g. if an application is set
+to perform an upgrade at height `H`, then the value of `VoteExtensionsEnableHeight`
+should be set to any value `>=H+1`. This means that at the upgrade height, `H`,
+vote extensions will not be enabled yet, but at height `H+1` they will be enabled.
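+
+The rule above reduces to a one-line invariant; `validEnableHeight` is an
+illustrative helper, not an SDK function:
+
+```go
+package main
+
+import "fmt"
+
+// validEnableHeight checks the upgrade-handler invariant: for an upgrade
+// scheduled at height H, VoteExtensionsEnableHeight must be >= H+1, so
++// that vote extensions are enabled only after the upgrade height itself.
+func validEnableHeight(upgradeHeight, enableHeight int64) bool {
+	return enableHeight >= upgradeHeight+1
+}
+
+func main() {
+	fmt.Println(validEnableHeight(1000, 1001)) // true: minimal valid value
+	fmt.Println(validEnableHeight(1000, 1000)) // false: too early
+}
+```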
+
+## Consequences
+
+### Backwards Compatibility
+
+ABCI 2.0 is naturally not backwards compatible with prior versions of the Cosmos SDK
+and CometBFT. For example, a client that sends `RequestFinalizeBlock` to an
+application that does not speak ABCI 2.0 will naturally fail.
+
+In addition, `BeginBlock`, `DeliverTx` and `EndBlock` will be removed from the
+application ABCI interfaces, and their inputs and outputs will be modified
+accordingly in the module interfaces.
+
+### Positive
+
+* `BeginBlock` and `EndBlock` semantics remain, so burden on application developers
+ should be limited.
+* Less communication overhead as multiple ABCI requests are condensed into a single
+ request.
+* Sets the groundwork for optimistic execution.
+* Vote extensions allow for an entirely new set of application primitives to be
+ developed, such as in-process price oracles and encrypted mempools.
+
+### Negative
+
+* Some existing Cosmos SDK core APIs may need to be modified and thus broken.
+* Signature verification in `ProcessProposal` of 100+ vote extension signatures
+ will add significant performance overhead to `ProcessProposal`. Granted, the
+ signature verification process can happen concurrently using an error group
+ with `GOMAXPROCS` goroutines.
+
+### Neutral
+
+* Having to manually "inject" vote extensions into the block proposal during
+ `PrepareProposal` is an awkward approach and takes up block space unnecessarily.
+* The requirement of `ResetProcessProposalState` can create a footgun for
+ application developers if they're not careful, but this is necessary in order
+ for applications to be able to commit state from vote extension computation.
+
+## Further Discussions
+
+Future discussions include design and implementation of ABCI 3.0, which is a
+continuation of ABCI++ and the general discussion of optimistic execution.
+
+## References
+
+* [ADR 060: ABCI 1.0 (Phase I)](adr-060-abci-1.0.md)
diff --git a/copy-of-sdk-docs/build/architecture/adr-065-store-v2.md b/copy-of-sdk-docs/build/architecture/adr-065-store-v2.md
new file mode 100644
index 00000000..babf0eb7
--- /dev/null
+++ b/copy-of-sdk-docs/build/architecture/adr-065-store-v2.md
@@ -0,0 +1,290 @@
+# ADR-065: Store V2
+
+## Changelog
+
+* Feb 14, 2023: Initial Draft (@alexanderbez)
+
+## Status
+
+DRAFT
+
+## Abstract
+
+The storage and state primitives that Cosmos SDK based applications have used have
+by and large not changed since the launch of the inaugural Cosmos Hub. The demands
+and needs of Cosmos SDK based applications, from both developer and client UX
+perspectives, have evolved and outgrown the ecosystem since these primitives
+were first introduced.
+
+Over time as these applications have gained significant adoption, many critical
+shortcomings and flaws have been exposed in the state and storage primitives of
+the Cosmos SDK.
+
+In order to keep up with the evolving demands and needs of both clients and developers,
+a major overhaul to these primitives is necessary.
+
+## Context
+
+The Cosmos SDK provides application developers with various storage primitives
+for dealing with application state. Specifically, each module contains its own
+merkle commitment data structure -- an IAVL tree. In this data structure, a module
+can store and retrieve key-value pairs along with Merkle commitments, i.e. proofs,
+to those key-value pairs indicating that they do or do not exist in the global
+application state. This data structure is the base layer `KVStore`.
+
+In addition, the SDK provides abstractions on top of this Merkle data structure.
+Namely, a root multi-store (RMS) is a collection of each module's `KVStore`.
+Through the RMS, the application can serve queries and provide proofs to clients
+in addition to providing a module access to its own unique `KVStore` through the use
+of `StoreKey`, which is an OCAP primitive.
+
+There are further layers of abstraction that sit between the RMS and the underlying
+IAVL `KVStore`. A `GasKVStore` is responsible for tracking gas IO consumption for
+state machine reads and writes. A `CacheKVStore` is responsible for providing a
+way to cache reads and buffer writes to make state transitions atomic, e.g.
+transaction execution or governance proposal execution.
+
+There are a few critical drawbacks to these layers of abstraction and the overall
+design of storage in the Cosmos SDK:
+
+* Since each module has its own IAVL `KVStore`, commitments are not [atomic](https://github.com/cosmos/cosmos-sdk/issues/14625)
+ * Note, we can still allow modules to have their own IAVL `KVStore`, but the
+ IAVL library will need to support the ability to pass a DB instance as an
+ argument to various IAVL APIs.
+* Since IAVL is responsible for both state storage and commitment, running an
+ archive node becomes increasingly expensive as disk space grows exponentially.
+* As the size of a network increases, various performance bottlenecks start to
+ emerge in many areas such as query performance, network upgrades, state
+ migrations, and general application performance.
+* Developer UX is poor as it does not allow application developers to experiment
+ with different types of approaches to storage and commitments, along with the
+ complications of many layers of abstractions referenced above.
+
+See the [Storage Discussion](https://github.com/cosmos/cosmos-sdk/discussions/13545) for more information.
+
+## Alternatives
+
+There was a previous attempt to refactor the storage layer described in [ADR-040](./adr-040-storage-and-smt-state-commitments.md).
+However, this approach mainly stems from the shortcomings of IAVL and various performance
+issues around it. While there was a (partial) implementation of [ADR-040](./adr-040-storage-and-smt-state-commitments.md),
+it was never adopted for a variety of reasons, such as the reliance on using an
+SMT, which was more in a research phase, and some design choices that couldn't
+be fully agreed upon, such as the snapshotting mechanism that would result in
+massive state bloat.
+
+## Decision
+
+We propose to build upon some of the great ideas introduced in [ADR-040](./adr-040-storage-and-smt-state-commitments.md),
+while being a bit more flexible with the underlying implementations and overall
+less intrusive. Specifically, we propose to:
+
+* Separate the concerns of state commitment (**SC**), needed for consensus, and
+ state storage (**SS**), needed for state machine and clients.
+* Reduce layers of abstractions necessary between the RMS and underlying stores.
+* Provide atomic module store commitments by providing a batch database object
+ to core IAVL APIs.
+* Reduce complexities in the `CacheKVStore` implementation while also improving
+  performance [3].
+
+Furthermore, we will keep IAVL as the backing [commitment](https://cryptography.fandom.com/wiki/Commitment_scheme)
+store for the time being. While we might not fully settle on the use of IAVL in
+the long term, we do not have strong empirical evidence to suggest a better
+alternative. Given that the SDK provides interfaces for stores, it should be sufficient
+to change the backing commitment store in the future should evidence arise to
+warrant a better alternative. However, there is promising work being done on IAVL
+that should result in significant performance improvements [1,2].
+
+### Separating SS and SC
+
+Separating SS and SC allows us to optimize for the primary use cases and access
+patterns of state. Specifically, the SS layer will be responsible for
+direct access to data in the form of (key, value) pairs, whereas the SC layer (IAVL)
+will be responsible for committing to data and providing Merkle proofs.
+
+Note, the underlying physical storage database will be the same between both the
+SS and SC layers. So to avoid collisions between (key, value) pairs, both layers
+will be namespaced.
+
+#### State Commitment (SC)
+
+Given that the existing solution today acts as both SS and SC, we can simply
+repurpose it to act solely as the SC layer without any significant changes to
+access patterns or behavior. In other words, the entire collection of existing
+IAVL-backed module `KVStore`s will act as the SC layer.
+
+However, in order for the SC layer to remain lightweight and not duplicate a
+majority of the data held in the SS layer, we encourage node operators to keep
+tight pruning strategies.
+
+#### State Storage (SS)
+
+In the RMS, we will expose a *single* `KVStore` backed by the same physical
+database that backs the SC layer. This `KVStore` will be explicitly namespaced
+to avoid collisions and will act as the primary storage for (key, value) pairs.
+
+We will most likely continue to use `cosmos-db`, or some similar local interface,
+to allow for flexibility and iteration over preferred physical storage backends
+as research and benchmarking continue. However, we propose to hardcode the use
+of RocksDB as the primary physical storage backend.
+
+Since the SS layer will be implemented as a `KVStore`, it will support the
+following functionality:
+
+* Range queries
+* CRUD operations
+* Historical queries and versioning
+* Pruning
+
+The RMS will keep track of all buffered writes using a dedicated and internal
+`MemoryListener` for each `StoreKey`. For each block height, upon `Commit`, the
+SS layer will write all buffered (key, value) pairs under a [RocksDB user-defined timestamp](https://github.com/facebook/rocksdb/wiki/User-defined-Timestamp-%28Experimental%29) column
+family using the block height as the timestamp, which is an unsigned integer.
+This will allow a client to fetch (key, value) pairs at historical and current
+heights along with making iteration and range queries relatively performant as
+the timestamp is the key suffix.
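+
+The key-suffix property can be sketched outside RocksDB: encoding the version as
+a fixed-width big-endian suffix keeps lexicographic key order aligned with
+version order, which is exactly what variable-length keys break. A hypothetical
+illustration (`versionedKey` is not an actual SDK or RocksDB API):
+
+```go
+package main
+
+import (
+	"bytes"
+	"encoding/binary"
+	"fmt"
+)
+
+// versionedKey appends a fixed-width big-endian height suffix to a raw
+// store key, so that lexicographic iteration over physical keys also
+// orders versions numerically -- the property that RocksDB user-defined
+// timestamps provide natively.
+func versionedKey(key []byte, height uint64) []byte {
+	out := make([]byte, len(key)+8)
+	copy(out, key)
+	binary.BigEndian.PutUint64(out[len(key):], height)
+	return out
+}
+
+func main() {
+	a := versionedKey([]byte("balances/addr1"), 5)
+	b := versionedKey([]byte("balances/addr1"), 10)
+	// Fixed-width encoding keeps version order consistent with byte order.
+	fmt.Println(bytes.Compare(a, b) < 0)
+}
+```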
+
+Note, we choose not to use a more general approach of allowing any embedded key/value
+database, such as LevelDB or PebbleDB, using height key-prefixed keys to
+effectively version state because most of these databases use variable length
+keys, which would effectively make actions like iteration and range queries less
+performant.
+
+Since operators might want pruning strategies to differ in SS compared to SC,
+e.g. having a very tight pruning strategy in SC while having a looser pruning
+strategy for SS, we propose to introduce an additional pruning configuration,
+with parameters that are identical to what exists in the SDK today, and allow
+operators to control the pruning strategy of the SS layer independently of the
+SC layer.
+
+Note, the SC pruning strategy must be congruent with the operator's state sync
+configuration. This is to allow state sync snapshots to execute successfully;
+otherwise, a snapshot could be triggered at a height that is not available in SC.
+
+#### State Sync
+
+The state sync process should be largely unaffected by the separation of the SC
+and SS layers. However, if a node syncs via state sync, the SS layer of the node
+will not have the state synced height available, since the IAVL import process is
+not set up in a way that easily allows direct key/value insertion. A modification of
+the IAVL import process would be necessary to facilitate having the state sync
+height available.
+
+Note, this is not problematic for the state machine itself because when a query
+is made, the RMS will automatically direct the query correctly (see [Queries](#queries)).
+
+#### Queries
+
+To consolidate the query routing between both the SC and SS layers, we propose to
+have a notion of a "query router" that is constructed in the RMS. This query router
+will be supplied to each `KVStore` implementation. The query router will route
+queries to either the SC layer or the SS layer based on a few parameters. If
+`prove: true`, then the query must be routed to the SC layer. Otherwise, if the
+query height is available in the SS layer, the query will be served from the SS
+layer. Otherwise, we fall back on the SC layer.
+
+If no height is provided, the SS layer will assume the latest height. The SS
+layer will store a reverse index to lookup `LatestVersion -> timestamp(version)`
+which is set on `Commit`.
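+
+The routing rules above can be summarized in a small decision function;
+`route` and `ssHasHeight` are illustrative names, not the actual RMS API:
+
+```go
+package main
+
+import "fmt"
+
+// route decides which layer serves a query, following the rules above:
+// proofs force the SC layer; otherwise SS serves the height if it has it,
+// with SC as the fallback.
+func route(prove bool, height uint64, ssHasHeight func(uint64) bool) string {
+	if prove {
+		return "SC"
+	}
+	if ssHasHeight(height) {
+		return "SS"
+	}
+	return "SC"
+}
+
+func main() {
+	ssHas := func(h uint64) bool { return h >= 100 } // pretend SS pruned below 100
+	fmt.Println(route(true, 150, ssHas))  // SC: proof requested
+	fmt.Println(route(false, 150, ssHas)) // SS: height available
+	fmt.Println(route(false, 50, ssHas))  // SC: fallback
+}
+```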
+
+#### Proofs
+
+Since the SS layer is naturally a storage layer only, without any commitments
+to (key, value) pairs, it cannot provide Merkle proofs to clients during queries.
+
+Since the pruning strategy against the SC layer is configured by the operator,
+we can therefore have the RMS route the query to the SC layer if the version exists and
+`prove: true`. Otherwise, the query will fall back to the SS layer without a proof.
+
+We could explore the idea of using state snapshots to rebuild an in-memory IAVL
+tree in real time against a version closest to the one provided in the query.
+However, it is not clear what the performance implications will be of this approach.
+
+### Atomic Commitment
+
+We propose to modify the existing IAVL APIs to accept a batch DB object instead
+of relying on an internal batch object in `nodeDB`. Since each underlying IAVL
+`KVStore` shares the same DB in the SC layer, this will allow commits to be
+atomic.
+
+Specifically, we propose to:
+
+* Remove the `dbm.Batch` field from `nodeDB`
+* Update the `SaveVersion` method of the `MutableTree` IAVL type to accept a batch object
+* Update the `Commit` method of the `CommitKVStore` interface to accept a batch object
+* Create a batch object in the RMS during `Commit` and pass this object to each
+ `KVStore`
+* Write the database batch after all stores have committed successfully
+
+Note, this will require IAVL to be updated to not rely on or assume any batch
+being present during `SaveVersion`.
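+
+The batch-based commit flow can be sketched with stand-in types; `batch` and
+`store` here are illustrative, not the real `dbm.Batch` or `CommitKVStore`:
+
+```go
+package main
+
+import "fmt"
+
+// batch collects writes from every store and is flushed once at the end,
+// mirroring the proposed flow: the RMS creates one batch, each KVStore's
+// Commit stages writes into it, and the batch is written only after all
+// stores succeed -- making the multi-store commit atomic.
+type batch struct{ writes map[string]string }
+
+func (b *batch) Set(k, v string) { b.writes[k] = v }
+
+type store struct {
+	prefix  string
+	pending map[string]string
+}
+
+// Commit stages this store's pending writes into the shared batch.
+func (s *store) Commit(b *batch) {
+	for k, v := range s.pending {
+		b.Set(s.prefix+"/"+k, v)
+	}
+}
+
+func main() {
+	b := &batch{writes: map[string]string{}}
+	stores := []*store{
+		{prefix: "bank", pending: map[string]string{"supply": "100"}},
+		{prefix: "staking", pending: map[string]string{"pool": "42"}},
+	}
+	for _, s := range stores {
+		s.Commit(b)
+	}
+	// One physical write flushes everything at once.
+	fmt.Println(len(b.writes))
+}
+```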
+
+## Consequences
+
+As a result of a new store V2 package, we should expect to see improved performance
+for queries and transactions due to the separation of concerns. We should also
+expect to see improved developer UX around experimentation of commitment schemes
+and storage backends for further performance, in addition to a reduced amount of
+abstraction around KVStores making operations such as caching and state branching
+more intuitive.
+
+However, due to the proposed design, there are drawbacks around providing state
+proofs for historical queries.
+
+### Backwards Compatibility
+
+This ADR proposes changes to the storage implementation in the Cosmos SDK through
+an entirely new package. Interfaces may be borrowed and extended from existing
+types that exist in `store`, but no existing implementations or interfaces will
+be broken or modified.
+
+### Positive
+
+* Improved performance of independent SS and SC layers
+* Reduced layers of abstraction making storage primitives easier to understand
+* Atomic commitments for SC
+* Redesign of storage types and interfaces will allow for greater experimentation
+ such as different physical storage backends and different commitment schemes
+ for different application modules
+
+### Negative
+
+* Providing proofs for historical state is challenging
+
+### Neutral
+
+* Keeping IAVL as the primary commitment data structure, although drastic
+ performance improvements are being made
+
+## Further Discussions
+
+### Module Storage Control
+
+Many modules store secondary indexes that are typically solely used to support
+client queries, but are actually not needed for the state machine's state
+transitions. What this means is that these indexes technically have no reason to
+exist in the SC layer at all, as they take up unnecessary space. It is worth
+exploring what an API would look like to allow modules to indicate what (key, value)
+pairs they want to be persisted in the SC layer, implicitly indicating the SS
+layer as well, as opposed to just persisting the (key, value) pair only in the
+SS layer.
+
+### Historical State Proofs
+
+It is not clear what the importance or demand is within the community of providing
+commitment proofs for historical state. While solutions can be devised such as
+rebuilding trees on the fly based on state snapshots, it is not clear what the
+performance implications are for such solutions.
+
+### Physical DB Backends
+
+This ADR proposes usage of RocksDB to utilize user-defined timestamps as a
+versioning mechanism. However, other physical DB backends are available that may
+offer alternative ways to implement versioning while also providing performance
+improvements over RocksDB. E.g. PebbleDB supports MVCC timestamps as well, but
+we'll need to explore how PebbleDB handles compaction and state growth over time.
+
+## References
+
+* [1] https://github.com/cosmos/iavl/pull/676
+* [2] https://github.com/cosmos/iavl/pull/664
+* [3] https://github.com/cosmos/cosmos-sdk/issues/14990
diff --git a/copy-of-sdk-docs/build/architecture/adr-068-preblock.md b/copy-of-sdk-docs/build/architecture/adr-068-preblock.md
new file mode 100644
index 00000000..6b50cf0c
--- /dev/null
+++ b/copy-of-sdk-docs/build/architecture/adr-068-preblock.md
@@ -0,0 +1,63 @@
+# ADR 068: Preblock
+
+## Changelog
+
+* Sept 13, 2023: Initial Draft
+
+## Status
+
+DRAFT
+
+## Abstract
+
+Introduce `PreBlock`, which runs before the `BeginBlocker` of all other modules and may modify consensus parameters; its changes are visible to the state machine logic that follows.
+
+## Context
+
+When upgrading to SDK 0.47, the storage format for consensus parameters changed, but in the migration block `ctx.ConsensusParams()` is always `nil`, because the new code fails to load the old format. The parameters are supposed to be migrated by the `x/upgrade` module first, but unfortunately that migration happens in the `BeginBlocker` handler, which runs after the `ctx` is initialized.
+When we tried to solve this, we found that the `x/upgrade` module cannot modify the context to make the consensus parameters visible to the other modules: the context is passed by value, and the SDK team wants to keep it that way, as it preserves isolation between modules.
+
+## Alternatives
+
+The first alternative solution introduced a `MigrateModuleManager`, which currently includes only the `x/upgrade` module; `BaseApp` would run its `BeginBlocker` before those of the other modules and reload the context's consensus parameters in between.
+
+## Decision
+
+We suggest the following new lifecycle method.
+
+### `PreBlocker`
+
+There are two semantics around the new lifecycle method:
+
+* It runs before the `BeginBlocker` of all modules
+* It can modify consensus parameters in storage, and signal the caller through the return value.
+
+When it returns `ConsensusParamsChanged=true`, the caller must refresh the consensus parameters in the finalize context:
+
+```go
+app.finalizeBlockState.ctx = app.finalizeBlockState.ctx.WithConsensusParams(app.GetConsensusParams())
+```
+
+The new ctx must be passed to all the other lifecycle methods.
+
+
+## Consequences
+
+### Backwards Compatibility
+
+### Positive
+
+### Negative
+
+### Neutral
+
+## Further Discussions
+
+## Test Cases
+
+## References
+
+* [1] https://github.com/cosmos/cosmos-sdk/issues/16494
+* [2] https://github.com/cosmos/cosmos-sdk/pull/16583
+* [3] https://github.com/cosmos/cosmos-sdk/pull/17421
+* [4] https://github.com/cosmos/cosmos-sdk/pull/17713
diff --git a/copy-of-sdk-docs/build/architecture/adr-070-unordered-account.md b/copy-of-sdk-docs/build/architecture/adr-070-unordered-account.md
new file mode 100644
index 00000000..767d40d5
--- /dev/null
+++ b/copy-of-sdk-docs/build/architecture/adr-070-unordered-account.md
@@ -0,0 +1,327 @@
+# ADR 070: Unordered Transactions
+
+## Changelog
+
+* Dec 4, 2023: Initial Draft (@yihuang, @tac0turtle, @alexanderbez)
+* Jan 30, 2024: Include section on deterministic transaction encoding
+* Mar 18, 2025: Revise implementation to use Cosmos SDK KV Store and require unique timeouts per-address (@technicallyty)
+* Apr 25, 2025: Add note about rejecting unordered txs with sequence values.
+
+## Status
+
+ACCEPTED Not Implemented
+
+## Abstract
+
+We propose a way to do replay-attack protection without enforcing the order of
+transactions and without requiring the use of monotonically increasing sequences. Instead, we propose
+the use of a time-based, ephemeral sequence.
+
+## Context
+
+Account sequence values serve to prevent replay attacks and ensure transactions from the same sender are included in blocks and executed
+in sequential order. Unfortunately, this makes it difficult to reliably send many concurrent transactions from the
+same sender. Victims of such limitations include IBC relayers and crypto exchanges.
+
+## Decision
+
+We propose adding a boolean field `unordered` and a `google.protobuf.Timestamp` field `timeout_timestamp` to the transaction body.
+
+Unordered transactions will bypass the traditional account sequence rules and follow the rules described
+below, without impacting traditional ordered transactions which will follow the same sequence rules as before.
+
+We will introduce new storage of time-based, ephemeral unordered sequences using the SDK's existing KV Store library.
+Specifically, we will leverage the existing x/auth KV store to store the unordered sequences.
+
+When an unordered transaction is included in a block, a concatenation of the `timeout_timestamp` and sender’s address bytes
+will be recorded to state (e.g. `542939323/`). In cases of multi-party signing, one entry per signer
+will be recorded to state.
+
+New transactions will be checked against the state to prevent duplicate submissions. To prevent the state from growing indefinitely, we propose the following:
+
+* Define an upper bound for the value of `timeout_timestamp` (e.g. 10 minutes).
+* Add a `PreBlocker` method to `x/auth` that removes state entries with a `timeout_timestamp` earlier than the current block time.
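+
+The check-then-record and pruning rules can be sketched with an in-memory
+stand-in for the `x/auth` KV store; all names here (`unorderedSet`, `seqKey`,
+`check`, `prune`) are illustrative, not SDK APIs:
+
+```go
+package main
+
+import "fmt"
+
+// seqKey identifies an unordered transaction: the pair of timeout
+// timestamp (unix nanos) and sender address, mirroring the proposed
+// state layout.
+type seqKey struct {
+	timeout uint64
+	sender  string
+}
+
+// unorderedSet stands in for the x/auth state that records seen sequences.
+type unorderedSet map[seqKey]struct{}
+
+// check rejects stale timeouts and duplicates, then records the sequence.
+func (s unorderedSet) check(timeout, blockTime uint64, sender string) error {
+	if timeout <= blockTime {
+		return fmt.Errorf("timeout %d is not after block time %d", timeout, blockTime)
+	}
+	k := seqKey{timeout, sender}
+	if _, ok := s[k]; ok {
+		return fmt.Errorf("duplicate unordered sequence for %s", sender)
+	}
+	s[k] = struct{}{}
+	return nil
+}
+
+// prune drops entries whose timeout is earlier than the block time, as
+// the proposed x/auth PreBlocker would.
+func (s unorderedSet) prune(blockTime uint64) {
+	for k := range s {
+		if k.timeout < blockTime {
+			delete(s, k)
+		}
+	}
+}
+
+func main() {
+	s := unorderedSet{}
+	fmt.Println(s.check(200, 100, "alice") == nil) // true: accepted
+	fmt.Println(s.check(200, 100, "alice") == nil) // false: duplicate
+	s.prune(300)
+	fmt.Println(len(s)) // 0: expired entry removed
+}
+```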
+
+### Transaction Format
+
+```protobuf
+message TxBody {
+ ...
+
+ bool unordered = 4;
+ google.protobuf.Timestamp timeout_timestamp = 5;
+}
+```
+
+### Replay Protection
+
+We facilitate replay protection by storing the unordered sequence in the Cosmos SDK KV store. Upon transaction ingress, we check if the transaction's unordered
+sequence exists in state, or if the TTL value is stale, i.e. before the current block time. If so, we reject it. Otherwise,
+we add the unordered sequence to the state. This section of the state will belong to the `x/auth` module.
+
+The state is evaluated during `x/auth`'s `PreBlocker`. All unordered sequence entries with a timestamp earlier than the
+current block time will be deleted.
+
+```go
+func (am AppModule) PreBlock(ctx context.Context) (appmodule.ResponsePreBlock, error) {
+ err := am.accountKeeper.RemoveExpired(sdk.UnwrapSDKContext(ctx))
+ if err != nil {
+ return nil, err
+ }
+ return &sdk.ResponsePreBlock{ConsensusParamsChanged: false}, nil
+}
+```
+
+```golang
+package keeper
+
+import (
+ sdk "github.com/cosmos/cosmos-sdk/types"
+
+ "cosmossdk.io/collections"
+ "cosmossdk.io/core/store"
+)
+
+var (
+ // just arbitrarily picking some upper bound number.
+ unorderedSequencePrefix = collections.NewPrefix(90)
+)
+
+type AccountKeeper struct {
+ // ...
+ unorderedSequences collections.KeySet[collections.Pair[uint64, []byte]]
+}
+
+func (m *AccountKeeper) Contains(ctx sdk.Context, sender []byte, timestamp uint64) (bool, error) {
+ return m.unorderedSequences.Has(ctx, collections.Join(timestamp, sender))
+}
+
+func (m *AccountKeeper) Add(ctx sdk.Context, sender []byte, timestamp uint64) error {
+ return m.unorderedSequences.Set(ctx, collections.Join(timestamp, sender))
+}
+
+func (m *AccountKeeper) RemoveExpired(ctx sdk.Context) error {
+ blkTime := ctx.BlockTime().UnixNano()
+ it, err := m.unorderedSequences.Iterate(ctx, collections.NewPrefixUntilPairRange[uint64, []byte](uint64(blkTime)))
+ if err != nil {
+ return err
+ }
+ defer it.Close()
+
+ keys, err := it.Keys()
+ if err != nil {
+ return err
+ }
+
+ for _, key := range keys {
+ if err := m.unorderedSequences.Remove(ctx, key); err != nil {
+ return err
+ }
+ }
+
+ return nil
+}
+```
+
+### AnteHandler Decorator
+
+To facilitate bypassing nonce verification, we must modify the existing
+`IncrementSequenceDecorator` AnteHandler decorator to skip the nonce verification
+when the transaction is marked as unordered.
+
+```go
+func (isd IncrementSequenceDecorator) AnteHandle(ctx sdk.Context, tx sdk.Tx, simulate bool, next sdk.AnteHandler) (sdk.Context, error) {
+	// Skip nonce (sequence) verification entirely for unordered transactions.
+	if unorderedTx, ok := tx.(sdk.TxWithUnordered); ok && unorderedTx.GetUnordered() {
+		return next(ctx, tx, simulate)
+	}
+
+	// ...
+}
+```
+
+We also introduce a new decorator to perform the unordered transaction verification.
+
+```go
+package ante
+
+import (
+ "slices"
+ "strings"
+ "time"
+
+ sdk "github.com/cosmos/cosmos-sdk/types"
+ sdkerrors "github.com/cosmos/cosmos-sdk/types/errors"
+ authkeeper "github.com/cosmos/cosmos-sdk/x/auth/keeper"
+ authsigning "github.com/cosmos/cosmos-sdk/x/auth/signing"
+
+ errorsmod "cosmossdk.io/errors"
+)
+
+var _ sdk.AnteDecorator = (*UnorderedTxDecorator)(nil)
+
+// UnorderedTxDecorator defines an AnteHandler decorator that is responsible for
+// checking if a transaction is intended to be unordered and, if so, evaluates
+// the transaction accordingly. An unordered transaction will bypass having its
+// nonce incremented, which allows fire-and-forget transaction broadcasting,
+// removing the necessity of ordering on the sender-side.
+//
+// The transaction sender must ensure that unordered=true and a timeout_timestamp
+// is appropriately set. The AnteHandler will check that the transaction is not
+// a duplicate, and its entry will be evicted from state when the timeout is reached.
+//
+// The UnorderedTxDecorator should be placed as early as possible in the AnteHandler
+// chain to ensure that during DeliverTx, the transaction is added to the unordered sequence state.
+type UnorderedTxDecorator struct {
+	// maxTimeoutDuration defines the maximum TTL a transaction can define.
+ maxTimeoutDuration time.Duration
+ txManager authkeeper.UnorderedTxManager
+}
+
+func NewUnorderedTxDecorator(
+ utxm authkeeper.UnorderedTxManager,
+) *UnorderedTxDecorator {
+ return &UnorderedTxDecorator{
+ maxTimeoutDuration: 10 * time.Minute,
+ txManager: utxm,
+ }
+}
+
+func (d *UnorderedTxDecorator) AnteHandle(
+ ctx sdk.Context,
+ tx sdk.Tx,
+ _ bool,
+ next sdk.AnteHandler,
+) (sdk.Context, error) {
+ if err := d.ValidateTx(ctx, tx); err != nil {
+ return ctx, err
+ }
+ return next(ctx, tx, false)
+}
+
+func (d *UnorderedTxDecorator) ValidateTx(ctx sdk.Context, tx sdk.Tx) error {
+ unorderedTx, ok := tx.(sdk.TxWithUnordered)
+ if !ok || !unorderedTx.GetUnordered() {
+ // If the transaction does not implement unordered capabilities or has the
+ // unordered value as false, we bypass.
+ return nil
+ }
+
+ blockTime := ctx.BlockTime()
+ timeoutTimestamp := unorderedTx.GetTimeoutTimeStamp()
+ if timeoutTimestamp.IsZero() || timeoutTimestamp.Unix() == 0 {
+ return errorsmod.Wrap(
+ sdkerrors.ErrInvalidRequest,
+ "unordered transaction must have timeout_timestamp set",
+ )
+ }
+ if timeoutTimestamp.Before(blockTime) {
+ return errorsmod.Wrap(
+ sdkerrors.ErrInvalidRequest,
+ "unordered transaction has a timeout_timestamp that has already passed",
+ )
+ }
+ if timeoutTimestamp.After(blockTime.Add(d.maxTimeoutDuration)) {
+ return errorsmod.Wrapf(
+ sdkerrors.ErrInvalidRequest,
+ "unordered tx ttl exceeds %s",
+ d.maxTimeoutDuration.String(),
+ )
+ }
+
+ execMode := ctx.ExecMode()
+ if execMode == sdk.ExecModeSimulate {
+ return nil
+ }
+
+ signerAddrs, err := getSigners(tx)
+ if err != nil {
+ return err
+ }
+
+ for _, signer := range signerAddrs {
+ contains, err := d.txManager.Contains(ctx, signer, uint64(unorderedTx.GetTimeoutTimeStamp().Unix()))
+ if err != nil {
+ return errorsmod.Wrap(
+ sdkerrors.ErrIO,
+ "failed to check contains",
+ )
+ }
+ if contains {
+ return errorsmod.Wrapf(
+ sdkerrors.ErrInvalidRequest,
+ "tx is duplicated for signer %x", signer,
+ )
+ }
+
+ if err := d.txManager.Add(ctx, signer, uint64(unorderedTx.GetTimeoutTimeStamp().Unix())); err != nil {
+ return errorsmod.Wrap(
+ sdkerrors.ErrIO,
+ "failed to add unordered sequence to state",
+ )
+ }
+ }
+
+ return nil
+}
+
+func getSigners(tx sdk.Tx) ([][]byte, error) {
+ sigTx, ok := tx.(authsigning.SigVerifiableTx)
+ if !ok {
+ return nil, errorsmod.Wrap(sdkerrors.ErrTxDecode, "invalid tx type")
+ }
+ return sigTx.GetSigners()
+}
+```
+
+### Unordered Sequences
+
+Unordered sequences provide a simple, straightforward mechanism to protect against both transaction malleability and
+transaction duplication. It is important to note that the unordered sequence must still be unique. However,
+the value is not required to be strictly increasing as with regular sequences, and the order in which the node receives
+the transactions no longer matters. Clients can handle building unordered transactions similarly to the code below:
+
+```go
+for i, tx := range txs {
+	tx.SetUnordered(true)
+	// Each transaction must carry a unique timeout timestamp; nanosecond
+	// offsets are sufficient to disambiguate them.
+	tx.SetTimeoutTimestamp(time.Now().Add(time.Duration(i+1) * time.Nanosecond))
+}
+```
+
+We will reject transactions that set both a sequence number and an unordered timeout. We do this to avoid assuming the intent of the user.
+
+### State Management
+
+The storage of unordered sequences will be facilitated using the Cosmos SDK's KV Store service.
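+The `PreBlocker` eviction described above is cheap because the composite key places the timestamp first, so expired entries are contiguous in the store. Below is a minimal, self-contained sketch of that ordering property; it is an illustration only, not the SDK's actual key codec:
+
+```go
+package main
+
+import (
+	"bytes"
+	"encoding/binary"
+	"fmt"
+)
+
+// key mimics the (timeout_timestamp, sender) layout of the unordered
+// sequence KeySet: a big-endian timestamp followed by the sender bytes.
+func key(ts uint64, sender []byte) []byte {
+	k := make([]byte, 8, 8+len(sender))
+	binary.BigEndian.PutUint64(k, ts)
+	return append(k, sender...)
+}
+
+func main() {
+	early := key(100, []byte("zed"))
+	late := key(200, []byte("alice"))
+	// Lexicographic key order follows timestamp order regardless of sender,
+	// so deleting everything below the block time is a single range scan.
+	fmt.Println(bytes.Compare(early, late) < 0) // true
+}
+```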
+
+## Note On Previous Design Iteration
+
+The previous iteration of unordered transactions relied on an ad-hoc state-management system that posed severe
+risks and opened a vector for duplicate transaction processing. It relied on graceful app closure, which would flush the current state
+of the unordered sequence mapping. If 2/3 of the network crashed and the graceful closure did not trigger,
+the system would lose track of all sequences in the mapping, allowing those transactions to be replayed. The
+implementation proposed in this updated ADR solves this by writing directly to the Cosmos SDK KV store.
+While this is less performant, we opted for the safer path in the initial implementation and postponed performance optimizations until we have more data on real-world impacts and a more battle-tested approach to optimization.
+
+Additionally, the previous iteration relied on hashes to create what we call an "unordered sequence." There are known
+transaction malleability issues in Cosmos SDK signing modes. This ADR avoids that problem by enforcing
+single-use unordered nonces instead of deriving nonces from bytes in the transaction.
+
+## Consequences
+
+### Positive
+
+* Support unordered transaction inclusion, enabling the ability to "fire and forget" many transactions at once.
+
+### Negative
+
+* Requires additional storage overhead.
+* Requirement of unique timestamps per transaction causes a small amount of additional overhead for clients. Clients must ensure each transaction's timeout timestamp is different. However, nanosecond differentials suffice.
+* Usage of Cosmos SDK KV store is slower in comparison to using a non-merkleized store or ad-hoc methods, and block times may slow down as a result.
+
+## References
+
+* https://github.com/cosmos/cosmos-sdk/issues/13009
+
diff --git a/copy-of-sdk-docs/build/architecture/adr-076-tx-malleability.md b/copy-of-sdk-docs/build/architecture/adr-076-tx-malleability.md
new file mode 100644
index 00000000..9843b17f
--- /dev/null
+++ b/copy-of-sdk-docs/build/architecture/adr-076-tx-malleability.md
@@ -0,0 +1,172 @@
+# Cosmos SDK Transaction Malleability Risk Review and Recommendations
+
+## Changelog
+
+* 2025-03-10: Initial draft (@aaronc)
+
+## Status
+
+PROPOSED: Not Implemented
+
+## Abstract
+
+Several encoding and sign mode related issues have historically resulted in the possibility
+that Cosmos SDK transactions may be re-encoded in such a way as to change their hash
+(and in rare cases, their meaning) without invalidating the signature.
+This document details these cases, their potential risks, the extent to which they have been
+addressed, and provides recommendations for future improvements.
+
+## Review
+
+One naive assumption about Cosmos SDK transactions is that hashing the raw bytes of a submitted transaction creates a safe unique identifier for the transaction. In reality, there are multiple ways in which transactions could be manipulated to create different transaction bytes (and as a result different hashes) that still pass signature verification.
+
+This document attempts to enumerate the various potential transaction "malleability" risks that we have identified and the extent to which they have or have not been addressed in various sign modes. We also identify vulnerabilities that could be introduced if developers make changes in the future without careful consideration of the complexities involved with transaction encoding, sign modes and signatures.
+
+### Risks Associated with Malleability
+
+The malleability of transactions poses the following potential risks to end users:
+
+* unsigned data could get added to transactions and be processed by state machines
+* clients often rely on transaction hashes for checking transaction status, but whether or not submitted transaction hashes match processed transaction hashes depends primarily on good network actors rather than fundamental protocol guarantees
+* transactions could potentially get executed more than once (faulty replay protection)
+
+If a client generates a transaction, keeps a record of its hash and then attempts to query nodes to check the transaction's status, this process may falsely conclude that the transaction had not been processed if an intermediary
+processor decoded and re-encoded the transaction with different encoding rules (either maliciously or unintentionally).
+As long as no malleability is present in the signature bytes themselves, clients _should_ query transactions by signature instead of hash.
+
+Not being cognizant of this risk may lead clients to submit the same transaction multiple times if they believe that
+earlier transactions had failed or gotten lost in processing.
+This could be an attack vector against users if wallets primarily query transactions by hash.
+
+If the state machine were to rely on transaction hashes as a replay mechanism itself, this would be faulty and not
+provide the intended replay protection. Instead, the state machine should rely on deterministic representations of
+transactions rather than the raw encoding, or on other nonces,
+if it wants to provide replay protection that doesn't rely on a monotonically
+increasing account sequence number.
+
+### Sources of Malleability
+
+#### Non-deterministic Protobuf Encoding
+
+Cosmos SDK transactions are encoded using protobuf binary encoding when they are submitted to the network. Protobuf binary is not inherently a deterministic encoding, meaning that the same logical payload could have several valid byte representations. In a basic sense, this means that protobuf in general can be decoded and re-encoded to produce a different byte stream (and thus different hash) without changing the logical meaning of the bytes. [ADR 027: Deterministic Protobuf Serialization](https://github.com/cosmos/cosmos-sdk/blob/main/docs/architecture/adr-027-deterministic-protobuf-serialization.md) describes in detail what needs to be done to produce what we consider to be a "canonical", deterministic protobuf serialization. Briefly, the following sources of malleability at the encoding level have been identified and are addressed by this specification:
+
+* fields can be emitted in any order
+* default field values can be included or omitted, and this doesn't change meaning unless `optional` is used
+* `repeated` fields of scalars may use packed or "regular" encoding
+* `varint`s can include extra ignored bits
+* extra fields may be added and are usually simply ignored by decoders. [ADR 020](https://github.com/cosmos/cosmos-sdk/blob/main/docs/architecture/adr-020-protobuf-transaction-encoding.md#unknown-field-filtering) specifies that in general such extra fields should cause messages and transactions to be rejected
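+To make the varint point concrete, the following self-contained sketch (hand-encoded bytes, not a real `TxRaw`) shows two byte strings that decode to the same varint value yet hash differently:
+
+```go
+package main
+
+import (
+	"crypto/sha256"
+	"fmt"
+)
+
+// decodeVarint is a minimal protobuf varint decoder.
+func decodeVarint(b []byte) (x uint64, n int) {
+	var shift uint
+	for i, c := range b {
+		x |= uint64(c&0x7f) << shift
+		if c < 0x80 {
+			return x, i + 1
+		}
+		shift += 7
+	}
+	return 0, 0
+}
+
+func main() {
+	canonical := []byte{0x01}    // varint 1, canonical encoding
+	padded := []byte{0x81, 0x00} // varint 1 with a redundant continuation byte
+
+	v1, _ := decodeVarint(canonical)
+	v2, _ := decodeVarint(padded)
+	fmt.Println(v1 == v2)                                            // true: same logical value
+	fmt.Println(sha256.Sum256(canonical) == sha256.Sum256(padded))   // false: different hash
+}
+```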
+
+When using `SIGN_MODE_DIRECT` none of the above malleabilities will be tolerated because:
+
+* signatures of messages and extensions must be done over the raw encoded bytes of those fields
+* the outer tx envelope (`TxRaw`) must follow ADR 027 rules or be rejected
+
+Transactions signed with `SIGN_MODE_LEGACY_AMINO_JSON`, however, have no way of protecting against the above malleabilities because what is signed is a JSON representation of the logical contents of the transaction. These logical contents could have any number of valid protobuf binary encodings, so in general there are no guarantees regarding transaction hash with Amino JSON signing.
+
+In addition to being aware of the general non-determinism of protobuf binary, developers need to pay special attention to make sure that unknown protobuf fields get rejected when developing new capabilities related to protobuf transactions. The protobuf serialization format was designed with the assumption that unknown data known to encoders could safely be ignored by decoders. This assumption may have been fairly safe within the walled garden of Google's centralized infrastructure. However, in distributed blockchain systems, this assumption is generally unsafe. If a newer client encodes a protobuf message with data intended for a newer server, it is not safe for an older server to simply ignore and discard instructions that it does not understand. These instructions could include critical information that the transaction signer is relying upon and just assuming that it is unimportant is not safe.
+
+[ADR 020](https://github.com/cosmos/cosmos-sdk/blob/main/docs/architecture/adr-020-protobuf-transaction-encoding.md#unknown-field-filtering) specifies some provisions for "non-critical" fields which can safely be ignored by older servers. In practice, I have not seen any valid usages of this. It is something in the design that maintainers should be aware of, but it may not be necessary or even 100% safe.
+
+#### Non-deterministic Value Encoding
+
+In addition to the non-determinism present in protobuf binary itself, some protobuf field data is encoded using a micro-format which itself may not be deterministic. Consider, for instance, integer or decimal encoding. Some decoders may allow leading or trailing zeros without changing the logical meaning, e.g., `00100` vs `100` or `100.00` vs `100`. So if a sign mode encodes numbers deterministically, but decoders accept multiple representations,
+a user may sign over the value `100` while `0100` gets encoded. This would be possible with Amino JSON to the extent that the integer decoder accepts leading zeros. I believe the current `Int` implementation will reject this; however, it is
+probably possible to encode an octal or hexadecimal representation in the transaction while the user signs over a decimal integer.
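+As an illustration of the micro-format hazard (using Go's standard-library parser here, not the SDK's `Int` implementation), the same digit string can mean different numbers depending on how permissive the decoder is:
+
+```go
+package main
+
+import (
+	"fmt"
+	"strconv"
+)
+
+func main() {
+	// A decoder pinned to base 10 treats the leading zero as harmless padding:
+	a, _ := strconv.ParseInt("0100", 10, 64)
+	// A decoder that auto-detects the base reads the same bytes as octal:
+	b, _ := strconv.ParseInt("0100", 0, 64)
+	fmt.Println(a, b) // 100 64
+}
+```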
+
+#### Signature Encoding
+
+Signatures themselves are encoded using a micro-format specific to the signature algorithm being used and sometimes these
+micro-formats can allow for non-determinism (multiple valid bytes for the same signature).
+Most of the signature algorithms supported by the SDK should reject non-canonical bytes in their current implementation.
+However, the `Multisignature` protobuf type uses normal protobuf encoding and there is no check as to whether the
+decoded bytes followed canonical ADR 027 rules or not. Therefore, multisig transactions can have malleability in
+their signatures.
+Any new or custom signature algorithms must make sure that they reject any non-canonical bytes, otherwise even
+with `SIGN_MODE_DIRECT` there can be transaction hash malleability by re-encoding signatures with a non-canonical
+representation.
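+One way the `Multisignature` gap could be closed is the standard canonicality check: re-encode the decoded value and require byte-for-byte equality with what was submitted. The helper below is a hypothetical sketch, not an SDK API:
+
+```go
+package main
+
+import (
+	"bytes"
+	"errors"
+	"fmt"
+)
+
+// assertCanonical rejects malleable encodings: decoding followed by
+// canonical re-encoding must reproduce the submitted bytes exactly.
+func assertCanonical(submitted, canonicalReencoding []byte) error {
+	if !bytes.Equal(submitted, canonicalReencoding) {
+		return errors.New("non-canonical encoding rejected")
+	}
+	return nil
+}
+
+func main() {
+	canonical := []byte{0x20, 0x01}    // field 4, varint 1, canonical
+	padded := []byte{0x20, 0x81, 0x00} // same meaning, padded varint
+
+	fmt.Println(assertCanonical(canonical, canonical)) // <nil>
+	fmt.Println(assertCanonical(padded, canonical))    // rejected
+}
+```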
+
+#### Fields not covered by Amino JSON
+
+Another area that needs to be addressed carefully is the discrepancy between `AminoSignDoc` (see [`aminojson.proto`](../../x/tx/signing/aminojson/internal/aminojsonpb/aminojson.proto)) used for `SIGN_MODE_LEGACY_AMINO_JSON` and the actual contents of `TxBody` and `AuthInfo` (see [`tx.proto`](../../proto/cosmos/tx/v1beta1/tx.proto)).
+If fields get added to `TxBody` or `AuthInfo`, they must either have a corresponding representation in `AminoSignDoc` or Amino JSON signatures must be rejected when those new fields are set. Making sure that this is done is a
+highly manual process, and developers could easily make the mistake of updating `TxBody` or `AuthInfo`
+without paying any attention to the implementation of `GetSignBytes` for Amino JSON. This is a critical
+vulnerability in which unsigned content can now get into the transaction and signature verification will
+pass.
+
+## Sign Mode Summary and Recommendations
+
+The sign modes officially supported by the SDK are `SIGN_MODE_DIRECT`, `SIGN_MODE_TEXTUAL`, `SIGN_MODE_DIRECT_AUX`,
+and `SIGN_MODE_LEGACY_AMINO_JSON`.
+`SIGN_MODE_LEGACY_AMINO_JSON` is used commonly by wallets and is currently the only sign mode supported on Nano Ledger hardware devices
+(although `SIGN_MODE_TEXTUAL` was designed to also support hardware devices).
+`SIGN_MODE_DIRECT` is the simplest sign mode and its usage is also fairly common.
+`SIGN_MODE_DIRECT_AUX` is a variant of `SIGN_MODE_DIRECT` that can be used by auxiliary signers in a multi-signer
+transaction by those signers who are not paying gas.
+`SIGN_MODE_TEXTUAL` was intended as a replacement for `SIGN_MODE_LEGACY_AMINO_JSON`, but as far as we know it
+has not been adopted by any clients yet and thus is not in active use.
+
+All known malleability concerns have been addressed in the current implementation of `SIGN_MODE_DIRECT`.
+The only known malleability that could occur with a transaction signed with `SIGN_MODE_DIRECT` would
+need to be in the signature bytes themselves.
+Since signatures are not signed over, it is impossible for any sign mode to address this directly
+and instead signature algorithms need to take care to reject any non-canonically encoded signature bytes
+to prevent malleability.
+For the known malleability of the `Multisignature` type, we should make sure that any valid signatures
+were encoded following canonical ADR 027 rules when doing signature verification.
+
+`SIGN_MODE_DIRECT_AUX` provides the same level of safety as `SIGN_MODE_DIRECT` because
+
+* the raw encoded `TxBody` bytes are signed over in `SignDocDirectAux`, and
+* a transaction using `SIGN_MODE_DIRECT_AUX` still requires the primary signer to sign the transaction with `SIGN_MODE_DIRECT`
+
+`SIGN_MODE_TEXTUAL` also provides the same level of safety as `SIGN_MODE_DIRECT` because the hash of the raw encoded
+`TxBody` and `AuthInfo` bytes are signed over.
+
+Unfortunately, the vast majority of unaddressed malleability risks affect `SIGN_MODE_LEGACY_AMINO_JSON` and this
+sign mode is still commonly used.
+It is recommended that the following improvements be made to Amino JSON signing:
+
+* hashes of `TxBody` and `AuthInfo` should be added to `AminoSignDoc` so that encoding-level malleability is addressed
+* when constructing `AminoSignDoc`, the [protoreflect](https://pkg.go.dev/google.golang.org/protobuf/reflect/protoreflect) API should be used to ensure that no fields in `TxBody` or `AuthInfo` that lack a mapping in `AminoSignDoc` have been set
+* fields present in `TxBody` or `AuthInfo` that are not present in `AminoSignDoc` (such as extension options) should
+be added to `AminoSignDoc` if possible
+
+## Testing
+
+To test that transactions are resistant to malleability,
+we can develop a test suite to run against all sign modes that
+attempts to manipulate transaction bytes in the following ways:
+
+* changing protobuf encoding by
+ * reordering fields
+ * setting default values
+ * adding extra bits to varints, or
+ * setting new unknown fields
+* modifying integer and decimal values encoded as strings with leading or trailing zeros
+
+Whenever any of these manipulations is done, we should observe that the sign doc bytes for the sign mode being
+tested also change, meaning that the corresponding signatures will also have to change.
+
+In the case of Amino JSON, we should also develop tests which ensure that if any `TxBody` or `AuthInfo`
+field not supported by Amino's `AminoSignDoc` is set that signing fails.
+
+In the general case of transaction decoding, we should have unit tests to ensure that
+
+* any `TxRaw` bytes which do not follow ADR 027 canonical encoding cause decoding to fail, and
+* any top-level transaction elements including `TxBody`, `AuthInfo`, public keys, and messages which
+have unknown fields set cause the transaction to be rejected
+(this ensures that ADR 020 unknown field filtering is properly applied)
+
+For each supported signature algorithm,
+there should also be unit tests to ensure that signatures must be encoded canonically
+or get rejected.
+
+## References
+
+* [ADR 027: Deterministic Protobuf Serialization](https://github.com/cosmos/cosmos-sdk/blob/main/docs/architecture/adr-027-deterministic-protobuf-serialization.md)
+* [ADR 020](https://github.com/cosmos/cosmos-sdk/blob/main/docs/architecture/adr-020-protobuf-transaction-encoding.md#unknown-field-filtering)
+* [`aminojson.proto`](../../x/tx/signing/aminojson/internal/aminojsonpb/aminojson.proto)
+* [`tx.proto`](../../proto/cosmos/tx/v1beta1/tx.proto)
+
diff --git a/copy-of-sdk-docs/build/architecture/adr-template.md b/copy-of-sdk-docs/build/architecture/adr-template.md
new file mode 100644
index 00000000..7a2c1549
--- /dev/null
+++ b/copy-of-sdk-docs/build/architecture/adr-template.md
@@ -0,0 +1,83 @@
+# ADR {ADR-NUMBER}: {TITLE}
+
+## Changelog
+
+* {date}: {changelog}
+
+## Status
+
+{DRAFT | PROPOSED} Not Implemented
+
+> Please have a look at the [PROCESS](./PROCESS.md#adr-status) page.
+> Use DRAFT if the ADR is in a draft stage (draft PR) or PROPOSED if it's in review.
+
+## Abstract
+
+> "If you can't explain it simply, you don't understand it well enough." Provide
+> a simplified and layman-accessible explanation of the ADR.
+> A short (~200 words) description of the issue being addressed.
+
+## Context
+
+> This section describes the forces at play, including technological, political,
+> social, and project local. These forces are probably in tension, and should be
+> called out as such. The language in this section is value-neutral. It is simply
+> describing facts. It should clearly explain the problem and motivation that the
+> proposal aims to resolve.
+> {context body}
+
+## Alternatives
+
+> This section describes alternative designs to the chosen design. This section
+> is important; if an ADR does not list any alternatives, it should be
+> considered that the ADR was not thought through.
+
+## Decision
+
+> This section describes our response to these forces. It is stated in full
+> sentences, with active voice. "We will ..."
+> {decision body}
+
+## Consequences
+
+> This section describes the resulting context, after applying the decision. All
+> consequences should be listed here, not just the "positive" ones. A particular
+> decision may have positive, negative, and neutral consequences, but all of them
+> affect the team and project in the future.
+
+### Backwards Compatibility
+
+> All ADRs that introduce backwards incompatibilities must include a section
+> describing these incompatibilities and their severity. The ADR must explain
+> how the author proposes to deal with these incompatibilities. ADR submissions
+> without a sufficient backwards compatibility treatise may be rejected outright.
+
+### Positive
+
+> {positive consequences}
+
+### Negative
+
+> {negative consequences}
+
+### Neutral
+
+> {neutral consequences}
+
+## Further Discussions
+
+> While an ADR is in the DRAFT or PROPOSED stage, this section should contain a
+> summary of issues to be solved in future iterations (usually referencing comments
+> from a pull-request discussion).
+>
+> Later, this section can optionally list ideas or improvements the author or
+> reviewers found during the analysis of this ADR.
+
+## Test Cases [optional]
+
+Test cases for an implementation are mandatory for ADRs that affect consensus.
+Other ADRs can choose to include links to test cases if applicable.
+
+## References
+
+* {reference link}
diff --git a/copy-of-sdk-docs/build/architecture/bankv2.png b/copy-of-sdk-docs/build/architecture/bankv2.png
new file mode 100644
index 00000000..4123dbf5
Binary files /dev/null and b/copy-of-sdk-docs/build/architecture/bankv2.png differ
diff --git a/copy-of-sdk-docs/build/build.md b/copy-of-sdk-docs/build/build.md
new file mode 100644
index 00000000..b3c64a6e
--- /dev/null
+++ b/copy-of-sdk-docs/build/build.md
@@ -0,0 +1,13 @@
+---
+sidebar_position: 0
+---
+
+# Build
+
+* [Building Apps](./building-apps/00-app-go.md) - The documentation in this section will guide you through the process of developing your dApp using the Cosmos SDK framework.
+* [Modules](../../../x/README.md) - Information about the various modules available in the Cosmos SDK: Auth, Authz, Bank, Circuit, Consensus, Distribution, Epochs, Evidence, Feegrant, Governance, Group, Mint, NFT, Protocolpool, Slashing, Staking, Upgrade, Genutil.
+* [Migrations](./migrations/01-intro.md) - See what has been updated in each release and the process of transitioning between versions.
+* [Packages](./packages/README.md) - Explore a curated collection of pre-built modules and functionalities, streamlining the development process.
+* [Tooling](./tooling/README.md) - A suite of utilities designed to enhance the development workflow, optimizing the efficiency of Cosmos SDK-based projects.
+* [ADRs](./architecture/README.md) - A structured repository of the key architectural decisions made during development, together with the rationale behind them.
+* [REST API](https://docs.cosmos.network/api) - A comprehensive reference for the application programming interfaces (APIs) provided by the SDK.
diff --git a/copy-of-sdk-docs/build/building-apps/00-app-go.md b/copy-of-sdk-docs/build/building-apps/00-app-go.md
new file mode 100644
index 00000000..5a0524f3
--- /dev/null
+++ b/copy-of-sdk-docs/build/building-apps/00-app-go.md
@@ -0,0 +1,14 @@
+---
+sidebar_position: 1
+---
+
+# Overview of `app.go`
+
+This section is intended to provide an overview of the `SimApp` `app.go` file and is still a work in progress.
+For now, please read the [tutorials](https://tutorials.cosmos.network) for a deep dive on how to build a chain.
+
+## Complete `app.go`
+
+```go reference
+https://github.com/cosmos/cosmos-sdk/blob/v0.50.0-alpha.0/simapp/app.go
+```
diff --git a/copy-of-sdk-docs/build/building-apps/00-runtime.md b/copy-of-sdk-docs/build/building-apps/00-runtime.md
new file mode 100644
index 00000000..44a25a67
--- /dev/null
+++ b/copy-of-sdk-docs/build/building-apps/00-runtime.md
@@ -0,0 +1,152 @@
+---
+sidebar_position: 1
+---
+
+# What is `runtime`?
+
+The `runtime` package in the Cosmos SDK provides a flexible framework for configuring and managing blockchain applications. It serves as the foundation for creating modular blockchain applications using a declarative configuration approach.
+
+## Overview
+
+The runtime package acts as a wrapper around the `BaseApp` and `ModuleManager`, offering a hybrid approach where applications can be configured both declaratively through configuration files and programmatically through traditional methods.
+It is a layer of abstraction between `baseapp` and the application modules that simplifies the process of building a Cosmos SDK application.
+
+## Core Components
+
+### App Structure
+
+The runtime App struct contains several key components:
+
+```go
+type App struct {
+ *baseapp.BaseApp
+ ModuleManager *module.Manager
+ configurator module.Configurator
+ config *runtimev1alpha1.Module
+ storeKeys []storetypes.StoreKey
+ // ... other fields
+}
+```
+
+Cosmos SDK applications should embed the `*runtime.App` struct to leverage the runtime module.
+
+```go reference
+https://github.com/cosmos/cosmos-sdk/blob/v0.53.0/simapp/app_di.go#L60-L61
+```
+
+### Configuration
+
+The runtime module is configured using App Wiring. The main configuration object is the [`Module` message](https://github.com/cosmos/cosmos-sdk/blob/v0.53.0-rc.2/proto/cosmos/app/runtime/v1alpha1/module.proto), which supports the following key settings:
+
+* `app_name`: The name of the application
+* `begin_blockers`: List of module names to call during BeginBlock
+* `end_blockers`: List of module names to call during EndBlock
+* `init_genesis`: Order of module initialization during genesis
+* `export_genesis`: Order for exporting module genesis data
+* `pre_blockers`: Modules to execute before block processing
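+In practice these settings are composed via `appconfig`. A rough sketch follows; the module names and ordering are illustrative, not a recommended wiring:
+
+```go
+import (
+	runtimev1alpha1 "cosmossdk.io/api/cosmos/app/runtime/v1alpha1"
+	appv1alpha1 "cosmossdk.io/api/cosmos/app/v1alpha1"
+	"cosmossdk.io/core/appconfig"
+)
+
+// appConfig declares the runtime module and its ordering hooks.
+var appConfig = appconfig.Compose(&appv1alpha1.Config{
+	Modules: []*appv1alpha1.ModuleConfig{
+		{
+			Name: "runtime",
+			Config: appconfig.WrapAny(&runtimev1alpha1.Module{
+				AppName:       "SimApp",
+				PreBlockers:   []string{"upgrade"},
+				BeginBlockers: []string{"mint", "staking"},
+				EndBlockers:   []string{"gov", "staking"},
+				InitGenesis:   []string{"auth", "bank", "staking", "mint", "gov", "upgrade"},
+			}),
+		},
+		// ... per-module configs follow
+	},
+})
+```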
+
+Learn more about wiring `runtime` in the [next section](./01-app-go-di.md).
+
+#### Store Configuration
+
+By default, the runtime module uses the module name as the store key.
+However, it provides flexible store key configuration through:
+
+* `override_store_keys`: Allows customizing module store keys
+* `skip_store_keys`: Specifies store keys to skip during keeper construction
+
+Example configuration:
+
+```go reference
+https://github.com/cosmos/cosmos-sdk/blob/v0.53.0/simapp/app_config.go#L133-L138
+```
+
+## Key Features
+
+### 1. BaseApp and other Core SDK components integration
+
+The runtime module integrates with the `BaseApp` and other core SDK components to provide a seamless experience for developers.
+
+The developer only needs to embed the `runtime.App` struct in their application to leverage the runtime module.
+The configuration of the module manager and other core components is handled internally via the [`AppBuilder`](#4-application-building).
+
+### 2. Module Registration
+
+Runtime has built-in support for [`depinject`-enabled modules](../building-modules/15-depinject.md).
+Such modules can be registered through the configuration file (often named `app_config.go`), with no additional code required.
+
+```go reference
+https://github.com/cosmos/cosmos-sdk/blob/v0.53.0/simapp/app_config.go#L210-L216
+```
+
+Additionally, the runtime package facilitates manual module registration through the `RegisterModules` method. This is the primary integration point for modules not registered via configuration.
+
+:::warning
+Even when using manual registration, the module should still be configured in the `Module` message in AppConfig.
+:::
+
+```go
+func (a *App) RegisterModules(modules ...module.AppModule) error
+```
+
+The SDK recommends using the declarative approach with `depinject` for module registration whenever possible.
+
+### 3. Service Registration
+
+Runtime registers all [core services](https://pkg.go.dev/cosmossdk.io/core) required by modules.
+These services include `store`, `event manager`, `context`, and `logger`.
+Runtime ensures that services are scoped to their respective modules during the wiring process.
+
+```go reference
+https://github.com/cosmos/cosmos-sdk/blob/v0.53.0/runtime/module.go#L201-L235
+```
+
+Additionally, runtime automatically registers other essential services (e.g., gRPC routes) available to the App:
+
+* AutoCLI Query Service
+* Reflection Service
+* Custom module services
+
+```go reference
+https://github.com/cosmos/cosmos-sdk/blob/v0.53.0/runtime/builder.go#L52-L54
+```
+
+### 4. Application Building
+
+The `AppBuilder` type provides a structured way to build applications:
+
+```go reference
+https://github.com/cosmos/cosmos-sdk/blob/v0.53.0/runtime/builder.go#L14-L19
+```
+
+Key building steps:
+
+1. Configuration loading
+2. Module registration
+3. Service setup
+4. Store mounting
+5. Router configuration
+
+An application only needs to call `AppBuilder.Build` to create a fully configured application (`runtime.App`).
+
+```go reference
+https://github.com/cosmos/cosmos-sdk/blob/v0.53.0/runtime/builder.go#L26-L57
+```
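+
+In practice (as in `SimApp`'s constructor), the call site is a sketch like the following, where `appBuilder` is injected by `depinject`, and `db`, `traceStore`, and `baseAppOptions` come from the application's constructor arguments:
+
+```go
+// Build the fully configured runtime.App and load the latest state.
+app.App = appBuilder.Build(db, traceStore, baseAppOptions...)
+
+if err := app.Load(loadLatest); err != nil {
+    panic(err)
+}
+```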
+
+More information on building applications can be found in the [next section](./02-app-building.md).
+
+## Best Practices
+
+1. **Module Order**: Carefully consider the order of modules in `begin_blockers`, `end_blockers`, and `pre_blockers`.
+2. **Store Keys**: Use `override_store_keys` only when necessary to maintain clarity.
+3. **Genesis Order**: Maintain the correct initialization order in `init_genesis`.
+4. **Migration Management**: Use `order_migrations` to control upgrade paths.
+
+### Migration Considerations
+
+When upgrading between versions:
+
+1. Review the migration order specified in `order_migrations`
+2. Ensure all required modules are included in the configuration
+3. Validate store key configurations
+4. Test the upgrade path thoroughly
diff --git a/copy-of-sdk-docs/build/building-apps/01-app-go-di.md b/copy-of-sdk-docs/build/building-apps/01-app-go-di.md
new file mode 100644
index 00000000..00ab7883
--- /dev/null
+++ b/copy-of-sdk-docs/build/building-apps/01-app-go-di.md
@@ -0,0 +1,164 @@
+---
+sidebar_position: 1
+---
+
+# Overview of `app_di.go`
+
+:::note Synopsis
+
+The Cosmos SDK makes wiring of an `app.go` much easier thanks to [runtime](./00-runtime.md) and app wiring.
+Learn more about the rationale of App Wiring in [ADR-057](../../../architecture/adr-057-app-wiring.md).
+
+:::
+
+:::note Pre-requisite Readings
+
+* [What is `runtime`?](./00-runtime.md)
+* [Depinject documentation](../packages/01-depinject.md)
+* [Modules depinject-ready](../building-modules/15-depinject.md)
+* [ADR 057: App Wiring](../../../architecture/adr-057-app-wiring.md)
+
+:::
+
+This section is intended to provide an overview of the `SimApp` `app_di.go` file with App Wiring.
+
+## `app_config.go`
+
+The `app_config.go` file is the single place to configure all module parameters.
+
+1. Create the `AppConfig` variable:
+
+ ```go reference
+ https://github.com/cosmos/cosmos-sdk/blob/v0.53.0-rc.2/simapp/app_config.go#L289-L303
+ ```
+
+ Where the `appConfig` combines the [runtime](./00-runtime.md) configuration and the (extra) modules configuration.
+
+ ```go reference
+ https://github.com/cosmos/cosmos-sdk/blob/v0.53.0-rc.2/simapp/app_di.go#L113-L161
+ ```
+
+2. Configure the `runtime` module:
+
+   In this configuration, the order in which the modules are defined in `PreBlockers`, `BeginBlockers`, and `EndBlockers` is important.
+   They are listed in the order in which the module manager should execute them.
+
+ ```go reference
+ https://github.com/cosmos/cosmos-sdk/blob/v0.53.0-rc.2/simapp/app_config.go#L103-L188
+ ```
+
+3. Wire the other modules:
+
+ Next to runtime, the other (depinject-enabled) modules are wired in the `AppConfig`:
+
+ ```go reference
+ https://github.com/cosmos/cosmos-sdk/blob/v0.53.0-rc.2/simapp/app_config.go#L103-L286
+ ```
+
+   Note: `tx` isn't a module, but a configuration. It must be wired in the `AppConfig` as well.
+
+ ```go reference
+ https://github.com/cosmos/cosmos-sdk/blob/v0.53.0-rc.2/simapp/app_config.go#L222-L227
+ ```
+
+See the complete `app_config.go` file for `SimApp` [here](https://github.com/cosmos/cosmos-sdk/blob/v0.53.0-rc.2/simapp/app_config.go).
+
+### Alternative formats
+
+:::tip
+The example above shows how to create an `AppConfig` using Go. However, it is also possible to create an `AppConfig` using YAML, or JSON.
+The configuration can then be embedded with `go:embed` and read with [`appconfig.LoadYAML`](https://pkg.go.dev/cosmossdk.io/core/appconfig#LoadYAML), or [`appconfig.LoadJSON`](https://pkg.go.dev/cosmossdk.io/core/appconfig#LoadJSON), in `app_di.go`.
+
+```go
+//go:embed app_config.yaml
+var (
+ appConfigYaml []byte
+ appConfig = appconfig.LoadYAML(appConfigYaml)
+)
+```
+
+:::
+
+```yaml
+modules:
+ - name: runtime
+ config:
+ "@type": cosmos.app.runtime.v1alpha1.Module
+ app_name: SimApp
+ begin_blockers: [staking, auth, bank]
+ end_blockers: [bank, auth, staking]
+ init_genesis: [bank, auth, staking]
+ - name: auth
+ config:
+ "@type": cosmos.auth.module.v1.Module
+ bech32_prefix: cosmos
+ - name: bank
+ config:
+ "@type": cosmos.bank.module.v1.Module
+ - name: staking
+ config:
+ "@type": cosmos.staking.module.v1.Module
+ - name: tx
+ config:
+ "@type": cosmos.tx.module.v1.Module
+```
+
+A more complete example of `app.yaml` can be found [here](https://github.com/cosmos/cosmos-sdk/blob/release/v0.53.x/simapp/example_app.yaml).
+
+## `app_di.go`
+
+`app_di.go` is the place where `SimApp` is constructed. `depinject.Inject` automatically wires the app modules and keepers when provided with an application configuration (`AppConfig`). `SimApp` is constructed upon calling the injected `*runtime.AppBuilder` with `appBuilder.Build(...)`.
+In short, `depinject` and the [`runtime` package](./00-runtime.md) abstract the wiring of the app, and the `AppBuilder` is where the app is constructed. [`runtime`](./00-runtime.md) takes care of registering the codecs, KV stores, and subspaces, and of instantiating `baseapp`.
+
+```go reference
+https://github.com/cosmos/cosmos-sdk/blob/v0.53.0-rc.2/simapp/app_di.go#L100-L270
+```
+
+:::warning
+When using `depinject.Inject`, the injected types must be pointers.
+:::
+
+### Advanced Configuration
+
+In advanced cases, it is possible to inject extra (module) configuration in a way that is not (yet) supported by `AppConfig`.
+In this case, use `depinject.Configs` for combining the extra configuration, and `AppConfig` and `depinject.Supply` for providing the extra configuration.
+More information on how `depinject.Configs` and `depinject.Supply` function can be found in the [`depinject` documentation](https://pkg.go.dev/cosmossdk.io/depinject).
+
+```go reference
+https://github.com/cosmos/cosmos-sdk/blob/v0.53.0-rc.2/simapp/app_di.go#L114-L162
+```
+
+### Registering non app wiring modules
+
+It is possible to combine app wiring / depinject enabled modules with non-app wiring modules.
+To do so, use the `app.RegisterModules` method to register the modules on your app, as well as `app.RegisterStores` for registering the extra stores needed.
+
+```go
+// ....
+app.App = appBuilder.Build(db, traceStore, baseAppOptions...)
+
+// register module manually
+app.RegisterStores(storetypes.NewKVStoreKey(example.ModuleName))
+app.ExampleKeeper = examplekeeper.NewKeeper(app.appCodec, app.AccountKeeper.AddressCodec(), runtime.NewKVStoreService(app.GetKey(example.ModuleName)), authtypes.NewModuleAddress(govtypes.ModuleName).String())
+exampleAppModule := examplemodule.NewAppModule(app.ExampleKeeper)
+if err := app.RegisterModules(&exampleAppModule); err != nil {
+ panic(err)
+}
+
+// ....
+```
+
+:::warning
+When combining app wiring modules with non-app wiring modules while using AutoCLI, the AutoCLI options should be constructed manually instead of injected.
+Otherwise, the injected options will miss the non-depinject modules and fail to register their CLI commands.
+:::
+
+### Complete `app_di.go`
+
+:::tip
+Note that in the complete `SimApp` `app_di.go` file, testing utilities are also defined; they could just as well be defined in a separate file.
+:::
+
+```go reference
+https://github.com/cosmos/cosmos-sdk/blob/v0.53.0-rc.2/simapp/app_di.go
+```
diff --git a/copy-of-sdk-docs/build/building-apps/02-app-mempool.md b/copy-of-sdk-docs/build/building-apps/02-app-mempool.md
new file mode 100644
index 00000000..dbe6783c
--- /dev/null
+++ b/copy-of-sdk-docs/build/building-apps/02-app-mempool.md
@@ -0,0 +1,94 @@
+---
+sidebar_position: 1
+---
+
+# Application Mempool
+
+:::note Synopsis
+This section describes how the app-side mempool can be used and replaced.
+:::
+
+Since `v0.47`, the application has had its own mempool, allowing for much more granular
+block building than in previous versions. This change was enabled by
+[ABCI 1.0](https://github.com/cometbft/cometbft/blob/v0.37.0/spec/abci).
+Notably, it introduces the `PrepareProposal` and `ProcessProposal` steps of ABCI++.
+
+:::note Pre-requisite Readings
+
+* [BaseApp](../../learn/advanced/00-baseapp.md)
+* [ABCI](../abci/00-introduction.md)
+
+:::
+
+## Mempool
+
+There are countless mempool designs that an application developer could write; the SDK opted to provide only simple mempool implementations.
+Namely, the SDK provides the following mempools:
+
+* [No-op Mempool](#no-op-mempool)
+* [Sender Nonce Mempool](#sender-nonce-mempool)
+* [Priority Nonce Mempool](#priority-nonce-mempool)
+
+By default, the SDK uses the [No-op Mempool](#no-op-mempool), but it can be replaced by the application developer in [`app.go`](./01-app-go-di.md):
+
+```go
+nonceMempool := mempool.NewSenderNonceMempool()
+mempoolOpt := baseapp.SetMempool(nonceMempool)
+baseAppOptions = append(baseAppOptions, mempoolOpt)
+```
+
+### No-op Mempool
+
+A no-op mempool is a mempool where transactions are completely discarded and ignored when BaseApp interacts with the mempool.
+When this mempool is used, it is assumed that an application will rely on CometBFT's transaction ordering defined in `RequestPrepareProposal`,
+which is FIFO-ordered by default.
+
+> Note: If a no-op mempool is used, both `PrepareProposal` and `ProcessProposal` should account for this, as
+> `PrepareProposal` could include transactions that fail verification in `ProcessProposal`.
+
+### Sender Nonce Mempool
+
+The sender nonce mempool keeps transactions from each sender sorted by nonce in order to avoid nonce-ordering issues.
+It works by storing transactions in a list sorted by nonce. When the proposer asks for transactions to include in a block, the mempool randomly selects a sender and picks the first transaction in its list. It repeats this until the mempool is empty or the block is full.
+
+It is configurable with the following parameters:
+
+#### MaxTxs
+
+An integer value that sets the mempool to one of three modes: *bounded*, *unbounded*, or *disabled*.
+
+* **negative**: Disabled; the mempool does not insert new transactions and returns early.
+* **zero**: Unbounded; the mempool has no transaction limit and will never fail with `ErrMempoolTxMaxCapacity`.
+* **positive**: Bounded; insertion fails with `ErrMempoolTxMaxCapacity` once `CountTx()` reaches `maxTx`.
+
+#### Seed
+
+Sets the seed for the random number generator used to select transactions from the mempool.
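+
+Combining both parameters, a sketch of constructing a bounded sender-nonce mempool and wiring it into `BaseApp` could look like this (the option names `SenderNonceMaxTxOpt` and `SenderNonceSeedOpt` should be verified against the SDK version in use):
+
+```go
+// Sketch: a sender-nonce mempool bounded to 5000 transactions,
+// with a fixed seed for reproducible sender selection.
+nonceMempool := mempool.NewSenderNonceMempool(
+    mempool.SenderNonceMaxTxOpt(5000),
+    mempool.SenderNonceSeedOpt(42),
+)
+baseAppOptions = append(baseAppOptions, baseapp.SetMempool(nonceMempool))
+```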
+
+### Priority Nonce Mempool
+
+The [priority nonce mempool](https://github.com/cosmos/cosmos-sdk/blob/main/types/mempool/priority_nonce_spec.md) is a mempool implementation that stores txs in a partially ordered set defined by two dimensions:
+
+* priority
+* sender-nonce (sequence number)
+
+Internally it uses one priority ordered [skip list](https://pkg.go.dev/github.com/huandu/skiplist) and one skip list per sender ordered by sender-nonce (sequence number). When there are multiple txs from the same sender, they are not always comparable by priority to other sender txs and must be partially ordered by both sender-nonce and priority.
+
+It is configurable with the following parameters:
+
+#### MaxTxs
+
+An integer value that sets the mempool to one of three modes: *bounded*, *unbounded*, or *disabled*.
+
+* **negative**: Disabled; the mempool does not insert new transactions and returns early.
+* **zero**: Unbounded; the mempool has no transaction limit and will never fail with `ErrMempoolTxMaxCapacity`.
+* **positive**: Bounded; insertion fails with `ErrMempoolTxMaxCapacity` once `CountTx()` reaches `maxTx`.
+
+#### Callback
+
+The priority nonce mempool provides mempool options allowing the application to set callback(s).
+
+* **OnRead**: Sets a callback to be called when a transaction is read from the mempool.
+* **TxReplacement**: Sets a callback to be called when a duplicate transaction nonce is detected during mempool insertion. The application can define a transaction replacement rule based on the tx priority or certain transaction fields.
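+
+As a sketch (the `PriorityNonceMempoolConfig` field names below follow recent SDK versions and should be verified against yours), the callbacks can be configured like so:
+
+```go
+// Sketch: configuring the priority nonce mempool with callbacks.
+cfg := mempool.DefaultPriorityNonceMempoolConfig()
+cfg.MaxTx = 1000
+cfg.OnRead = func(tx sdk.Tx) {
+    // called whenever a transaction is read from the mempool
+}
+cfg.TxReplacement = func(op, np int64, oTx, nTx sdk.Tx) bool {
+    // keep the new transaction only if it has strictly higher priority
+    return np > op
+}
+priorityMempool := mempool.NewPriorityMempool(cfg)
+baseAppOptions = append(baseAppOptions, baseapp.SetMempool(priorityMempool))
+```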
+
+More information on the SDK mempool implementation can be found in the [godocs](https://pkg.go.dev/github.com/cosmos/cosmos-sdk/types/mempool).
diff --git a/copy-of-sdk-docs/build/building-apps/03-app-upgrade.md b/copy-of-sdk-docs/build/building-apps/03-app-upgrade.md
new file mode 100644
index 00000000..541530a1
--- /dev/null
+++ b/copy-of-sdk-docs/build/building-apps/03-app-upgrade.md
@@ -0,0 +1,218 @@
+---
+sidebar_position: 1
+---
+
+# Application Upgrade
+
+:::note
+This document describes how to upgrade your application. If you are looking specifically for the changes to perform between SDK versions, see the [SDK migrations documentation](https://docs.cosmos.network/main/migrations/intro).
+:::
+
+:::warning
+This section is currently incomplete. Track the progress of this document [here](https://github.com/cosmos/cosmos-sdk/issues/11504).
+:::
+
+:::note Pre-requisite Readings
+
+* [`x/upgrade` Documentation](https://docs.cosmos.network/main/modules/upgrade)
+
+:::
+
+## General Workflow
+
+Let's assume we are running v0.38.0 of our software in our testnet and want to upgrade to v0.40.0.
+How would this look in practice? First, we want to finalize the v0.40.0 release candidate
+and then install a specially named upgrade handler (e.g. "testnet-v2" or even "v0.40.0"). An upgrade
+handler should be defined in a new version of the software to define what migrations
+to run to migrate from the older version of the software. Naturally, this is app-specific rather
+than module-specific, and must be defined in `app.go`, even if it imports logic from various
+modules to perform the actions. You can register them with `upgradeKeeper.SetUpgradeHandler`
+during the app initialization (before starting the abci server), and they serve not only to
+perform a migration, but also to identify if this is the old or new version (e.g. presence of
+a handler registered for the named upgrade).
+
+Once the release candidate along with an appropriate upgrade handler is frozen,
+we can have a governance vote to approve this upgrade at some future block height (e.g. 200000).
+This is known as an `upgrade.Plan`. The v0.38.0 code will not know of this handler, but will
+continue to run until block 200000, when the plan kicks in at `BeginBlock`. It will check
+for the existence of the handler, and finding it missing, know that it is running the obsolete software,
+and gracefully exit.
+
+Generally the application binary will restart on exit, but then it will execute this BeginBlocker
+again and exit, causing a restart loop. Either the operator can manually install the new software,
+or you can make use of an external watcher daemon to download and switch binaries, possibly
+taking a backup as well. The SDK tool for doing this is called [Cosmovisor](https://docs.cosmos.network/main/tooling/cosmovisor).
+
+When the binary restarts with the upgraded version (here v0.40.0), it will detect we have registered the
+"testnet-v2" upgrade handler in the code, and realize it is the new version. It then will run the upgrade handler
+and *migrate the database in-place*. Once finished, it marks the upgrade as done, and continues processing
+the rest of the block as normal. Once 2/3 of the voting power has upgraded, the blockchain will immediately
+resume the consensus mechanism. If the majority of operators add a custom `do-upgrade` script, this should
+be a matter of minutes and not even require them to be awake at that time.
+
+## Integrating With An App
+
+:::tip
+The following is not required for users using `depinject`; it is abstracted away for them.
+:::
+
+In addition to basic module wiring, set up the upgrade Keeper for the app and then define a `PreBlocker` that calls the upgrade
+keeper's PreBlocker method:
+
+```go
+func (app *myApp) PreBlocker(ctx sdk.Context, req *abci.RequestFinalizeBlock) (*sdk.ResponsePreBlock, error) {
+ // For demonstration sake, the app PreBlocker only returns the upgrade module pre-blocker.
+ // In a real app, the module manager should call all pre-blockers
+ // return app.ModuleManager.PreBlock(ctx, req)
+ return app.upgradeKeeper.PreBlocker(ctx, req)
+}
+```
+
+The app must then integrate the upgrade keeper with its governance module as appropriate. The governance module
+should call `ScheduleUpgrade` to schedule an upgrade and `ClearUpgradePlan` to cancel a pending upgrade.
+
+## Performing Upgrades
+
+Upgrades can be scheduled at a predefined block height. Once this block height is reached, the
+existing software will cease to process ABCI messages and a new version with code that handles the upgrade must be deployed.
+All upgrades are coordinated by a unique upgrade name that cannot be reused on the same blockchain. In order for the upgrade
+module to know that the upgrade has been safely applied, a handler with the name of the upgrade must be installed.
+Here is an example handler for an upgrade named "my-fancy-upgrade":
+
+```go
+app.upgradeKeeper.SetUpgradeHandler("my-fancy-upgrade", func(ctx context.Context, plan upgrade.Plan) {
+ // Perform any migrations of the state store needed for this upgrade
+})
+```
+
+This upgrade handler performs the dual function of alerting the upgrade module that the named upgrade has been applied,
+as well as providing the opportunity for the upgraded software to perform any necessary state migrations. Both the halt
+(with the old binary) and applying the migration (with the new binary) are enforced in the state machine. Actually
+switching the binaries is an ops task and is not handled inside the SDK / ABCI app.
+
+Here is a sample code to set store migrations with an upgrade:
+
+```go
+// this configures a no-op upgrade handler for the "my-fancy-upgrade" upgrade
+app.UpgradeKeeper.SetUpgradeHandler("my-fancy-upgrade", func(ctx context.Context, plan upgrade.Plan) {
+ // upgrade changes here
+})
+upgradeInfo, err := app.UpgradeKeeper.ReadUpgradeInfoFromDisk()
+if err != nil {
+ // handle error
+}
+if upgradeInfo.Name == "my-fancy-upgrade" && !app.UpgradeKeeper.IsSkipHeight(upgradeInfo.Height) {
+ storeUpgrades := store.StoreUpgrades{
+ Renamed: []store.StoreRename{{
+ OldKey: "foo",
+ NewKey: "bar",
+ }},
+ Deleted: []string{},
+ }
+ // configure store loader that checks if version == upgradeHeight and applies store upgrades
+ app.SetStoreLoader(upgrade.UpgradeStoreLoader(upgradeInfo.Height, &storeUpgrades))
+}
+```
+
+## Halt Behavior
+
+Before halting the ABCI state machine in the BeginBlocker method, the upgrade module will log an error
+that looks like:
+
+```text
+UPGRADE "<Name>" NEEDED at height: <Height>: <Info>
+```
+
+where `Name` and `Info` are the values of the respective fields on the upgrade Plan.
+
+To perform the actual halt of the blockchain, the upgrade keeper simply panics which prevents the ABCI state machine
+from proceeding but doesn't actually exit the process. Exiting the process can cause issues for other nodes that start
+to lose connectivity with the exiting nodes, thus this module prefers to just halt but not exit.
+
+## Automation
+
+Read more about [Cosmovisor](https://docs.cosmos.network/main/tooling/cosmovisor), the tool for automating upgrades.
+
+## Canceling Upgrades
+
+There are two ways to cancel a planned upgrade - with on-chain governance or off-chain social consensus.
+For the first one, there is a `CancelSoftwareUpgrade` governance proposal, which can be voted on and will
+remove the scheduled upgrade plan. Of course this requires that the upgrade was known to be a bad idea
+well before the upgrade itself, to allow time for a vote. If you want to allow such a possibility, you
+should set the upgrade height to be `2 * (votingperiod + depositperiod) + (safety delta)` from the beginning of
+the first upgrade proposal. Safety delta is the time available from the success of an upgrade proposal
+and the realization it was a bad idea (due to external testing). You can also start a `CancelSoftwareUpgrade`
+proposal while the original `SoftwareUpgrade` proposal is still being voted upon, as long as the voting
+period ends after the `SoftwareUpgrade` proposal.
+
+However, let's assume that we don't realize the upgrade has a bug until shortly before it will occur
+(or while we try it out - hitting some panic in the migration). It would seem the blockchain is stuck,
+but we need to allow an escape for social consensus to overrule the planned upgrade. To do so, there's
+a `--unsafe-skip-upgrades` flag to the start command, which will cause the node to mark the upgrade
+as done upon hitting the planned upgrade height(s), without halting and without actually performing a migration.
+If over two-thirds of the voting power runs their nodes with this flag on the old binary, the chain will continue through
+the upgrade with a manual override. (This must be well documented for anyone syncing from genesis later on.)
+
+Example:
+
+```shell
+ start --unsafe-skip-upgrades ...
+```
+
+## Pre-Upgrade Handling
+
+Cosmovisor supports custom pre-upgrade handling. Use pre-upgrade handling when you need to implement application config changes that are required in the newer version before you perform the upgrade.
+
+Using Cosmovisor pre-upgrade handling is optional. If pre-upgrade handling is not implemented, the upgrade continues.
+
+For example, make the required new-version changes to `app.toml` settings during the pre-upgrade handling. The pre-upgrade handling process means that the file does not have to be manually updated after the upgrade.
+
+Before the application binary is upgraded, Cosmovisor calls a `pre-upgrade` command that can be implemented by the application.
+
+The `pre-upgrade` command does not take in any command-line arguments and is expected to terminate with the following exit codes:
+
+| Exit status code | How it is handled in Cosmovisor                                                                                       |
+|------------------|---------------------------------------------------------------------------------------------------------------------|
+| `0` | Assumes `pre-upgrade` command executed successfully and continues the upgrade. |
+| `1` | Default exit code when `pre-upgrade` command has not been implemented. |
+| `30` | `pre-upgrade` command was executed but failed. This fails the entire upgrade. |
+| `31`             | `pre-upgrade` command was executed but failed. The command is retried until exit code `1` or `30` is returned.        |
+
+## Sample
+
+Here is a sample structure of the `pre-upgrade` command:
+
+```go
+func preUpgradeCommand() *cobra.Command {
+ cmd := &cobra.Command{
+ Use: "pre-upgrade",
+ Short: "Pre-upgrade command",
+ Long: "Pre-upgrade command to implement custom pre-upgrade handling",
+ Run: func(cmd *cobra.Command, args []string) {
+
+ err := HandlePreUpgrade()
+
+ if err != nil {
+ os.Exit(30)
+ }
+
+ os.Exit(0)
+
+ },
+ }
+
+ return cmd
+}
+```
+
+Ensure that the pre-upgrade command has been registered in the application:
+
+```go
+rootCmd.AddCommand(
+ // ..
+ preUpgradeCommand(),
+ // ..
+ )
+```
+
+When not using Cosmovisor, ensure you run the `pre-upgrade` command manually before starting the application binary.
diff --git a/copy-of-sdk-docs/build/building-apps/04-vote-extensions.md b/copy-of-sdk-docs/build/building-apps/04-vote-extensions.md
new file mode 100644
index 00000000..f20ebde6
--- /dev/null
+++ b/copy-of-sdk-docs/build/building-apps/04-vote-extensions.md
@@ -0,0 +1,121 @@
+---
+sidebar_position: 1
+---
+
+# Vote Extensions
+
+:::note Synopsis
+This section describes how the application can define and use vote extensions
+defined in ABCI++.
+:::
+
+## Extend Vote
+
+ABCI++ allows an application to extend a pre-commit vote with arbitrary data. This
+process does NOT have to be deterministic, and the data returned can be unique to the
+validator process. The Cosmos SDK defines `sdk.ExtendVoteHandler`:
+
+```go
+type ExtendVoteHandler func(Context, *abci.ExtendVoteRequest) (*abci.ExtendVoteResponse, error)
+```
+
+An application can set this handler in `app.go` via the `baseapp.SetExtendVoteHandler`
+`BaseApp` option function. The `sdk.ExtendVoteHandler`, if defined, is called during
+the `ExtendVote` ABCI method. Note, if an application decides to implement
+`sdk.ExtendVoteHandler`, it MUST return a non-nil `VoteExtension`. However, the vote
+extension can be empty. See [here](https://github.com/cometbft/cometbft/blob/v0.38.0-rc1/spec/abci/abci++_methods.md#extendvote)
+for more details.
+
+There are many decentralized censorship-resistant use cases for vote extensions.
+For example, a validator may want to submit prices for a price oracle or encryption
+shares for an encrypted transaction mempool. Note, an application should be careful
+to consider the size of the vote extensions as they could increase latency in block
+production. See [here](https://github.com/cometbft/cometbft/blob/v0.38.0-rc1/docs/qa/CometBFT-QA-38.md#vote-extensions-testbed)
+for more details.
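+
+A price-oracle handler could be sketched as follows, where `fetchPrices` and `encodePrices` are hypothetical application helpers:
+
+```go
+app.SetExtendVoteHandler(func(ctx sdk.Context, req *abci.ExtendVoteRequest) (*abci.ExtendVoteResponse, error) {
+    // fetchPrices and encodePrices are hypothetical application helpers
+    prices, err := fetchPrices(ctx)
+    if err != nil {
+        // an empty (but non-nil) vote extension is a valid response
+        return &abci.ExtendVoteResponse{VoteExtension: []byte{}}, nil
+    }
+
+    bz, err := encodePrices(prices)
+    if err != nil {
+        return nil, err
+    }
+
+    return &abci.ExtendVoteResponse{VoteExtension: bz}, nil
+})
+```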
+
+## Verify Vote Extension
+
+Similar to extending a vote, an application can also verify vote extensions from
+other validators when validating their pre-commits. For a given vote extension,
+this process MUST be deterministic. The Cosmos SDK defines `sdk.VerifyVoteExtensionHandler`:
+
+```go reference
+https://github.com/cosmos/cosmos-sdk/blob/v0.50.0-alpha.0/types/abci.go#L26-L27
+```
+
+An application can set this handler in `app.go` via the `baseapp.SetVerifyVoteExtensionHandler`
+`BaseApp` option function. The `sdk.VerifyVoteExtensionHandler`, if defined, is called
+during the `VerifyVoteExtension` ABCI method. If an application defines a vote
+extension handler, it should also define a verification handler. Note, not all
+validators will share the same view of what vote extensions they verify depending
+on how votes are propagated. See [here](https://github.com/cometbft/cometbft/blob/v0.38.0-rc1/spec/abci/abci++_methods.md#verifyvoteextension)
+for more details.
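+
+A matching verification handler could be sketched as follows; `decodePrices` is a hypothetical helper mirroring the encoding used in `ExtendVote`, and the request, response, and status names should be checked against the ABCI version in use:
+
+```go
+app.SetVerifyVoteExtensionHandler(func(ctx sdk.Context, req *abci.VerifyVoteExtensionRequest) (*abci.VerifyVoteExtensionResponse, error) {
+    // verification MUST be deterministic: reject anything that fails to decode
+    if _, err := decodePrices(req.VoteExtension); err != nil {
+        return &abci.VerifyVoteExtensionResponse{Status: abci.VERIFY_VOTE_EXTENSION_STATUS_REJECT}, nil
+    }
+    return &abci.VerifyVoteExtensionResponse{Status: abci.VERIFY_VOTE_EXTENSION_STATUS_ACCEPT}, nil
+})
+```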
+
+## Vote Extension Propagation
+
+The agreed-upon vote extensions at height `H` are provided to the proposing validator
+at height `H+1` during `PrepareProposal`. As a result, the vote extensions are
+not natively provided or exposed to the remaining validators during `ProcessProposal`.
+Therefore, if an application requires that the agreed-upon vote extensions from
+height `H` be available to all validators at `H+1`, it must propagate
+these vote extensions manually in the block proposal itself. This can be done by
+"injecting" them into the block proposal, since the `Txs` field in `PrepareProposal`
+is just a slice of byte slices.
+
+`FinalizeBlock` will ignore any byte slice that doesn't implement an `sdk.Tx`, so
+any injected vote extensions will safely be ignored in `FinalizeBlock`. For more
+details on propagation, see the [ABCI++ 2.0 ADR](https://github.com/cosmos/cosmos-sdk/blob/main/docs/architecture/adr-064-abci-2.0.md#vote-extension-propagation--verification).
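+
+Injection could be sketched as follows in `PrepareProposal`, where `encodeVoteExtensions` is a hypothetical helper that serializes the extended commit info:
+
+```go
+app.SetPrepareProposal(func(ctx sdk.Context, req *abci.PrepareProposalRequest) (*abci.PrepareProposalResponse, error) {
+    // req.LocalLastCommit carries the agreed-upon vote extensions from height H
+    veBz, err := encodeVoteExtensions(req.LocalLastCommit)
+    if err != nil {
+        return nil, err
+    }
+
+    // prepend the injected bytes; FinalizeBlock skips any entry in Txs
+    // that does not decode to an sdk.Tx
+    txs := append([][]byte{veBz}, req.Txs...)
+    return &abci.PrepareProposalResponse{Txs: txs}, nil
+})
+```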
+
+### Recovery of injected Vote Extensions
+
+As stated before, vote extensions can be injected into a block proposal (along with
+other transactions in the `Txs` field). The Cosmos SDK provides a pre-FinalizeBlock
+hook to allow applications to recover vote extensions, perform any necessary
+computation on them, and then store the results in the cached store. These results
+will be available to the application during the subsequent `FinalizeBlock` call.
+
+An example of how a pre-FinalizeBlock hook could look is shown below:
+
+```go
+app.SetPreBlocker(func(ctx sdk.Context, req *abci.FinalizeBlockRequest) error {
+ allVEs := []VE{} // store all parsed vote extensions here
+ for _, tx := range req.Txs {
+ // define a custom function that tries to parse the tx as a vote extension
+ ve, ok := parseVoteExtension(tx)
+ if !ok {
+ continue
+ }
+
+ allVEs = append(allVEs, ve)
+ }
+
+ // perform any necessary computation on the vote extensions and store the result
+ // in the cached store
+ result := compute(allVEs)
+ err := storeVEResult(ctx, result)
+ if err != nil {
+ return err
+ }
+
+ return nil
+})
+```
+
+Then, in an app's module, the application can retrieve the result of the computation
+of vote extensions from the cached store:
+
+```go
+func (k Keeper) BeginBlocker(ctx context.Context) error {
+ // retrieve the result of the computation of vote extensions from the cached store
+ result, err := k.GetVEResult(ctx)
+ if err != nil {
+ return err
+ }
+
+ // use the result of the computation of vote extensions
+ k.setSomething(result)
+
+ return nil
+}
+```
diff --git a/copy-of-sdk-docs/build/building-apps/05-app-testnet.md b/copy-of-sdk-docs/build/building-apps/05-app-testnet.md
new file mode 100644
index 00000000..a9fe93d9
--- /dev/null
+++ b/copy-of-sdk-docs/build/building-apps/05-app-testnet.md
@@ -0,0 +1,235 @@
+---
+sidebar_position: 1
+---
+
+# Application Testnets
+
+Building an application is complicated and requires a lot of testing. The Cosmos SDK provides a way to test your application in a real-world environment: a testnet.
+
+We allow developers to take the state from their mainnet and run tests against it. This is useful for testing upgrade migrations, or for testing the application in a real-world environment.
+
+## Testnet Setup
+
+We will be breaking down the steps to create a testnet from mainnet state.
+
+```go
+ // InitSimAppForTestnet is broken down into two sections:
+ // Required Changes: Changes that, if not made, will cause the testnet to halt or panic
+ // Optional Changes: Changes to customize the testnet to one's liking (lower vote times, fund accounts, etc)
+ func InitSimAppForTestnet(app *SimApp, newValAddr bytes.HexBytes, newValPubKey crypto.PubKey, newOperatorAddress, upgradeToTrigger string) *SimApp {
+ ...
+ }
+```
+
+### Required Changes
+
+#### Staking
+
+When creating a testnet, the important part is migrating the validator set from many validators down to one or a few. This allows developers to spin up the chain without needing to replace validator keys.
+
+```go
+ ctx := app.BaseApp.NewUncachedContext(true, tmproto.Header{})
+ pubkey := &ed25519.PubKey{Key: newValPubKey.Bytes()}
+ pubkeyAny, err := types.NewAnyWithValue(pubkey)
+ if err != nil {
+ tmos.Exit(err.Error())
+ }
+
+ // STAKING
+ //
+
+ // Create Validator struct for our new validator.
+ _, bz, err := bech32.DecodeAndConvert(newOperatorAddress)
+ if err != nil {
+ tmos.Exit(err.Error())
+ }
+ bech32Addr, err := bech32.ConvertAndEncode("simvaloper", bz)
+ if err != nil {
+ tmos.Exit(err.Error())
+ }
+ newVal := stakingtypes.Validator{
+ OperatorAddress: bech32Addr,
+ ConsensusPubkey: pubkeyAny,
+ Jailed: false,
+ Status: stakingtypes.Bonded,
+ Tokens: sdk.NewInt(900000000000000),
+ DelegatorShares: sdk.MustNewDecFromStr("10000000"),
+ Description: stakingtypes.Description{
+ Moniker: "Testnet Validator",
+ },
+ Commission: stakingtypes.Commission{
+ CommissionRates: stakingtypes.CommissionRates{
+ Rate: sdk.MustNewDecFromStr("0.05"),
+ MaxRate: sdk.MustNewDecFromStr("0.1"),
+ MaxChangeRate: sdk.MustNewDecFromStr("0.05"),
+ },
+ },
+ MinSelfDelegation: sdk.OneInt(),
+ }
+
+ // Remove all validators from power store
+ stakingKey := app.GetKey(stakingtypes.ModuleName)
+ stakingStore := ctx.KVStore(stakingKey)
+ iterator := app.StakingKeeper.ValidatorsPowerStoreIterator(ctx)
+ for ; iterator.Valid(); iterator.Next() {
+ stakingStore.Delete(iterator.Key())
+ }
+ iterator.Close()
+
+ // Remove all validators from last validators store
+ iterator = app.StakingKeeper.LastValidatorsIterator(ctx)
+ for ; iterator.Valid(); iterator.Next() {
+ app.StakingKeeper.LastValidatorPower.Delete(iterator.Key())
+ }
+ iterator.Close()
+
+ // Add our validator to power and last validators store
+ app.StakingKeeper.SetValidator(ctx, newVal)
+ err = app.StakingKeeper.SetValidatorByConsAddr(ctx, newVal)
+ if err != nil {
+ panic(err)
+ }
+ app.StakingKeeper.SetValidatorByPowerIndex(ctx, newVal)
+ app.StakingKeeper.SetLastValidatorPower(ctx, newVal.GetOperator(), 0)
+ if err := app.StakingKeeper.Hooks().AfterValidatorCreated(ctx, newVal.GetOperator()); err != nil {
+ panic(err)
+ }
+```
+
+#### Distribution
+
+Since the validator set has changed, we need to update the distribution records for the new validator.
+
+
+```go
+ // Initialize records for this validator across all distribution stores
+ app.DistrKeeper.ValidatorHistoricalRewards.Set(ctx, newVal.GetOperator(), 0, distrtypes.NewValidatorHistoricalRewards(sdk.DecCoins{}, 1))
+ app.DistrKeeper.ValidatorCurrentRewards.Set(ctx, newVal.GetOperator(), distrtypes.NewValidatorCurrentRewards(sdk.DecCoins{}, 1))
+ app.DistrKeeper.ValidatorAccumulatedCommission.Set(ctx, newVal.GetOperator(), distrtypes.InitialValidatorAccumulatedCommission())
+ app.DistrKeeper.ValidatorOutstandingRewards.Set(ctx, newVal.GetOperator(), distrtypes.ValidatorOutstandingRewards{Rewards: sdk.DecCoins{}})
+```
+
+#### Slashing
+
+We also need to set the validator signing info for the new validator.
+
+```go
+ // SLASHING
+ //
+
+ // Set validator signing info for our new validator.
+ newConsAddr := sdk.ConsAddress(newValAddr.Bytes())
+ newValidatorSigningInfo := slashingtypes.ValidatorSigningInfo{
+ Address: newConsAddr.String(),
+ StartHeight: app.LastBlockHeight() - 1,
+ Tombstoned: false,
+ }
+ app.SlashingKeeper.ValidatorSigningInfo.Set(ctx, newConsAddr, newValidatorSigningInfo)
+```
+
+#### Bank
+
+It is useful to create new accounts for testing purposes. This avoids the need to use the same keys you may have on mainnet.
+
+```go
+ // BANK
+ //
+
+ defaultCoins := sdk.NewCoins(sdk.NewInt64Coin("ustake", 1000000000000))
+
+ localSimAppAccounts := []sdk.AccAddress{
+ sdk.MustAccAddressFromBech32("cosmos12smx2wdlyttvyzvzg54y2vnqwq2qjateuf7thj"),
+ sdk.MustAccAddressFromBech32("cosmos1cyyzpxplxdzkeea7kwsydadg87357qnahakaks"),
+ sdk.MustAccAddressFromBech32("cosmos18s5lynnmx37hq4wlrw9gdn68sg2uxp5rgk26vv"),
+ sdk.MustAccAddressFromBech32("cosmos1qwexv7c6sm95lwhzn9027vyu2ccneaqad4w8ka"),
+ sdk.MustAccAddressFromBech32("cosmos14hcxlnwlqtq75ttaxf674vk6mafspg8xwgnn53"),
+ sdk.MustAccAddressFromBech32("cosmos12rr534cer5c0vj53eq4y32lcwguyy7nndt0u2t"),
+ sdk.MustAccAddressFromBech32("cosmos1nt33cjd5auzh36syym6azgc8tve0jlvklnq7jq"),
+ sdk.MustAccAddressFromBech32("cosmos10qfrpash5g2vk3hppvu45x0g860czur8ff5yx0"),
+ sdk.MustAccAddressFromBech32("cosmos1f4tvsdukfwh6s9swrc24gkuz23tp8pd3e9r5fa"),
+ sdk.MustAccAddressFromBech32("cosmos1myv43sqgnj5sm4zl98ftl45af9cfzk7nhjxjqh"),
+ sdk.MustAccAddressFromBech32("cosmos14gs9zqh8m49yy9kscjqu9h72exyf295afg6kgk"),
+ sdk.MustAccAddressFromBech32("cosmos1jllfytsz4dryxhz5tl7u73v29exsf80vz52ucc")}
+
+ // Fund localSimApp accounts
+ for _, account := range localSimAppAccounts {
+ err := app.BankKeeper.MintCoins(ctx, minttypes.ModuleName, defaultCoins)
+ if err != nil {
+ tmos.Exit(err.Error())
+ }
+ err = app.BankKeeper.SendCoinsFromModuleToAccount(ctx, minttypes.ModuleName, account, defaultCoins)
+ if err != nil {
+ tmos.Exit(err.Error())
+ }
+ }
+```
+
+#### Upgrade
+
+If you would like to schedule an upgrade, the code below can be used.
+
+```go
+ // UPGRADE
+ //
+
+ if upgradeToTrigger != "" {
+ upgradePlan := upgradetypes.Plan{
+ Name: upgradeToTrigger,
+ Height: app.LastBlockHeight(),
+ }
+ err = app.UpgradeKeeper.ScheduleUpgrade(ctx, upgradePlan)
+ if err != nil {
+ panic(err)
+ }
+ }
+```
+
+### Optional Changes
+
+If you have custom modules that rely on specific state from the above modules, and/or you would like to test a custom module, you will need to update the state of your custom module to reflect your needs.
+
+## Running the Testnet
+
+Before we can run the testnet, we must plug everything together.
+
+In `root.go`, in the `initRootCmd` function, we add:
+
+```diff
+ server.AddCommands(rootCmd, simapp.DefaultNodeHome, newApp, createSimAppAndExport, addModuleInitFlags)
++ server.AddTestnetCreatorCommand(rootCmd, simapp.DefaultNodeHome, newTestnetApp, addModuleInitFlags)
+```
+
+Next, we will add a `newTestnetApp` helper function:
+
+```go
+// newTestnetApp starts by running the normal newApp method. From there, the app interface returned is modified in order
+// for a testnet to be created from the provided app.
+func newTestnetApp(logger log.Logger, db cometbftdb.DB, traceStore io.Writer, appOpts servertypes.AppOptions) servertypes.Application {
+ // Create an app and type cast it to a SimApp
+ app := newApp(logger, db, traceStore, appOpts)
+ simApp, ok := app.(*simapp.SimApp)
+ if !ok {
+ panic("app created from newApp is not of type simApp")
+ }
+
+ newValAddr, ok := appOpts.Get(server.KeyNewValAddr).(bytes.HexBytes)
+ if !ok {
+ panic("newValAddr is not of type bytes.HexBytes")
+ }
+ newValPubKey, ok := appOpts.Get(server.KeyUserPubKey).(crypto.PubKey)
+ if !ok {
+ panic("newValPubKey is not of type crypto.PubKey")
+ }
+ newOperatorAddress, ok := appOpts.Get(server.KeyNewOpAddr).(string)
+ if !ok {
+ panic("newOperatorAddress is not of type string")
+ }
+ upgradeToTrigger, ok := appOpts.Get(server.KeyTriggerTestnetUpgrade).(string)
+ if !ok {
+ panic("upgradeToTrigger is not of type string")
+ }
+
+ // Make modifications to the normal SimApp required to run the network locally
+ return simapp.InitSimAppForTestnet(simApp, newValAddr, newValPubKey, newOperatorAddress, upgradeToTrigger)
+}
+```
diff --git a/copy-of-sdk-docs/build/building-apps/_category_.json b/copy-of-sdk-docs/build/building-apps/_category_.json
new file mode 100644
index 00000000..342732cc
--- /dev/null
+++ b/copy-of-sdk-docs/build/building-apps/_category_.json
@@ -0,0 +1,5 @@
+{
+ "label": "Building Apps",
+ "position": 0,
+ "link": null
+}
\ No newline at end of file
diff --git a/copy-of-sdk-docs/build/building-apps/upgrades/_category_.json b/copy-of-sdk-docs/build/building-apps/upgrades/_category_.json
new file mode 100644
index 00000000..949dd331
--- /dev/null
+++ b/copy-of-sdk-docs/build/building-apps/upgrades/_category_.json
@@ -0,0 +1,5 @@
+{
+ "label": "Upgrade Tutorials",
+ "position": 0,
+ "link": null
+}
diff --git a/copy-of-sdk-docs/build/building-modules/00-intro.md b/copy-of-sdk-docs/build/building-modules/00-intro.md
new file mode 100644
index 00000000..fba93f3e
--- /dev/null
+++ b/copy-of-sdk-docs/build/building-modules/00-intro.md
@@ -0,0 +1,73 @@
+---
+sidebar_position: 1
+---
+
+# Introduction to Cosmos SDK Modules
+
+:::note Synopsis
+Modules define most of the logic of Cosmos SDK applications. Developers compose modules together using the Cosmos SDK to build their custom application-specific blockchains. This document outlines the basic concepts behind SDK modules and how to approach module management.
+:::
+
+:::note Pre-requisite Readings
+
+* [Anatomy of a Cosmos SDK application](../../learn/beginner/00-app-anatomy.md)
+* [Lifecycle of a Cosmos SDK transaction](../../learn/beginner/01-tx-lifecycle.md)
+
+:::
+
+## Role of Modules in a Cosmos SDK Application
+
+The Cosmos SDK can be thought of as the Ruby-on-Rails of blockchain development. It comes with a core that provides the basic functionalities every blockchain application needs, like a [boilerplate implementation of the ABCI](../../learn/advanced/00-baseapp.md) to communicate with the underlying consensus engine, a [`multistore`](../../learn/advanced/04-store.md#multistore) to persist state, a [server](../../learn/advanced/03-node.md) to form a full-node and [interfaces](./09-module-interfaces.md) to handle queries.
+
+On top of this core, the Cosmos SDK enables developers to build modules that implement the business logic of their application. In other words, SDK modules implement the bulk of the logic of applications, while the core does the wiring and enables modules to be composed together. The end goal is to build a robust ecosystem of open-source Cosmos SDK modules, making it increasingly easier to build complex blockchain applications.
+
+Cosmos SDK modules can be seen as little state-machines within the state-machine. They generally define a subset of the state using one or more `KVStore`s in the [main multistore](../../learn/advanced/04-store.md), as well as a subset of [message types](./02-messages-and-queries.md#messages). These messages are routed by one of the main components of Cosmos SDK core, [`BaseApp`](../../learn/advanced/00-baseapp.md), to a module Protobuf [`Msg` service](./03-msg-services.md) that defines them.
+
+```mermaid
+flowchart TD
+ A[Transaction relayed from the full-node's consensus engine to the node's application via DeliverTx]
+ A --> B[APPLICATION]
+ B --> C["Using baseapp's methods: Decode the Tx, extract and route the message(s)"]
+ C --> D[Message routed to the correct module to be processed]
+ D --> E[AUTH MODULE]
+ D --> F[BANK MODULE]
+ D --> G[STAKING MODULE]
+ D --> H[GOV MODULE]
+ H --> I[Handles message, Updates state]
+ E --> I
+ F --> I
+ G --> I
+ I --> J["Return result to the underlying consensus engine (e.g. CometBFT) (0=Ok, 1=Err)"]
+```
+
+As a result of this architecture, building a Cosmos SDK application usually revolves around writing modules to implement the specialized logic of the application and composing them with existing modules to complete the application. Developers will generally write modules that implement logic needed for their specific use case when it does not exist yet, and will use existing modules for more generic functionalities like staking, accounts, or token management.
+
+
+### Modules as super-users
+
+Modules have the ability to perform actions that are not available to regular users. This is because modules are given sudo permissions by the state machine. A module can reject another module's attempt to execute one of its functions, but this logic must be explicit. Examples of this can be seen when modules create functions to modify parameters:
+
+```go reference
+https://github.com/cosmos/cosmos-sdk/blob/61da5d1c29c16a1eb5bb5488719fde604ec07b10/x/bank/keeper/msg_server.go#L147-L149
+```
+
+## How to Approach Building Modules as a Developer
+
+While there are no definitive guidelines for writing modules, here are some important design principles developers should keep in mind when building them:
+
+* **Composability**: Cosmos SDK applications are almost always composed of multiple modules. This means developers need to carefully consider the integration of their module not only with the core of the Cosmos SDK, but also with other modules. The former is achieved by following standard design patterns outlined [here](#main-components-of-cosmos-sdk-modules), while the latter is achieved by properly exposing the store(s) of the module via the [`keeper`](./06-keeper.md).
+* **Specialization**: A direct consequence of the **composability** feature is that modules should be **specialized**. Developers should carefully establish the scope of their module and not batch multiple functionalities into the same module. This separation of concerns enables modules to be reused in other projects and improves the upgradability of the application. **Specialization** also plays an important role in the [object-capabilities model](../../learn/advanced/10-ocap.md) of the Cosmos SDK.
+* **Capabilities**: Most modules need to read and/or write to the store(s) of other modules. However, in an open-source environment, it is possible for some modules to be malicious. That is why module developers need to carefully think not only about how their module interacts with other modules, but also about how to give access to the module's store(s). The Cosmos SDK takes a capabilities-oriented approach to inter-module security. This means that each store defined by a module is accessed by a `key`, which is held by the module's [`keeper`](./06-keeper.md). This `keeper` defines how to access the store(s) and under what conditions. Access to the module's store(s) is done by passing a reference to the module's `keeper`.
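+
+The capability pattern described above can be sketched in a minimal, self-contained way. The types below (`storeKey`, `multiStore`, `Keeper`) are simplified stand-ins for the SDK's store key, multistore, and keeper wiring, not the real API:
+
+```go
+package main
+
+import "fmt"
+
+// storeKey is a simplified stand-in for the SDK's store key type:
+// only the holder of the key can obtain the underlying store.
+type storeKey struct{ name string }
+
+// multiStore maps keys to isolated KV stores.
+type multiStore map[*storeKey]map[string]string
+
+func (ms multiStore) getKVStore(key *storeKey) map[string]string {
+	if _, ok := ms[key]; !ok {
+		ms[key] = make(map[string]string)
+	}
+	return ms[key]
+}
+
+// Keeper holds the module's store key privately; other modules receive
+// a reference to the Keeper and can only act through its methods.
+type Keeper struct {
+	key *storeKey
+}
+
+func (k Keeper) SetBalance(ms multiStore, addr, amount string) {
+	ms.getKVStore(k.key)[addr] = amount
+}
+
+func (k Keeper) GetBalance(ms multiStore, addr string) string {
+	return ms.getKVStore(k.key)[addr]
+}
+
+func main() {
+	ms := make(multiStore)
+	bank := Keeper{key: &storeKey{name: "bank"}}
+	bank.SetBalance(ms, "addr1", "1000ustake")
+	fmt.Println(bank.GetBalance(ms, "addr1"))
+}
+```
+
+Because `Keeper.key` is unexported, no other package can read or write the module's store directly; access is mediated entirely by the keeper's methods, mirroring the conditions-based access control described above.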
+
+## Main Components of Cosmos SDK Modules
+
+Modules are by convention defined in the `./x/` subfolder (e.g. the `bank` module will be defined in the `./x/bank` folder). They generally share the same core components:
+
+* A [`keeper`](./06-keeper.md), used to access the module's store(s) and update the state.
+* A [`Msg` service](./02-messages-and-queries.md#messages), used to process messages when they are routed to the module by [`BaseApp`](../../learn/advanced/00-baseapp.md#message-routing) and trigger state-transitions.
+* A [query service](./04-query-services.md), used to process user queries when they are routed to the module by [`BaseApp`](../../learn/advanced/00-baseapp.md#query-routing).
+* Interfaces, for end users to query the subset of the state defined by the module and create `message`s of the custom types defined in the module.
+
+In addition to these components, modules implement the `AppModule` interface in order to be managed by the [`module manager`](./01-module-manager.md).
+
+Please refer to the [structure document](./11-structure.md) to learn about the recommended structure of a module's directory.
diff --git a/copy-of-sdk-docs/build/building-modules/01-module-manager.md b/copy-of-sdk-docs/build/building-modules/01-module-manager.md
new file mode 100644
index 00000000..ee2a83a8
--- /dev/null
+++ b/copy-of-sdk-docs/build/building-modules/01-module-manager.md
@@ -0,0 +1,328 @@
+---
+sidebar_position: 1
+---
+
+# Module Manager
+
+:::note Synopsis
+Cosmos SDK modules need to implement the [`AppModule` interfaces](#application-module-interfaces), in order to be managed by the application's [module manager](#module-manager). The module manager plays an important role in [`message` and `query` routing](../../learn/advanced/00-baseapp.md#routing), and allows application developers to set the order of execution of a variety of functions like [`PreBlocker`](../../learn/beginner/00-app-anatomy.md#preblocker) and [`BeginBlocker` and `EndBlocker`](../../learn/beginner/00-app-anatomy.md#beginblocker-and-endblocker).
+:::
+
+:::note Pre-requisite Readings
+
+* [Introduction to Cosmos SDK Modules](./00-intro.md)
+
+:::
+
+## Application Module Interfaces
+
+Application module interfaces exist to facilitate the composition of modules together to form a functional Cosmos SDK application.
+
+:::note
+
+It is recommended to implement interfaces from the [Core API](https://docs.cosmos.network/main/architecture/adr-063-core-module-api) `appmodule` package. This makes modules less dependent on the SDK.
+For legacy reasons, modules can still implement interfaces from the SDK `module` package.
+:::
+
+There are 2 main application module interfaces:
+
+* [`appmodule.AppModule` / `module.AppModule`](#appmodule) for inter-dependent module functionalities (except genesis-related functionalities).
+* (legacy) [`module.AppModuleBasic`](#appmodulebasic) for independent module functionalities. New modules can use `module.CoreAppModuleBasicAdaptor` instead.
+
+The above interfaces are mostly embedding smaller interfaces (extension interfaces), that define specific functionalities:
+
+* (legacy) `module.HasName`: Allows the module to provide its own name for legacy purposes.
+* (legacy) [`module.HasGenesisBasics`](#modulehasgenesisbasics): The legacy interface for stateless genesis methods.
+* [`module.HasGenesis`](#modulehasgenesis) for inter-dependent genesis-related module functionalities.
+* [`module.HasABCIGenesis`](#modulehasabcigenesis) for inter-dependent genesis-related module functionalities.
+* [`appmodule.HasGenesis` / `module.HasGenesis`](#appmodulehasgenesis): The extension interface for stateful genesis methods.
+* [`appmodule.HasPreBlocker`](#haspreblocker): The extension interface that contains information about the `AppModule` and `PreBlock`.
+* [`appmodule.HasBeginBlocker`](#hasbeginblocker): The extension interface that contains information about the `AppModule` and `BeginBlock`.
+* [`appmodule.HasEndBlocker`](#hasendblocker): The extension interface that contains information about the `AppModule` and `EndBlock`.
+* [`appmodule.HasPrecommit`](#hasprecommit): The extension interface that contains information about the `AppModule` and `Precommit`.
+* [`appmodule.HasPrepareCheckState`](#haspreparecheckstate): The extension interface that contains information about the `AppModule` and `PrepareCheckState`.
+* [`appmodule.HasService` / `module.HasServices`](#hasservices): The extension interface for modules to register services.
+* [`module.HasABCIEndBlock`](#hasabciendblock): The extension interface that contains information about the `AppModule`, `EndBlock` and returns an updated validator set.
+* (legacy) [`module.HasInvariants`](#hasinvariants): The extension interface for registering invariants.
+* (legacy) [`module.HasConsensusVersion`](#hasconsensusversion): The extension interface for declaring a module consensus version.
+
+The `AppModuleBasic` interface exists to define independent methods of the module, i.e. those that do not depend on other modules in the application. This allows for the construction of the basic application structure early in the application definition, generally in the `init()` function of the [main application file](../../learn/beginner/00-app-anatomy.md#core-application-file).
+
+The `AppModule` interface exists to define inter-dependent module methods. Many modules need to interact with other modules, typically through [`keeper`s](./06-keeper.md), which means there is a need for an interface where modules list their `keeper`s and other methods that require a reference to another module's object. `AppModule` interface extension, such as `HasBeginBlocker` and `HasEndBlocker`, also enables the module manager to set the order of execution between module's methods like `BeginBlock` and `EndBlock`, which is important in cases where the order of execution between modules matters in the context of the application.
+
+The usage of extension interfaces allows modules to define only the functionalities they need. For example, a module that does not need an `EndBlock` does not need to define the `HasEndBlocker` interface and thus the `EndBlock` method. `AppModule` and `AppModuleGenesis` are intentionally small interfaces that can take advantage of the `Module` patterns without having to define many placeholder functions.
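+
+As a rough, self-contained sketch of how a manager can discover optional functionality through extension interfaces (the interfaces and modules below are simplified stand-ins, not the SDK's actual definitions):
+
+```go
+package main
+
+import (
+	"context"
+	"fmt"
+)
+
+// AppModule is the minimal interface every module implements.
+type AppModule interface {
+	IsAppModule()
+}
+
+// HasBeginBlocker is an optional extension interface; only modules that
+// need begin-block logic implement it.
+type HasBeginBlocker interface {
+	AppModule
+	BeginBlock(context.Context) error
+}
+
+type mintModule struct{}
+
+func (mintModule) IsAppModule() {}
+func (mintModule) BeginBlock(ctx context.Context) error {
+	fmt.Println("mint: begin block")
+	return nil
+}
+
+type authModule struct{} // no begin-block logic, so no BeginBlock method
+
+func (authModule) IsAppModule() {}
+
+// runBeginBlock mimics how a manager invokes only the modules that
+// implement the extension interface, via a type assertion.
+func runBeginBlock(ctx context.Context, modules []AppModule) error {
+	for _, m := range modules {
+		if bb, ok := m.(HasBeginBlocker); ok {
+			if err := bb.BeginBlock(ctx); err != nil {
+				return err
+			}
+		}
+	}
+	return nil
+}
+
+func main() {
+	_ = runBeginBlock(context.Background(), []AppModule{authModule{}, mintModule{}})
+}
+```
+
+Modules that omit an extension interface are simply skipped by the manager, which is why no placeholder `BeginBlock` is needed on `authModule`.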
+
+### `AppModuleBasic`
+
+:::note
+Use `module.CoreAppModuleBasicAdaptor` instead for creating an `AppModuleBasic` from an `appmodule.AppModule`.
+:::
+
+The `AppModuleBasic` interface defines the independent methods modules need to implement.
+
+```go reference
+https://github.com/cosmos/cosmos-sdk/blob/v0.53.0/types/module/module.go#L56-L61
+```
+
+* `RegisterLegacyAminoCodec(*codec.LegacyAmino)`: Registers the `amino` codec for the module, which is used to marshal and unmarshal structs to/from `[]byte` in order to persist them in the module's `KVStore`.
+* `RegisterInterfaces(codectypes.InterfaceRegistry)`: Registers a module's interface types and their concrete implementations as `proto.Message`.
+* `RegisterGRPCGatewayRoutes(client.Context, *runtime.ServeMux)`: Registers gRPC routes for the module.
+
+All the `AppModuleBasic` of an application are managed by the [`BasicManager`](#basicmanager).
+
+### `HasName`
+
+```go reference
+https://github.com/cosmos/cosmos-sdk/blob/v0.53.0/types/module/module.go#L66-L68
+```
+
+* `HasName` is an interface that has a method `Name()`. This method returns the name of the module as a `string`.
+
+### Genesis
+
+:::tip
+For easily creating an `AppModule` that only has genesis functionalities, use `module.GenesisOnlyAppModule`.
+:::
+
+#### `module.HasGenesisBasics`
+
+```go reference
+https://github.com/cosmos/cosmos-sdk/blob/v0.53.0/types/module/module.go#L71-L74
+```
+
+Let us go through the methods:
+
+* `DefaultGenesis(codec.JSONCodec)`: Returns a default [`GenesisState`](./08-genesis.md#genesisstate) for the module, marshalled to `json.RawMessage`. The default `GenesisState` needs to be defined by the module developer and is primarily used for testing.
+* `ValidateGenesis(codec.JSONCodec, client.TxEncodingConfig, json.RawMessage)`: Used to validate the `GenesisState` defined by a module, given in its `json.RawMessage` form. It will usually unmarshal the `json` before running a custom [`ValidateGenesis`](./08-genesis.md#validategenesis) function defined by the module developer.
+
+#### `module.HasGenesis`
+
+`HasGenesis` is an extension interface for allowing modules to implement genesis functionalities.
+
+```go reference
+https://github.com/cosmos/cosmos-sdk/blob/6ce2505/types/module/module.go#L184-L189
+```
+
+#### `module.HasABCIGenesis`
+
+`HasABCIGenesis` is an extension interface for allowing modules to implement genesis functionalities and returns validator set updates.
+
+```go reference
+https://github.com/cosmos/cosmos-sdk/blob/6ce2505/types/module/module.go#L191-L196
+```
+
+#### `appmodule.HasGenesis`
+
+:::warning
+`appmodule.HasGenesis` is experimental and should be considered unstable; it is recommended not to use this interface at this time.
+:::
+
+```go reference
+https://github.com/cosmos/cosmos-sdk/blob/6ce2505/core/appmodule/genesis.go#L8-L25
+```
+
+### `AppModule`
+
+The `AppModule` interface defines a module. Modules can declare their functionalities by implementing extension interfaces.
+`AppModule`s are managed by the [module manager](#manager), which checks which extension interfaces are implemented by the module.
+
+#### `appmodule.AppModule`
+
+```go reference
+https://github.com/cosmos/cosmos-sdk/blob/6afece6/core/appmodule/module.go#L11-L20
+```
+
+#### `module.AppModule`
+
+:::note
+Previously, the `module.AppModule` interface contained all the methods defined in the extension interfaces. This led to significant boilerplate for modules that did not need all of those functionalities.
+:::
+
+```go reference
+https://github.com/cosmos/cosmos-sdk/blob/v0.53.0/types/module/module.go#L199-L206
+```
+
+### `HasInvariants`
+
+This interface defines one method. It allows checking if a module can register invariants.
+
+```go reference
+https://github.com/cosmos/cosmos-sdk/blob/v0.53.0/types/module/module.go#L211-L214
+```
+
+* `RegisterInvariants(sdk.InvariantRegistry)`: Registers the [`invariants`](./07-invariants.md) of the module. If an invariant deviates from its predicted value, the [`InvariantRegistry`](./07-invariants.md#registry) triggers appropriate logic (most often the chain will be halted).
+
+### `HasServices`
+
+This interface defines one method. It allows checking if a module can register services.
+
+#### `appmodule.HasService`
+
+```go reference
+https://github.com/cosmos/cosmos-sdk/blob/6afece6/core/appmodule/module.go#L22-L40
+```
+
+#### `module.HasServices`
+
+```go reference
+https://github.com/cosmos/cosmos-sdk/blob/v0.53.0/types/module/module.go#L217-L220
+```
+
+* `RegisterServices(Configurator)`: Allows a module to register services.
+
+### `HasConsensusVersion`
+
+This interface defines one method for checking a module consensus version.
+
+```go reference
+https://github.com/cosmos/cosmos-sdk/blob/v0.53.0/types/module/module.go#L223-L229
+```
+
+* `ConsensusVersion() uint64`: Returns the consensus version of the module.
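+
+As a hedged illustration, the implementation is usually a single method on the module's concrete type; the type name and version number below are purely illustrative:
+
+```go
+package main
+
+import "fmt"
+
+// AppModule is a stand-in for a module's concrete AppModule type.
+type AppModule struct{}
+
+// ConsensusVersion returns the module's consensus version. It is bumped
+// whenever a state-breaking change requires an on-chain migration.
+func (AppModule) ConsensusVersion() uint64 { return 2 }
+
+func main() {
+	fmt.Println(AppModule{}.ConsensusVersion())
+}
+```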
+
+### `HasPreBlocker`
+
+The `HasPreBlocker` is an extension interface from `appmodule.AppModule`. All modules that have a `PreBlock` method implement this interface.
+
+### `HasBeginBlocker`
+
+The `HasBeginBlocker` is an extension interface from `appmodule.AppModule`. All modules that have a `BeginBlock` method implement this interface.
+
+```go reference
+https://github.com/cosmos/cosmos-sdk/blob/v0.53.0/core/appmodule/module.go#L73-L80
+```
+
+* `BeginBlock(context.Context) error`: This method gives module developers the option to implement logic that is automatically triggered at the beginning of each block.
+
+### `HasEndBlocker`
+
+The `HasEndBlocker` is an extension interface from `appmodule.AppModule`. All modules that have an `EndBlock` method implement this interface. If a module needs to return validator set updates (e.g. `staking`), it can use `HasABCIEndBlock` instead.
+
+```go reference
+https://github.com/cosmos/cosmos-sdk/blob/v0.53.0/core/appmodule/module.go#L83-L89
+```
+
+* `EndBlock(context.Context) error`: This method gives module developers the option to implement logic that is automatically triggered at the end of each block.
+
+### `HasABCIEndBlock`
+
+The `HasABCIEndBlock` is an extension interface from `module.AppModule`. All modules that have an `EndBlock` method that returns validator set updates implement this interface.
+
+```go reference
+https://github.com/cosmos/cosmos-sdk/blob/v0.53.0/types/module/module.go#L236-L239
+```
+
+* `EndBlock(context.Context) ([]abci.ValidatorUpdate, error)`: This method gives module developers the option to inform the underlying consensus engine of validator set changes (e.g. the `staking` module).
+
+### `HasPrecommit`
+
+`HasPrecommit` is an extension interface from `appmodule.AppModule`. All modules that have a `Precommit` method implement this interface.
+
+```go reference
+https://github.com/cosmos/cosmos-sdk/blob/v0.53.0/core/appmodule/module.go#L50-L53
+```
+
+* `Precommit(context.Context)`: This method gives module developers the option to implement logic that is automatically triggered during [`Commit`](../../learn/advanced/00-baseapp.md#commit) of each block, using the [`finalizeblockstate`](../../learn/advanced/00-baseapp.md#state-updates) of the block to be committed. Leave the implementation empty if no logic needs to be triggered during `Commit` for this module.
+
+### `HasPrepareCheckState`
+
+`HasPrepareCheckState` is an extension interface from `appmodule.AppModule`. All modules that have a `PrepareCheckState` method implement this interface.
+
+```go reference
+https://github.com/cosmos/cosmos-sdk/blob/v0.53.0/core/appmodule/module.go#L44-L47
+```
+
+* `PrepareCheckState(context.Context)`: This method gives module developers the option to implement logic that is automatically triggered during [`Commit`](../../learn/advanced/00-baseapp.md#commit) of each block, using the [`checkState`](../../learn/advanced/00-baseapp.md#state-updates) of the next block. Leave the implementation empty if no logic needs to be triggered during `Commit` for this module.
+
+### Implementing the Application Module Interfaces
+
+Typically, the various application module interfaces are implemented in a file called `module.go`, located in the module's folder (e.g. `./x/module/module.go`).
+
+Almost every module needs to implement the `AppModuleBasic` and `AppModule` interfaces. If the module is only used for genesis, it will implement `AppModuleGenesis` instead of `AppModule`. The concrete type that implements the interface can add parameters that are required for the implementation of the various methods of the interface. For example, the `Route()` function often calls a `NewMsgServerImpl(k keeper)` function defined in `keeper/msg_server.go` and therefore needs to pass the module's [`keeper`](./06-keeper.md) as a parameter.
+
+```go
+// example
+type AppModule struct {
+ AppModuleBasic
+ keeper Keeper
+}
+```
+
+In the example above, you can see that the `AppModule` concrete type references an `AppModuleBasic`, and not an `AppModuleGenesis`. That is because `AppModuleGenesis` only needs to be implemented in modules that focus on genesis-related functionalities. In most modules, the concrete `AppModule` type will have a reference to an `AppModuleBasic` and implement the two added methods of `AppModuleGenesis` directly in the `AppModule` type.
+
+If no parameter is required (which is often the case for `AppModuleBasic`), just declare an empty concrete type like so:
+
+```go
+type AppModuleBasic struct{}
+```
+
+## Module Managers
+
+Module managers are used to manage collections of `AppModuleBasic` and `AppModule`.
+
+### `BasicManager`
+
+The `BasicManager` is a structure that lists all the `AppModuleBasic` of an application:
+
+```go reference
+https://github.com/cosmos/cosmos-sdk/blob/v0.53.0/types/module/module.go#L77
+```
+
+It implements the following methods:
+
+* `NewBasicManager(modules ...AppModuleBasic)`: Constructor function. It takes a list of the application's `AppModuleBasic` and builds a new `BasicManager`. This function is generally called in the `init()` function of [`app.go`](../../learn/beginner/00-app-anatomy.md#core-application-file) to quickly initialize the independent elements of the application's modules (click [here](https://github.com/cosmos/gaia/blob/main/app/app.go#L59-L74) to see an example).
+* `NewBasicManagerFromManager(manager *Manager, customModuleBasics map[string]AppModuleBasic)`: Constructor function. It creates a new `BasicManager` from a `Manager`. The `BasicManager` will contain all `AppModuleBasic` from the `AppModule` manager, using `CoreAppModuleBasicAdaptor` whenever possible. A module's `AppModuleBasic` can be overridden by passing a custom `AppModuleBasic` map.
+* `RegisterLegacyAminoCodec(cdc *codec.LegacyAmino)`: Registers the [`codec.LegacyAmino`s](../../learn/advanced/05-encoding.md#amino) of each of the application's `AppModuleBasic`. This function is usually called early on in the [application's construction](../../learn/beginner/00-app-anatomy.md#constructor).
+* `RegisterInterfaces(registry codectypes.InterfaceRegistry)`: Registers interface types and implementations of each of the application's `AppModuleBasic`.
+* `DefaultGenesis(cdc codec.JSONCodec)`: Provides default genesis information for modules in the application by calling the [`DefaultGenesis(cdc codec.JSONCodec)`](./08-genesis.md#defaultgenesis) function of each module. It only calls the modules that implement the `HasGenesisBasics` interface.
+* `ValidateGenesis(cdc codec.JSONCodec, txEncCfg client.TxEncodingConfig, genesis map[string]json.RawMessage)`: Validates the genesis information modules by calling the [`ValidateGenesis(codec.JSONCodec, client.TxEncodingConfig, json.RawMessage)`](./08-genesis.md#validategenesis) function of modules implementing the `HasGenesisBasics` interface.
+* `RegisterGRPCGatewayRoutes(clientCtx client.Context, rtr *runtime.ServeMux)`: Registers gRPC routes for modules.
+* `AddTxCommands(rootTxCmd *cobra.Command)`: Adds modules' transaction commands (defined as `GetTxCmd() *cobra.Command`) to the application's [`rootTxCommand`](../../learn/advanced/07-cli.md#transaction-commands). This function is usually called from the `main.go` file of the [application's command-line interface](../../learn/advanced/07-cli.md).
+* `AddQueryCommands(rootQueryCmd *cobra.Command)`: Adds modules' query commands (defined as `GetQueryCmd() *cobra.Command`) to the application's [`rootQueryCommand`](../../learn/advanced/07-cli.md#query-commands). This function is usually called from the `main.go` file of the [application's command-line interface](../../learn/advanced/07-cli.md).
+
+### `Manager`
+
+The `Manager` is a structure that holds all the `AppModule` of an application, and defines the order of execution between several key components of these modules:
+
+```go reference
+https://github.com/cosmos/cosmos-sdk/blob/v0.53.0/types/module/module.go#L278-L288
+```
+
+The module manager is used throughout the application whenever an action on a collection of modules is required. It implements the following methods:
+
+* `NewManager(modules ...AppModule)`: Constructor function. It takes a list of the application's `AppModule`s and builds a new `Manager`. It is generally called from the application's main [constructor function](../../learn/beginner/00-app-anatomy.md#constructor-function).
+* `SetOrderInitGenesis(moduleNames ...string)`: Sets the order in which the [`InitGenesis`](./08-genesis.md#initgenesis) function of each module will be called when the application is first started. This function is generally called from the application's main [constructor function](../../learn/beginner/00-app-anatomy.md#constructor-function).
+  To initialize modules successfully, module dependencies should be considered. For example, the `genutil` module must occur after the `staking` module so that the pools are properly initialized with tokens from genesis accounts; `genutil` must also occur after `auth` so that it can access the params from `auth`; and IBC's `capability` module should be initialized before all other modules so that it can initialize any capabilities.
+* `SetOrderExportGenesis(moduleNames ...string)`: Sets the order in which the [`ExportGenesis`](./08-genesis.md#exportgenesis) function of each module will be called in case of an export. This function is generally called from the application's main [constructor function](../../learn/beginner/00-app-anatomy.md#constructor-function).
+* `SetOrderPreBlockers(moduleNames ...string)`: Sets the order in which the `PreBlock()` function of each module will be called before `BeginBlock()` of all modules. This function is generally called from the application's main [constructor function](../../learn/beginner/00-app-anatomy.md#constructor-function).
+* `SetOrderBeginBlockers(moduleNames ...string)`: Sets the order in which the `BeginBlock()` function of each module will be called at the beginning of each block. This function is generally called from the application's main [constructor function](../../learn/beginner/00-app-anatomy.md#constructor-function).
+* `SetOrderEndBlockers(moduleNames ...string)`: Sets the order in which the `EndBlock()` function of each module will be called at the end of each block. This function is generally called from the application's main [constructor function](../../learn/beginner/00-app-anatomy.md#constructor-function).
+* `SetOrderPrecommiters(moduleNames ...string)`: Sets the order in which the `Precommit()` function of each module will be called during commit of each block. This function is generally called from the application's main [constructor function](../../learn/beginner/00-app-anatomy.md#constructor-function).
+* `SetOrderPrepareCheckStaters(moduleNames ...string)`: Sets the order in which the `PrepareCheckState()` function of each module will be called during commit of each block. This function is generally called from the application's main [constructor function](../../learn/beginner/00-app-anatomy.md#constructor-function).
+* `SetOrderMigrations(moduleNames ...string)`: Sets the order of migrations to be run. If not set then migrations will be run with an order defined in `DefaultMigrationsOrder`.
+* `RegisterInvariants(ir sdk.InvariantRegistry)`: Registers the [invariants](./07-invariants.md) of modules implementing the `HasInvariants` interface.
+* `RegisterServices(cfg Configurator)`: Registers the services of modules implementing the `HasServices` interface.
+* `InitGenesis(ctx context.Context, cdc codec.JSONCodec, genesisData map[string]json.RawMessage)`: Calls the [`InitGenesis`](./08-genesis.md#initgenesis) function of each module when the application is first started, in the order defined in `OrderInitGenesis`. Returns an `abci.InitChainResponse` to the underlying consensus engine, which can contain validator updates.
+* `ExportGenesis(ctx context.Context, cdc codec.JSONCodec)`: Calls the [`ExportGenesis`](./08-genesis.md#exportgenesis) function of each module, in the order defined in `OrderExportGenesis`. The export constructs a genesis file from a previously existing state, and is mainly used when a hard-fork upgrade of the chain is required.
+* `ExportGenesisForModules(ctx context.Context, cdc codec.JSONCodec, modulesToExport []string)`: Behaves the same as `ExportGenesis`, except takes a list of modules to export.
+* `BeginBlock(ctx context.Context) error`: At the beginning of each block, this function is called from [`BaseApp`](../../learn/advanced/00-baseapp.md#beginblock) and, in turn, calls the [`BeginBlock`](./06-beginblock-endblock.md) function of each module implementing the `appmodule.HasBeginBlocker` interface, in the order defined in `OrderBeginBlockers`. It creates a child [context](../../learn/advanced/02-context.md) with an event manager to aggregate [events](../../learn/advanced/08-events.md) emitted from each module.
+* `EndBlock(ctx context.Context) error`: At the end of each block, this function is called from [`BaseApp`](../../learn/advanced/00-baseapp.md#endblock) and, in turn, calls the [`EndBlock`](./06-beginblock-endblock.md) function of each module implementing the `appmodule.HasEndBlocker` interface, in the order defined in `OrderEndBlockers`. It creates a child [context](../../learn/advanced/02-context.md) with an event manager to aggregate [events](../../learn/advanced/08-events.md) emitted from all modules.
+* `EndBlock(context.Context) ([]abci.ValidatorUpdate, error)`: At the end of each block, this function is called from [`BaseApp`](../../learn/advanced/00-baseapp.md#endblock) and, in turn, calls the [`EndBlock`](./06-beginblock-endblock.md) function of each module implementing the `module.HasABCIEndBlock` interface, in the order defined in `OrderEndBlockers`. It creates a child [context](../../learn/advanced/02-context.md) with an event manager to aggregate [events](../../learn/advanced/08-events.md) emitted from all modules. The function returns the aforementioned events, as well as validator set updates (if any).
+* `Precommit(ctx context.Context)`: During [`Commit`](../../learn/advanced/00-baseapp.md#commit), this function is called from `BaseApp` immediately before the [`deliverState`](../../learn/advanced/00-baseapp.md#state-updates) is written to the underlying [`rootMultiStore`](../../learn/advanced/04-store.md#commitmultistore) and, in turn, calls the `Precommit` function of each module implementing the `HasPrecommit` interface, in the order defined in `OrderPrecommiters`. It creates a child [context](../../learn/advanced/02-context.md) where the underlying `CacheMultiStore` is that of the newly committed block's [`finalizeblockstate`](../../learn/advanced/00-baseapp.md#state-updates).
+* `PrepareCheckState(ctx context.Context)`: During [`Commit`](../../learn/advanced/00-baseapp.md#commit), this function is called from `BaseApp` immediately after the [`deliverState`](../../learn/advanced/00-baseapp.md#state-updates) is written to the underlying [`rootMultiStore`](../../learn/advanced/04-store.md#commitmultistore) and, in turn, calls the `PrepareCheckState` function of each module implementing the `HasPrepareCheckState` interface, in the order defined in `OrderPrepareCheckStaters`. It creates a child [context](../../learn/advanced/02-context.md) where the underlying `CacheMultiStore` is that of the next block's [`checkState`](../../learn/advanced/00-baseapp.md#state-updates). Writes to this state will be present in the [`checkState`](../../learn/advanced/00-baseapp.md#state-updates) of the next block, and therefore this method can be used to prepare the `checkState` for the next block.
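+
+As a toy illustration of how the manager drives modules in a configured order, here is a minimal stdlib-only Go sketch. The `module` and `manager` types below are simplified stand-ins for illustration, not SDK types:
+
+```go
+package main
+
+import "fmt"
+
+// module is a toy stand-in for the SDK's AppModule; real modules implement
+// interfaces such as appmodule.HasBeginBlocker.
+type module struct{ name string }
+
+func (m module) BeginBlock() { fmt.Println("BeginBlock:", m.name) }
+
+// manager mimics how the module manager records an execution order and then
+// drives every module in that order.
+type manager struct {
+	modules            map[string]module
+	orderBeginBlockers []string
+}
+
+func (mgr *manager) SetOrderBeginBlockers(names ...string) {
+	mgr.orderBeginBlockers = names
+}
+
+func (mgr *manager) BeginBlock() {
+	for _, name := range mgr.orderBeginBlockers {
+		mgr.modules[name].BeginBlock()
+	}
+}
+
+func main() {
+	mgr := &manager{modules: map[string]module{
+		"mint":         {"mint"},
+		"distribution": {"distribution"},
+		"staking":      {"staking"},
+	}}
+	// Ordering matters: e.g. mint runs before distribution so newly minted
+	// tokens exist when rewards are distributed.
+	mgr.SetOrderBeginBlockers("mint", "distribution", "staking")
+	mgr.BeginBlock()
+}
+```
+
+The real `Manager` works analogously: each `SetOrder*` method records an ordering, and the corresponding lifecycle method iterates over it, calling into each module.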
+
+Here's an example of a concrete integration within `simapp`:
+
+```go reference
+https://github.com/cosmos/cosmos-sdk/blob/v0.53.0/simapp/app.go#L510-L533
+```
+
+This is the same example from `runtime` (the package that powers app dependency injection):
+
+```go reference
+https://github.com/cosmos/cosmos-sdk/blob/v0.53.0/runtime/module.go#L63
+```
+
+```go reference
+https://github.com/cosmos/cosmos-sdk/blob/v0.53.0/runtime/module.go#L85
+```
diff --git a/copy-of-sdk-docs/build/building-modules/02-messages-and-queries.md b/copy-of-sdk-docs/build/building-modules/02-messages-and-queries.md
new file mode 100644
index 00000000..e6048c31
--- /dev/null
+++ b/copy-of-sdk-docs/build/building-modules/02-messages-and-queries.md
@@ -0,0 +1,137 @@
+---
+sidebar_position: 1
+---
+
+# Messages and Queries
+
+:::note Synopsis
+`Msg`s and `Queries` are the two primary objects handled by modules. Most of the core components defined in a module, like `Msg` services, `keeper`s and `Query` services, exist to process `message`s and `queries`.
+:::
+
+:::note Pre-requisite Readings
+
+* [Introduction to Cosmos SDK Modules](./00-intro.md)
+
+:::
+
+## Messages
+
+`Msg`s are objects whose end-goal is to trigger state-transitions. They are wrapped in [transactions](../../learn/advanced/01-transactions.md), which may contain one or more of them.
+
+When a transaction is relayed from the underlying consensus engine to the Cosmos SDK application, it is first decoded by [`BaseApp`](../../learn/advanced/00-baseapp.md). Then, each message contained in the transaction is extracted and routed to the appropriate module via `BaseApp`'s `MsgServiceRouter` so that it can be processed by the module's [`Msg` service](./03-msg-services.md). For a more detailed explanation of the lifecycle of a transaction, click [here](../../learn/beginner/01-tx-lifecycle.md).
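+
+To make the routing step concrete, here is a minimal stdlib-only sketch of message routing by type URL. It is an illustration of the idea only; the `msgServiceRouter` and `handler` names below are simplified stand-ins, not the actual `BaseApp` types:
+
+```go
+package main
+
+import (
+	"errors"
+	"fmt"
+)
+
+// handler processes one decoded message; a stand-in for a module's
+// Msg service method.
+type handler func(msg any) (string, error)
+
+// msgServiceRouter mimics how BaseApp maps a message's type URL to the
+// module that processes it.
+type msgServiceRouter struct{ routes map[string]handler }
+
+func (r *msgServiceRouter) Handler(typeURL string) (handler, error) {
+	h, ok := r.routes[typeURL]
+	if !ok {
+		return nil, errors.New("unrecognized message route: " + typeURL)
+	}
+	return h, nil
+}
+
+func main() {
+	router := &msgServiceRouter{routes: map[string]handler{
+		"/cosmos.bank.v1beta1.MsgSend": func(msg any) (string, error) {
+			return "bank: processed MsgSend", nil
+		},
+	}}
+	h, err := router.Handler("/cosmos.bank.v1beta1.MsgSend")
+	if err != nil {
+		panic(err)
+	}
+	res, _ := h(nil)
+	fmt.Println(res)
+}
+```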
+
+### `Msg` Services
+
+Defining Protobuf `Msg` services is the recommended way to handle messages. A Protobuf `Msg` service should be created for each module, typically in `tx.proto` (see more info about [conventions and naming](../../learn/advanced/05-encoding.md#faq)). It must have an RPC service method defined for each message in the module.
+
+
+Each `Msg` service method must have exactly one argument, which must implement the `sdk.Msg` interface, and a Protobuf response. The naming convention is to call the RPC argument `Msg` and the RPC response `MsgResponse`. For example:
+
+```protobuf
+ rpc Send(MsgSend) returns (MsgSendResponse);
+```
+
+See an example of a `Msg` service definition from `x/bank` module:
+
+```protobuf reference
+https://github.com/cosmos/cosmos-sdk/blob/v0.50.0-alpha.0/proto/cosmos/bank/v1beta1/tx.proto#L13-L36
+```
+
+### `sdk.Msg` Interface
+
+`sdk.Msg` is an alias of `proto.Message`.
+
+To attach a `ValidateBasic()` method to a message, implement the `HasValidateBasic` interface on the message type.
+
+```go reference
+https://github.com/cosmos/cosmos-sdk/blob/9c1e8b247cd47b5d3decda6e86fbc3bc996ee5d7/types/tx_msg.go#L84-L88
+```
+
+In 0.50+, the signers of a message (previously returned by a `GetSigners()` method) are derived automatically via a protobuf annotation.
+
+Read more about the signer field [here](./05-protobuf-annotations.md).
+
+```protobuf reference
+https://github.com/cosmos/cosmos-sdk/blob/e6848d99b55a65d014375b295bdd7f9641aac95e/proto/cosmos/bank/v1beta1/tx.proto#L40
+```
+
+If there is a need for custom signers then there is an alternative path which can be taken. A function which returns `signing.CustomGetSigner` for a specific message can be defined.
+
+```go
+func ProvideBankSendTransactionGetSigners() signing.CustomGetSigner {
+	return signing.CustomGetSigner{
+		// The message type this custom getter applies to.
+		MsgType: "cosmos.bank.v1beta1.MsgSend",
+		Fn: func(msg proto.Message) ([][]byte, error) {
+			// Extract the signer with custom logic; Tx and ethTx here are
+			// placeholders from an Ethereum-style example.
+			signer, err := coretypes.LatestSigner(Tx).Sender(ethTx)
+			if err != nil {
+				return nil, err
+			}
+
+			// Return the signer in the required format.
+			return [][]byte{signer.Bytes()}, nil
+		},
+	}
+}
+```
+
+When using dependency injection (depinject) this can be provided to the application via the provide method.
+
+```go
+depinject.Provide(banktypes.ProvideBankSendTransactionGetSigners)
+```
+
+The Cosmos SDK uses Protobuf definitions to generate client and server code:
+
+* `MsgServer` interface defines the server API for the `Msg` service and its implementation is described as part of the [`Msg` services](./03-msg-services.md) documentation.
+* Structures are generated for all RPC request and response types.
+
+A `RegisterMsgServer` method is also generated and should be used to register the module's `MsgServer` implementation in `RegisterServices` method from the [`AppModule` interface](./01-module-manager.md#appmodule).
+
+In order for clients (CLI and grpc-gateway) to have these URLs registered, the Cosmos SDK provides the function `RegisterMsgServiceDesc(registry codectypes.InterfaceRegistry, sd *grpc.ServiceDesc)` that should be called inside module's [`RegisterInterfaces`](01-module-manager.md#appmodulebasic) method, using the proto-generated `&_Msg_serviceDesc` as `*grpc.ServiceDesc` argument.
+
+
+## Queries
+
+A `query` is a request for information made by end-users of applications through an interface and processed by a full-node. A `query` is received by a full-node through its consensus engine and relayed to the application via the ABCI. It is then routed to the appropriate module via `BaseApp`'s `QueryRouter` so that it can be processed by the module's [query service](./04-query-services.md). For a deeper look at the lifecycle of a `query`, click [here](../../learn/beginner/02-query-lifecycle.md).
+
+### gRPC Queries
+
+Queries should be defined using [Protobuf services](https://developers.google.com/protocol-buffers/docs/proto#services). A `Query` service should be created per module in `query.proto`. This service lists endpoints starting with `rpc`.
+
+Here's an example of such a `Query` service definition:
+
+```protobuf reference
+https://github.com/cosmos/cosmos-sdk/blob/v0.50.0-alpha.0/proto/cosmos/auth/v1beta1/query.proto#L14-L89
+```
+
+As `proto.Message`s, generated `Response` types implement the `String()` method of [`fmt.Stringer`](https://pkg.go.dev/fmt#Stringer) by default.
+
+A `RegisterQueryServer` method is also generated and should be used to register the module's query server in the `RegisterServices` method from the [`AppModule` interface](./01-module-manager.md#appmodule).
+
+### Legacy Queries
+
+Before the introduction of Protobuf and gRPC in the Cosmos SDK, there was usually no specific `query` object defined by module developers, contrary to `message`s. Instead, the Cosmos SDK took the simpler approach of using a simple `path` to define each `query`. The `path` contains the `query` type and all the arguments needed to process it. For most module queries, the `path` should look like the following:
+
+```text
+queryCategory/queryRoute/queryType/arg1/arg2/...
+```
+
+where:
+
+* `queryCategory` is the category of the `query`, typically `custom` for module queries. It is used to differentiate between different kinds of queries within `BaseApp`'s [`Query` method](../../learn/advanced/00-baseapp.md#query).
+* `queryRoute` is used by `BaseApp`'s [`queryRouter`](../../learn/advanced/00-baseapp.md#query-routing) to map the `query` to its module. Usually, `queryRoute` should be the name of the module.
+* `queryType` is used by the module's [`querier`](./04-query-services.md#legacy-queriers) to map the `query` to the appropriate `querier function` within the module.
+* `args` are the actual arguments needed to process the `query`. They are filled out by the end-user. Note that for bigger queries, you might prefer passing arguments in the `Data` field of the request `req` instead of the `path`.
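+
+The path construction above can be sketched with a tiny stdlib-only helper. `legacyQueryPath` is an illustrative function (not part of the SDK), and the address argument is a hypothetical placeholder:
+
+```go
+package main
+
+import (
+	"fmt"
+	"strings"
+)
+
+// legacyQueryPath joins the components of a legacy query path:
+// queryCategory/queryRoute/queryType/arg1/arg2/...
+func legacyQueryPath(category, route, queryType string, args ...string) string {
+	parts := append([]string{category, route, queryType}, args...)
+	return strings.Join(parts, "/")
+}
+
+func main() {
+	// e.g. a custom query to the bank module for a (hypothetical) account's balance.
+	fmt.Println(legacyQueryPath("custom", "bank", "balance", "cosmos1abc..."))
+}
+```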
+
+The `path` for each `query` must be defined by the module developer in the module's [command-line interface file](./09-module-interfaces.md#query-commands). Overall, there are 3 main components module developers need to implement in order to make the subset of the state defined by their module queryable:
+
+* A [`querier`](./04-query-services.md#legacy-queriers), to process the `query` once it has been [routed to the module](../../learn/advanced/00-baseapp.md#query-routing).
+* [Query commands](./09-module-interfaces.md#query-commands) in the module's CLI file, where the `path` for each `query` is specified.
+* `query` return types. Typically defined in a file `types/querier.go`, they specify the result type of each of the module's `queries`. These custom types must implement the `String()` method of [`fmt.Stringer`](https://pkg.go.dev/fmt#Stringer).
+
+### Store Queries
+
+Store queries access store keys directly. They use `clientCtx.QueryABCI(req abci.QueryRequest)` to return the full `abci.QueryResponse` with inclusion Merkle proofs.
+
+See following examples:
+
+```go reference
+https://github.com/cosmos/cosmos-sdk/blob/v0.50.0-alpha.0/baseapp/abci.go#L864-L894
+```
diff --git a/copy-of-sdk-docs/build/building-modules/03-msg-services.md b/copy-of-sdk-docs/build/building-modules/03-msg-services.md
new file mode 100644
index 00000000..910d6f88
--- /dev/null
+++ b/copy-of-sdk-docs/build/building-modules/03-msg-services.md
@@ -0,0 +1,119 @@
+---
+sidebar_position: 1
+---
+
+# `Msg` Services
+
+:::note Synopsis
+A Protobuf `Msg` service processes [messages](./02-messages-and-queries.md#messages). Protobuf `Msg` services are specific to the module in which they are defined, and only process messages defined within said module. They are called from `BaseApp` during [`DeliverTx`](../../learn/advanced/00-baseapp.md#delivertx).
+:::
+
+:::note Pre-requisite Readings
+
+* [Module Manager](./01-module-manager.md)
+* [Messages and Queries](./02-messages-and-queries.md)
+
+:::
+
+## Implementation of a module `Msg` service
+
+Each module should define a Protobuf `Msg` service, which will be responsible for processing requests (implementing `sdk.Msg`) and returning responses.
+
+As further described in [ADR 031](../../../architecture/adr-031-msg-service.md), this approach has the advantage of clearly specifying return types and generating server and client code.
+
+Protobuf generates a `MsgServer` interface based on the definition of the `Msg` service. It is the role of the module developer to implement this interface, by implementing the state transition logic that should happen upon receipt of each `sdk.Msg`. As an example, here is the generated `MsgServer` interface for `x/bank`, which exposes two `sdk.Msg`s:
+
+```go reference
+https://github.com/cosmos/cosmos-sdk/blob/v0.50.0-alpha.0/x/bank/types/tx.pb.go#L550-L568
+```
+
+When possible, the existing module's [`Keeper`](./06-keeper.md) should implement `MsgServer`, otherwise a `msgServer` struct that embeds the `Keeper` can be created, typically in `./keeper/msg_server.go`:
+
+```go reference
+https://github.com/cosmos/cosmos-sdk/blob/v0.50.0-alpha.0/x/bank/keeper/msg_server.go#L17-L19
+```
+
+`msgServer` methods can retrieve the `sdk.Context` from the `context.Context` parameter using the `sdk.UnwrapSDKContext` method:
+
+```go reference
+https://github.com/cosmos/cosmos-sdk/blob/v0.50.0-alpha.0/x/bank/keeper/msg_server.go#L56
+```
+
+`sdk.Msg` processing usually follows these 3 steps:
+
+### Validation
+
+The message server must perform all validation required (both *stateful* and *stateless*) to make sure the `message` is valid.
+The `signer` is charged for the gas cost of this validation.
+
+For example, a `msgServer` method for a `transfer` message should check that the sending account has enough funds to actually perform the transfer.
+
+It is recommended to implement all validation checks in a separate function that passes state values as arguments. This implementation simplifies testing. As expected, expensive validation functions charge additional gas. Example:
+
+```go
+func ValidateMsgA(msg MsgA, now Time, gm GasMeter) error {
+ if now.Before(msg.Expire) {
+ return sdkerrors.ErrInvalidRequest.Wrap("msg expired")
+ }
+ gm.ConsumeGas(1000, "signature verification")
+ return signatureVerification(msg.Prover, msg.Data)
+}
+```
+
+:::warning
+Previously, the `ValidateBasic` method was used to perform simple and stateless validation checks.
+This way of validating is deprecated; the `msgServer` must now perform all validation checks.
+:::
+
+### State Transition
+
+After the validation is successful, the `msgServer` method uses the [`keeper`](./06-keeper.md) functions to access the state and perform a state transition.
+
+### Events
+
+Before returning, `msgServer` methods generally emit one or more [events](../../learn/advanced/08-events.md) by using the `EventManager` held in the `ctx`. Use the new `EmitTypedEvent` function that uses protobuf-based event types:
+
+```go
+ctx.EventManager().EmitTypedEvent(
+ &group.EventABC{Key1: Value1, Key2: Value2})
+```
+
+or the older `EmitEvent` function:
+
+```go
+ctx.EventManager().EmitEvent(
+ sdk.NewEvent(
+ eventType, // e.g. sdk.EventTypeMessage for a message, types.CustomEventType for a custom event defined in the module
+ sdk.NewAttribute(key1, value1),
+ sdk.NewAttribute(key2, value2),
+ ),
+)
+```
+
+These events are relayed back to the underlying consensus engine and can be used by service providers to implement services around the application. Click [here](../../learn/advanced/08-events.md) to learn more about events.
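+
+As a simplified illustration of the emit-and-aggregate pattern, here is a stdlib-only sketch. The `event` and `eventManager` types below are toy stand-ins for the SDK's event types, not the real API:
+
+```go
+package main
+
+import "fmt"
+
+// attribute and event are simplified stand-ins for the SDK's event types.
+type attribute struct{ Key, Value string }
+type event struct {
+	Type       string
+	Attributes []attribute
+}
+
+// eventManager accumulates events emitted during message processing, the way
+// the context's EventManager does.
+type eventManager struct{ events []event }
+
+func (em *eventManager) EmitEvent(e event) { em.events = append(em.events, e) }
+
+func main() {
+	em := &eventManager{}
+	// A msgServer method would emit events like this before returning.
+	em.EmitEvent(event{
+		Type: "transfer",
+		Attributes: []attribute{
+			{"sender", "cosmos1abc..."}, // hypothetical addresses
+			{"amount", "100stake"},
+		},
+	})
+	for _, e := range em.events {
+		fmt.Println(e.Type, len(e.Attributes))
+	}
+}
+```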
+
+The invoked `msgServer` method returns a `proto.Message` response and an `error`. These return values are then wrapped into an `*sdk.Result` or an `error` using `sdk.WrapServiceResult(ctx context.Context, res proto.Message, err error)`:
+
+```go reference
+https://github.com/cosmos/cosmos-sdk/blob/v0.50.0-alpha.0/baseapp/msg_service_router.go#L160
+```
+
+This method takes care of marshaling the `res` parameter to protobuf and attaching any events on the `ctx.EventManager()` to the `sdk.Result`.
+
+```protobuf reference
+https://github.com/cosmos/cosmos-sdk/blob/v0.50.0-alpha.0/proto/cosmos/base/abci/v1beta1/abci.proto#L93-L113
+```
+
+This diagram shows a typical structure of a Protobuf `Msg` service, and how the message propagates through the module.
+
+
+
+## Telemetry
+
+New [telemetry metrics](../../learn/advanced/09-telemetry.md) can be created from `msgServer` methods when handling messages.
+
+This is an example from the `x/auth/vesting` module:
+
+```go reference
+https://github.com/cosmos/cosmos-sdk/blob/v0.50.0-alpha.0/x/auth/vesting/msg_server.go#L76-L88
+```
diff --git a/copy-of-sdk-docs/build/building-modules/04-query-services.md b/copy-of-sdk-docs/build/building-modules/04-query-services.md
new file mode 100644
index 00000000..a787a0c2
--- /dev/null
+++ b/copy-of-sdk-docs/build/building-modules/04-query-services.md
@@ -0,0 +1,57 @@
+---
+sidebar_position: 1
+---
+
+# Query Services
+
+:::note Synopsis
+A Protobuf Query service processes [`queries`](./02-messages-and-queries.md#queries). Query services are specific to the module in which they are defined, and only process `queries` defined within said module. They are called from `BaseApp`'s [`Query` method](../../learn/advanced/00-baseapp.md#query).
+:::
+
+:::note Pre-requisite Readings
+
+* [Module Manager](./01-module-manager.md)
+* [Messages and Queries](./02-messages-and-queries.md)
+
+:::
+
+## Implementation of a module query service
+
+### gRPC Service
+
+When defining a Protobuf `Query` service, a `QueryServer` interface is generated for each module with all the service methods:
+
+```go
+type QueryServer interface {
+ QueryBalance(context.Context, *QueryBalanceParams) (*types.Coin, error)
+ QueryAllBalances(context.Context, *QueryAllBalancesParams) (*QueryAllBalancesResponse, error)
+}
+```
+
+These custom query methods should be implemented by a module's keeper, typically in `./keeper/grpc_query.go`. The first parameter of these methods is a generic `context.Context`. Therefore, the Cosmos SDK provides the function `sdk.UnwrapSDKContext` to retrieve the `sdk.Context` from the provided `context.Context`.
+
+Here's an example implementation for the bank module:
+
+```go reference
+https://github.com/cosmos/cosmos-sdk/blob/v0.50.0-alpha.0/x/bank/keeper/grpc_query.go
+```
+
+### Calling queries from the State Machine
+
+The Cosmos SDK v0.47 introduces a new `cosmos.query.v1.module_query_safe` Protobuf annotation which is used to state that a query is safe to be called from within the state machine, for example:
+
+* a Keeper's query function can be called from another module's Keeper,
+* ADR-033 intermodule query calls,
+* CosmWasm contracts can also directly interact with these queries.
+
+If the `module_query_safe` annotation is set to `true`, it means:
+
+* The query is deterministic: given a block height it will return the same response upon multiple calls, and doesn't introduce any state-machine breaking changes across SDK patch versions.
+* Gas consumption never fluctuates across calls and across patch versions.
+
+If you are a module developer and want to use `module_query_safe` annotation for your own query, you have to ensure the following things:
+
+* the query is deterministic and won't introduce state-machine-breaking changes without coordinated upgrades
+* it has its gas tracked, to avoid the attack vector where no gas is accounted for
+ on potentially high-computation queries.
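+
+The two requirements above can be sketched with a stdlib-only toy: a query that consumes a fixed amount of gas and returns the same result for the same input. The `gasMeter` and `queryBalance` names are illustrative stand-ins, not SDK functions:
+
+```go
+package main
+
+import "fmt"
+
+// gasMeter is a toy meter; a module_query_safe query must consume a
+// predictable amount of gas so consumption never fluctuates across calls.
+type gasMeter struct{ consumed uint64 }
+
+func (g *gasMeter) ConsumeGas(amount uint64, desc string) { g.consumed += amount }
+
+// queryBalance is a deterministic, gas-tracked query sketch: same input,
+// same output, fixed gas cost.
+func queryBalance(gm *gasMeter, balances map[string]uint64, addr string) uint64 {
+	gm.ConsumeGas(1000, "balance read")
+	return balances[addr]
+}
+
+func main() {
+	balances := map[string]uint64{"cosmos1abc...": 42} // hypothetical address
+	gm := &gasMeter{}
+	fmt.Println(queryBalance(gm, balances, "cosmos1abc..."), gm.consumed)
+}
+```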
diff --git a/copy-of-sdk-docs/build/building-modules/05-protobuf-annotations.md b/copy-of-sdk-docs/build/building-modules/05-protobuf-annotations.md
new file mode 100644
index 00000000..942b9a89
--- /dev/null
+++ b/copy-of-sdk-docs/build/building-modules/05-protobuf-annotations.md
@@ -0,0 +1,131 @@
+---
+sidebar_position: 1
+---
+
+# ProtocolBuffer Annotations
+
+This document explains the various protobuf scalars that have been added to make working with protobuf easier for Cosmos SDK application developers.
+
+## Signer
+
+Signer specifies which field should be used to determine the signer of a message for the Cosmos SDK. Clients can also use this field to infer which field identifies the signer of a message.
+
+Read more about the signer field [here](./02-messages-and-queries.md).
+
+```protobuf reference
+https://github.com/cosmos/cosmos-sdk/blob/e6848d99b55a65d014375b295bdd7f9641aac95e/proto/cosmos/bank/v1beta1/tx.proto#L40
+```
+
+```proto
+option (cosmos.msg.v1.signer) = "from_address";
+```
+
+## Scalar
+
+The scalar type defines a way for clients to understand how to construct protobuf messages according to what is expected by the module and the SDK.
+
+```proto
+(cosmos_proto.scalar) = "cosmos.AddressString"
+```
+
+Example of account address string scalar:
+
+```proto reference
+https://github.com/cosmos/cosmos-sdk/blob/e6848d99b55a65d014375b295bdd7f9641aac95e/proto/cosmos/bank/v1beta1/tx.proto#L46
+```
+
+Example of validator address string scalar:
+
+```proto reference
+https://github.com/cosmos/cosmos-sdk/blob/e8f28bf5db18b8d6b7e0d94b542ce4cf48fed9d6/proto/cosmos/distribution/v1beta1/query.proto#L87
+```
+
+Example of Decimals scalar:
+
+```proto reference
+https://github.com/cosmos/cosmos-sdk/blob/e8f28bf5db18b8d6b7e0d94b542ce4cf48fed9d6/proto/cosmos/distribution/v1beta1/distribution.proto#L26
+```
+
+Example of Int scalar:
+
+```proto reference
+https://github.com/cosmos/cosmos-sdk/blob/e8f28bf5db18b8d6b7e0d94b542ce4cf48fed9d6/proto/cosmos/gov/v1/gov.proto#L137
+```
+
+There are a few options for what can be provided as a scalar: `cosmos.AddressString`, `cosmos.ValidatorAddressString`, `cosmos.ConsensusAddressString`, `cosmos.Int`, `cosmos.Dec`.
+
+## Implements_Interface
+
+`Implements_Interface` is used to provide information to client tooling like [telescope](https://github.com/cosmology-tech/telescope) on how to encode and decode protobuf messages.
+
+```proto
+option (cosmos_proto.implements_interface) = "cosmos.auth.v1beta1.AccountI";
+```
+
+## Method,Field,Message Added In
+
+`method_added_in`, `field_added_in` and `message_added_in` are annotations that denote to clients the version in which a method, field or message was added. This is useful when new methods or fields are added in later versions, so that clients know what they can call.
+
+The annotation should be worded as follows:
+
+```proto
+option (cosmos_proto.method_added_in) = "cosmos-sdk v0.50.1";
+option (cosmos_proto.method_added_in) = "x/epochs v1.0.0";
+option (cosmos_proto.method_added_in) = "simapp v24.0.0";
+```
+
+## Amino
+
+The amino codec was removed in `v0.50+`; this means there is no longer a need to register a `legacyAminoCodec`. To replace the amino codec, Amino protobuf annotations are used to provide information to the amino codec on how to encode and decode protobuf messages.
+
+Amino annotations are only used for backwards compatibility with amino. New modules are not required to use amino annotations.
+
+The below annotations are used to provide information to the amino codec on how to encode and decode protobuf messages in a backwards compatible manner.
+
+### Name
+
+Name specifies the amino name that would show up for the user in order for them to see which message they are signing.
+
+```proto
+option (amino.name) = "cosmos-sdk/BaseAccount";
+```
+
+```proto reference
+https://github.com/cosmos/cosmos-sdk/blob/e8f28bf5db18b8d6b7e0d94b542ce4cf48fed9d6/proto/cosmos/bank/v1beta1/tx.proto#L41
+```
+
+### Field_Name
+
+Field name specifies the amino name that would show up for the user in order for them to see which field they are signing.
+
+```proto
+uint64 height = 1 [(amino.field_name) = "public_key"];
+```
+
+```proto reference
+https://github.com/cosmos/cosmos-sdk/blob/e8f28bf5db18b8d6b7e0d94b542ce4cf48fed9d6/proto/cosmos/distribution/v1beta1/distribution.proto#L166
+```
+
+### Dont_OmitEmpty
+
+`dont_omitempty` specifies that the field should not be omitted when encoding to amino.
+
+```proto
+repeated cosmos.base.v1beta1.Coin amount = 3 [(amino.dont_omitempty) = true];
+```
+
+```proto reference
+https://github.com/cosmos/cosmos-sdk/blob/e8f28bf5db18b8d6b7e0d94b542ce4cf48fed9d6/proto/cosmos/bank/v1beta1/bank.proto#L56
+```
+
+### Encoding
+
+Encoding instructs the amino json marshaler how to encode certain fields that may differ from the standard encoding behaviour. The most common example of this is how `repeated cosmos.base.v1beta1.Coin` is encoded when using the amino json encoding format. The `legacy_coins` option tells the json marshaler [how to encode a null slice](https://github.com/cosmos/cosmos-sdk/blob/e8f28bf5db18b8d6b7e0d94b542ce4cf48fed9d6/x/tx/signing/aminojson/json_marshal.go#L65) of `cosmos.base.v1beta1.Coin`.
+
+```proto
+(amino.encoding) = "legacy_coins",
+```
+
+```proto reference
+https://github.com/cosmos/cosmos-sdk/blob/e8f28bf5db18b8d6b7e0d94b542ce4cf48fed9d6/proto/cosmos/bank/v1beta1/genesis.proto#L23
+```
diff --git a/copy-of-sdk-docs/build/building-modules/06-beginblock-endblock.md b/copy-of-sdk-docs/build/building-modules/06-beginblock-endblock.md
new file mode 100644
index 00000000..93e07a54
--- /dev/null
+++ b/copy-of-sdk-docs/build/building-modules/06-beginblock-endblock.md
@@ -0,0 +1,47 @@
+---
+sidebar_position: 1
+---
+
+# BeginBlocker and EndBlocker
+
+:::note Synopsis
+`BeginBlocker` and `EndBlocker` are optional methods module developers can implement in their module. They will be triggered at the beginning and at the end of each block respectively, when the [`BeginBlock`](../../learn/advanced/00-baseapp.md#beginblock) and [`EndBlock`](../../learn/advanced/00-baseapp.md#endblock) ABCI messages are received from the underlying consensus engine.
+:::
+
+:::note Pre-requisite Readings
+
+* [Module Manager](./01-module-manager.md)
+
+:::
+
+## BeginBlocker and EndBlocker
+
+`BeginBlocker` and `EndBlocker` are a way for module developers to add automatic execution of logic to their module. This is a powerful tool that should be used carefully, as complex automatic functions can slow down or even halt the chain.
+
+In 0.47.0, Prepare and Process Proposal were added, allowing app developers to do arbitrary work at those phases, but they do not influence the work that will be done in `BeginBlock`. If an application requires `BeginBlock` to execute prior to any other work being done, this is not possible today (0.50.0).
+
+When needed, `BeginBlocker` and `EndBlocker` are implemented as part of the [`HasBeginBlocker`, `HasABCIEndBlock` and `HasEndBlocker` interfaces](./01-module-manager.md#appmodule). This means either can be left out if not required. The `BeginBlock` and `EndBlock` methods of the interface implemented in `module.go` generally defer to the `BeginBlocker` and `EndBlocker` methods respectively, which are usually implemented in `abci.go`.
+
+The actual implementation of `BeginBlocker` and `EndBlocker` in `abci.go` is very similar to that of a [`Msg` service](./03-msg-services.md):
+
+* They generally use the [`keeper`](./06-keeper.md) and [`ctx`](../../learn/advanced/02-context.md) to retrieve information about the latest state.
+* If needed, they use the `keeper` and `ctx` to trigger state-transitions.
+* If needed, they can emit [`events`](../../learn/advanced/08-events.md) via the `ctx`'s `EventManager`.
+
+A specific type of `EndBlocker` is available to return validator updates to the underlying consensus engine in the form of an [`[]abci.ValidatorUpdate`](https://docs.cometbft.com/v0.37/spec/abci/abci++_methods#endblock). This is the preferred way to implement custom validator changes.
+
+It is possible for developers to define the order of execution between the `BeginBlocker`/`EndBlocker` functions of each of their application's modules via the module's manager `SetOrderBeginBlocker`/`SetOrderEndBlocker` methods. For more on the module manager, click [here](./01-module-manager.md#manager).
+
+See an example implementation of `BeginBlocker` from the `distribution` module:
+
+```go reference
+https://github.com/cosmos/cosmos-sdk/blob/v0.50.0-alpha.0/x/distribution/abci.go#L14-L38
+```
+
+and an example implementation of `EndBlocker` from the `staking` module:
+
+```go reference
+https://github.com/cosmos/cosmos-sdk/blob/v0.50.0-alpha.0/x/staking/keeper/abci.go#L22-L27
+```
+
+
diff --git a/copy-of-sdk-docs/build/building-modules/06-keeper.md b/copy-of-sdk-docs/build/building-modules/06-keeper.md
new file mode 100644
index 00000000..f942750e
--- /dev/null
+++ b/copy-of-sdk-docs/build/building-modules/06-keeper.md
@@ -0,0 +1,92 @@
+---
+sidebar_position: 1
+---
+
+# Keepers
+
+:::note Synopsis
+`Keeper`s refer to a Cosmos SDK abstraction whose role is to manage access to the subset of the state defined by various modules. `Keeper`s are module-specific, i.e. the subset of state defined by a module can only be accessed by a `keeper` defined in said module. If a module needs to access the subset of state defined by another module, a reference to the second module's internal `keeper` needs to be passed to the first one. This is done in `app.go` during the instantiation of module keepers.
+:::
+
+:::note Pre-requisite Readings
+
+* [Introduction to Cosmos SDK Modules](./00-intro.md)
+
+:::
+
+## Motivation
+
+The Cosmos SDK is a framework that makes it easy for developers to build complex decentralized applications from scratch, mainly by composing modules together. As the ecosystem of open-source modules for the Cosmos SDK expands, it will become increasingly likely that some of these modules contain vulnerabilities, as a result of the negligence or malice of their developers.
+
+The Cosmos SDK adopts an [object-capabilities-based approach](../../learn/advanced/10-ocap.md) to help developers better protect their application from unwanted inter-module interactions, and `keeper`s are at the core of this approach. A `keeper` can be considered quite literally to be the gatekeeper of a module's store(s). Each store (typically an [`IAVL` Store](../../learn/advanced/04-store.md#iavl-store)) defined within a module comes with a `storeKey`, which grants unlimited access to it. The module's `keeper` holds this `storeKey` (which should otherwise remain unexposed), and defines [methods](#implementing-methods) for reading and writing to the store(s).
+
+The core idea behind the object-capabilities approach is to only reveal what is necessary to get the work done. In practice, this means that instead of handling permissions of modules through access-control lists, module `keeper`s are passed a reference to the specific instance of the other modules' `keeper`s that they need to access (this is done in the [application's constructor function](../../learn/beginner/00-app-anatomy.md#constructor-function)). As a consequence, a module can only interact with the subset of state defined in another module via the methods exposed by the instance of the other module's `keeper`. This is a great way for developers to control the interactions that their own module can have with modules developed by external developers.
+
+## Type Definition
+
+`keeper`s are generally implemented in a `/keeper/keeper.go` file located in the module's folder. By convention, the type `keeper` of a module is simply named `Keeper` and usually follows the following structure:
+
+```go
+type Keeper struct {
+ // External keepers, if any
+
+ // Store key(s)
+
+ // codec
+
+ // authority
+}
+```
+
+For example, here is the type definition of the `keeper` from the `staking` module:
+
+```go reference
+https://github.com/cosmos/cosmos-sdk/blob/v0.50.0-alpha.0/x/staking/keeper/keeper.go#L23-L31
+```
+
+Let us go through the different parameters:
+
+* An expected `keeper` is a `keeper` external to a module that is required by the internal `keeper` of said module. External `keeper`s are listed in the internal `keeper`'s type definition as interfaces. These interfaces are themselves defined in an `expected_keepers.go` file in the root of the module's folder. In this context, interfaces are used to reduce the number of dependencies, as well as to facilitate the maintenance of the module itself.
+* `storeKey`s grant access to the store(s) of the [multistore](../../learn/advanced/04-store.md) managed by the module. They should always remain unexposed to external modules.
+* `cdc` is the [codec](../../learn/advanced/05-encoding.md) used to marshal and unmarshal structs to/from `[]byte`. The `cdc` can be any of `codec.BinaryCodec`, `codec.JSONCodec` or `codec.Codec` based on your requirements. It can be either a proto or amino codec as long as it implements these interfaces.
+* The authority listed is a module account or user account that has the right to change module-level parameters. Previously this was handled by the `params` module, which has since been deprecated.
+
+Of course, it is possible to define different types of internal `keeper`s for the same module (e.g. a read-only `keeper`). Each type of `keeper` comes with its own constructor function, which is called from the [application's constructor function](../../learn/beginner/00-app-anatomy.md). This is where `keeper`s are instantiated, and where developers make sure to pass correct instances of modules' `keeper`s to other modules that require them.
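
As a rough sketch of this wiring, the snippet below models the constructor pattern with hypothetical types (`BankKeeper`, `bank`, and `NewKeeper` are illustrative stand-ins, not SDK APIs): the module declares a narrow expected-keeper interface and receives a concrete keeper instance through its constructor, exactly as done in the application's constructor function.

```go
package main

import "fmt"

// BankKeeper is a narrow "expected keeper" interface: the module
// declares only the methods it actually needs from another module.
type BankKeeper interface {
	Balance(addr string) int64
}

// bank is a stand-in for another module's full keeper.
type bank struct{ balances map[string]int64 }

func (b bank) Balance(addr string) int64 { return b.balances[addr] }

// Keeper holds an unexposed store key, a reference to the expected
// keeper, and an authority allowed to change module parameters.
type Keeper struct {
	storeKey   string // should never be exposed outside the module
	bankKeeper BankKeeper
	authority  string
}

// NewKeeper is the constructor called from the application's
// constructor function, where keeper instances are wired together.
func NewKeeper(storeKey string, bk BankKeeper, authority string) Keeper {
	return Keeper{storeKey: storeKey, bankKeeper: bk, authority: authority}
}

func main() {
	bk := bank{balances: map[string]int64{"alice": 42}}
	k := NewKeeper("mymodule", bk, "gov")
	// The module interacts with the other module only through the interface.
	fmt.Println(k.bankKeeper.Balance("alice"))
}
```

Because `Keeper` only sees the `BankKeeper` interface, it can never reach the other module's store key or unexported state — the object-capabilities idea in miniature.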
+
+## Implementing Methods
+
+`Keeper`s primarily expose getter and setter methods for the store(s) managed by their module. These methods should remain as simple as possible and strictly be limited to getting or setting the requested value, as validity checks should have already been performed by the [`Msg` server](./03-msg-services.md) when `keeper`s' methods are called.
+
+Typically, a *getter* method will have the following signature
+
+```go
+func (k Keeper) Get(ctx context.Context, key string) returnType
+```
+
+and the method will go through the following steps:
+
+1. Retrieve the appropriate store from the `ctx` using the `storeKey`. This is done through the `KVStore(storeKey sdk.StoreKey)` method of the `ctx`. Then it's preferred to use the `prefix.Store` to access only the desired limited subset of the store for convenience and safety.
+2. If it exists, get the `[]byte` value stored at location `[]byte(key)` using the `Get(key []byte)` method of the store.
+3. Unmarshal the retrieved value from `[]byte` to `returnType` using the codec `cdc`. Return the value.
+
+Similarly, a *setter* method will have the following signature
+
+```go
+func (k Keeper) Set(ctx context.Context, key string, value valueType)
+```
+
+and the method will go through the following steps:
+
+1. Retrieve the appropriate store from the `ctx` using the `storeKey`. This is done through the `KVStore(storeKey sdk.StoreKey)` method of the `ctx`. It's preferred to use the `prefix.Store` to access only the desired limited subset of the store for convenience and safety.
+2. Marshal `value` to `[]byte` using the codec `cdc`.
+3. Set the encoded value in the store at location `key` using the `Set(key []byte, value []byte)` method of the store.
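
The getter and setter flows above can be modeled without the SDK. The sketch below substitutes a plain map for the `KVStore`, a string prefix for the `storeKey`/`prefix.Store`, and `encoding/json` for the codec `cdc`; all names are illustrative, not SDK APIs.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// store models a raw KVStore mapping keys to []byte values.
type store map[string][]byte

// Params is an example value type kept in the module's store.
type Params struct {
	MaxValidators int `json:"max_validators"`
}

type Keeper struct {
	storeKey string // prefix standing in for the module's store key
	kv       store
}

// Set follows the setter steps: marshal the value, then write the
// encoded bytes under the prefixed key.
func (k Keeper) Set(key string, value Params) error {
	bz, err := json.Marshal(value) // step 2: marshal to []byte
	if err != nil {
		return err
	}
	k.kv[k.storeKey+"/"+key] = bz // steps 1 & 3: prefixed write
	return nil
}

// Get follows the getter steps: read the raw bytes under the prefixed
// key, then unmarshal them back into the value type.
func (k Keeper) Get(key string) (Params, bool) {
	bz, ok := k.kv[k.storeKey+"/"+key] // steps 1 & 2: prefixed read
	if !ok {
		return Params{}, false
	}
	var p Params
	if err := json.Unmarshal(bz, &p); err != nil { // step 3: unmarshal
		return Params{}, false
	}
	return p, true
}

func main() {
	k := Keeper{storeKey: "staking", kv: store{}}
	_ = k.Set("params", Params{MaxValidators: 100})
	p, _ := k.Get("params")
	fmt.Println(p.MaxValidators)
}
```

Note how neither method performs business validation — that belongs in the `Msg` server before these methods are called.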
+
+For more, see an example of `keeper`'s [methods implementation from the `staking` module](https://github.com/cosmos/cosmos-sdk/blob/v0.50.0-alpha.0/x/staking/keeper/keeper.go).
+
+The [module `KVStore`](../../learn/advanced/04-store.md#kvstore-and-commitkvstore-interfaces) also provides an `Iterator()` method which returns an `Iterator` object to iterate over a domain of keys.
+
+This is an example from the `auth` module to iterate accounts:
+
+```go reference
+https://github.com/cosmos/cosmos-sdk/blob/v0.50.0-alpha.0/x/auth/keeper/account.go
+```
diff --git a/copy-of-sdk-docs/build/building-modules/07-invariants.md b/copy-of-sdk-docs/build/building-modules/07-invariants.md
new file mode 100644
index 00000000..018796f7
--- /dev/null
+++ b/copy-of-sdk-docs/build/building-modules/07-invariants.md
@@ -0,0 +1,90 @@
+---
+sidebar_position: 1
+---
+
+# Invariants
+
+:::note Synopsis
+An invariant is a property of the application that should always be true. In the context of the Cosmos SDK, an `Invariant` is a function that checks for a particular invariant. These functions are useful to detect bugs early on and act upon them to limit their potential consequences (e.g. by halting the chain). They are also useful in the development process of the application to detect bugs via simulations.
+:::
+
+:::note Pre-requisite Readings
+
+* [Keepers](./06-keeper.md)
+
+:::
+
+## Implementing `Invariant`s
+
+An `Invariant` is a function that checks for a particular invariant within a module. Module `Invariant`s must follow the `Invariant` type:
+
+```go reference
+https://github.com/cosmos/cosmos-sdk/blob/v0.50.0-alpha.0/types/invariant.go#L9
+```
+
+The `string` return value is the invariant message, which can be used when printing logs, and the `bool` return value is the actual result of the invariant check.
+
+In practice, each module implements `Invariant`s in a `keeper/invariants.go` file within the module's folder. The standard is to implement one `Invariant` function per logical grouping of invariants with the following model:
+
+```go
+// Example of an Invariant that checks balance-related invariants
+
+func BalanceInvariants(k Keeper) sdk.Invariant {
+    return func(ctx sdk.Context) (string, bool) {
+        // Implement checks for balance-related invariants here and
+        // return the invariant message and whether it is broken.
+        return "", false
+    }
+}
+```
+
+Additionally, module developers should generally implement an `AllInvariants` function that runs all the `Invariant` functions of the module:
+
+```go
+// AllInvariants runs all invariants of the module.
+// In this example, the module implements two Invariants: BalanceInvariants and DepositsInvariants
+
+func AllInvariants(k Keeper) sdk.Invariant {
+    return func(ctx sdk.Context) (string, bool) {
+        res, stop := BalanceInvariants(k)(ctx)
+        if stop {
+            return res, stop
+        }
+
+        return DepositsInvariants(k)(ctx)
+    }
+}
+```
+
+Finally, module developers need to implement the `RegisterInvariants` method as part of the [`AppModule` interface](./01-module-manager.md#appmodule). Indeed, the `RegisterInvariants` method of the module, implemented in the `module/module.go` file, typically only defers the call to a `RegisterInvariants` method implemented in the `keeper/invariants.go` file. The `RegisterInvariants` method registers a route for each `Invariant` function in the [`InvariantRegistry`](#invariant-registry):
+
+```go reference
+https://github.com/cosmos/cosmos-sdk/blob/v0.50.0-alpha.0/x/staking/keeper/invariants.go#L12-L22
+```
+
+For more, see an example of [`Invariant`s implementation from the `staking` module](https://github.com/cosmos/cosmos-sdk/blob/v0.50.0-alpha.0/x/staking/keeper/invariants.go).
+
+## Invariant Registry
+
+The `InvariantRegistry` is a registry where the `Invariant`s of all the modules of an application are registered. There is only one `InvariantRegistry` per **application**, meaning module developers need not implement their own `InvariantRegistry` when building a module. **All module developers need to do is register their modules' invariants in the `InvariantRegistry`, as explained in the section above.** The rest of this section gives more information on the `InvariantRegistry` itself, and does not contain anything directly relevant to module developers.
+
+At its core, the `InvariantRegistry` is defined in the Cosmos SDK as an interface:
+
+```go reference
+https://github.com/cosmos/cosmos-sdk/blob/v0.50.0-alpha.0/types/invariant.go#L14-L17
+```
+
+Typically, this interface is implemented in the `keeper` of a specific module. The most used implementation of an `InvariantRegistry` can be found in the `crisis` module:
+
+```go reference
+https://github.com/cosmos/cosmos-sdk/blob/v0.50.0-alpha.0/x/crisis/keeper/keeper.go#L48-L50
+```
+
+The `InvariantRegistry` is therefore typically instantiated by instantiating the `keeper` of the `crisis` module in the [application's constructor function](../../learn/beginner/00-app-anatomy.md#constructor-function).
+
+`Invariant`s can be checked manually via [`message`s](./02-messages-and-queries.md), but most often they are checked automatically at the end of each block. Here is an example from the `crisis` module:
+
+```go reference
+https://github.com/cosmos/cosmos-sdk/blob/v0.50.0-alpha.0/x/crisis/abci.go#L13-L23
+```
+
+In both cases, if one of the `Invariant`s returns false, the `InvariantRegistry` can trigger special logic (e.g. have the application panic and print the `Invariant`'s message in the log).
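
As a mental model of the registry, here is a self-contained sketch (all types and names are hypothetical, not the SDK's API): routes are registered per module and invariant name, and an assert-all helper reports the first broken invariant's message.

```go
package main

import "fmt"

// Invariant mirrors the SDK signature's shape: it returns a message
// and whether the invariant is broken.
type Invariant func() (string, bool)

// Registry is a minimal stand-in for the application's InvariantRegistry.
type Registry struct {
	routes map[string]Invariant // key: "module/name"
}

func NewRegistry() *Registry {
	return &Registry{routes: map[string]Invariant{}}
}

// RegisterRoute registers one invariant under a module and route name,
// as each module's RegisterInvariants method would.
func (r *Registry) RegisterRoute(module, name string, inv Invariant) {
	r.routes[module+"/"+name] = inv
}

// AssertInvariants runs every registered invariant; on the first broken
// one it returns its message (a real app could panic and halt here).
func (r *Registry) AssertInvariants() (string, bool) {
	for route, inv := range r.routes {
		if msg, broken := inv(); broken {
			return fmt.Sprintf("invariant broken: %s: %s", route, msg), true
		}
	}
	return "", false
}

func main() {
	reg := NewRegistry()
	supply := int64(-1) // deliberately violates the invariant below
	reg.RegisterRoute("bank", "nonnegative-supply", func() (string, bool) {
		return fmt.Sprintf("supply is %d", supply), supply < 0
	})
	msg, broken := reg.AssertInvariants()
	fmt.Println(broken, msg)
}
```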
diff --git a/copy-of-sdk-docs/build/building-modules/08-genesis.md b/copy-of-sdk-docs/build/building-modules/08-genesis.md
new file mode 100644
index 00000000..28ff911b
--- /dev/null
+++ b/copy-of-sdk-docs/build/building-modules/08-genesis.md
@@ -0,0 +1,78 @@
+---
+sidebar_position: 1
+---
+
+# Module Genesis
+
+:::note Synopsis
+Modules generally handle a subset of the state and, as such, they need to define the related subset of the genesis file as well as methods to initialize, verify and export it.
+:::
+
+:::note Pre-requisite Readings
+
+* [Module Manager](./01-module-manager.md)
+* [Keepers](./06-keeper.md)
+
+:::
+
+## Type Definition
+
+The subset of the genesis state defined by a given module is generally defined in a `genesis.proto` file ([more info](../../learn/advanced/05-encoding.md#gogoproto) on how to define protobuf messages). The struct defining the module's subset of the genesis state is usually called `GenesisState` and contains all the module-related values that need to be initialized during the genesis process.
+
+See an example of `GenesisState` protobuf message definition from the `auth` module:
+
+```protobuf reference
+https://github.com/cosmos/cosmos-sdk/blob/v0.50.0-alpha.0/proto/cosmos/auth/v1beta1/genesis.proto
+```
+
+Next we present the main genesis-related methods that need to be implemented by module developers in order for their module to be used in Cosmos SDK applications.
+
+### `DefaultGenesis`
+
+The `DefaultGenesis()` method is a simple function that calls the constructor function for `GenesisState` with the default value for each parameter. See an example from the `auth` module:
+
+```go reference
+https://github.com/cosmos/cosmos-sdk/blob/v0.50.0-alpha.0/x/auth/module.go#L63-L67
+```
+
+### `ValidateGenesis`
+
+The `ValidateGenesis(data GenesisState)` method is called to verify that the provided `genesisState` is correct. It should perform validity checks on each of the parameters listed in `GenesisState`. See an example from the `auth` module:
+
+```go reference
+https://github.com/cosmos/cosmos-sdk/blob/v0.50.0-alpha.0/x/auth/types/genesis.go#L62-L75
+```
+
+## Other Genesis Methods
+
+Other than the methods related directly to `GenesisState`, module developers are expected to implement two other methods as part of the [`AppModuleGenesis` interface](./01-module-manager.md#appmodulegenesis) (only if the module needs to initialize a subset of state in genesis). These methods are [`InitGenesis`](#initgenesis) and [`ExportGenesis`](#exportgenesis).
+
+### `InitGenesis`
+
+The `InitGenesis` method is executed during [`InitChain`](../../learn/advanced/00-baseapp.md#initchain) when the application is first started. Given a `GenesisState`, it initializes the subset of the state managed by the module by using the module's [`keeper`](./06-keeper.md) setter function on each parameter within the `GenesisState`.
+
+The [module manager](./01-module-manager.md#manager) of the application is responsible for calling the `InitGenesis` method of each of the application's modules in order. This order is set by the application developer via the manager's `SetOrderInitGenesis` method, which is called in the [application's constructor function](../../learn/beginner/00-app-anatomy.md#constructor-function).
+
+See an example of `InitGenesis` from the `auth` module:
+
+```go reference
+https://github.com/cosmos/cosmos-sdk/blob/v0.50.0-alpha.0/x/auth/keeper/genesis.go#L8-L35
+```
+
+### `ExportGenesis`
+
+The `ExportGenesis` method is executed whenever an export of the state is made. It takes the latest known version of the subset of the state managed by the module and creates a new `GenesisState` out of it. This is mainly used when the chain needs to be upgraded via a hard fork.
+
+See an example of `ExportGenesis` from the `auth` module:
+
+```go reference
+https://github.com/cosmos/cosmos-sdk/blob/v0.50.0-alpha.0/x/auth/keeper/genesis.go#L37-L49
+```
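
To make the `InitGenesis`/`ExportGenesis` round trip concrete, here is a minimal framework-free sketch with hypothetical types: `InitGenesis` writes each `GenesisState` field through the keeper, and `ExportGenesis` reads the latest state back into a fresh `GenesisState`.

```go
package main

import "fmt"

// GenesisState is a stand-in for a module's generated GenesisState type.
type GenesisState struct {
	Params   string
	Accounts []string
}

// Keeper holds the module state initialized from genesis.
type Keeper struct {
	params   string
	accounts []string
}

// InitGenesis initializes module state from a GenesisState,
// using the keeper's setters for each field.
func (k *Keeper) InitGenesis(gs GenesisState) {
	k.params = gs.Params
	k.accounts = append([]string(nil), gs.Accounts...)
}

// ExportGenesis reads the latest module state back into a new
// GenesisState, e.g. when upgrading the chain via a hard fork.
func (k *Keeper) ExportGenesis() GenesisState {
	return GenesisState{
		Params:   k.params,
		Accounts: append([]string(nil), k.accounts...),
	}
}

func main() {
	gs := GenesisState{Params: "default", Accounts: []string{"alice", "bob"}}
	var k Keeper
	k.InitGenesis(gs)
	out := k.ExportGenesis()
	// A correct pair of methods round-trips the genesis state.
	fmt.Println(out.Params, len(out.Accounts))
}
```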
+
+### GenesisTxHandler
+
+`GenesisTxHandler` is a way for modules to submit state transitions prior to the first block. This is used by `x/genutil` to submit the genesis transactions for the validators to be added to staking.
+
+```go reference
+https://github.com/cosmos/cosmos-sdk/blob/v0.50.0-alpha.0/core/genesis/txhandler.go#L3-L6
+```
diff --git a/copy-of-sdk-docs/build/building-modules/09-module-interfaces.md b/copy-of-sdk-docs/build/building-modules/09-module-interfaces.md
new file mode 100644
index 00000000..63a939d0
--- /dev/null
+++ b/copy-of-sdk-docs/build/building-modules/09-module-interfaces.md
@@ -0,0 +1,165 @@
+---
+sidebar_position: 1
+---
+
+# Module Interfaces
+
+:::note Synopsis
+This document details how to build CLI and REST interfaces for a module. Examples from various Cosmos SDK modules are included.
+:::
+
+:::note Pre-requisite Readings
+
+* [Building Modules Intro](./00-intro.md)
+
+:::
+
+## CLI
+
+One of the main interfaces for an application is the [command-line interface](../../learn/advanced/07-cli.md). This entrypoint adds commands from the application's modules enabling end-users to create [**messages**](./02-messages-and-queries.md#messages) wrapped in transactions and [**queries**](./02-messages-and-queries.md#queries). The CLI files are typically found in the module's `./client/cli` folder.
+
+### Transaction Commands
+
+In order to create messages that trigger state changes, end-users must create [transactions](../../learn/advanced/01-transactions.md) that wrap and deliver the messages. A transaction command creates a transaction that includes one or more messages.
+
+Transaction commands typically have their own `tx.go` file that lives within the module's `./client/cli` folder. The commands are specified in getter functions and the name of the function should include the name of the command.
+
+Here is an example from the `x/bank` module:
+
+```go reference
+https://github.com/cosmos/cosmos-sdk/blob/v0.50.0-alpha.0/x/bank/client/cli/tx.go#L37-L76
+```
+
+In the example, `NewSendTxCmd()` creates and returns the transaction command for a transaction that wraps and delivers `MsgSend`. `MsgSend` is the message used to send tokens from one account to another.
+
+In general, the getter function does the following:
+
+* **Constructs the command:** Read the [Cobra Documentation](https://pkg.go.dev/github.com/spf13/cobra) for more detailed information on how to create commands.
+ * **Use:** Specifies the format of the user input required to invoke the command. In the example above, `send` is the name of the transaction command and `[from_key_or_address]`, `[to_address]`, and `[amount]` are the arguments.
+ * **Args:** The number of arguments the user provides. In this case, there are exactly three: `[from_key_or_address]`, `[to_address]`, and `[amount]`.
+ * **Short and Long:** Descriptions for the command. A `Short` description is expected. A `Long` description can be used to provide additional information that is displayed when a user adds the `--help` flag.
+ * **RunE:** Defines a function that can return an error. This is the function that is called when the command is executed. This function encapsulates all of the logic to create a new transaction.
+ * The function typically starts by getting the `clientCtx`, which can be done with `client.GetClientTxContext(cmd)`. The `clientCtx` contains information relevant to transaction handling, including information about the user. In this example, the `clientCtx` is used to retrieve the address of the sender by calling `clientCtx.GetFromAddress()`.
+ * If applicable, the command's arguments are parsed. In this example, the arguments `[to_address]` and `[amount]` are both parsed.
+ * A [message](./02-messages-and-queries.md) is created using the parsed arguments and information from the `clientCtx`. The constructor function of the message type is called directly. In this case, `types.NewMsgSend(fromAddr, toAddr, amount)`. It's good practice to call the necessary [message validation methods](../building-modules/03-msg-services.md#Validation) before broadcasting the message, if possible.
+ * Depending on what the user wants, the transaction is either generated offline or signed and broadcast to the preconfigured node using `tx.GenerateOrBroadcastTxCLI(clientCtx, flags, msg)`.
+* **Adds transaction flags:** All transaction commands must add a set of transaction [flags](#flags). The transaction flags are used to collect additional information from the user (e.g. the amount of fees the user is willing to pay). The transaction flags are added to the constructed command using `AddTxFlagsToCmd(cmd)`.
+* **Returns the command:** Finally, the transaction command is returned.
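
Stripped of the Cobra scaffolding, the core of such a `RunE` function can be sketched as follows. `MsgSend` and `runSend` are hypothetical stand-ins for the real message type and command body: enforce the argument count, parse the arguments, construct the message, and validate it before broadcasting.

```go
package main

import (
	"errors"
	"fmt"
	"strconv"
)

// MsgSend is a stand-in for the bank module's MsgSend.
type MsgSend struct {
	From, To string
	Amount   int64
}

// ValidateBasic mirrors the message validation a command should run
// before broadcasting.
func (m MsgSend) ValidateBasic() error {
	if m.From == "" || m.To == "" {
		return errors.New("addresses must not be empty")
	}
	if m.Amount <= 0 {
		return errors.New("amount must be positive")
	}
	return nil
}

// runSend models the RunE body: enforce the argument count, parse the
// arguments, construct the message, and validate it.
func runSend(args []string) (MsgSend, error) {
	if len(args) != 3 { // Args: exactly three, as in the bank send command
		return MsgSend{}, fmt.Errorf("expected 3 args, got %d", len(args))
	}
	amount, err := strconv.ParseInt(args[2], 10, 64)
	if err != nil {
		return MsgSend{}, fmt.Errorf("invalid amount %q: %w", args[2], err)
	}
	msg := MsgSend{From: args[0], To: args[1], Amount: amount}
	if err := msg.ValidateBasic(); err != nil {
		return MsgSend{}, err
	}
	// In a real command, the message would now be handed to
	// tx.GenerateOrBroadcastTxCLI with the collected flags.
	return msg, nil
}

func main() {
	msg, err := runSend([]string{"alice", "bob", "10"})
	fmt.Println(msg.Amount, err)
}
```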
+
+Each module can implement `NewTxCmd()`, which aggregates all of the transaction commands of the module. Here is an example from the `x/bank` module:
+
+```go reference
+https://github.com/cosmos/cosmos-sdk/blob/v0.50.0-alpha.0/x/bank/client/cli/tx.go#L20-L35
+```
+
+Each module can then also implement a `GetTxCmd()` method that simply returns `NewTxCmd()`. This allows the root command to easily aggregate all of the transaction commands for each module. Here is an example:
+
+```go reference
+https://github.com/cosmos/cosmos-sdk/blob/v0.50.0-alpha.0/x/bank/module.go#L84-L86
+```
+
+### Query Commands
+
+:::warning
+This section is being rewritten. Refer to [AutoCLI](https://docs.cosmos.network/main/core/autocli) while this section is being updated.
+:::
+
+
+
+## gRPC
+
+[gRPC](https://grpc.io/) is a Remote Procedure Call (RPC) framework. RPC is the preferred way for external clients like wallets and exchanges to interact with a blockchain.
+
+In addition to providing an ABCI query pathway, the Cosmos SDK provides a gRPC proxy server that routes gRPC query requests to ABCI query requests.
+
+In order to do that, modules must implement `RegisterGRPCGatewayRoutes(clientCtx client.Context, mux *runtime.ServeMux)` on `AppModuleBasic` to wire the client gRPC requests to the correct handler inside the module.
+
+Here's an example from the `x/auth` module:
+
+```go reference
+https://github.com/cosmos/cosmos-sdk/blob/v0.50.0-alpha.0/x/auth/module.go#L71-L76
+```
+
+## gRPC-gateway REST
+
+Applications need to support web services that use HTTP requests (e.g. a web wallet like [Keplr](https://keplr.app)). [grpc-gateway](https://github.com/grpc-ecosystem/grpc-gateway) translates REST calls into gRPC calls, which might be useful for clients that do not use gRPC.
+
+Modules that want to expose REST queries should add `google.api.http` annotations to their `rpc` methods, such as in the example below from the `x/auth` module:
+
+```protobuf reference
+https://github.com/cosmos/cosmos-sdk/blob/v0.50.0-alpha.0/proto/cosmos/auth/v1beta1/query.proto#L14-L89
+```
+
+gRPC gateway is started in-process along with the application and CometBFT. It can be enabled or disabled by setting gRPC Configuration `enable` in [`app.toml`](../../user/run-node/01-run-node.md#configuring-the-node-using-apptoml-and-configtoml).
+
+The Cosmos SDK provides a command for generating [Swagger](https://swagger.io/) documentation (`protoc-gen-swagger`). Setting `swagger` in [`app.toml`](../../user/run-node/01-run-node.md#configuring-the-node-using-apptoml-and-configtoml) defines if swagger documentation should be automatically registered.
diff --git a/copy-of-sdk-docs/build/building-modules/11-structure.md b/copy-of-sdk-docs/build/building-modules/11-structure.md
new file mode 100644
index 00000000..a36b9a49
--- /dev/null
+++ b/copy-of-sdk-docs/build/building-modules/11-structure.md
@@ -0,0 +1,95 @@
+---
+sidebar_position: 1
+---
+
+# Recommended Folder Structure
+
+:::note Synopsis
+This document outlines the recommended structure of Cosmos SDK modules. These ideas are meant to be applied as suggestions. Application developers are encouraged to improve upon and contribute to module structure and development design.
+:::
+
+## Structure
+
+A typical Cosmos SDK module can be structured as follows:
+
+```shell
+proto
+└── {project_name}
+ └── {module_name}
+ └── {proto_version}
+ ├── {module_name}.proto
+ ├── event.proto
+ ├── genesis.proto
+ ├── query.proto
+ └── tx.proto
+```
+
+* `{module_name}.proto`: The module's common message type definitions.
+* `event.proto`: The module's message type definitions related to events.
+* `genesis.proto`: The module's message type definitions related to genesis state.
+* `query.proto`: The module's Query service and related message type definitions.
+* `tx.proto`: The module's Msg service and related message type definitions.
+
+```shell
+x/{module_name}
+├── client
+│ ├── cli
+│ │ ├── query.go
+│ │ └── tx.go
+│ └── testutil
+│ ├── cli_test.go
+│ └── suite.go
+├── exported
+│ └── exported.go
+├── keeper
+│ ├── genesis.go
+│ ├── grpc_query.go
+│ ├── hooks.go
+│ ├── invariants.go
+│ ├── keeper.go
+│ ├── keys.go
+│ ├── msg_server.go
+│ └── querier.go
+├── module
+│   ├── module.go
+│   ├── abci.go
+│   └── autocli.go
+├── simulation
+│ ├── decoder.go
+│ ├── genesis.go
+│ ├── operations.go
+│ └── params.go
+├── {module_name}.pb.go
+├── codec.go
+├── errors.go
+├── events.go
+├── events.pb.go
+├── expected_keepers.go
+├── genesis.go
+├── genesis.pb.go
+├── keys.go
+├── msgs.go
+├── params.go
+├── query.pb.go
+├── tx.pb.go
+└── README.md
+```
+
+* `client/`: The module's CLI client functionality implementation and the module's CLI testing suite.
+* `exported/`: The module's exported types - typically interface types. If a module relies on keepers from another module, it is expected to receive the keepers as interface contracts through the `expected_keepers.go` file (see below) in order to avoid a direct dependency on the module implementing the keepers. However, these interface contracts can define methods that operate on and/or return types that are specific to the module that is implementing the keepers and this is where `exported/` comes into play. The interface types that are defined in `exported/` use canonical types, allowing for the module to receive the keepers as interface contracts through the `expected_keepers.go` file. This pattern allows for code to remain DRY and also alleviates import cycle chaos.
+* `keeper/`: The module's `Keeper` and `MsgServer` implementation.
+* `module/`: The module's `AppModule` and `AppModuleBasic` implementation.
+ * `abci.go`: The module's `BeginBlocker` and `EndBlocker` implementations (this file is only required if `BeginBlocker` and/or `EndBlocker` need to be defined).
+ * `autocli.go`: The module [autocli](https://docs.cosmos.network/main/core/autocli) options.
+* `simulation/`: The module's [simulation](./14-simulator.md) package defines functions used by the blockchain simulator application (`simapp`).
+* `README.md`: The module's specification documents outlining important concepts, state storage structure, and message and event type definitions. Learn more about how to write module specs in the [spec guidelines](../../../spec/SPEC_MODULE.md).
+* The root directory includes type definitions for messages, events, and genesis state, including the type definitions generated by Protocol Buffers.
+ * `codec.go`: The module's registry methods for interface types.
+ * `errors.go`: The module's sentinel errors.
+ * `events.go`: The module's event types and constructors.
+ * `expected_keepers.go`: The module's [expected keeper](./06-keeper.md#type-definition) interfaces.
+ * `genesis.go`: The module's genesis state methods and helper functions.
+ * `keys.go`: The module's store keys and associated helper functions.
+ * `msgs.go`: The module's message type definitions and associated methods.
+ * `params.go`: The module's parameter type definitions and associated methods.
+ * `*.pb.go`: The module's type definitions generated by Protocol Buffers (as defined in the respective `*.proto` files above).
diff --git a/copy-of-sdk-docs/build/building-modules/12-errors.md b/copy-of-sdk-docs/build/building-modules/12-errors.md
new file mode 100644
index 00000000..214ab70e
--- /dev/null
+++ b/copy-of-sdk-docs/build/building-modules/12-errors.md
@@ -0,0 +1,56 @@
+---
+sidebar_position: 1
+---
+
+# Errors
+
+:::note Synopsis
+This document outlines the recommended usage and APIs for error handling in Cosmos SDK modules.
+:::
+
+Modules are encouraged to define and register their own errors to provide better
+context on failed message or handler execution. Typically, these errors should be
+common or general errors which can be further wrapped to provide additional specific
+execution context.
+
+## Registration
+
+Modules should define and register their custom errors in `x/{module}/errors.go`.
+Registration of errors is handled via the [`errors` package](https://github.com/cosmos/cosmos-sdk/blob/main/errors/errors.go).
+
+Example:
+
+```go reference
+https://github.com/cosmos/cosmos-sdk/blob/v0.50.0-alpha.0/x/distribution/types/errors.go
+```
+
+Each custom module error must provide the codespace, which is typically the module name
+(e.g. "distribution") and is unique per module, and a uint32 code. Together, the codespace and code
+provide a globally unique Cosmos SDK error. Typically, the code is monotonically increasing but does not
+necessarily have to be. The only restrictions on error codes are the following:
+
+* Must be greater than one, as a code value of one is reserved for internal errors.
+* Must be unique within the module.
+
+Note, the Cosmos SDK provides a core set of *common* errors. These errors are defined in [`types/errors/errors.go`](https://github.com/cosmos/cosmos-sdk/blob/main/types/errors/errors.go).
+
+## Wrapping
+
+The custom module errors can be returned as their concrete type as they already fulfill the `error`
+interface. However, module errors can be wrapped to provide further context and meaning to failed
+execution.
+
+Example:
+
+```go reference
+https://github.com/cosmos/cosmos-sdk/blob/v0.50.0-alpha.0/x/bank/keeper/keeper.go#L141-L182
+```
+
+Regardless if an error is wrapped or not, the Cosmos SDK's `errors` package provides a function to determine if
+an error is of a particular kind via `Is`.
+
+## ABCI
+
+If a module error is registered, the Cosmos SDK `errors` package allows ABCI information to be extracted
+through the `ABCIInfo` function. The package also provides `ResponseCheckTx` and `ResponseDeliverTx` as
+auxiliary functions to automatically get `CheckTx` and `DeliverTx` responses from an error.
diff --git a/copy-of-sdk-docs/build/building-modules/13-upgrade.md b/copy-of-sdk-docs/build/building-modules/13-upgrade.md
new file mode 100644
index 00000000..20c02e9f
--- /dev/null
+++ b/copy-of-sdk-docs/build/building-modules/13-upgrade.md
@@ -0,0 +1,63 @@
+---
+sidebar_position: 1
+---
+
+# Upgrading Modules
+
+:::note Synopsis
+[In-Place Store Migrations](../../learn/advanced/15-upgrade.md) allow your modules to upgrade to new versions that include breaking changes. This document outlines how to build modules to take advantage of this functionality.
+:::
+
+:::note Pre-requisite Readings
+
+* [In-Place Store Migration](../../learn/advanced/15-upgrade.md)
+
+:::
+
+## Consensus Version
+
+Successful upgrades of existing modules require each `AppModule` to implement the function `ConsensusVersion() uint64`.
+
+* The versions must be hard-coded by the module developer.
+* The initial version **must** be set to 1.
+
+Consensus versions serve as state-breaking versions of app modules and must be incremented when the module introduces breaking changes.
+
+## Registering Migrations
+
+To register the functionality that takes place during a module upgrade, you must register which migrations you want to take place.
+
+Migration registration takes place in the `Configurator` using the `RegisterMigration` method. The `AppModule` reference to the configurator is in the `RegisterServices` method.
+
+You can register one or more migrations. If you register more than one migration script, list the migrations in increasing order and ensure there are enough migrations that lead to the desired consensus version. For example, to migrate to version 3 of a module, register separate migrations for version 1 and version 2 as shown in the following example:
+
+```go
+func (am AppModule) RegisterServices(cfg module.Configurator) {
+ // --snip--
+ cfg.RegisterMigration(types.ModuleName, 1, func(ctx sdk.Context) error {
+ // Perform in-place store migrations from ConsensusVersion 1 to 2.
+ })
+ cfg.RegisterMigration(types.ModuleName, 2, func(ctx sdk.Context) error {
+ // Perform in-place store migrations from ConsensusVersion 2 to 3.
+ })
+}
+```
+
+Since these migrations are functions that need access to a Keeper's store, use a wrapper around the keepers called `Migrator` as shown in this example:
+
+```go reference
+https://github.com/cosmos/cosmos-sdk/blob/v0.50.0-alpha.0/x/bank/keeper/migrations.go
+```
+
+## Writing Migration Scripts
+
+To define the functionality that takes place during an upgrade, write a migration script and place the functions in a `migrations/` directory. For example, to write migration scripts for the bank module, place the functions in `x/bank/migrations/`. Use the recommended naming convention for these functions. For example, `v2bank` is the script that migrates the package `x/bank/migrations/v2`:
+
+```go
+// Migrating bank module from version 1 to 2
+func (m Migrator) Migrate1to2(ctx sdk.Context) error {
+ return v2bank.MigrateStore(ctx, m.keeper.storeKey) // v2bank is package `x/bank/migrations/v2`.
+}
+```
+
+To see example code of changes that were implemented in a migration of balance keys, check out [migrateBalanceKeys](https://github.com/cosmos/cosmos-sdk/blob/v0.50.0-alpha.0/x/bank/migrations/v2/store.go#L55-L76). For context, this code introduced migrations of the bank store that updated addresses to be prefixed by their length in bytes as outlined in [ADR-028](../../../architecture/adr-028-public-key-addresses.md).
diff --git a/copy-of-sdk-docs/build/building-modules/14-simulator.md b/copy-of-sdk-docs/build/building-modules/14-simulator.md
new file mode 100644
index 00000000..a9763715
--- /dev/null
+++ b/copy-of-sdk-docs/build/building-modules/14-simulator.md
@@ -0,0 +1,177 @@
+---
+sidebar_position: 1
+---
+
+# Module Simulation
+
+:::note Pre-requisite Readings
+
+* [Cosmos Blockchain Simulator](../../learn/advanced/12-simulation.md)
+
+:::
+
+## Synopsis
+
+This document guides developers on integrating their custom modules with the Cosmos SDK `Simulations`.
+Simulations are useful for testing edge cases in module implementations.
+
+* [Simulation Package](#simulation-package)
+* [Simulation App Module](#simulation-app-module)
+* [SimsX](#simsx)
+ * [Example Implementations](#example-implementations)
+* [Store decoders](#store-decoders)
+* [Randomized genesis](#randomized-genesis)
+* [Random weighted operations](#random-weighted-operations)
+ * [Using Simsx](#using-simsx)
+* [App Simulator manager](#app-simulator-manager)
+* [Running Simulations](#running-simulations)
+
+
+
+## Simulation Package
+
+The Cosmos SDK suggests organizing your simulation-related code in an `x/{moduleName}/simulation` package.
+
+## Simulation App Module
+
+To integrate with the Cosmos SDK `SimulationManager`, app modules must implement the `AppModuleSimulation` interface.
+
+```go reference
+https://github.com/cosmos/cosmos-sdk/blob/3c6deab626648e47de752c33dac5d06af83e3ee3/types/module/simulation.go#L16-L27
+```
+
+See an example implementation of these methods from `x/distribution` [here](https://github.com/cosmos/cosmos-sdk/blob/b55b9e14fb792cc8075effb373be9d26327fddea/x/distribution/module.go#L170-L194).
+
+## SimsX
+
+Cosmos SDK v0.53.0 introduced a new package, `simsx`, providing improved DevX for writing simulation code.
+
+It exposes the following extension interfaces that modules may implement to integrate with the new `simsx` runner.
+
+```go reference
+https://github.com/cosmos/cosmos-sdk/blob/main/testutil/simsx/runner.go#L223-L234
+```
+
+These methods allow constructing randomized messages and/or proposal messages.
+
+:::tip
+Note that modules should **not** implement both `HasWeightedOperationsX` and `HasWeightedOperationsXWithProposals`.
+See the runner code [here](https://github.com/cosmos/cosmos-sdk/blob/main/testutil/simsx/runner.go#L330-L339) for details.
+
+If the module does **not** have message handlers or governance proposal handlers, these interface methods do **not** need to be implemented.
+:::
+
+### Example Implementations
+
+* `HasWeightedOperationsXWithProposals`: [x/gov](https://github.com/cosmos/cosmos-sdk/blob/main/x/gov/module.go#L242-L261)
+* `HasWeightedOperationsX`: [x/bank](https://github.com/cosmos/cosmos-sdk/blob/main/x/bank/module.go#L199-L203)
+* `HasProposalMsgsX`: [x/bank](https://github.com/cosmos/cosmos-sdk/blob/main/x/bank/module.go#L194-L197)
+
+## Store decoders
+
+Registering the store decoders is required for the `AppImportExport` simulation. This allows
+for the key-value pairs from the stores to be decoded to their corresponding types.
+In particular, it matches the key to a concrete type and then unmarshals the value from the `KVPair` to the type provided.
+
+Modules using [collections](https://github.com/cosmos/cosmos-sdk/blob/main/collections/README.md) can use the `NewStoreDecoderFuncFromCollectionsSchema` function that builds the decoder for you:
+
+```go reference
+https://github.com/cosmos/cosmos-sdk/blob/main/x/bank/module.go#L181-L184
+```
+
+Modules not using collections must manually build the store decoder.
+See the implementation [here](https://github.com/cosmos/cosmos-sdk/blob/main/x/distribution/simulation/decoder.go) from the distribution module for an example.
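+A simplified sketch of a manual decoder follows. It switches on hypothetical key prefixes with stand-in types; a real decoder works on the SDK's `kv.Pair` and uses the module codec to unmarshal values:
+
+```go
+package main
+
+import (
+	"bytes"
+	"encoding/binary"
+	"fmt"
+)
+
+// kvPair is a stand-in for the SDK's kv.Pair.
+type kvPair struct{ Key, Value []byte }
+
+// Hypothetical store prefixes, mirroring how a module keys its state.
+var (
+	supplyPrefix   = []byte{0x00}
+	balancesPrefix = []byte{0x02}
+)
+
+// decodeStore matches two KVPairs by key prefix and renders their values,
+// as a simulation store decoder does when diffing exported stores.
+func decodeStore(a, b kvPair) string {
+	switch {
+	case bytes.HasPrefix(a.Key, balancesPrefix):
+		amtA := binary.BigEndian.Uint64(a.Value)
+		amtB := binary.BigEndian.Uint64(b.Value)
+		return fmt.Sprintf("balance A: %d, balance B: %d", amtA, amtB)
+	case bytes.HasPrefix(a.Key, supplyPrefix):
+		return fmt.Sprintf("supply A: %s, supply B: %s", a.Value, b.Value)
+	default:
+		panic(fmt.Sprintf("invalid key prefix %X", a.Key))
+	}
+}
+
+func main() {
+	enc := func(n uint64) []byte {
+		buf := make([]byte, 8)
+		binary.BigEndian.PutUint64(buf, n)
+		return buf
+	}
+	a := kvPair{Key: append(balancesPrefix, []byte("addr")...), Value: enc(100)}
+	b := kvPair{Key: append(balancesPrefix, []byte("addr")...), Value: enc(250)}
+	fmt.Println(decodeStore(a, b)) // balance A: 100, balance B: 250
+}
+```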
+
+## Randomized genesis
+
+The simulator tests different scenarios and values for genesis parameters.
+App modules must implement a `GenerateGenesisState` method to generate the initial random `GenesisState` from a given seed.
+
+```go reference
+https://github.com/cosmos/cosmos-sdk/blob/main/types/module/simulation.go#L20
+```
+
+See an example from `x/auth` [here](https://github.com/cosmos/cosmos-sdk/blob/main/x/auth/module.go#L169-L172).
+
+Once the module's genesis parameters are generated randomly (or with the key and
+values defined in a `params` file), they are marshaled to JSON format and added
+to the app genesis JSON for the simulation.
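+A minimal sketch of that flow, with a hypothetical genesis type and field (not an SDK definition): derive the parameter from the simulation seed, then marshal it to JSON for the app genesis.
+
+```go
+package main
+
+import (
+	"encoding/json"
+	"fmt"
+	"math/rand"
+)
+
+// genesisState is a hypothetical module genesis, for illustration only.
+type genesisState struct {
+	MaxMetadataLen uint64 `json:"max_metadata_len"`
+}
+
+// generateGenesisState derives the parameter from the simulation seed so
+// that every run with the same seed produces the same genesis.
+func generateGenesisState(seed int64) []byte {
+	r := rand.New(rand.NewSource(seed))
+	gs := genesisState{MaxMetadataLen: uint64(r.Intn(256) + 1)}
+	bz, err := json.Marshal(gs)
+	if err != nil {
+		panic(err)
+	}
+	return bz
+}
+
+func main() {
+	// The JSON blob would be spliced into the app genesis under the module name.
+	fmt.Println(string(generateGenesisState(42)))
+}
+```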
+
+## Random weighted operations
+
+Operations are one of the crucial parts of the Cosmos SDK simulation. They are the transactions
+(`Msg`) that are simulated with random field values. The sender of the operation
+is also assigned randomly.
+
+Operations in the simulation are simulated using the full [transaction cycle](../../learn/advanced/01-transactions.md) of an
+`ABCI` application that exposes the `BaseApp`.
+
+### Using Simsx
+
+Simsx introduces the ability to define a `MsgFactory` for each of a module's messages.
+
+These factories are registered in `WeightedOperationsX` and/or `ProposalMsgsX`.
+
+```go reference
+https://github.com/cosmos/cosmos-sdk/blob/main/x/distribution/module.go#L196-L206
+```
+
+Note that the name passed in to `weights.Get` must match the name of the operation set in the `WeightedOperations`.
+
+For example, if the module contains an operation `op_weight_msg_set_withdraw_address`, the name passed to `weights.Get` should be `msg_set_withdraw_address`.
+
+See the `x/distribution` module for an example of implementing message factories [here](https://github.com/cosmos/cosmos-sdk/blob/main/x/distribution/simulation/msg_factory.go).
+
+## App Simulator manager
+
+The next step is setting up the `SimulationManager` at the app level. This
+is required for the simulation test files in the next step.
+
+```go
+type CoolApp struct {
+	...
+	sm *module.SimulationManager
+}
+```
+
+Within the constructor of the application, construct the simulation manager using the modules from `ModuleManager` and call the `RegisterStoreDecoders` method.
+
+```go reference
+https://github.com/cosmos/cosmos-sdk/blob/main/simapp/app.go#L650-L660
+```
+
+Note that you may override some modules.
+This is useful if the existing module configuration in the `ModuleManager` should be different in the `SimulationManager`.
+
+Finally, the application should expose the `SimulationManager` via the following method defined in the `Runtime` interface:
+
+```go
+// SimulationManager implements the SimulationApp interface
+func (app *SimApp) SimulationManager() *module.SimulationManager {
+	return app.sm
+}
+```
+
+## Running Simulations
+
+To run the simulation, use the `simsx` runner.
+
+Call the following function from the `simsx` package to begin simulating with a default seed:
+
+```go reference
+https://github.com/cosmos/cosmos-sdk/blob/main/testutil/simsx/runner.go#L69-L88
+```
+
+If a custom seed is desired, tests should use `RunWithSeed`:
+
+```go reference
+https://github.com/cosmos/cosmos-sdk/blob/b55b9e14fb792cc8075effb373be9d26327fddea/testutil/simsx/runner.go#L151-L168
+```
+
+These functions should be called in tests (e.g., `app_test.go`, `app_sim_test.go`).
+
+Example:
+
+```go reference
+https://github.com/cosmos/cosmos-sdk/blob/main/simapp/sim_test.go#L53-L65
+```
diff --git a/copy-of-sdk-docs/build/building-modules/15-depinject.md b/copy-of-sdk-docs/build/building-modules/15-depinject.md
new file mode 100644
index 00000000..64aa3711
--- /dev/null
+++ b/copy-of-sdk-docs/build/building-modules/15-depinject.md
@@ -0,0 +1,124 @@
+---
+sidebar_position: 1
+---
+
+# Modules depinject-ready
+
+:::note Pre-requisite Readings
+
+* [Depinject Documentation](../packages/01-depinject.md)
+
+:::
+
+[`depinject`](../packages/01-depinject.md) is used to wire any module in `app.go`.
+All core modules are already configured to support dependency injection.
+
+To work with `depinject`, a module must define its configuration and requirements so that `depinject` can provide the right dependencies.
+
+In brief, as a module developer, the following steps are required:
+
+1. Define the module configuration using Protobuf
+2. Define the module dependencies in `x/{moduleName}/module.go`
+
+A chain developer can then use the module by following these two steps:
+
+1. Configure the module in `app_config.go` or `app.yaml`
+2. Inject the module in `app.go`
+
+## Module Configuration
+
+The module's available configuration is defined in a Protobuf file, located at `{moduleName}/module/v1/module.proto`.
+
+```protobuf reference
+https://github.com/cosmos/cosmos-sdk/blob/v0.50.0-alpha.0/proto/cosmos/group/module/v1/module.proto
+```
+
+* `go_import` must point to the Go package of the custom module.
+* Message fields define the module configuration.
+ That configuration can be set in the `app_config.go` / `app.yaml` file for a chain developer to configure the module.
+ Taking `group` as an example, a chain developer is able to decide, thanks to `uint64 max_metadata_len`, what the maximum metadata length allowed for a group proposal is.
+
+ ```go reference
+ https://github.com/cosmos/cosmos-sdk/blob/v0.50.0-alpha.0/simapp/app_config.go#L228-L234
+ ```
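+As an illustration, a hypothetical custom module (here called `blog`; all names in this fragment are invented for the example) would declare its configuration in the same shape:
+
+```protobuf
+syntax = "proto3";
+
+package blog.module.v1;
+
+import "cosmos/app/v1alpha1/module.proto";
+
+// Module is the config object of the blog module.
+message Module {
+  option (cosmos.app.v1alpha1.module) = {
+    go_import: "github.com/example/blog"
+  };
+
+  // max_post_len defines the maximum length of a blog post.
+  uint64 max_post_len = 1;
+}
+```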
+
+That message is generated using [`pulsar`](https://github.com/cosmos/cosmos-sdk/blob/v0.50.0-alpha.0/scripts/protocgen-pulsar.sh) (by running `make proto-gen`).
+In the case of the `group` module, this file is generated here: https://github.com/cosmos/cosmos-sdk/blob/v0.50.0-alpha.0/api/cosmos/group/module/v1/module.pulsar.go.
+
+The part that is relevant for the module configuration is:
+
+```go reference
+https://github.com/cosmos/cosmos-sdk/blob/v0.50.0-alpha.0/api/cosmos/group/module/v1/module.pulsar.go#L515-L527
+```
+
+:::note
+Pulsar is optional. The official [`protoc-gen-go`](https://developers.google.com/protocol-buffers/docs/reference/go-generated) can be used as well.
+:::
+
+## Dependency Definition
+
+Once the configuration proto is defined, the module's `module.go` must define what dependencies are required by the module.
+The boilerplate is similar for all modules.
+
+:::warning
+All methods, structs and their fields must be public for `depinject`.
+:::
+
+1. Import the module configuration generated package:
+
+ ```go reference
+ https://github.com/cosmos/cosmos-sdk/blob/v0.50.0-alpha.0/x/group/module/module.go#L12-L14
+ ```
+
+ Define an `init()` function for defining the `providers` of the module configuration:
+ This registers the module configuration message and the wiring of the module.
+
+ ```go reference
+ https://github.com/cosmos/cosmos-sdk/blob/v0.50.0-alpha.0/x/group/module/module.go#L194-L199
+ ```
+
+2. Ensure that the module implements the `appmodule.AppModule` interface:
+
+ ```go reference
+ https://github.com/cosmos/cosmos-sdk/blob/v0.47.0/x/group/module/module.go#L58-L64
+ ```
+
+3. Define a struct that inherits `depinject.In` and define the module inputs (i.e. module dependencies):
+ * `depinject` provides the right dependencies to the module.
+ * `depinject` also checks that all dependencies are provided.
+
+ :::tip
+ For making a dependency optional, add the `optional:"true"` struct tag.
+ :::
+
+ ```go reference
+ https://github.com/cosmos/cosmos-sdk/blob/v0.50.0-alpha.0/x/group/module/module.go#L201-L211
+ ```
+
+4. Define the module outputs with a public struct that inherits `depinject.Out`:
+ The module outputs are the dependencies that the module provides to other modules. It is usually the module itself and its keeper.
+
+ ```go reference
+ https://github.com/cosmos/cosmos-sdk/blob/v0.50.0-alpha.0/x/group/module/module.go#L213-L218
+ ```
+
+5. Create a function named `ProvideModule` (as registered in the `init()` from step 1) and use the inputs to instantiate the module outputs.
+
+ ```go reference
+ https://github.com/cosmos/cosmos-sdk/blob/v0.50.0-alpha.0/x/group/module/module.go#L220-L235
+ ```
+
+The `ProvideModule` function should return an instance of `cosmossdk.io/core/appmodule.AppModule` which implements
+one or more app module extension interfaces for initializing the module.
+
+Following is the complete app wiring configuration for `group`:
+
+```go reference
+https://github.com/cosmos/cosmos-sdk/blob/v0.50.0-alpha.0/x/group/module/module.go#L194-L235
+```
+
+The module is now ready to be used with `depinject` by a chain developer.
+
+## Integrate in an application
+
+The App Wiring is done in `app_config.go` / `app.yaml` and `app_di.go` and is explained in detail in the [overview of `app_di.go`](../building-apps/01-app-go-di.md).
diff --git a/copy-of-sdk-docs/build/building-modules/16-testing.md b/copy-of-sdk-docs/build/building-modules/16-testing.md
new file mode 100644
index 00000000..43a79b8e
--- /dev/null
+++ b/copy-of-sdk-docs/build/building-modules/16-testing.md
@@ -0,0 +1,124 @@
+---
+sidebar_position: 1
+---
+
+# Testing
+
+The Cosmos SDK contains different types of [tests](https://martinfowler.com/articles/practical-test-pyramid.html).
+These tests have different goals and are used at different stages of the development cycle.
+As a general rule, we advise using tests at all stages of the development cycle.
+As a chain developer, you should test your application and modules in a similar way to the SDK.
+
+The rationale behind testing can be found in [ADR-59](https://docs.cosmos.network/main/build/architecture/adr-059-test-scopes).
+
+## Unit Tests
+
+Unit tests are the lowest test category of the [test pyramid](https://martinfowler.com/articles/practical-test-pyramid.html).
+All packages and modules should have unit test coverage. Modules should have their dependencies mocked: this means mocking keepers.
+
+The SDK uses `mockgen` to generate mocks for keepers:
+
+```go reference
+https://github.com/cosmos/cosmos-sdk/blob/v0.53.0/scripts/mockgen.sh#L3-L6
+```
+
+You can read more about mockgen [here](https://go.uber.org/mock).
+
+### Example
+
+As an example, we will walkthrough the [keeper tests](https://github.com/cosmos/cosmos-sdk/blob/v0.53.0/x/gov/keeper/keeper_test.go) of the `x/gov` module.
+
+The `x/gov` module has a `Keeper` type, which requires a few external dependencies (i.e., imports outside `x/gov`) to work properly.
+
+```go reference
+https://github.com/cosmos/cosmos-sdk/blob/v0.53.0/x/gov/keeper/keeper.go#L22-L24
+```
+
+In order to only test `x/gov`, we mock the [expected keepers](https://docs.cosmos.network/v0.46/building-modules/keeper.html#type-definition) and instantiate the `Keeper` with the mocked dependencies. Note that we may need to configure the mocked dependencies to return the expected values:
+
+```go reference
+https://github.com/cosmos/cosmos-sdk/blob/v0.53.0/x/gov/keeper/common_test.go#L68-L82
+```
+
+This allows us to test the `x/gov` module without having to import other modules.
+
+```go reference
+https://github.com/cosmos/cosmos-sdk/blob/v0.53.0/x/gov/keeper/keeper_test.go#L3-L42
+```
+
+We can then create unit tests using the newly created `Keeper` instance.
+
+```go reference
+https://github.com/cosmos/cosmos-sdk/blob/v0.53.0/x/gov/keeper/keeper_test.go#L83-L107
+```
+
+## Integration Tests
+
+Integration tests are at the second level of the [test pyramid](https://martinfowler.com/articles/practical-test-pyramid.html).
+In the SDK, we locate our integration tests under [`/tests/integrations`](https://github.com/cosmos/cosmos-sdk/tree/main/tests/integration).
+
+The goal of these integration tests is to test how a component interacts with other dependencies. Compared to unit tests, integration tests do not mock dependencies. Instead, they use the direct dependencies of the component. This differs as well from end-to-end tests, which test the component with a full application.
+
+Integration tests interact with the tested module via the defined `Msg` and `Query` services. The result of the test can be verified by checking the state of the application, by checking the emitted events or the response. It is advised to combine two of these methods to verify the result of the test.
+
+The SDK provides small helpers for quickly setting up integration tests. These helpers can be found in the `testutil/integration` package.
+
+### Example
+
+```go reference
+https://github.com/cosmos/cosmos-sdk/blob/a2f73a7dd37bea0ab303792c55fa1e4e1db3b898/testutil/integration/example_test.go#L30-L116
+```
+
+## Deterministic and Regression tests
+
+Tests are written for queries in the Cosmos SDK which have `module_query_safe` Protobuf annotation.
+
+Each query is tested using 2 methods:
+
+* Use property-based testing with the [`rapid`](https://pkg.go.dev/pgregory.net/rapid@v0.5.3) library. The property that is tested is that the query response and gas consumption are the same upon 1000 query calls.
+* Regression tests are written with hardcoded responses and gas, and verify they don't change upon 1000 calls and between SDK patch versions.
+
+Here's an example of regression tests:
+
+```go reference
+https://github.com/cosmos/cosmos-sdk/blob/v0.53.0/tests/integration/bank/keeper/deterministic_test.go#L143-L160
+```
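+The determinism property itself can be sketched without the `rapid` library or a real keeper: call the query repeatedly and assert that the response and gas consumption never change. The query handler and gas figure below are stand-ins, not SDK values.
+
+```go
+package main
+
+import "fmt"
+
+// queryBalance is a stand-in for a module_query_safe query handler; it
+// must return identical results and gas for identical inputs.
+func queryBalance(addr string) (resp string, gasUsed uint64) {
+	return fmt.Sprintf("balance of %s: 100stake", addr), 2777
+}
+
+// assertDeterministic calls the query n times and checks that the
+// response and gas consumption never change.
+func assertDeterministic(n int) {
+	wantResp, wantGas := queryBalance("cosmos1example")
+	for i := 0; i < n; i++ {
+		resp, gas := queryBalance("cosmos1example")
+		if resp != wantResp || gas != wantGas {
+			panic(fmt.Sprintf("non-deterministic result at call %d", i))
+		}
+	}
+}
+
+func main() {
+	assertDeterministic(1000)
+	fmt.Println("1000 identical responses")
+}
+```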
+
+## Simulations
+
+Simulations also use a minimal application, built with [`depinject`](../packages/01-depinject.md):
+
+:::note
+You can also use the `AppConfig` `configurator` for creating an `AppConfig` [inline](https://github.com/cosmos/cosmos-sdk/blob/v0.53.0/x/slashing/app_test.go#L54-L62). There is no difference between the two approaches; use whichever you prefer.
+:::
+
+The following is an example of `x/gov` simulations:
+
+```go reference
+https://github.com/cosmos/cosmos-sdk/blob/v0.53.0/x/gov/simulation/operations_test.go#L415-L441
+```
+
+```go reference
+https://github.com/cosmos/cosmos-sdk/blob/v0.53.0/x/gov/simulation/operations_test.go#L94-L136
+```
+
+## End-to-end Tests
+
+End-to-end tests are at the top of the [test pyramid](https://martinfowler.com/articles/practical-test-pyramid.html).
+They must test the whole application flow, from the user perspective (for instance, CLI tests). They are located under [`/tests/e2e`](https://github.com/cosmos/cosmos-sdk/tree/main/tests/e2e).
+
+
+For these tests, the SDK uses `simapp`, but you should use your own application (`appd`).
+Here are some examples:
+
+* SDK E2E tests: .
+* Cosmos Hub E2E tests: .
+* Osmosis E2E tests: .
+
+:::warning
+The SDK is in the process of creating its E2E tests, as defined in [ADR-59](https://docs.cosmos.network/main/build/architecture/adr-059-test-scopes). This page will eventually be updated with better examples.
+:::
+
+## Learn More
+
+Learn more about testing scope in [ADR-59](https://docs.cosmos.network/main/build/architecture/adr-059-test-scopes).
diff --git a/copy-of-sdk-docs/build/building-modules/17-preblock.md b/copy-of-sdk-docs/build/building-modules/17-preblock.md
new file mode 100644
index 00000000..43722497
--- /dev/null
+++ b/copy-of-sdk-docs/build/building-modules/17-preblock.md
@@ -0,0 +1,32 @@
+---
+sidebar_position: 1
+---
+
+# PreBlocker
+
+:::note Synopsis
+`PreBlocker` is an optional method that module developers can implement in their modules. It is triggered before [`BeginBlock`](../../learn/advanced/00-baseapp.md#beginblock).
+:::
+
+:::note Pre-requisite Readings
+
+* [Module Manager](./01-module-manager.md)
+
+:::
+
+## PreBlocker
+
+There are two semantics around the new lifecycle method:
+
+* It runs before the `BeginBlocker` of all modules
+* It can modify consensus parameters in storage, and signal the caller through the return value.
+
+When it returns `ConsensusParamsChanged=true`, the caller must refresh the consensus parameters in the deliver context:
+
+```go
+app.finalizeBlockState.ctx = app.finalizeBlockState.ctx.WithConsensusParams(app.GetConsensusParams())
+```
+
+The new `ctx` must be passed to all other lifecycle methods.
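+A minimal sketch of this flow, using stand-in types rather than the SDK's (`ResponsePreBlock` and `module` here are simplified placeholders):
+
+```go
+package main
+
+import "fmt"
+
+// ResponsePreBlock and module are stand-ins for the SDK types, kept
+// minimal to show the control flow only.
+type ResponsePreBlock struct{ ConsensusParamsChanged bool }
+
+type module interface {
+	PreBlock() (ResponsePreBlock, error)
+}
+
+// upgradeModule mimics a module whose PreBlock modified consensus
+// parameters in storage and signals the caller about it.
+type upgradeModule struct{}
+
+func (upgradeModule) PreBlock() (ResponsePreBlock, error) {
+	return ResponsePreBlock{ConsensusParamsChanged: true}, nil
+}
+
+// runPreBlock runs every module's PreBlock before any BeginBlocker and
+// reports whether the consensus parameters must be refreshed.
+func runPreBlock(mods []module) (bool, error) {
+	changed := false
+	for _, m := range mods {
+		resp, err := m.PreBlock()
+		if err != nil {
+			return false, err
+		}
+		if resp.ConsensusParamsChanged {
+			changed = true
+		}
+	}
+	return changed, nil
+}
+
+func main() {
+	changed, _ := runPreBlock([]module{upgradeModule{}})
+	fmt.Println(changed) // true: the caller must refresh consensus params in its context
+}
+```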
+
+
diff --git a/copy-of-sdk-docs/build/building-modules/_category_.json b/copy-of-sdk-docs/build/building-modules/_category_.json
new file mode 100644
index 00000000..2d50f8b3
--- /dev/null
+++ b/copy-of-sdk-docs/build/building-modules/_category_.json
@@ -0,0 +1,5 @@
+{
+ "label": "Building Modules",
+ "position": 1,
+ "link": null
+}
\ No newline at end of file
diff --git a/copy-of-sdk-docs/build/building-modules/transaction_flow.svg b/copy-of-sdk-docs/build/building-modules/transaction_flow.svg
new file mode 100644
index 00000000..93bb940a
--- /dev/null
+++ b/copy-of-sdk-docs/build/building-modules/transaction_flow.svg
@@ -0,0 +1,48 @@
+
\ No newline at end of file
diff --git a/copy-of-sdk-docs/build/migrations/01-intro.md b/copy-of-sdk-docs/build/migrations/01-intro.md
new file mode 100644
index 00000000..e3146856
--- /dev/null
+++ b/copy-of-sdk-docs/build/migrations/01-intro.md
@@ -0,0 +1,15 @@
+---
+sidebar_position: 1
+---
+
+# SDK Migrations
+
+To smooth the update to the latest stable release, the SDK includes a CLI command for hard-fork migrations (under the ` genesis migrate` subcommand).
+Additionally, the SDK includes in-place migrations for its core modules. These in-place migrations are useful to migrate between major releases.
+
+* Hard-fork migrations are supported from the last major release to the current one.
+* [In-place module migrations](https://docs.cosmos.network/main/core/upgrade#overwriting-genesis-functions) are supported from the last two major releases to the current one.
+
+Migration from a version older than the last two major releases is not supported.
+
+When migrating from a previous version, refer to the [`UPGRADING.md`](../../../../UPGRADING.md) and the `CHANGELOG.md` of the version you are migrating to.
diff --git a/copy-of-sdk-docs/build/migrations/02-upgrade-reference.md b/copy-of-sdk-docs/build/migrations/02-upgrade-reference.md
new file mode 100644
index 00000000..aaefe25f
--- /dev/null
+++ b/copy-of-sdk-docs/build/migrations/02-upgrade-reference.md
@@ -0,0 +1,26 @@
+# Upgrade Reference
+
+This document provides a quick reference for upgrading the Cosmos SDK from `v0.53.x` to `v0.54.x`.
+
+Note: always read the **App Wiring Changes** section for more information on application wiring updates.
+
+🚨Upgrading to v0.54.x will require a **coordinated** chain upgrade.🚨
+
+## TLDR
+
+**The only major feature in Cosmos SDK v0.54.x is the upgrade from CometBFT v0.x.x to CometBFT v2.**
+
+For a full list of changes, see the [Changelog](https://github.com/cosmos/cosmos-sdk/blob/release/v0.54.x/CHANGELOG.md).
+
+### Deprecation of `TimeoutCommit`
+
+CometBFT v2 has deprecated `TimeoutCommit` in favor of a new field, `NextBlockDelay`, which is part of the
+`FinalizeBlockResponse` ABCI message returned to CometBFT by the SDK's `BaseApp`. More information from
+the CometBFT repo can be found [here](https://github.com/cometbft/cometbft/blob/88ef3d267de491db98a654be0af6d791e8724ed0/spec/abci/abci%2B%2B_methods.md?plain=1#L689).
+
+For SDK application developers and node runners, this means that the `timeout_commit` value in the `config.toml` file
+is still used if `NextBlockDelay` is 0 (its default value). When upgrading to Cosmos SDK v0.54.x, the existing
+`timeout_commit` values that validators have been using will therefore be maintained and behave the same.
+
+To set the field in your application, there is a new `baseapp` option, `SetNextBlockDelay`, which can be passed to your application upon
+initialization in `app.go`. Setting this to any non-zero value overrides anything set in validators' `config.toml`.
diff --git a/copy-of-sdk-docs/build/migrations/02-upgrading.md b/copy-of-sdk-docs/build/migrations/02-upgrading.md
new file mode 100644
index 00000000..c63f249d
--- /dev/null
+++ b/copy-of-sdk-docs/build/migrations/02-upgrading.md
@@ -0,0 +1,522 @@
+# Upgrading Cosmos SDK
+
+This guide provides instructions for upgrading to specific versions of Cosmos SDK.
+Note: always read the **SimApp** section for more information on application wiring updates.
+
+## [v0.50.x](https://github.com/cosmos/cosmos-sdk/releases/tag/v0.50.0)
+
+### Migration to CometBFT (Part 2)
+
+In previous versions, the Cosmos SDK migrated to CometBFT.
+Some functions have been renamed to reflect the naming change.
+
+The following is an exhaustive list:
+
+* `client.TendermintRPC` -> `client.CometRPC`
+* `clitestutil.MockTendermintRPC` -> `clitestutil.MockCometRPC`
+* `clitestutilgenutil.CreateDefaultTendermintConfig` -> `clitestutilgenutil.CreateDefaultCometConfig`
+* Package `client/grpc/tmservice` -> `client/grpc/cmtservice`
+
+Additionally, the commands and flags mentioning `tendermint` have been renamed to `comet`.
+These commands and flags are still supported for backward compatibility.
+
+For backward compatibility, the `**/tendermint/**` gRPC services are still supported.
+
+Additionally, the SDK is starting its abstraction from CometBFT Go types through the codebase:
+
+* The usage of the CometBFT logger has been replaced by the Cosmos SDK logger interface (`cosmossdk.io/log.Logger`).
+* The usage of `github.com/cometbft/cometbft/libs/bytes.HexByte` has been replaced by `[]byte`.
+* Usage of an application genesis (see [genutil](#xgenutil)).
+
+#### Enable Vote Extensions
+
+:::tip
+This is an optional feature that is disabled by default.
+:::
+
+Once all the code changes required to implement Vote Extensions are in place,
+they can be enabled by setting the consensus param `Abci.VoteExtensionsEnableHeight`
+to a value greater than zero.
+
+In a new chain, this can be done in the `genesis.json` file.
+
+For existing chains this can be done in two ways:
+
+* During an upgrade the value is set in an upgrade handler.
+* A governance proposal that changes the consensus param **after a coordinated upgrade has taken place**.
+
+### BaseApp
+
+All ABCI methods now accept a pointer to the request and response types defined
+by CometBFT. In addition, they also return errors. An ABCI method should only
+return errors in cases where a catastrophic failure has occurred and the application
+should halt. However, this is abstracted away from the application developer. Any
+handler that an application can define or set that returns an error will gracefully
+be handled by `BaseApp` on behalf of the application.
+
+BaseApp calls of `BeginBlock` & `EndBlock` are now private but are still exposed
+to the application to define via the `Manager` type. `FinalizeBlock` is public
+and should be used in order to test and run operations. This means that although
+`BeginBlock` & `EndBlock` no longer exist in the ABCI interface, they are automatically
+called by `BaseApp` during `FinalizeBlock`. Specifically, the order of operations
+is `BeginBlock` -> `DeliverTx` (for all txs) -> `EndBlock`.
+
+ABCI++ 2.0 also brings `ExtendVote` and `VerifyVoteExtension` ABCI methods. These
+methods allow applications to extend and verify pre-commit votes. The Cosmos SDK
+allows an application to define handlers for these methods via `ExtendVoteHandler`
+and `VerifyVoteExtensionHandler` respectively. Please see [here](https://docs.cosmos.network/v0.50/build/building-apps/vote-extensions)
+for more info.
+
+#### Set PreBlocker
+
+A `SetPreBlocker` method has been added to `BaseApp`. This is essential for `BaseApp` to run `PreBlock`, which runs before the `BeginBlocker` of all modules. It allows modifying consensus parameters, and the changes are visible to the subsequent state machine logic.
+Read more about other use cases [here](https://github.com/cosmos/cosmos-sdk/blob/main/docs/architecture/adr-068-preblock.md).
+
+`depinject` / app DI users need to add `x/upgrade` to their `app_config.go` / `app.yaml`:
+
+```diff
++ PreBlockers: []string{
++ upgradetypes.ModuleName,
++ },
+BeginBlockers: []string{
+- upgradetypes.ModuleName,
+ minttypes.ModuleName,
+}
+```
+
+When using (legacy) application wiring, the following must be added to `app.go`:
+
+```diff
++app.ModuleManager.SetOrderPreBlockers(
++ upgradetypes.ModuleName,
++)
+
+app.ModuleManager.SetOrderBeginBlockers(
+- upgradetypes.ModuleName,
+)
+
++ app.SetPreBlocker(app.PreBlocker)
+
+// ... //
+
++func (app *SimApp) PreBlocker(ctx sdk.Context, req *abci.RequestFinalizeBlock) (*sdk.ResponsePreBlock, error) {
++ return app.ModuleManager.PreBlock(ctx, req)
++}
+```
+
+#### Events
+
+The log section of `abci.TxResult` is not populated in the case of successful
+msg(s) execution. Instead, a new attribute is added to all messages indicating
+the `msg_index`, which identifies which events and attributes relate to the same
+transaction.
+
+`BeginBlock` & `EndBlock` Events are now emitted through `FinalizeBlock` but have
+an added attribute, `mode=BeginBlock|EndBlock`, to identify if the event belongs
+to `BeginBlock` or `EndBlock`.
+
+### Config files
+
+Confix is a new SDK tool for modifying and migrating SDK configuration files.
+It replaces the `config.Cmd` command from the `client/config` package.
+
+Use the following command to migrate your configuration:
+
+```bash
+simd config migrate v0.50
+```
+
+If you were using ` config [key]` or ` config [key] [value]` to set and get values from the `client.toml`, replace it with ` config get client [key]` and ` config set client [key] [value]`. The extra verbosity is due to the extra functionalities added in config.
+
+More information about [confix](https://docs.cosmos.network/main/tooling/confix) and how to add it to your application binary can be found in the [documentation](https://docs.cosmos.network/main/tooling/confix).
+
+#### gRPC-Web
+
+gRPC-Web now listens on the same address and port as the gRPC Gateway API server (default: `localhost:1317`).
+The option to listen on a different address, along with its settings, has been removed.
+Use `confix` to clean up your `app.toml`. An nginx (or similar) reverse proxy can be set up to keep the previous behavior.
+
+#### Database Support
+
+CLevelDB, BoltDB and BadgerDB are no longer supported. To migrate from an unsupported database to a supported one, please use a database migration tool.
+
+### Protobuf
+
+With the deprecation of the Amino JSON codec defined in [cosmos/gogoproto](https://github.com/cosmos/gogoproto) in favor of the protoreflect-powered x/tx/aminojson codec, module developers are encouraged to verify that their messages have the correct protobuf annotations so that both codecs deterministically produce identical output.
+
+For core SDK types equivalence is asserted by generative testing of [SignableTypes](https://github.com/cosmos/cosmos-sdk/blob/v0.50.0-beta.0/tests/integration/rapidgen/rapidgen.go#L102) in [TestAminoJSON_Equivalence](https://github.com/cosmos/cosmos-sdk/blob/v0.50.0-beta.0/tests/integration/tx/aminojson/aminojson_test.go#L94).
+
+**TODO: summarize proto annotation requirements.**
+
+#### Stringer
+
+The `gogoproto.goproto_stringer = false` annotation has been removed from most proto files. This means that the `String()` method is being generated for types that previously had this annotation. The generated `String()` method uses `proto.CompactTextString` for _stringifying_ structs.
+[Verify](https://github.com/cosmos/cosmos-sdk/pull/13850#issuecomment-1328889651) the usage of the modified `String()` methods and double-check that they are not used in state-machine code.
+
+### SimApp
+
+In this section we describe the changes made to the Cosmos SDK's SimApp.
+**These changes are directly applicable to your application wiring.**
+
+#### Module Assertions
+
+Previously, all modules were required to be set in `OrderBeginBlockers`, `OrderEndBlockers` and `OrderInitGenesis` / `OrderExportGenesis` in `app.go` / `app_config.go`. This is no longer the case: the assertion has been loosened to only require modules implementing, respectively, the `appmodule.HasBeginBlocker`, `appmodule.HasEndBlocker` and `appmodule.HasGenesis` / `module.HasGenesis` interfaces.
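+The loosened assertion amounts to capability detection: a module is only wired into a phase if it implements the corresponding interface. A minimal, self-contained sketch of that pattern, using simplified stand-ins for the real `appmodule` interfaces:
+
+```go
+package main
+
+import (
+	"context"
+	"fmt"
+)
+
+// HasBeginBlocker is a simplified stand-in for appmodule.HasBeginBlocker;
+// the real module manager detects such capabilities via type assertions.
+type HasBeginBlocker interface{ BeginBlock(context.Context) error }
+
+type mintModule struct{}
+
+func (mintModule) BeginBlock(context.Context) error { return nil }
+
+// beginBlockers returns only the modules that opted into BeginBlock;
+// modules without the interface are simply skipped, not rejected.
+func beginBlockers(modules map[string]any) []string {
+	var names []string
+	for name, m := range modules {
+		if _, ok := m.(HasBeginBlocker); ok {
+			names = append(names, name)
+		}
+	}
+	return names
+}
+
+func main() {
+	modules := map[string]any{"mint": mintModule{}, "params": struct{}{}}
+	fmt.Println(beginBlockers(modules)) // only "mint" implements HasBeginBlocker
+}
+```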
+
+#### Module wiring
+
+The `NewKeeper` functions of the following modules now take a `KVStoreService` instead of a `StoreKey`:
+
+* `x/auth`
+* `x/authz`
+* `x/bank`
+* `x/consensus`
+* `x/crisis`
+* `x/distribution`
+* `x/evidence`
+* `x/feegrant`
+* `x/gov`
+* `x/mint`
+* `x/nft`
+* `x/slashing`
+* `x/upgrade`
+
+**Users using `depinject` / app di do not need any changes; this is abstracted for them.**
+
+Users manually wiring their chain need to use the `runtime.NewKVStoreService` method to create a `KVStoreService` from a `StoreKey`:
+
+```diff
+app.ConsensusParamsKeeper = consensusparamkeeper.NewKeeper(
+ appCodec,
+- keys[consensusparamtypes.StoreKey]
++ runtime.NewKVStoreService(keys[consensusparamtypes.StoreKey]),
+ authtypes.NewModuleAddress(govtypes.ModuleName).String(),
+)
+```
+
+#### Logger
+
+Replace all your CometBFT logger imports by `cosmossdk.io/log`.
+
+Additionally, `depinject` / app di users must now supply a logger through the main `depinject.Supply` function instead of passing it to `appBuilder.Build`.
+
+```diff
+appConfig = depinject.Configs(
+ AppConfig,
+ depinject.Supply(
+ // supply the application options
+ appOpts,
++ logger,
+ ...
+```
+
+```diff
+- app.App = appBuilder.Build(logger, db, traceStore, baseAppOptions...)
++ app.App = appBuilder.Build(db, traceStore, baseAppOptions...)
+```
+
+Users manually wiring their chain need to add the logger argument when creating the `x/bank` keeper.
+
+#### Module Basics
+
+Previously, `ModuleBasics` was a global variable used to register all modules' `AppModuleBasic` implementations.
+The global variable has been removed, and the basic module manager can now be created from the module manager.
+
+This is automatically done for `depinject` / app di users. To supply different app module implementations, pass them via `depinject.Supply` in the main `AppConfig` (`app_config.go`):
+
+```go
+depinject.Supply(
+ // supply custom module basics
+ map[string]module.AppModuleBasic{
+ genutiltypes.ModuleName: genutil.NewAppModuleBasic(genutiltypes.DefaultMessageValidator),
+ govtypes.ModuleName: gov.NewAppModuleBasic(
+ []govclient.ProposalHandler{
+ paramsclient.ProposalHandler,
+ },
+ ),
+ },
+ )
+```
+
+Users manually wiring their chain need to use the new `module.NewBasicManagerFromManager` function, after the module manager creation, and pass a `map[string]module.AppModuleBasic` as argument for optionally overriding some module's `AppModuleBasic`.
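+As an illustration of the override pattern behind `module.NewBasicManagerFromManager`, the following self-contained sketch (with hypothetical, simplified types — the real ones live in the SDK's `types/module` package) builds a basics map from a manager's modules, replacing selected entries with supplied overrides:
+
+```go
+package main
+
+import "fmt"
+
+// AppModuleBasic is a hypothetical, simplified stand-in for the SDK interface.
+type AppModuleBasic interface{ Name() string }
+
+type defaultBasic struct{ name string }
+
+func (b defaultBasic) Name() string { return b.name }
+
+type customGovBasic struct{}
+
+func (customGovBasic) Name() string { return "gov(custom)" }
+
+// newBasicManagerFromManager mirrors the idea behind
+// module.NewBasicManagerFromManager: take the modules already registered in
+// the manager and replace selected ones with the supplied overrides.
+func newBasicManagerFromManager(managerModules []string, overrides map[string]AppModuleBasic) map[string]AppModuleBasic {
+	basics := make(map[string]AppModuleBasic, len(managerModules))
+	for _, name := range managerModules {
+		if custom, ok := overrides[name]; ok {
+			basics[name] = custom
+			continue
+		}
+		basics[name] = defaultBasic{name: name}
+	}
+	return basics
+}
+
+func main() {
+	basics := newBasicManagerFromManager(
+		[]string{"bank", "gov"},
+		map[string]AppModuleBasic{"gov": customGovBasic{}},
+	)
+	fmt.Println(basics["bank"].Name(), basics["gov"].Name())
+}
+```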
+
+#### AutoCLI
+
+[`AutoCLI`](https://docs.cosmos.network/main/core/autocli) has been implemented by the SDK for all its module CLI queries. This means chains must add the following in their `root.go` to enable `AutoCLI` in their application:
+
+```go
+if err := autoCliOpts.EnhanceRootCommand(rootCmd); err != nil {
+ panic(err)
+}
+```
+
+Here, `autoCliOpts` contains the app's autocli options, including all modules and codecs.
+That value can be injected by depinject ([see root_v2.go](https://github.com/cosmos/cosmos-sdk/blob/v0.50.0-beta.0/simapp/simd/cmd/root_v2.go#L49-L67)) or manually provided by the app ([see legacy app.go](https://github.com/cosmos/cosmos-sdk/blob/v0.50.0-beta.0/simapp/app.go#L636-L655)).
+
+:::warning
+Not doing this will result in the queries of all core SDK modules being excluded from the binary.
+:::
+
+Additionally, `AutoCLI` automatically adds custom module commands to the root command for all modules implementing the [`appmodule.AppModule`](https://pkg.go.dev/cosmossdk.io/core/appmodule#AppModule) interface.
+This means, after ensuring all the used modules implement this interface, the following can be removed from your `root.go`:
+
+```diff
+func txCommand() *cobra.Command {
+ ....
+- appd.ModuleBasics.AddTxCommands(cmd)
+}
+```
+
+```diff
+func queryCommand() *cobra.Command {
+ ....
+- appd.ModuleBasics.AddQueryCommands(cmd)
+}
+```
+
+### Packages
+
+#### Math
+
+References to `types/math.go`, which contained aliases for the `cosmossdk.io/math` package types, have been removed.
+Import the `cosmossdk.io/math` package directly instead.
+
+#### Store
+
+References to `types/store.go`, which contained aliases for store types, have been remapped to the appropriate `store/types` packages; hence the `types/store.go` file is no longer needed and has been removed.
+
+##### Extract Store to a standalone module
+
+The `store` module has been extracted into its own go.mod file, which allows it to be a standalone module.
+All store imports across the SDK have been renamed to use `cosmossdk.io/store` instead of `github.com/cosmos/cosmos-sdk/store`.
+
+##### Streaming
+
+[ADR-38](https://docs.cosmos.network/main/architecture/adr-038-state-listening) has been implemented in the SDK.
+
+To continue using state streaming, replace `streaming.LoadStreamingServices` by the following in your `app.go`:
+
+```go
+if err := app.RegisterStreamingServices(appOpts, app.kvStoreKeys()); err != nil {
+ panic(err)
+}
+```
+
+#### Client
+
+The return type of the interface method `TxConfig.SignModeHandler()` has been changed from `x/auth/signing.SignModeHandler` to `x/tx/signing.HandlerMap`. This change is transparent to most users as the `TxConfig` interface is typically implemented by private `x/auth/tx.config` struct (as returned by `auth.NewTxConfig`) which has been updated to return the new type. If users have implemented their own `TxConfig` interface, they will need to update their implementation to return the new type.
+
+##### Textual sign mode
+
+A new sign mode is available in the SDK that produces more human-readable output. It is currently only available on Ledger
+devices but will soon be implemented in other UIs.
+
+:::tip
+This sign mode does not allow offline signing
+:::
+
+When using (legacy) application wiring, the following must be added to `app.go` after setting the app's bank keeper:
+
+```go
+ enabledSignModes := append(tx.DefaultSignModes, sigtypes.SignMode_SIGN_MODE_TEXTUAL)
+ txConfigOpts := tx.ConfigOptions{
+ EnabledSignModes: enabledSignModes,
+ TextualCoinMetadataQueryFn: txmodule.NewBankKeeperCoinMetadataQueryFn(app.BankKeeper),
+ }
+ txConfig, err := tx.NewTxConfigWithOptions(
+ appCodec,
+ txConfigOpts,
+ )
+ if err != nil {
+ log.Fatalf("Failed to create new TxConfig with options: %v", err)
+ }
+ app.txConfig = txConfig
+```
+
+When using `depinject` / `app di`, **it's enabled by default** if there's a bank keeper present.
+
+And in the application client (usually `root.go`):
+
+```go
+ if !clientCtx.Offline {
+ txConfigOpts.EnabledSignModes = append(txConfigOpts.EnabledSignModes, signing.SignMode_SIGN_MODE_TEXTUAL)
+ txConfigOpts.TextualCoinMetadataQueryFn = txmodule.NewGRPCCoinMetadataQueryFn(clientCtx)
+ txConfigWithTextual, err := tx.NewTxConfigWithOptions(
+ codec.NewProtoCodec(clientCtx.InterfaceRegistry),
+ txConfigOpts,
+ )
+ if err != nil {
+ return err
+ }
+ clientCtx = clientCtx.WithTxConfig(txConfigWithTextual)
+ }
+```
+
+When using `depinject` / `app di`, a tx config should be recreated from the `txConfigOpts` to use `NewGRPCCoinMetadataQueryFn` instead of depending on the bank keeper (which is used on the server side).
+
+To learn more see the [docs](https://docs.cosmos.network/main/learn/advanced/transactions#sign_mode_textual) and the [ADR-050](https://docs.cosmos.network/main/build/architecture/adr-050-sign-mode-textual).
+
+### Modules
+
+#### All modules
+
+* [RFC 001](https://docs.cosmos.network/main/rfc/rfc-001-tx-validation) has defined a simplification of the message validation process for modules.
+  The `sdk.Msg` interface has been updated to no longer require the implementation of the `ValidateBasic` method.
+  It is now recommended to validate messages directly in the message server. When the validation is performed in the message server, the `ValidateBasic` method on a message is no longer required and can be removed.
+
+* Messages no longer need to implement the `LegacyMsg` interface and implementations of `GetSignBytes` can be deleted. Because of this change, global legacy Amino codec definitions and their registration in `init()` can safely be removed as well.
+
+* The `AppModuleBasic` interface has been simplified. Defining `GetTxCmd() *cobra.Command` and `GetQueryCmd() *cobra.Command` is no longer required. The module manager detects when module commands are defined. If AutoCLI is enabled, `EnhanceRootCommand()` will add the auto-generated commands to the root command, unless a custom module command is defined, in which case that one is registered instead.
+
+* The `Keeper` methods of the following modules now take a `context.Context` instead of an `sdk.Context`. Any module that defines interfaces for them (like "expected keepers") will need to update those interfaces and re-generate mocks if needed:
+
+ * `x/authz`
+ * `x/bank`
+ * `x/mint`
+ * `x/crisis`
+ * `x/distribution`
+ * `x/evidence`
+ * `x/gov`
+ * `x/slashing`
+ * `x/upgrade`
+
+* `BeginBlock` and `EndBlock` have changed their signatures, so it is important that any module implementing them is updated accordingly.
+
+```diff
+- BeginBlock(sdk.Context, abci.RequestBeginBlock)
++ BeginBlock(context.Context) error
+```
+
+```diff
+- EndBlock(sdk.Context, abci.RequestEndBlock) []abci.ValidatorUpdate
++ EndBlock(context.Context) error
+```
+
+If a module needs to return `abci.ValidatorUpdate` from `EndBlock`, it can use the `HasABCIEndBlock` interface instead.
+
+```diff
+- EndBlock(sdk.Context, abci.RequestEndBlock) []abci.ValidatorUpdate
++ EndBlock(context.Context) ([]abci.ValidatorUpdate, error)
+```
+
+:::tip
+It is possible to ensure that a module implements the correct interfaces by using compiler assertions in your `x/{moduleName}/module.go`:
+
+```go
+var (
+ _ module.AppModuleBasic = (*AppModule)(nil)
+ _ module.AppModuleSimulation = (*AppModule)(nil)
+ _ module.HasGenesis = (*AppModule)(nil)
+
+ _ appmodule.AppModule = (*AppModule)(nil)
+ _ appmodule.HasBeginBlocker = (*AppModule)(nil)
+ _ appmodule.HasEndBlocker = (*AppModule)(nil)
+ ...
+)
+```
+
+Read more on those interfaces [here](https://docs.cosmos.network/v0.50/building-modules/module-manager#application-module-interfaces).
+
+:::
+
+* `GetSigners()` is no longer required to be implemented on `Msg` types. The SDK will automatically infer the signers from the `Signer` field on the message. The signer field is required on all messages unless using a custom signer function.
+
+To find out more, please read the [signer field](../../build/building-modules/05-protobuf-annotations.md#signer) documentation and the [messages and queries](https://github.com/cosmos/cosmos-sdk/blob/7352d0bce8e72121e824297df453eb1059c28da8/docs/docs/build/building-modules/02-messages-and-queries.md#L40) documentation.
+
+
+#### `x/auth`
+
+For ante handler construction via `ante.NewAnteHandler`, the field `ante.HandlerOptions.SignModeHandler` has been updated to `x/tx/signing/HandlerMap` from `x/auth/signing/SignModeHandler`. Callers typically fetch this value from `client.TxConfig.SignModeHandler()` (which is also changed) so this change should be transparent to most users.
+
+#### `x/capability`
+
+The capability module has been moved to [cosmos/ibc-go](https://github.com/cosmos/ibc-go). IBC v8 will contain the necessary changes to incorporate the new module location. In your `app.go`, you must import the capability module from the new location:
+
+```diff
++ "github.com/cosmos/ibc-go/modules/capability"
++ capabilitykeeper "github.com/cosmos/ibc-go/modules/capability/keeper"
++ capabilitytypes "github.com/cosmos/ibc-go/modules/capability/types"
+- "github.com/cosmos/cosmos-sdk/x/capability"
+- capabilitykeeper "github.com/cosmos/cosmos-sdk/x/capability/keeper"
+- capabilitytypes "github.com/cosmos/cosmos-sdk/x/capability/types"
+```
+
+Similar to previous versions, your module manager must include the capability module.
+
+```go
+app.ModuleManager = module.NewManager(
+ capability.NewAppModule(encodingConfig.Codec, *app.CapabilityKeeper, true),
+ // remaining modules
+)
+```
+
+#### `x/genutil`
+
+The Cosmos SDK has migrated from a CometBFT genesis to an application-managed genesis file.
+The genesis is now fully handled by `x/genutil`. This has no consequences for running chains:
+
+* Importing a CometBFT genesis is still supported.
+* Exporting a genesis now exports the genesis as an application genesis.
+
+When needing to read an application genesis, use the following helpers from the `x/genutil/types` package:
+
+```go
+// AppGenesisFromReader reads the AppGenesis from the reader.
+func AppGenesisFromReader(reader io.Reader) (*AppGenesis, error)
+
+// AppGenesisFromFile reads the AppGenesis from the provided file.
+func AppGenesisFromFile(genFile string) (*AppGenesis, error)
+```
+
+#### `x/gov`
+
+##### Expedited Proposals
+
+The `gov` v1 module now supports expedited governance proposals. When a proposal is expedited, the voting period is shortened to the `ExpeditedVotingPeriod` parameter. An expedited proposal must meet a higher voting threshold than a classic proposal; that threshold is defined by the `ExpeditedThreshold` parameter.
+
+##### Cancelling Proposals
+
+The `gov` module now supports cancelling governance proposals. When a proposal is canceled, all of its deposits are either burned or sent to the `ProposalCancelDest` address. The share of deposits burned is determined by a new `ProposalCancelRatio` parameter.
+
+```text
+1. deposits * proposal_cancel_ratio will be burned or sent to the `ProposalCancelDest` address; if `ProposalCancelDest` is empty, the deposits will be burned.
+2. deposits * (1 - proposal_cancel_ratio) will be sent to depositors.
+```
+
+By default, the new `ProposalCancelRatio` parameter is set to `0.5` during migration and `ProposalCancelDest` is set to an empty string (i.e. deposits are burned).
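+The split described above is simple arithmetic. A sketch using plain integer math for illustration only (the module itself uses the SDK's decimal arithmetic):
+
+```go
+package main
+
+import "fmt"
+
+// splitCancelledDeposit applies a cancel ratio of num/den to a deposit:
+// that fraction is burned (or sent to ProposalCancelDest) and the
+// remainder is refunded to depositors.
+func splitCancelledDeposit(deposit, num, den int64) (destOrBurn, refund int64) {
+	destOrBurn = deposit * num / den
+	refund = deposit - destOrBurn
+	return destOrBurn, refund
+}
+
+func main() {
+	// default migration value: ProposalCancelRatio = 0.5
+	burn, refund := splitCancelledDeposit(1_000_000, 1, 2)
+	fmt.Println(burn, refund)
+}
+```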
+
+#### `x/evidence`
+
+##### Extract evidence to a standalone module
+
+The `x/evidence` module has been extracted into its own go.mod file, which allows it to be a standalone module.
+All evidence imports across the SDK have been renamed to use `cosmossdk.io/x/evidence` instead of `github.com/cosmos/cosmos-sdk/x/evidence`.
+
+#### `x/nft`
+
+##### Extract nft to a standalone module
+
+The `x/nft` module has been extracted into its own go.mod file, which allows it to be a standalone module.
+All nft imports across the SDK have been renamed to use `cosmossdk.io/x/nft` instead of `github.com/cosmos/cosmos-sdk/x/nft`.
+
+#### `x/feegrant`
+
+##### Extract feegrant to a standalone module
+
+The `x/feegrant` module has been extracted into its own go.mod file, which allows it to be a standalone module.
+All feegrant imports across the SDK have been renamed to use `cosmossdk.io/x/feegrant` instead of `github.com/cosmos/cosmos-sdk/x/feegrant`.
+
+#### `x/upgrade`
+
+##### Extract upgrade to a standalone module
+
+The `x/upgrade` module has been extracted into its own go.mod file, which allows it to be a standalone module.
+All upgrade imports across the SDK have been renamed to use `cosmossdk.io/x/upgrade` instead of `github.com/cosmos/cosmos-sdk/x/upgrade`.
+
+### Tooling
+
+#### Rosetta
+
+Rosetta has moved to its own [repo](https://github.com/cosmos/rosetta) and is no longer imported by the Cosmos SDK SimApp by default.
+Any user interested in the tool can run it standalone and connect it to any node without adding it to the node binary.
+
+The Rosetta tool also supports multi-chain connections.
diff --git a/copy-of-sdk-docs/build/migrations/03-upgrade-guide.md b/copy-of-sdk-docs/build/migrations/03-upgrade-guide.md
new file mode 100644
index 00000000..057911c6
--- /dev/null
+++ b/copy-of-sdk-docs/build/migrations/03-upgrade-guide.md
@@ -0,0 +1,503 @@
+# Upgrade Guide
+
+This document provides a full guide for upgrading a Cosmos SDK chain from `v0.50.x` to `v0.53.x`.
+
+This guide includes one **required** change and three **optional** features.
+
+After completing this guide, applications will have:
+
+* The `x/protocolpool` module
+* The `x/epochs` module
+* Unordered Transaction support
+
+## Table of Contents
+
+* [App Wiring Changes (REQUIRED)](#app-wiring-changes-required)
+* [Adding ProtocolPool Module (OPTIONAL)](#adding-protocolpool-module-optional)
+ * [ProtocolPool Manual Wiring](#protocolpool-manual-wiring)
+ * [ProtocolPool DI Wiring](#protocolpool-di-wiring)
+* [Adding Epochs Module (OPTIONAL)](#adding-epochs-module-optional)
+ * [Epochs Manual Wiring](#epochs-manual-wiring)
+ * [Epochs DI Wiring](#epochs-di-wiring)
+* [Enable Unordered Transactions (OPTIONAL)](#enable-unordered-transactions-optional)
+* [Upgrade Handler](#upgrade-handler)
+
+## App Wiring Changes **REQUIRED**
+
+The `x/auth` module now contains a `PreBlocker` that _must_ be set in the module manager's `SetOrderPreBlockers` method.
+
+```go
+app.ModuleManager.SetOrderPreBlockers(
+ upgradetypes.ModuleName,
+ authtypes.ModuleName, // NEW
+)
+```
+
+## Adding ProtocolPool Module **OPTIONAL**
+
+:::warning
+
+Using an external community pool such as `x/protocolpool` will cause the following `x/distribution` handlers to return an error:
+
+**QueryService**
+
+* `CommunityPool`
+
+**MsgService**
+
+* `CommunityPoolSpend`
+* `FundCommunityPool`
+
+If your services depend on this functionality from `x/distribution`, please update them to use either `x/protocolpool` or your custom external community pool alternatives.
+
+:::
+
+### Manual Wiring
+
+Import the following:
+
+```go
+import (
+ // ...
+ "github.com/cosmos/cosmos-sdk/x/protocolpool"
+ protocolpoolkeeper "github.com/cosmos/cosmos-sdk/x/protocolpool/keeper"
+ protocolpooltypes "github.com/cosmos/cosmos-sdk/x/protocolpool/types"
+)
+```
+
+Set the module account permissions.
+
+```go
+maccPerms = map[string][]string{
+ // ...
+ protocolpooltypes.ModuleName: nil,
+ protocolpooltypes.ProtocolPoolEscrowAccount: nil,
+}
+```
+
+Add the protocol pool keeper to your application struct.
+
+```go
+ProtocolPoolKeeper protocolpoolkeeper.Keeper
+```
+
+Add the store key:
+
+```go
+keys := storetypes.NewKVStoreKeys(
+ // ...
+ protocolpooltypes.StoreKey,
+)
+```
+
+Instantiate the keeper.
+
+Make sure to do this before the distribution module instantiation, as you will pass the keeper there next.
+
+```go
+app.ProtocolPoolKeeper = protocolpoolkeeper.NewKeeper(
+ appCodec,
+ runtime.NewKVStoreService(keys[protocolpooltypes.StoreKey]),
+ app.AccountKeeper,
+ app.BankKeeper,
+ authtypes.NewModuleAddress(govtypes.ModuleName).String(),
+)
+```
+
+Pass the protocolpool keeper to the distribution keeper:
+
+```go
+app.DistrKeeper = distrkeeper.NewKeeper(
+ appCodec,
+ runtime.NewKVStoreService(keys[distrtypes.StoreKey]),
+ app.AccountKeeper,
+ app.BankKeeper,
+ app.StakingKeeper,
+ authtypes.FeeCollectorName,
+ authtypes.NewModuleAddress(govtypes.ModuleName).String(),
+ distrkeeper.WithExternalCommunityPool(app.ProtocolPoolKeeper), // NEW
+)
+```
+
+Add the protocolpool module to the module manager:
+
+```go
+app.ModuleManager = module.NewManager(
+ // ...
+ protocolpool.NewAppModule(appCodec, app.ProtocolPoolKeeper, app.AccountKeeper, app.BankKeeper),
+)
+```
+
+Add entries for SetOrderBeginBlockers, SetOrderEndBlockers, SetOrderInitGenesis, and SetOrderExportGenesis:
+
+```go
+app.ModuleManager.SetOrderBeginBlockers(
+ // must come AFTER distribution.
+ distrtypes.ModuleName,
+ protocolpooltypes.ModuleName,
+)
+```
+
+```go
+app.ModuleManager.SetOrderEndBlockers(
+ // order does not matter.
+ protocolpooltypes.ModuleName,
+)
+```
+
+```go
+app.ModuleManager.SetOrderInitGenesis(
+ // order does not matter.
+ protocolpooltypes.ModuleName,
+)
+```
+
+```go
+app.ModuleManager.SetOrderExportGenesis(
+ protocolpooltypes.ModuleName, // must be exported before bank.
+ banktypes.ModuleName,
+)
+```
+
+### DI Wiring
+
+Note: _as long as an external community pool keeper (here, `x/protocolpool`) is wired in DI configs, `x/distribution` will automatically use it for its external pool._
+
+First, set up the keeper for the application.
+
+Import the protocolpool keeper:
+
+```go
+protocolpoolkeeper "github.com/cosmos/cosmos-sdk/x/protocolpool/keeper"
+```
+
+Add the keeper to your application struct:
+
+```go
+ProtocolPoolKeeper protocolpoolkeeper.Keeper
+```
+
+Add the keeper to the depinject system:
+
+```go
+depinject.Inject(
+ appConfig,
+ &appBuilder,
+ &app.appCodec,
+ &app.legacyAmino,
+ &app.txConfig,
+ &app.interfaceRegistry,
+ // ... other modules
+ &app.ProtocolPoolKeeper, // NEW MODULE!
+)
+```
+
+Next, set up configuration for the module.
+
+Import the following:
+
+```go
+import (
+ protocolpoolmodulev1 "cosmossdk.io/api/cosmos/protocolpool/module/v1"
+
+ _ "github.com/cosmos/cosmos-sdk/x/protocolpool" // import for side-effects
+ protocolpooltypes "github.com/cosmos/cosmos-sdk/x/protocolpool/types"
+)
+```
+
+The protocolpool module has module accounts that handle funds. Add them to the module account permission configuration:
+
+```go
+moduleAccPerms = []*authmodulev1.ModuleAccountPermission{
+ // ...
+ {Account: protocolpooltypes.ModuleName},
+ {Account: protocolpooltypes.ProtocolPoolEscrowAccount},
+}
+```
+
+Next, add an entry for BeginBlockers, EndBlockers, InitGenesis, and ExportGenesis.
+
+```go
+BeginBlockers: []string{
+ // ...
+ // must be AFTER distribution.
+ distrtypes.ModuleName,
+ protocolpooltypes.ModuleName,
+},
+```
+
+```go
+EndBlockers: []string{
+ // ...
+ // order for protocolpool does not matter.
+ protocolpooltypes.ModuleName,
+},
+```
+
+```go
+InitGenesis: []string{
+ // ... must be AFTER distribution.
+ distrtypes.ModuleName,
+ protocolpooltypes.ModuleName,
+},
+```
+
+```go
+ExportGenesis: []string{
+ // ...
+ // Must be exported before x/bank.
+ protocolpooltypes.ModuleName,
+ banktypes.ModuleName,
+},
+```
+
+Lastly, add an entry for protocolpool in the ModuleConfig.
+
+```go
+{
+ Name: protocolpooltypes.ModuleName,
+ Config: appconfig.WrapAny(&protocolpoolmodulev1.Module{}),
+},
+```
+
+## Adding Epochs Module **OPTIONAL**
+
+### Manual Wiring
+
+Import the following:
+
+```go
+import (
+ // ...
+ "github.com/cosmos/cosmos-sdk/x/epochs"
+ epochskeeper "github.com/cosmos/cosmos-sdk/x/epochs/keeper"
+ epochstypes "github.com/cosmos/cosmos-sdk/x/epochs/types"
+)
+```
+
+Add the epochs keeper to your application struct:
+
+```go
+EpochsKeeper epochskeeper.Keeper
+```
+
+Add the store key:
+
+```go
+keys := storetypes.NewKVStoreKeys(
+ // ...
+ epochstypes.StoreKey,
+)
+```
+
+Instantiate the keeper:
+
+```go
+app.EpochsKeeper = epochskeeper.NewKeeper(
+ runtime.NewKVStoreService(keys[epochstypes.StoreKey]),
+ appCodec,
+)
+```
+
+Set up hooks for the epochs keeper:
+
+To learn how to write hooks for the epochs keeper, see the [x/epochs README](https://github.com/cosmos/cosmos-sdk/blob/main/x/epochs/README.md).
+
+```go
+app.EpochsKeeper.SetHooks(
+ epochstypes.NewMultiEpochHooks(
+  // insert epoch hooks receivers here
+  app.SomeOtherModule,
+ ),
+)
+```
+
+Add the epochs module to the module manager:
+
+```go
+app.ModuleManager = module.NewManager(
+ // ...
+ epochs.NewAppModule(appCodec, app.EpochsKeeper),
+)
+```
+
+Add entries for SetOrderBeginBlockers and SetOrderInitGenesis:
+
+```go
+app.ModuleManager.SetOrderBeginBlockers(
+ // ...
+ epochstypes.ModuleName,
+)
+```
+
+```go
+app.ModuleManager.SetOrderInitGenesis(
+ // ...
+ epochstypes.ModuleName,
+)
+```
+
+### DI Wiring
+
+First, set up the keeper for the application.
+
+Import the epochs keeper:
+
+```go
+epochskeeper "github.com/cosmos/cosmos-sdk/x/epochs/keeper"
+```
+
+Add the keeper to your application struct:
+
+```go
+EpochsKeeper epochskeeper.Keeper
+```
+
+Add the keeper to the depinject system:
+
+```go
+depinject.Inject(
+ appConfig,
+ &appBuilder,
+ &app.appCodec,
+ &app.legacyAmino,
+ &app.txConfig,
+ &app.interfaceRegistry,
+ // ... other modules
+ &app.EpochsKeeper, // NEW MODULE!
+)
+```
+
+Next, set up configuration for the module.
+
+Import the following:
+
+```go
+import (
+ epochsmodulev1 "cosmossdk.io/api/cosmos/epochs/module/v1"
+
+ _ "github.com/cosmos/cosmos-sdk/x/epochs" // import for side-effects
+ epochstypes "github.com/cosmos/cosmos-sdk/x/epochs/types"
+)
+```
+
+Add an entry for BeginBlockers and InitGenesis:
+
+```go
+BeginBlockers: []string{
+ // ...
+ epochstypes.ModuleName,
+},
+```
+
+```go
+InitGenesis: []string{
+ // ...
+ epochstypes.ModuleName,
+},
+```
+
+Lastly, add an entry for epochs in the ModuleConfig:
+
+```go
+{
+ Name: epochstypes.ModuleName,
+ Config: appconfig.WrapAny(&epochsmodulev1.Module{}),
+},
+```
+
+## Enable Unordered Transactions **OPTIONAL**
+
+To enable unordered transaction support on an application, the `x/auth` keeper must be supplied with the `WithUnorderedTransactions` option.
+
+Note that unordered transactions require the sequence value to be zero and will **FAIL** if a non-zero sequence is set.
+Please ensure no sequence value is set when submitting an unordered transaction.
+Services that rely on assumptions about sequence values should be updated to handle unordered transactions, whose sequence is always zero.
+
+```go
+ app.AccountKeeper = authkeeper.NewAccountKeeper(
+ appCodec,
+ runtime.NewKVStoreService(keys[authtypes.StoreKey]),
+ authtypes.ProtoBaseAccount,
+ maccPerms,
+ authcodec.NewBech32Codec(sdk.Bech32MainPrefix),
+ sdk.Bech32MainPrefix,
+ authtypes.NewModuleAddress(govtypes.ModuleName).String(),
+ authkeeper.WithUnorderedTransactions(true), // new option!
+ )
+```
+
+If using dependency injection, update the auth module config.
+
+```go
+ {
+ Name: authtypes.ModuleName,
+ Config: appconfig.WrapAny(&authmodulev1.Module{
+ Bech32Prefix: "cosmos",
+ ModuleAccountPermissions: moduleAccPerms,
+ EnableUnorderedTransactions: true, // remove this line if you do not want unordered transactions.
+ }),
+ },
+```
+
+By default, unordered transactions use a transaction timeout duration of 10 minutes and a default gas charge of 2240 gas units.
+To modify these default values, pass the corresponding options to the new `SigVerifyOptions` field in `x/auth`'s `ante.HandlerOptions`.
+
+```go
+options := ante.HandlerOptions{
+ SigVerifyOptions: []ante.SigVerificationDecoratorOption{
+ // change below as needed.
+ ante.WithUnorderedTxGasCost(ante.DefaultUnorderedTxGasCost),
+ ante.WithMaxUnorderedTxTimeoutDuration(ante.DefaultMaxTimeoutDuration),
+ },
+}
+```
+
+```go
+anteDecorators := []sdk.AnteDecorator{
+ // ... other decorators ...
+ ante.NewSigVerificationDecorator(options.AccountKeeper, options.SignModeHandler, options.SigVerifyOptions...), // supply new options
+}
+```
+
+## Upgrade Handler
+
+The upgrade handler only requires adding the store upgrades for the modules added above.
+If your application is not adding `x/protocolpool` or `x/epochs`, you do not need to add the store upgrade.
+
+```go
+// UpgradeName defines the on-chain upgrade name for the sample SimApp upgrade
+// from v050 to v053.
+//
+// NOTE: This upgrade defines a reference implementation of what an upgrade
+// could look like when an application is migrating from Cosmos SDK version
+// v0.50.x to v0.53.x.
+const UpgradeName = "v050-to-v053"
+
+func (app SimApp) RegisterUpgradeHandlers() {
+ app.UpgradeKeeper.SetUpgradeHandler(
+ UpgradeName,
+ func(ctx context.Context, _ upgradetypes.Plan, fromVM module.VersionMap) (module.VersionMap, error) {
+ return app.ModuleManager.RunMigrations(ctx, app.Configurator(), fromVM)
+ },
+ )
+
+ upgradeInfo, err := app.UpgradeKeeper.ReadUpgradeInfoFromDisk()
+ if err != nil {
+ panic(err)
+ }
+
+ if upgradeInfo.Name == UpgradeName && !app.UpgradeKeeper.IsSkipHeight(upgradeInfo.Height) {
+ storeUpgrades := storetypes.StoreUpgrades{
+ Added: []string{
+ epochstypes.ModuleName, // if not adding x/epochs to your chain, remove this line.
+ protocolpooltypes.ModuleName, // if not adding x/protocolpool to your chain, remove this line.
+ },
+ }
+
+ // configure store loader that checks if version == upgradeHeight and applies store upgrades
+ app.SetStoreLoader(upgradetypes.UpgradeStoreLoader(upgradeInfo.Height, &storeUpgrades))
+ }
+}
+```
diff --git a/copy-of-sdk-docs/build/migrations/_category_.json b/copy-of-sdk-docs/build/migrations/_category_.json
new file mode 100644
index 00000000..5a06c3eb
--- /dev/null
+++ b/copy-of-sdk-docs/build/migrations/_category_.json
@@ -0,0 +1,5 @@
+{
+ "label": "Migrations",
+ "position": 3,
+ "link": null
+}
diff --git a/copy-of-sdk-docs/build/modules/README.md b/copy-of-sdk-docs/build/modules/README.md
new file mode 100644
index 00000000..12a128c3
--- /dev/null
+++ b/copy-of-sdk-docs/build/modules/README.md
@@ -0,0 +1,63 @@
+---
+sidebar_position: 0
+---
+
+# List of Modules
+
+Here are some production-grade modules that can be used in Cosmos SDK applications, along with their respective documentation:
+
+## Essential Modules
+
+Essential modules include functionality that _must_ be included in your Cosmos SDK blockchain.
+These modules provide the core behaviors needed by users and operators, such as balance tracking,
+proof-of-stake capabilities and governance.
+
+* [Auth](./auth/README.md) - Authentication of accounts and transactions for Cosmos SDK applications.
+* [Bank](./bank/README.md) - Token transfer functionalities.
+* [Circuit](./circuit/README.md) - Circuit breaker module for pausing messages.
+* [Consensus](./consensus/README.md) - Consensus module for modifying CometBFT's ABCI consensus params.
+* [Distribution](./distribution/README.md) - Fee distribution, and staking token provision distribution.
+* [Evidence](./evidence/README.md) - Evidence handling for double signing, misbehaviour, etc.
+* [Governance](./gov/README.md) - On-chain proposals and voting.
+* [Genutil](./genutil/README.md) - Genesis utilities for the Cosmos SDK.
+* [Mint](./mint/README.md) - Creation of new units of staking token.
+* [Slashing](./slashing/README.md) - Validator punishment mechanisms.
+* [Staking](./staking/README.md) - Proof-of-Stake layer for public blockchains.
+* [Upgrade](./upgrade/README.md) - Software upgrades handling and coordination.
+
+## Supplementary Modules
+
+Supplementary modules are modules that are maintained in the Cosmos SDK but are not necessary for
+the core functionality of your blockchain. They can be thought of as ways to extend the
+capabilities of your blockchain or further specialize it.
+
+* [Authz](./authz/README.md) - Authorization for accounts to perform actions on behalf of other accounts.
+* [Epochs](./epochs/README.md) - Allows modules to register logic to be executed on timed tickers (epochs).
+* [Feegrant](./feegrant/README.md) - Grant fee allowances for executing transactions.
+* [ProtocolPool](./protocolpool/README.md) - Extended management of community pool functionality.
+
+## Deprecated Modules
+
+The following modules are deprecated. They will no longer be maintained and eventually will be removed
+in an upcoming release of the Cosmos SDK per our [release process](https://github.com/cosmos/cosmos-sdk/blob/main/RELEASE_PROCESS.md).
+
+* [Crisis](./crisis/README.md) - _Deprecated_ - Halts the blockchain under certain circumstances (e.g. if an invariant is broken).
+* [Params](./params/README.md) - _Deprecated_ - Globally available parameter store.
+* [NFT](./nft/README.md) - _Deprecated_ - NFT module implemented based on [ADR 43](https://docs.cosmos.network/main/build/architecture/adr-043-nft-module). This module will be moved to the `cosmos-sdk-legacy` repo for legacy use.
+* [Group](./group/README.md) - _Deprecated_ - Allows for the creation and management of on-chain multisig accounts. This module will be moved to the `cosmos-sdk-legacy` repo for legacy use.
+
+To learn more about the process of building modules, visit the [building modules reference documentation](https://docs.cosmos.network/main/building-modules/intro).
+
+## IBC
+
+The IBC module for the SDK is maintained by the IBC Go team in its [own repository](https://github.com/cosmos/ibc-go).
+
+Additionally, from v0.50 onwards the [capability module](https://github.com/cosmos/ibc-go/tree/fdd664698d79864f1e00e147f9879e58497b5ef1/modules/capability) is maintained by the IBC Go team in its own repository.
+
+## CosmWasm
+
+The CosmWasm module enables smart contracts. Learn more at the [CosmWasm book](https://book.cosmwasm.com/) or visit [the repository](https://github.com/CosmWasm/cosmwasm).
+
+## EVM
+
+Read more about writing smart contracts with Solidity at the official [`evm` documentation page](https://evm.cosmos.network/).
diff --git a/copy-of-sdk-docs/build/modules/_category_.json b/copy-of-sdk-docs/build/modules/_category_.json
new file mode 100644
index 00000000..72d229c0
--- /dev/null
+++ b/copy-of-sdk-docs/build/modules/_category_.json
@@ -0,0 +1,5 @@
+{
+ "label": "Modules",
+ "position": 2,
+ "link": null
+}
diff --git a/copy-of-sdk-docs/build/modules/auth/1-vesting.md b/copy-of-sdk-docs/build/modules/auth/1-vesting.md
new file mode 100644
index 00000000..92458067
--- /dev/null
+++ b/copy-of-sdk-docs/build/modules/auth/1-vesting.md
@@ -0,0 +1,618 @@
+---
+sidebar_position: 1
+---
+
+# `x/auth/vesting`
+
+
+* [Intro and Requirements](#intro-and-requirements)
+* [Note](#note)
+* [Vesting Account Types](#vesting-account-types)
+ * [BaseVestingAccount](#basevestingaccount)
+ * [ContinuousVestingAccount](#continuousvestingaccount)
+ * [DelayedVestingAccount](#delayedvestingaccount)
+ * [Period](#period)
+ * [PeriodicVestingAccount](#periodicvestingaccount)
+ * [PermanentLockedAccount](#permanentlockedaccount)
+* [Vesting Account Specification](#vesting-account-specification)
+ * [Determining Vesting & Vested Amounts](#determining-vesting--vested-amounts)
+ * [Periodic Vesting Accounts](#periodic-vesting-accounts)
+ * [Transferring/Sending](#transferringsending)
+ * [Delegating](#delegating)
+ * [Undelegating](#undelegating)
+* [Keepers & Handlers](#keepers--handlers)
+* [Genesis Initialization](#genesis-initialization)
+* [Examples](#examples)
+ * [Simple](#simple)
+ * [Slashing](#slashing)
+ * [Periodic Vesting](#periodic-vesting)
+* [Glossary](#glossary)
+
+## Intro and Requirements
+
+This specification defines the vesting account implementation that is used by the Cosmos Hub. The requirements for this vesting account are that it should be initialized during genesis with a starting balance `X` and a vesting end time `ET`. A vesting account may be initialized with a vesting start time `ST` and a number of vesting periods `P`. If a vesting start time is included, the vesting period does not begin until the start time is reached. If vesting periods are included, the vesting occurs over the specified number of periods.
+
+For all vesting accounts, the owner of the vesting account is able to delegate and undelegate from validators, however they cannot transfer coins to another account until those coins are vested. This specification allows for four different kinds of vesting:
+
+* Delayed vesting, where all coins are vested once `ET` is reached.
+* Continuous vesting, where coins begin to vest at `ST` and vest linearly with respect to time until `ET` is reached.
+* Periodic vesting, where coins begin to vest at `ST` and vest periodically according to the number of periods and the vesting amount per period. The number of periods, length per period, and amount per period are configurable. A periodic vesting account is distinguished from a continuous vesting account in that coins can be released in staggered tranches. For example, a periodic vesting account could be used for vesting arrangements where coins are released quarterly, yearly, or over any other function of tokens over time.
+* Permanent locked vesting, where coins are locked forever. Coins in this account can still be used for delegating and for governance votes even while locked.
+
+## Note
+
+Vesting accounts can be initialized with some vesting and non-vesting coins. The non-vesting coins are immediately transferable. DelayedVesting, ContinuousVesting, PeriodicVesting, and PermanentLocked accounts can be created with normal messages after genesis. Other types of vesting accounts must be created at genesis, or as part of a manual network upgrade. The current specification only allows for _unconditional_ vesting (i.e. there is no possibility of reaching `ET` and having coins fail to vest).
+
+## Vesting Account Types
+
+```go
+// VestingAccount defines an interface that any vesting account type must
+// implement.
+type VestingAccount interface {
+ Account
+
+ GetVestedCoins(Time) Coins
+ GetVestingCoins(Time) Coins
+
+ // TrackDelegation performs internal vesting accounting necessary when
+ // delegating from a vesting account. It accepts the current block time, the
+ // delegation amount and balance of all coins whose denomination exists in
+ // the account's original vesting balance.
+ TrackDelegation(Time, Coins, Coins)
+
+ // TrackUndelegation performs internal vesting accounting necessary when a
+ // vesting account performs an undelegation.
+ TrackUndelegation(Coins)
+
+ GetStartTime() int64
+ GetEndTime() int64
+}
+```
+
+### BaseVestingAccount
+
+```protobuf reference
+https://github.com/cosmos/cosmos-sdk/tree/release/v0.50.x/proto/cosmos/vesting/v1beta1/vesting.proto#L11-L35
+```
+
+### ContinuousVestingAccount
+
+```protobuf reference
+https://github.com/cosmos/cosmos-sdk/tree/release/v0.50.x/proto/cosmos/vesting/v1beta1/vesting.proto#L37-L46
+```
+
+### DelayedVestingAccount
+
+```protobuf reference
+https://github.com/cosmos/cosmos-sdk/tree/release/v0.50.x/proto/cosmos/vesting/v1beta1/vesting.proto#L48-L57
+```
+
+### Period
+
+```protobuf reference
+https://github.com/cosmos/cosmos-sdk/tree/release/v0.50.x/proto/cosmos/vesting/v1beta1/vesting.proto#L59-L69
+```
+
+```go
+// Stores all vesting periods passed as part of a PeriodicVestingAccount
+type Periods []Period
+
+```
+
+### PeriodicVestingAccount
+
+```protobuf reference
+https://github.com/cosmos/cosmos-sdk/tree/release/v0.50.x/proto/cosmos/vesting/v1beta1/vesting.proto#L71-L81
+```
+
+In order to facilitate less ad-hoc type checking and assertions and to support flexibility in account balance usage, the existing `x/bank` `ViewKeeper` interface is updated to contain the following:
+
+```go
+type ViewKeeper interface {
+ // ...
+
+ // Calculates the total locked account balance.
+ LockedCoins(ctx sdk.Context, addr sdk.AccAddress) sdk.Coins
+
+ // Calculates the total spendable balance that can be sent to other accounts.
+ SpendableCoins(ctx sdk.Context, addr sdk.AccAddress) sdk.Coins
+}
+```
+
+### PermanentLockedAccount
+
+```protobuf reference
+https://github.com/cosmos/cosmos-sdk/tree/release/v0.50.x/proto/cosmos/vesting/v1beta1/vesting.proto#L83-L94
+```
+
+## Vesting Account Specification
+
+Given a vesting account, we define the following for the operations that follow:
+
+* `OV`: The original vesting coin amount. It is a constant value.
+* `V`: The number of `OV` coins that are still _vesting_. It is derived from
+`OV`, `StartTime` and `EndTime`. This value is computed on demand and not on a per-block basis.
+* `V'`: The number of `OV` coins that are _vested_ (unlocked). This value is computed on demand and not a per-block basis.
+* `DV`: The number of delegated _vesting_ coins. It is a variable value. It is stored and modified directly in the vesting account.
+* `DF`: The number of delegated _vested_ (unlocked) coins. It is a variable value. It is stored and modified directly in the vesting account.
+* `BC`: The number of `OV` coins less any coins that are transferred
+(which can be negative or delegated). It is considered to be the balance of the embedded base account. It is stored and modified directly in the vesting account.
+
+### Determining Vesting & Vested Amounts
+
+It is important to note that these values are computed on demand and not on a mandatory per-block basis (e.g. `BeginBlocker` or `EndBlocker`).
+
+#### Continuously Vesting Accounts
+
+To determine the amount of coins that are vested for a given block time `T`, the
+following is performed:
+
+1. Compute `X := T - StartTime`
+2. Compute `Y := EndTime - StartTime`
+3. Compute `V' := OV * (X / Y)`
+4. Compute `V := OV - V'`
+
+Thus, the total amount of _vested_ coins is `V'` and the remaining amount, `V`,
+is _vesting_.
+
+```go
+func (cva ContinuousVestingAccount) GetVestedCoins(t Time) Coins {
+ if t <= cva.StartTime {
+ // We must handle the case where the start time for a vesting account has
+ // been set into the future or when the start of the chain is not exactly
+ // known.
+ return ZeroCoins
+ } else if t >= cva.EndTime {
+ return cva.OriginalVesting
+ }
+
+ x := t - cva.StartTime
+ y := cva.EndTime - cva.StartTime
+
+ return cva.OriginalVesting * (x / y)
+}
+
+func (cva ContinuousVestingAccount) GetVestingCoins(t Time) Coins {
+ return cva.OriginalVesting - cva.GetVestedCoins(t)
+}
+```
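As a concrete illustration of the formula above, here is a small self-contained sketch that substitutes plain `int64` values for `Coins` and Unix seconds for `Time`; the numbers are hypothetical, and this is not SDK code:

```go
package main

import "fmt"

// vestedCoins mirrors GetVestedCoins for a continuous vesting account,
// using int64 in place of Coins and Unix seconds in place of Time.
func vestedCoins(ov, startTime, endTime, t int64) int64 {
	if t <= startTime {
		return 0 // nothing vests before the start time
	}
	if t >= endTime {
		return ov // everything is vested at or after the end time
	}
	x := t - startTime       // elapsed time
	y := endTime - startTime // total vesting duration
	return ov * x / y
}

// vestingCoins is the complement: OV minus the vested amount.
func vestingCoins(ov, startTime, endTime, t int64) int64 {
	return ov - vestedCoins(ov, startTime, endTime, t)
}

func main() {
	// 100 coins vesting linearly between t=0 and t=400.
	fmt.Println(vestedCoins(100, 0, 400, 100))  // one quarter elapsed: 25
	fmt.Println(vestingCoins(100, 0, 400, 100)) // remainder still vesting: 75
}
```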
+
+### Periodic Vesting Accounts
+
+Periodic vesting accounts require calculating the coins released during each period for a given block time `T`. Note that multiple periods could have passed when calling `GetVestedCoins`, so we must iterate over each period until the end of that period is after `T`.
+
+1. Set `CT := StartTime`
+2. Set `V' := 0`
+
+For each Period P:
+
+  1. Compute `X := T - CT`
+  2. IF `X >= P.Length`
+      1. Compute `V' += P.Amount`
+      2. Compute `CT += P.Length`
+  3. ELSE break
+
+3. Compute `V := OV - V'` (after iterating over the periods)
+
+```go
+func (pva PeriodicVestingAccount) GetVestedCoins(t Time) Coins {
+ if t < pva.StartTime {
+ return ZeroCoins
+ }
+ ct := pva.StartTime // The start of the vesting schedule
+ vested := 0
+ periods := pva.GetPeriods()
+ for _, period := range periods {
+ if t - ct < period.Length {
+ break
+ }
+ vested += period.Amount
+ ct += period.Length // increment ct to the start of the next vesting period
+ }
+ return vested
+}
+
+func (pva PeriodicVestingAccount) GetVestingCoins(t Time) Coins {
+ return pva.OriginalVesting - pva.GetVestedCoins(t)
+}
+```
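The loop above can be exercised with a small runnable sketch; plain `int64` stands in for `Coins`, and the schedule values are hypothetical:

```go
package main

import "fmt"

// period is a simplified stand-in for the SDK's Period type.
type period struct {
	length int64 // duration of the period in seconds
	amount int64 // coins released once the period completes
}

// periodicVested accumulates the amounts of all fully elapsed periods,
// advancing ct to the start of each subsequent period.
func periodicVested(startTime, t int64, periods []period) int64 {
	if t < startTime {
		return 0
	}
	ct := startTime
	vested := int64(0)
	for _, p := range periods {
		if t-ct < p.length {
			break // current period has not completed yet
		}
		vested += p.amount
		ct += p.length
	}
	return vested
}

func main() {
	// Four quarterly periods of 25 coins each (7884000s is roughly a quarter).
	quarters := []period{
		{7884000, 25}, {7884000, 25}, {7884000, 25}, {7884000, 25},
	}
	fmt.Println(periodicVested(0, 2*7884000, quarters)) // two quarters elapsed: 50
}
```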
+
+#### Delayed/Discrete Vesting Accounts
+
+Delayed vesting accounts are easier to reason about as they only have the full amount vesting up until a certain time, then all the coins become vested (unlocked). This does not include any unlocked coins the account may have initially.
+
+```go
+func (dva DelayedVestingAccount) GetVestedCoins(t Time) Coins {
+ if t >= dva.EndTime {
+ return dva.OriginalVesting
+ }
+
+ return ZeroCoins
+}
+
+func (dva DelayedVestingAccount) GetVestingCoins(t Time) Coins {
+ return dva.OriginalVesting - dva.GetVestedCoins(t)
+}
+```
+
+### Transferring/Sending
+
+At any given time, a vesting account may transfer: `min((BC + DV) - V, BC)`.
+
+In other words, a vesting account may transfer the minimum of the base account balance and the base account balance plus the number of currently delegated vesting coins less the number of coins still vesting.
+
+However, given that account balances are tracked via the `x/bank` module and that we want to avoid loading the entire account balance, we can instead determine the locked balance, which can be defined as `max(V - DV, 0)`, and infer the spendable balance from that.
+
+```go
+func (va VestingAccount) LockedCoins(t Time) Coins {
+ return max(va.GetVestingCoins(t) - va.DelegatedVesting, 0)
+}
+```
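A runnable sketch of the locked/spendable arithmetic, with plain integers in place of `Coins` and hypothetical balances:

```go
package main

import "fmt"

// lockedCoins implements max(V - DV, 0): vesting coins that are not
// already delegated remain locked in the account.
func lockedCoins(vesting, delegatedVesting int64) int64 {
	if locked := vesting - delegatedVesting; locked > 0 {
		return locked
	}
	return 0
}

// spendableCoins infers the spendable balance from the locked balance.
func spendableCoins(balance, vesting, delegatedVesting int64) int64 {
	return balance - lockedCoins(vesting, delegatedVesting)
}

func main() {
	// BC=7, V=8, DV=4: four coins stay locked, three are spendable.
	fmt.Println(lockedCoins(8, 4))
	fmt.Println(spendableCoins(7, 8, 4))
}
```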
+
+The `x/bank` `ViewKeeper` can then provide APIs to determine locked and spendable coins for any account:
+
+```go
+func (k Keeper) LockedCoins(ctx Context, addr AccAddress) Coins {
+ acc := k.GetAccount(ctx, addr)
+ if acc != nil {
+ if acc.IsVesting() {
+ return acc.LockedCoins(ctx.BlockTime())
+ }
+ }
+
+ // non-vesting accounts do not have any locked coins
+ return NewCoins()
+}
+```
+
+#### Keepers/Handlers
+
+The corresponding `x/bank` keeper should appropriately handle sending coins based on if the account is a vesting account or not.
+
+```go
+func (k Keeper) SendCoins(ctx Context, from Account, to Account, amount Coins) {
+ bc := k.GetBalances(ctx, from)
+ v := k.LockedCoins(ctx, from)
+
+ spendable := bc - v
+ newCoins := spendable - amount
+ assert(newCoins >= 0)
+
+ from.SetBalance(newCoins)
+ to.AddBalance(amount)
+
+ // save balances...
+}
+```
+
+### Delegating
+
+For a vesting account attempting to delegate `D` coins, the following is performed:
+
+1. Verify `BC >= D > 0`
+2. Compute `X := min(max(V - DV, 0), D)` (portion of `D` that is vesting)
+3. Compute `Y := D - X` (portion of `D` that is free)
+4. Set `DV += X`
+5. Set `DF += Y`
+
+```go
+func (va VestingAccount) TrackDelegation(t Time, balance Coins, amount Coins) {
+ assert(amount <= balance) // cannot delegate more than the available balance
+ x := min(max(va.GetVestingCoins(t) - va.DelegatedVesting, 0), amount)
+ y := amount - x
+
+ va.DelegatedVesting += x
+ va.DelegatedFree += y
+}
+```
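The four steps can be traced with a minimal sketch (integers instead of `Coins`; the values are hypothetical):

```go
package main

import "fmt"

// trackDelegation splits a delegation d into the vesting portion
// x = min(max(v - dv, 0), d) and the free portion y = d - x, and
// returns the updated DelegatedVesting and DelegatedFree totals.
func trackDelegation(v, dv, df, d int64) (int64, int64) {
	x := v - dv
	if x < 0 {
		x = 0
	}
	if x > d {
		x = d
	}
	y := d - x
	return dv + x, df + y
}

func main() {
	// V=8 coins still vesting, nothing delegated yet; delegate D=10.
	dv, df := trackDelegation(8, 0, 0, 10)
	fmt.Println(dv, df) // 8 counted as delegated vesting, 2 as delegated free
}
```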
+
+**Note** `TrackDelegation` only modifies the `DelegatedVesting` and `DelegatedFree` fields, so upstream callers MUST modify the `Coins` field by subtracting `amount`.
+
+#### Keepers/Handlers
+
+```go
+func DelegateCoins(t Time, from Account, amount Coins) {
+ if isVesting(from) {
+ from.TrackDelegation(t, amount)
+ } else {
+ from.SetBalance(sc - amount)
+ }
+
+ // save account...
+}
+```
+
+### Undelegating
+
+For a vesting account attempting to undelegate `D` coins, the following is performed:
+
+> NOTE: `DV < D` and `(DV + DF) < D` may be possible due to quirks in the rounding of delegation/undelegation logic.
+
+1. Verify `D > 0`
+2. Compute `X := min(DF, D)` (portion of `D` that should become free, prioritizing free coins)
+3. Compute `Y := min(DV, D - X)` (portion of `D` that should remain vesting)
+4. Set `DF -= X`
+5. Set `DV -= Y`
+
+```go
+func (cva ContinuousVestingAccount) TrackUndelegation(amount Coins) {
+ x := min(cva.DelegatedFree, amount)
+ y := amount - x
+
+ cva.DelegatedFree -= x
+ cva.DelegatedVesting -= y
+}
+```
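The same steps as a runnable sketch (integers instead of `Coins`; the values are hypothetical):

```go
package main

import "fmt"

// trackUndelegation releases d coins, drawing from DelegatedFree first
// (x = min(df, d)) and the remainder from DelegatedVesting
// (y = min(dv, d-x)), returning the updated dv and df.
func trackUndelegation(dv, df, d int64) (int64, int64) {
	x := d
	if df < x {
		x = df
	}
	y := d - x
	if y > dv {
		y = dv
	}
	return dv - y, df - x
}

func main() {
	// DV=5, DF=5; undelegate 7: all 5 free coins plus 2 vesting coins.
	dv, df := trackUndelegation(5, 5, 7)
	fmt.Println(dv, df) // DV drops to 3, DF to 0
}
```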
+
+**Note** `TrackUndelegation` only modifies the `DelegatedVesting` and `DelegatedFree` fields, so upstream callers MUST modify the `Coins` field by adding `amount`.
+
+**Note**: If a delegation is slashed, the continuous vesting account ends up with an excess `DV` amount, even after all its coins have vested. This is because undelegation prioritizes free coins.
+
+**Note**: The undelegation (bond refund) amount may exceed the delegated vesting (bond) amount due to the way undelegation truncates the bond refund, which can increase the validator's exchange rate (tokens/shares) slightly if the undelegated tokens are non-integral.
+
+#### Keepers/Handlers
+
+```go
+func UndelegateCoins(to Account, amount Coins) {
+ if isVesting(to) {
+ if to.DelegatedFree + to.DelegatedVesting >= amount {
+ to.TrackUndelegation(amount)
+ // save account ...
+ }
+ } else {
+ AddBalance(to, amount)
+ // save account...
+ }
+}
+```
+
+## Keepers & Handlers
+
+The `VestingAccount` implementations reside in `x/auth`. However, any keeper in a module (e.g. staking in `x/staking`) wishing to potentially utilize any vesting coins must call explicit methods on the `x/bank` keeper (e.g. `DelegateCoins`) as opposed to `SendCoins` and `SubtractCoins`.
+
+In addition, the vesting account should also be able to spend any coins it receives from other users. Thus, the bank module's `MsgSend` handler should error if a vesting account is trying to send an amount that exceeds its unlocked coin amount.
+
+See the above specification for full implementation details.
+
+## Genesis Initialization
+
+To initialize both vesting and non-vesting accounts, the `GenesisAccount` struct includes new fields: `Vesting`, `StartTime`, and `EndTime`. Accounts meant to be of type `BaseAccount` or any non-vesting type have `Vesting = false`. The genesis initialization logic (e.g. `initFromGenesisState`) must parse and return the correct accounts accordingly based on these fields.
+
+```go
+type GenesisAccount struct {
+ // ...
+
+ // vesting account fields
+ OriginalVesting sdk.Coins `json:"original_vesting"`
+ DelegatedFree sdk.Coins `json:"delegated_free"`
+ DelegatedVesting sdk.Coins `json:"delegated_vesting"`
+ StartTime int64 `json:"start_time"`
+ EndTime int64 `json:"end_time"`
+}
+
+func ToAccount(gacc GenesisAccount) Account {
+ bacc := NewBaseAccount(gacc)
+
+ if gacc.OriginalVesting > 0 {
+ if gacc.StartTime != 0 && gacc.EndTime != 0 {
+ // return a continuous vesting account
+ } else if gacc.EndTime != 0 {
+ // return a delayed vesting account
+ } else {
+ // invalid genesis vesting account provided
+ panic()
+ }
+ }
+
+ return bacc
+}
+```
+
+## Examples
+
+### Simple
+
+Given a continuous vesting account with 10 vesting coins.
+
+```text
+OV = 10
+DF = 0
+DV = 0
+BC = 10
+V = 10
+V' = 0
+```
+
+1. Immediately receives 1 coin
+
+ ```text
+ BC = 11
+ ```
+
+2. Time passes, 2 coins vest
+
+ ```text
+ V = 8
+ V' = 2
+ ```
+
+3. Delegates 4 coins to validator A
+
+ ```text
+ DV = 4
+ BC = 7
+ ```
+
+4. Sends 3 coins
+
+ ```text
+ BC = 4
+ ```
+
+5. More time passes, 2 more coins vest
+
+ ```text
+ V = 6
+ V' = 4
+ ```
+
+6. Sends 2 coins. At this point the account cannot send anymore until further
+coins vest or it receives additional coins. It can, however, still delegate.
+
+ ```text
+ BC = 2
+ ```
+
+### Slashing
+
+Same initial starting conditions as the simple example.
+
+1. Time passes, 5 coins vest
+
+ ```text
+ V = 5
+ V' = 5
+ ```
+
+2. Delegate 5 coins to validator A
+
+ ```text
+ DV = 5
+ BC = 5
+ ```
+
+3. Delegate 5 coins to validator B
+
+ ```text
+ DF = 5
+ BC = 0
+ ```
+
+4. Validator A gets slashed by 50%, making the delegation to A now worth 2.5 coins
+5. Undelegate from validator A (2.5 coins)
+
+ ```text
+ DF = 5 - 2.5 = 2.5
+ BC = 0 + 2.5 = 2.5
+ ```
+
+6. Undelegate from validator B (5 coins). The account at this point can only
+send 2.5 coins unless it receives more coins or until more coins vest.
+It can, however, still delegate.
+
+ ```text
+ DV = 5 - 2.5 = 2.5
+ DF = 2.5 - 2.5 = 0
+ BC = 2.5 + 5 = 7.5
+ ```
+
+ Notice how we have an excess amount of `DV`.
+
+### Periodic Vesting
+
+A vesting account is created where 100 tokens will be released over 1 year, with
+1/4 of tokens vesting each quarter. The vesting schedule would be as follows:
+
+```yaml
+Periods:
+- amount: 25stake, length: 7884000
+- amount: 25stake, length: 7884000
+- amount: 25stake, length: 7884000
+- amount: 25stake, length: 7884000
+```
+
+```text
+OV = 100
+DF = 0
+DV = 0
+BC = 100
+V = 100
+V' = 0
+```
+
+1. Immediately receives 1 coin
+
+ ```text
+ BC = 101
+ ```
+
+2. Vesting period 1 passes, 25 coins vest
+
+ ```text
+ V = 75
+ V' = 25
+ ```
+
+3. During vesting period 2, 5 coins are transferred and 5 coins are delegated
+
+ ```text
+ DV = 5
+ BC = 91
+ ```
+
+4. Vesting period 2 passes, 25 coins vest
+
+ ```text
+ V = 50
+ V' = 50
+ ```
+
+## Glossary
+
+* OriginalVesting: The amount of coins (per denomination) that are initially
+part of a vesting account. These coins are set at genesis.
+* StartTime: The BFT time at which a vesting account starts to vest.
+* EndTime: The BFT time at which a vesting account is fully vested.
+* DelegatedFree: The tracked amount of coins (per denomination) that are
+delegated from a vesting account that have been fully vested at time of delegation.
+* DelegatedVesting: The tracked amount of coins (per denomination) that are
+delegated from a vesting account that were vesting at time of delegation.
+* ContinuousVestingAccount: A vesting account implementation that vests coins
+linearly over time.
+* DelayedVestingAccount: A vesting account implementation that only fully vests
+all coins at a given time.
+* PeriodicVestingAccount: A vesting account implementation that vests coins
+according to a custom vesting schedule.
+* PermanentLockedAccount: It does not ever release coins, locking them indefinitely.
+Coins in this account can still be used for delegating and for governance votes even while locked.
+
+
+## CLI
+
+A user can query and interact with the `vesting` module using the CLI.
+
+### Transactions
+
+The `tx` commands allow users to interact with the `vesting` module.
+
+```bash
+simd tx vesting --help
+```
+
+#### create-periodic-vesting-account
+
+The `create-periodic-vesting-account` command creates a new vesting account funded with an allocation of tokens that are released according to a defined schedule, given as a sequence of coin amounts and period lengths in seconds. Periods are sequential, in that the duration of a period only starts at the end of the previous period. The duration of the first period starts upon account creation.
+
+```bash
+simd tx vesting create-periodic-vesting-account [to_address] [periods_json_file] [flags]
+```
+
+Example:
+
+```bash
+simd tx vesting create-periodic-vesting-account cosmos1.. periods.json
+```
+
+#### create-vesting-account
+
+The `create-vesting-account` command creates a new vesting account funded with an allocation of tokens. The account can either be a delayed or continuous vesting account, which is determined by the `--delayed` flag. All vesting accounts created will have their start time set by the committed block's time. The `end_time` must be provided as a UNIX epoch timestamp.
+
+```bash
+simd tx vesting create-vesting-account [to_address] [amount] [end_time] [flags]
+```
+
+Example:
+
+```bash
+simd tx vesting create-vesting-account cosmos1.. 100stake 2592000
+```
diff --git a/copy-of-sdk-docs/build/modules/auth/2-tx.md b/copy-of-sdk-docs/build/modules/auth/2-tx.md
new file mode 100644
index 00000000..78da0503
--- /dev/null
+++ b/copy-of-sdk-docs/build/modules/auth/2-tx.md
@@ -0,0 +1,264 @@
+---
+sidebar_position: 2
+---
+
+# `x/auth/tx`
+
+:::note Pre-requisite Readings
+
+* [Transactions](https://docs.cosmos.network/main/core/transactions#transaction-generation)
+* [Encoding](https://docs.cosmos.network/main/core/encoding#transaction-encoding)
+
+:::
+
+## Abstract
+
+This document specifies the `x/auth/tx` package of the Cosmos SDK.
+
+This package represents the Cosmos SDK implementation of the `client.TxConfig`, `client.TxBuilder`, `client.TxEncoder` and `client.TxDecoder` interfaces.
+
+## Contents
+
+* [Transactions](#transactions)
+ * [`TxConfig`](#txconfig)
+ * [`TxBuilder`](#txbuilder)
+ * [`TxEncoder`/ `TxDecoder`](#txencoder-txdecoder)
+* [Client](#client)
+ * [CLI](#cli)
+ * [gRPC](#grpc)
+
+## Transactions
+
+### `TxConfig`
+
+`client.TxConfig` defines an interface a client can utilize to generate an application-defined concrete transaction type.
+The interface defines a set of methods for creating a `client.TxBuilder`.
+
+```go reference
+https://github.com/cosmos/cosmos-sdk/tree/release/v0.50.x/client/tx_config.go#L25-L31
+```
+
+The default implementation of `client.TxConfig` is instantiated by `NewTxConfig` in `x/auth/tx` module.
+
+```go reference
+https://github.com/cosmos/cosmos-sdk/tree/release/v0.50.x/x/auth/tx/config.go#L22-L28
+```
+
+### `TxBuilder`
+
+```go reference
+https://github.com/cosmos/cosmos-sdk/tree/release/v0.50.x/client/tx_config.go#L33-L50
+```
+
+The [`client.TxBuilder`](https://docs.cosmos.network/main/core/transactions#transaction-generation) interface is likewise implemented by `x/auth/tx`.
+A `client.TxBuilder` can be accessed with `TxConfig.NewTxBuilder()`.
+
+### `TxEncoder`/ `TxDecoder`
+
+More information about `TxEncoder` and `TxDecoder` can be found [here](https://docs.cosmos.network/main/core/encoding#transaction-encoding).
+
+## Client
+
+### CLI
+
+#### Query
+
+The `x/auth/tx` module provides a CLI command to query any transaction, given its hash, account sequence, signature, or events.
+
+Without any argument, the command will query the transaction using the transaction hash.
+
+```shell
+simd query tx DFE87B78A630C0EFDF76C80CD24C997E252792E0317502AE1A02B9809F0D8685
+```
+
+When querying a transaction from an account given its sequence, use the `--type=acc_seq` flag:
+
+```shell
+simd query tx --type=acc_seq cosmos1u69uyr6v9qwe6zaaeaqly2h6wnedac0xpxq325/1
+```
+
+When querying a transaction given its signature, use the `--type=signature` flag:
+
+```shell
+simd query tx --type=signature Ofjvgrqi8twZfqVDmYIhqwRLQjZZ40XbxEamk/veH3gQpRF0hL2PH4ejRaDzAX+2WChnaWNQJQ41ekToIi5Wqw==
+```
+
+When querying a transaction given its events, use the `--type=events` flag:
+
+```shell
+simd query txs --events 'message.sender=cosmos...' --page 1 --limit 30
+```
+
+The CLI also provides commands to query any block, given its hash, height, or events.
+
+When querying a block by its hash, use the `--type=hash` flag:
+
+```shell
+simd query block --type=hash DFE87B78A630C0EFDF76C80CD24C997E252792E0317502AE1A02B9809F0D8685
+```
+
+When querying a block by its height, use the `--type=height` flag:
+
+```shell
+simd query block --type=height 1357
+```
+
+When querying a block by its events, use the `--query` flag:
+
+```shell
+simd query blocks --query 'message.sender=cosmos...' --page 1 --limit 30
+```
+
+#### Transactions
+
+The `x/auth/tx` module provides a convenient CLI command for decoding and encoding transactions.
+
+#### `encode`
+
+The `encode` command encodes a transaction created with the `--generate-only` flag or signed with the sign command.
+The transaction is serialized to Protobuf and returned as base64.
+
+```bash
+$ simd tx encode tx.json
+Co8BCowBChwvY29zbW9zLmJhbmsudjFiZXRhMS5Nc2dTZW5kEmwKLWNvc21vczFsNnZzcWhoN3Jud3N5cjJreXozampnM3FkdWF6OGd3Z3lsODI3NRItY29zbW9zMTU4c2FsZHlnOHBteHU3Znd2dDBkNng3amVzd3A0Z3d5a2xrNnkzGgwKBXN0YWtlEgMxMDASBhIEEMCaDA==
+$ simd tx encode tx.signed.json
+```
+
+More information about the `encode` command can be found running `simd tx encode --help`.
+
+#### `decode`
+
+The `decode` command decodes a transaction encoded with the `encode` command.
+
+```bash
+simd tx decode Co8BCowBChwvY29zbW9zLmJhbmsudjFiZXRhMS5Nc2dTZW5kEmwKLWNvc21vczFsNnZzcWhoN3Jud3N5cjJreXozampnM3FkdWF6OGd3Z3lsODI3NRItY29zbW9zMTU4c2FsZHlnOHBteHU3Znd2dDBkNng3amVzd3A0Z3d5a2xrNnkzGgwKBXN0YWtlEgMxMDASBhIEEMCaDA==
+```
+
+More information about the `decode` command can be found running `simd tx decode --help`.
+
+### gRPC
+
+A user can query the `x/auth/tx` module using gRPC endpoints.
+
+#### `TxDecode`
+
+The `TxDecode` endpoint allows decoding a transaction.
+
+```shell
+cosmos.tx.v1beta1.Service/TxDecode
+```
+
+Example:
+
+```shell
+grpcurl -plaintext \
+ -d '{"tx_bytes":"Co8BCowBChwvY29zbW9zLmJhbmsudjFiZXRhMS5Nc2dTZW5kEmwKLWNvc21vczFsNnZzcWhoN3Jud3N5cjJreXozampnM3FkdWF6OGd3Z3lsODI3NRItY29zbW9zMTU4c2FsZHlnOHBteHU3Znd2dDBkNng3amVzd3A0Z3d5a2xrNnkzGgwKBXN0YWtlEgMxMDASBhIEEMCaDA=="}' \
+ localhost:9090 \
+ cosmos.tx.v1beta1.Service/TxDecode
+```
+
+Example Output:
+
+```json
+{
+ "tx": {
+ "body": {
+ "messages": [
+ {"@type":"/cosmos.bank.v1beta1.MsgSend","amount":[{"denom":"stake","amount":"100"}],"fromAddress":"cosmos1l6vsqhh7rnwsyr2kyz3jjg3qduaz8gwgyl8275","toAddress":"cosmos158saldyg8pmxu7fwvt0d6x7jeswp4gwyklk6y3"}
+ ]
+ },
+ "authInfo": {
+ "fee": {
+ "gasLimit": "200000"
+ }
+ }
+ }
+}
+```
+
+#### `TxEncode`
+
+The `TxEncode` endpoint allows encoding a transaction.
+
+```shell
+cosmos.tx.v1beta1.Service/TxEncode
+```
+
+Example:
+
+```shell
+grpcurl -plaintext \
+ -d '{"tx": {
+ "body": {
+ "messages": [
+ {"@type":"/cosmos.bank.v1beta1.MsgSend","amount":[{"denom":"stake","amount":"100"}],"fromAddress":"cosmos1l6vsqhh7rnwsyr2kyz3jjg3qduaz8gwgyl8275","toAddress":"cosmos158saldyg8pmxu7fwvt0d6x7jeswp4gwyklk6y3"}
+ ]
+ },
+ "authInfo": {
+ "fee": {
+ "gasLimit": "200000"
+ }
+ }
+ }}' \
+ localhost:9090 \
+ cosmos.tx.v1beta1.Service/TxEncode
+```
+
+Example Output:
+
+```json
+{
+ "txBytes": "Co8BCowBChwvY29zbW9zLmJhbmsudjFiZXRhMS5Nc2dTZW5kEmwKLWNvc21vczFsNnZzcWhoN3Jud3N5cjJreXozampnM3FkdWF6OGd3Z3lsODI3NRItY29zbW9zMTU4c2FsZHlnOHBteHU3Znd2dDBkNng3amVzd3A0Z3d5a2xrNnkzGgwKBXN0YWtlEgMxMDASBhIEEMCaDA=="
+}
+```
+
+#### `TxDecodeAmino`
+
+The `TxDecodeAmino` endpoint allows decoding an Amino-encoded transaction.
+
+```shell
+cosmos.tx.v1beta1.Service/TxDecodeAmino
+```
+
+Example:
+
+```shell
+grpcurl -plaintext \
+ -d '{"amino_binary": "KCgWqQpvqKNhmgotY29zbW9zMXRzeno3cDJ6Z2Q3dnZrYWh5ZnJlNHduNXh5dTgwcnB0ZzZ2OWg1Ei1jb3Ntb3MxdHN6ejdwMnpnZDd2dmthaHlmcmU0d241eHl1ODBycHRnNnY5aDUaCwoFc3Rha2USAjEwEhEKCwoFc3Rha2USAjEwEMCaDCIGZm9vYmFy"}' \
+ localhost:9090 \
+ cosmos.tx.v1beta1.Service/TxDecodeAmino
+```
+
+Example Output:
+
+```json
+{
+ "aminoJson": "{\"type\":\"cosmos-sdk/StdTx\",\"value\":{\"msg\":[{\"type\":\"cosmos-sdk/MsgSend\",\"value\":{\"from_address\":\"cosmos1tszz7p2zgd7vvkahyfre4wn5xyu80rptg6v9h5\",\"to_address\":\"cosmos1tszz7p2zgd7vvkahyfre4wn5xyu80rptg6v9h5\",\"amount\":[{\"denom\":\"stake\",\"amount\":\"10\"}]}}],\"fee\":{\"amount\":[{\"denom\":\"stake\",\"amount\":\"10\"}],\"gas\":\"200000\"},\"signatures\":null,\"memo\":\"foobar\",\"timeout_height\":\"0\"}}"
+}
+```
+
+#### `TxEncodeAmino`
+
+The `TxEncodeAmino` endpoint allows encoding an Amino transaction.
+
+```shell
+cosmos.tx.v1beta1.Service/TxEncodeAmino
+```
+
+Example:
+
+```shell
+grpcurl -plaintext \
+ -d '{"amino_json":"{\"type\":\"cosmos-sdk/StdTx\",\"value\":{\"msg\":[{\"type\":\"cosmos-sdk/MsgSend\",\"value\":{\"from_address\":\"cosmos1tszz7p2zgd7vvkahyfre4wn5xyu80rptg6v9h5\",\"to_address\":\"cosmos1tszz7p2zgd7vvkahyfre4wn5xyu80rptg6v9h5\",\"amount\":[{\"denom\":\"stake\",\"amount\":\"10\"}]}}],\"fee\":{\"amount\":[{\"denom\":\"stake\",\"amount\":\"10\"}],\"gas\":\"200000\"},\"signatures\":null,\"memo\":\"foobar\",\"timeout_height\":\"0\"}}"}' \
+ localhost:9090 \
+ cosmos.tx.v1beta1.Service/TxEncodeAmino
+```
+
+Example Output:
+
+```json
+{
+ "amino_binary": "KCgWqQpvqKNhmgotY29zbW9zMXRzeno3cDJ6Z2Q3dnZrYWh5ZnJlNHduNXh5dTgwcnB0ZzZ2OWg1Ei1jb3Ntb3MxdHN6ejdwMnpnZDd2dmthaHlmcmU0d241eHl1ODBycHRnNnY5aDUaCwoFc3Rha2USAjEwEhEKCwoFc3Rha2USAjEwEMCaDCIGZm9vYmFy"
+}
+```
diff --git a/copy-of-sdk-docs/build/modules/auth/README.md b/copy-of-sdk-docs/build/modules/auth/README.md
new file mode 100644
index 00000000..bd9f18a3
--- /dev/null
+++ b/copy-of-sdk-docs/build/modules/auth/README.md
@@ -0,0 +1,710 @@
+---
+sidebar_position: 1
+---
+
+# `x/auth`
+
+## Abstract
+
+This document specifies the auth module of the Cosmos SDK.
+
+The auth module is responsible for specifying the base transaction and account types
+for an application, since the SDK itself is agnostic to these particulars. It contains
+the middlewares, where all basic transaction validity checks (signatures, nonces, auxiliary fields)
+are performed, and exposes the account keeper, which allows other modules to read, write, and modify accounts.
+
+This module is used in the Cosmos Hub.
+
+## Contents
+
+* [Concepts](#concepts)
+ * [Gas & Fees](#gas--fees)
+* [State](#state)
+ * [Accounts](#accounts)
+* [AnteHandlers](#antehandlers)
+* [Keepers](#keepers)
+ * [Account Keeper](#account-keeper)
+* [Parameters](#parameters)
+* [Client](#client)
+ * [CLI](#cli)
+ * [gRPC](#grpc)
+ * [REST](#rest)
+
+## Concepts
+
+**Note:** The auth module is different from the [authz module](../authz/).
+
+The differences are:
+
+* `auth` - authentication of accounts and transactions for Cosmos SDK applications and is responsible for specifying the base transaction and account types.
+* `authz` - authorization for accounts to perform actions on behalf of other accounts and enables a granter to grant authorizations to a grantee that allows the grantee to execute messages on behalf of the granter.
+
+### Gas & Fees
+
+Fees serve two purposes for an operator of the network.
+
+Fees limit the growth of the state stored by every full node and allow for
+general purpose censorship of transactions of little economic value. Fees
+are best suited as an anti-spam mechanism where validators are disinterested in
+the use of the network and identities of users.
+
+Fees are determined by the gas limits and gas prices transactions provide, where
+`fees = ceil(gasLimit * gasPrices)`. Txs incur gas costs for all state reads/writes,
+signature verification, as well as costs proportional to the tx size. Operators
+should set minimum gas prices when starting their nodes. They must set the unit
+costs of gas in each token denomination they wish to support:
+
+`simd start ... --minimum-gas-prices=0.00001stake;0.05photinos`
+
+When adding transactions to the mempool or gossiping transactions, validators check
+if the transaction's gas prices, which are determined by the provided fees, meet
+any of the validator's minimum gas prices. In other words, a transaction must
+provide a fee of at least one denomination that matches a validator's minimum
+gas price.
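+
+The fee arithmetic above can be sketched with integer math (an illustration only; `requiredFee` and `meetsMinGasPrice` are not SDK functions, and the gas price is expressed as a fraction `priceNum/priceDen` to avoid floating-point rounding):
+
+```go
+package main
+
+import "fmt"
+
+// requiredFee computes fees = ceil(gasLimit * gasPrice), with the gas price
+// expressed as the fraction priceNum/priceDen.
+func requiredFee(gasLimit, priceNum, priceDen uint64) uint64 {
+	return (gasLimit*priceNum + priceDen - 1) / priceDen
+}
+
+// meetsMinGasPrice reports whether a provided fee amount in some denom
+// satisfies a node's minimum gas price for that denom.
+func meetsMinGasPrice(feeAmount, gasLimit, priceNum, priceDen uint64) bool {
+	return feeAmount >= requiredFee(gasLimit, priceNum, priceDen)
+}
+
+func main() {
+	// A tx with a 200000 gas limit against a 0.00001stake (= 1/100000)
+	// minimum gas price must provide at least ceil(200000/100000) = 2stake.
+	fmt.Println(requiredFee(200000, 1, 100000))          // 2
+	fmt.Println(meetsMinGasPrice(10, 200000, 1, 100000)) // true
+}
+```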
+
+CometBFT does not currently provide fee-based mempool prioritization, and
+fee-based mempool filtering is local to each node and not part of consensus. But
+with minimum gas prices set, such a mechanism could be implemented by node operators.
+
+Because the market value for tokens will fluctuate, validators are expected to
+dynamically adjust their minimum gas prices to a level that would encourage the
+use of the network.
+
+## State
+
+### Accounts
+
+Accounts contain authentication information for a uniquely identified external user of an SDK blockchain,
+including public key, address, and account number / sequence number for replay protection. For efficiency,
+since account balances must also be fetched to pay fees, account structs also store the balance of a user
+as `sdk.Coins`.
+
+Accounts are exposed externally as an interface, and stored internally as
+either a base account or vesting account. Module clients wishing to add more
+account types may do so.
+
+* `0x01 | Address -> ProtocolBuffer(account)`
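+
+The key layout above can be illustrated in a few lines of Go (a sketch; `accountStoreKey` is an illustrative name, not the SDK's implementation):
+
+```go
+package main
+
+import "fmt"
+
+// accountStoreKey mirrors the documented mapping
+// 0x01 | Address -> ProtocolBuffer(account): a one-byte prefix followed by
+// the raw address bytes.
+func accountStoreKey(addr []byte) []byte {
+	return append([]byte{0x01}, addr...)
+}
+
+func main() {
+	fmt.Printf("%x\n", accountStoreKey([]byte{0xde, 0xad, 0xbe, 0xef})) // 01deadbeef
+}
+```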
+
+#### Account Interface
+
+The account interface exposes methods to read and write standard account information.
+Note that all of these methods operate on an account struct conforming to the
+interface - in order to write the account to the store, the account keeper will
+need to be used.
+
+```go
+// AccountI is an interface used to store coins at a given address within state.
+// It presumes a notion of sequence numbers for replay protection,
+// a notion of account numbers for replay protection for previously pruned accounts,
+// and a pubkey for authentication purposes.
+//
+// Many complex conditions can be used in the concrete struct which implements AccountI.
+type AccountI interface {
+ proto.Message
+
+ GetAddress() sdk.AccAddress
+ SetAddress(sdk.AccAddress) error // errors if already set.
+
+ GetPubKey() crypto.PubKey // can return nil.
+ SetPubKey(crypto.PubKey) error
+
+ GetAccountNumber() uint64
+ SetAccountNumber(uint64) error
+
+ GetSequence() uint64
+ SetSequence(uint64) error
+
+ // Ensure that account implements stringer
+ String() string
+}
+```
+
+##### Base Account
+
+A base account is the simplest and most common account type, which just stores all requisite
+fields directly in a struct.
+
+```protobuf
+// BaseAccount defines a base account type. It contains all the necessary fields
+// for basic account functionality. Any custom account type should extend this
+// type for additional functionality (e.g. vesting).
+message BaseAccount {
+ string address = 1;
+ google.protobuf.Any pub_key = 2;
+ uint64 account_number = 3;
+ uint64 sequence = 4;
+}
+```
+
+### Vesting Account
+
+See [Vesting](https://docs.cosmos.network/main/modules/auth/vesting/).
+
+## AnteHandlers
+
+The `x/auth` module presently has no transaction handlers of its own, but does expose the special `AnteHandler`, used for performing basic validity checks on a transaction, such that it could be thrown out of the mempool.
+The `AnteHandler` can be seen as a set of decorators that check transactions within the current context, per [ADR 010](https://github.com/cosmos/cosmos-sdk/blob/main/docs/architecture/adr-010-modular-antehandler.md).
+
+Note that the `AnteHandler` is called on both `CheckTx` and `DeliverTx`, as CometBFT proposers presently have the ability to include in their proposed block transactions which fail `CheckTx`.
+
+### Decorators
+
+The auth module provides `AnteDecorator`s that are recursively chained together into a single `AnteHandler` in the following order:
+
+* `SetUpContextDecorator`: Sets the `GasMeter` in the `Context` and wraps the next `AnteHandler` with a defer clause to recover from any downstream `OutOfGas` panics in the `AnteHandler` chain to return an error with information on gas provided and gas used.
+
+* `RejectExtensionOptionsDecorator`: Rejects all extension options which can optionally be included in protobuf transactions.
+
+* `MempoolFeeDecorator`: Checks if the `tx` fee is above local mempool `minFee` parameter during `CheckTx`.
+
+* `ValidateBasicDecorator`: Calls `tx.ValidateBasic` and returns any non-nil error.
+
+* `TxTimeoutHeightDecorator`: Check for a `tx` height timeout.
+
+* `ValidateMemoDecorator`: Validates `tx` memo with application parameters and returns any non-nil error.
+
+* `ConsumeGasTxSizeDecorator`: Consumes gas proportional to the `tx` size based on application parameters.
+
+* `DeductFeeDecorator`: Deducts the `FeeAmount` from first signer of the `tx`. If the `x/feegrant` module is enabled and a fee granter is set, it deducts fees from the fee granter account.
+
+* `SetPubKeyDecorator`: Sets the pubkey from a `tx`'s signers that does not already have its corresponding pubkey saved in the state machine and in the current context.
+
+* `ValidateSigCountDecorator`: Validates the number of signatures in `tx` based on app-parameters.
+
+* `SigGasConsumeDecorator`: Consumes parameter-defined amount of gas for each signature. This requires pubkeys to be set in context for all signers as part of `SetPubKeyDecorator`.
+
+* `SigVerificationDecorator`: Verifies all signatures are valid. This requires pubkeys to be set in context for all signers as part of `SetPubKeyDecorator`.
+
+* `IncrementSequenceDecorator`: Increments the account sequence for each signer to prevent replay attacks.
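+
+A sketch of how such decorators compose into a single handler, using simplified stand-in types for the SDK's `Context`, `Tx`, and `AnteHandler` (none of the names below are SDK identifiers):
+
+```go
+package main
+
+import "fmt"
+
+type handler func(tx string) error
+
+// A decorator performs its check, then calls the next handler in the chain.
+type decorator func(tx string, next handler) error
+
+// chain folds the decorators right-to-left, so the first decorator in the
+// slice runs first at execution time.
+func chain(decorators ...decorator) handler {
+	h := handler(func(tx string) error { return nil }) // terminator
+	for i := len(decorators) - 1; i >= 0; i-- {
+		d, next := decorators[i], h
+		h = func(tx string) error { return d(tx, next) }
+	}
+	return h
+}
+
+func main() {
+	var order []string
+	mk := func(name string) decorator {
+		return func(tx string, next handler) error {
+			order = append(order, name)
+			return next(tx)
+		}
+	}
+	ante := chain(mk("SetUpContext"), mk("ValidateBasic"), mk("DeductFee"))
+	if err := ante("tx"); err != nil {
+		panic(err)
+	}
+	fmt.Println(order) // [SetUpContext ValidateBasic DeductFee]
+}
+```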
+
+## Keepers
+
+The auth module only exposes one keeper, the account keeper, which can be used to read and write accounts.
+
+### Account Keeper
+
+Presently only one fully-permissioned account keeper is exposed, which has the ability to both read and write
+all fields of all accounts, and to iterate over all stored accounts.
+
+```go
+// AccountKeeperI is the interface contract that x/auth's keeper implements.
+type AccountKeeperI interface {
+ // Return a new account with the next account number and the specified address. Does not save the new account to the store.
+ NewAccountWithAddress(sdk.Context, sdk.AccAddress) types.AccountI
+
+ // Return a new account with the next account number. Does not save the new account to the store.
+ NewAccount(sdk.Context, types.AccountI) types.AccountI
+
+ // Check if an account exists in the store.
+ HasAccount(sdk.Context, sdk.AccAddress) bool
+
+ // Retrieve an account from the store.
+ GetAccount(sdk.Context, sdk.AccAddress) types.AccountI
+
+ // Set an account in the store.
+ SetAccount(sdk.Context, types.AccountI)
+
+ // Remove an account from the store.
+ RemoveAccount(sdk.Context, types.AccountI)
+
+ // Iterate over all accounts, calling the provided function. Stop iteration when it returns true.
+ IterateAccounts(sdk.Context, func(types.AccountI) bool)
+
+ // Fetch the public key of an account at a specified address
+ GetPubKey(sdk.Context, sdk.AccAddress) (crypto.PubKey, error)
+
+ // Fetch the sequence of an account at a specified address.
+ GetSequence(sdk.Context, sdk.AccAddress) (uint64, error)
+
+ // Fetch the next account number, and increment the internal counter.
+ NextAccountNumber(sdk.Context) uint64
+}
+```
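+
+A toy in-memory stand-in can illustrate the documented contract: `NewAccountWithAddress` assigns the next account number but does not persist, and only `SetAccount` writes to the store (types and names below are simplified, not the SDK's):
+
+```go
+package main
+
+import "fmt"
+
+type account struct {
+	addr   string
+	number uint64
+}
+
+type keeper struct {
+	store map[string]account
+	next  uint64
+}
+
+// NewAccountWithAddress returns a new account with the next account number.
+// It does NOT save the account to the store.
+func (k *keeper) NewAccountWithAddress(addr string) account {
+	n := k.next
+	k.next++ // NextAccountNumber: fetch, then increment the counter
+	return account{addr: addr, number: n}
+}
+
+func (k *keeper) HasAccount(addr string) bool { _, ok := k.store[addr]; return ok }
+
+func (k *keeper) SetAccount(acc account) { k.store[acc.addr] = acc }
+
+func main() {
+	k := &keeper{store: map[string]account{}}
+	acc := k.NewAccountWithAddress("cosmos1example")
+	fmt.Println(k.HasAccount("cosmos1example")) // false: not yet persisted
+	k.SetAccount(acc)
+	fmt.Println(k.HasAccount("cosmos1example")) // true
+}
+```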
+
+## Parameters
+
+The auth module contains the following parameters:
+
+| Key | Type | Example |
+| ---------------------- | --------------- | ------- |
+| MaxMemoCharacters | uint64 | 256 |
+| TxSigLimit | uint64 | 7 |
+| TxSizeCostPerByte | uint64 | 10 |
+| SigVerifyCostED25519 | uint64 | 590 |
+| SigVerifyCostSecp256k1 | uint64 | 1000 |
+
+## Client
+
+### CLI
+
+A user can query and interact with the `auth` module using the CLI.
+
+### Query
+
+The `query` commands allow users to query `auth` state.
+
+```bash
+simd query auth --help
+```
+
+#### account
+
+The `account` command allows users to query for an account by its address.
+
+```bash
+simd query auth account [address] [flags]
+```
+
+Example:
+
+```bash
+simd query auth account cosmos1...
+```
+
+Example Output:
+
+```bash
+'@type': /cosmos.auth.v1beta1.BaseAccount
+account_number: "0"
+address: cosmos1zwg6tpl8aw4rawv8sgag9086lpw5hv33u5ctr2
+pub_key:
+ '@type': /cosmos.crypto.secp256k1.PubKey
+ key: ApDrE38zZdd7wLmFS9YmqO684y5DG6fjZ4rVeihF/AQD
+sequence: "1"
+```
+
+#### accounts
+
+The `accounts` command allows users to query all available accounts.
+
+```bash
+simd query auth accounts [flags]
+```
+
+Example:
+
+```bash
+simd query auth accounts
+```
+
+Example Output:
+
+```bash
+accounts:
+- '@type': /cosmos.auth.v1beta1.BaseAccount
+ account_number: "0"
+ address: cosmos1zwg6tpl8aw4rawv8sgag9086lpw5hv33u5ctr2
+ pub_key:
+ '@type': /cosmos.crypto.secp256k1.PubKey
+ key: ApDrE38zZdd7wLmFS9YmqO684y5DG6fjZ4rVeihF/AQD
+ sequence: "1"
+- '@type': /cosmos.auth.v1beta1.ModuleAccount
+ base_account:
+ account_number: "8"
+ address: cosmos1yl6hdjhmkf37639730gffanpzndzdpmhwlkfhr
+ pub_key: null
+ sequence: "0"
+ name: transfer
+ permissions:
+ - minter
+ - burner
+- '@type': /cosmos.auth.v1beta1.ModuleAccount
+ base_account:
+ account_number: "4"
+ address: cosmos1fl48vsnmsdzcv85q5d2q4z5ajdha8yu34mf0eh
+ pub_key: null
+ sequence: "0"
+ name: bonded_tokens_pool
+ permissions:
+ - burner
+ - staking
+- '@type': /cosmos.auth.v1beta1.ModuleAccount
+ base_account:
+ account_number: "5"
+ address: cosmos1tygms3xhhs3yv487phx3dw4a95jn7t7lpm470r
+ pub_key: null
+ sequence: "0"
+ name: not_bonded_tokens_pool
+ permissions:
+ - burner
+ - staking
+- '@type': /cosmos.auth.v1beta1.ModuleAccount
+ base_account:
+ account_number: "6"
+ address: cosmos10d07y265gmmuvt4z0w9aw880jnsr700j6zn9kn
+ pub_key: null
+ sequence: "0"
+ name: gov
+ permissions:
+ - burner
+- '@type': /cosmos.auth.v1beta1.ModuleAccount
+ base_account:
+ account_number: "3"
+ address: cosmos1jv65s3grqf6v6jl3dp4t6c9t9rk99cd88lyufl
+ pub_key: null
+ sequence: "0"
+ name: distribution
+ permissions: []
+- '@type': /cosmos.auth.v1beta1.BaseAccount
+ account_number: "1"
+ address: cosmos147k3r7v2tvwqhcmaxcfql7j8rmkrlsemxshd3j
+ pub_key: null
+ sequence: "0"
+- '@type': /cosmos.auth.v1beta1.ModuleAccount
+ base_account:
+ account_number: "7"
+ address: cosmos1m3h30wlvsf8llruxtpukdvsy0km2kum8g38c8q
+ pub_key: null
+ sequence: "0"
+ name: mint
+ permissions:
+ - minter
+- '@type': /cosmos.auth.v1beta1.ModuleAccount
+ base_account:
+ account_number: "2"
+ address: cosmos17xpfvakm2amg962yls6f84z3kell8c5lserqta
+ pub_key: null
+ sequence: "0"
+ name: fee_collector
+ permissions: []
+pagination:
+ next_key: null
+ total: "0"
+```
+
+#### params
+
+The `params` command allows users to query the current auth parameters.
+
+```bash
+simd query auth params [flags]
+```
+
+Example:
+
+```bash
+simd query auth params
+```
+
+Example Output:
+
+```bash
+max_memo_characters: "256"
+sig_verify_cost_ed25519: "590"
+sig_verify_cost_secp256k1: "1000"
+tx_sig_limit: "7"
+tx_size_cost_per_byte: "10"
+```
+
+### Transactions
+
+The `auth` module supports transaction commands that help with signing and more. Unlike other modules, you access the `auth` module's transaction commands directly under the top-level `tx` command.
+
+Use the `--help` flag to get more information about the `tx` command.
+
+```bash
+simd tx --help
+```
+
+#### `sign`
+
+The `sign` command allows users to sign transactions that were generated offline.
+
+```bash
+simd tx sign tx.json --from $ALICE > tx.signed.json
+```
+
+The result is a signed transaction that can be broadcast to the network using the `broadcast` command.
+
+More information about the `sign` command can be found running `simd tx sign --help`.
+
+#### `sign-batch`
+
+The `sign-batch` command allows users to sign multiple offline-generated transactions.
+The transactions can be in one file, with one tx per line, or in multiple files.
+
+```bash
+simd tx sign-batch txs.json --from $ALICE > tx.signed.json
+```
+
+or
+
+```bash
+simd tx sign-batch tx1.json tx2.json tx3.json --from $ALICE > tx.signed.json
+```
+
+The result is multiple signed transactions. To combine the signed transactions into one transaction, use the `--append` flag.
+
+More information about the `sign-batch` command can be found running `simd tx sign-batch --help`.
+
+#### `multi-sign`
+
+The `multi-sign` command allows users to sign transactions that were generated offline by a multisig account.
+
+```bash
+simd tx multisign transaction.json k1k2k3 k1sig.json k2sig.json k3sig.json
+```
+
+Where `k1k2k3` is the multisig account address, `k1sig.json` is the signature of the first signer, `k2sig.json` is the signature of the second signer, and `k3sig.json` is the signature of the third signer.
+
+##### Nested multisig transactions
+
+To allow transactions to be signed by nested multisigs, meaning that a participant of a multisig account can be another multisig account, the `--skip-signature-verification` flag must be used.
+
+```bash
+# First aggregate signatures of the multisig participant
+simd tx multi-sign transaction.json ms1 ms1p1sig.json ms1p2sig.json --signature-only --skip-signature-verification > ms1sig.json
+
+# Then use the aggregated signatures and the other signatures to sign the final transaction
+simd tx multi-sign transaction.json k1ms1 k1sig.json ms1sig.json --skip-signature-verification
+```
+
+Where `ms1` is the nested multisig account address, `ms1p1sig.json` is the signature of the first participant of the nested multisig account, `ms1p2sig.json` is the signature of the second participant of the nested multisig account, and `ms1sig.json` is the aggregated signature of the nested multisig account.
+
+`k1ms1` is a multisig account comprised of an individual signer and another nested multisig account (`ms1`). `k1sig.json` is the signature of the first signer of the individual member.
+
+More information about the `multi-sign` command can be found running `simd tx multi-sign --help`.
+
+#### `multisign-batch`
+
+The `multisign-batch` command works the same way as `sign-batch`, but for multisig accounts,
+with the difference that it requires all transactions to be in one file and that the `--append` flag does not exist.
+
+More information about the `multisign-batch` command can be found running `simd tx multisign-batch --help`.
+
+#### `validate-signatures`
+
+The `validate-signatures` command allows users to validate the signatures of a signed transaction.
+
+```bash
+$ simd tx validate-signatures tx.signed.json
+Signers:
+ 0: cosmos1l6vsqhh7rnwsyr2kyz3jjg3qduaz8gwgyl8275
+
+Signatures:
+ 0: cosmos1l6vsqhh7rnwsyr2kyz3jjg3qduaz8gwgyl8275 [OK]
+```
+
+More information about the `validate-signatures` command can be found running `simd tx validate-signatures --help`.
+
+#### `broadcast`
+
+The `broadcast` command allows users to broadcast a signed transaction to the network.
+
+```bash
+simd tx broadcast tx.signed.json
+```
+
+More information about the `broadcast` command can be found running `simd tx broadcast --help`.
+
+### gRPC
+
+A user can query the `auth` module using gRPC endpoints.
+
+#### Account
+
+The `account` endpoint allows users to query for an account by its address.
+
+```bash
+cosmos.auth.v1beta1.Query/Account
+```
+
+Example:
+
+```bash
+grpcurl -plaintext \
+ -d '{"address":"cosmos1.."}' \
+ localhost:9090 \
+ cosmos.auth.v1beta1.Query/Account
+```
+
+Example Output:
+
+```bash
+{
+ "account":{
+ "@type":"/cosmos.auth.v1beta1.BaseAccount",
+ "address":"cosmos1zwg6tpl8aw4rawv8sgag9086lpw5hv33u5ctr2",
+ "pubKey":{
+ "@type":"/cosmos.crypto.secp256k1.PubKey",
+ "key":"ApDrE38zZdd7wLmFS9YmqO684y5DG6fjZ4rVeihF/AQD"
+ },
+ "sequence":"1"
+ }
+}
+```
+
+#### Accounts
+
+The `accounts` endpoint allows users to query all available accounts.
+
+```bash
+cosmos.auth.v1beta1.Query/Accounts
+```
+
+Example:
+
+```bash
+grpcurl -plaintext \
+ localhost:9090 \
+ cosmos.auth.v1beta1.Query/Accounts
+```
+
+Example Output:
+
+```bash
+{
+ "accounts":[
+ {
+ "@type":"/cosmos.auth.v1beta1.BaseAccount",
+ "address":"cosmos1zwg6tpl8aw4rawv8sgag9086lpw5hv33u5ctr2",
+ "pubKey":{
+ "@type":"/cosmos.crypto.secp256k1.PubKey",
+ "key":"ApDrE38zZdd7wLmFS9YmqO684y5DG6fjZ4rVeihF/AQD"
+ },
+ "sequence":"1"
+ },
+ {
+ "@type":"/cosmos.auth.v1beta1.ModuleAccount",
+ "baseAccount":{
+ "address":"cosmos1yl6hdjhmkf37639730gffanpzndzdpmhwlkfhr",
+ "accountNumber":"8"
+ },
+ "name":"transfer",
+ "permissions":[
+ "minter",
+ "burner"
+ ]
+ },
+ {
+ "@type":"/cosmos.auth.v1beta1.ModuleAccount",
+ "baseAccount":{
+ "address":"cosmos1fl48vsnmsdzcv85q5d2q4z5ajdha8yu34mf0eh",
+ "accountNumber":"4"
+ },
+ "name":"bonded_tokens_pool",
+ "permissions":[
+ "burner",
+ "staking"
+ ]
+ },
+ {
+ "@type":"/cosmos.auth.v1beta1.ModuleAccount",
+ "baseAccount":{
+ "address":"cosmos1tygms3xhhs3yv487phx3dw4a95jn7t7lpm470r",
+ "accountNumber":"5"
+ },
+ "name":"not_bonded_tokens_pool",
+ "permissions":[
+ "burner",
+ "staking"
+ ]
+ },
+ {
+ "@type":"/cosmos.auth.v1beta1.ModuleAccount",
+ "baseAccount":{
+ "address":"cosmos10d07y265gmmuvt4z0w9aw880jnsr700j6zn9kn",
+ "accountNumber":"6"
+ },
+ "name":"gov",
+ "permissions":[
+ "burner"
+ ]
+ },
+ {
+ "@type":"/cosmos.auth.v1beta1.ModuleAccount",
+ "baseAccount":{
+ "address":"cosmos1jv65s3grqf6v6jl3dp4t6c9t9rk99cd88lyufl",
+ "accountNumber":"3"
+ },
+ "name":"distribution"
+ },
+ {
+ "@type":"/cosmos.auth.v1beta1.BaseAccount",
+ "accountNumber":"1",
+ "address":"cosmos147k3r7v2tvwqhcmaxcfql7j8rmkrlsemxshd3j"
+ },
+ {
+ "@type":"/cosmos.auth.v1beta1.ModuleAccount",
+ "baseAccount":{
+ "address":"cosmos1m3h30wlvsf8llruxtpukdvsy0km2kum8g38c8q",
+ "accountNumber":"7"
+ },
+ "name":"mint",
+ "permissions":[
+ "minter"
+ ]
+ },
+ {
+ "@type":"/cosmos.auth.v1beta1.ModuleAccount",
+ "baseAccount":{
+ "address":"cosmos17xpfvakm2amg962yls6f84z3kell8c5lserqta",
+ "accountNumber":"2"
+ },
+ "name":"fee_collector"
+ }
+ ],
+ "pagination":{
+ "total":"9"
+ }
+}
+```
+
+#### Params
+
+The `params` endpoint allows users to query the current auth parameters.
+
+```bash
+cosmos.auth.v1beta1.Query/Params
+```
+
+Example:
+
+```bash
+grpcurl -plaintext \
+ localhost:9090 \
+ cosmos.auth.v1beta1.Query/Params
+```
+
+Example Output:
+
+```bash
+{
+ "params": {
+ "maxMemoCharacters": "256",
+ "txSigLimit": "7",
+ "txSizeCostPerByte": "10",
+ "sigVerifyCostEd25519": "590",
+ "sigVerifyCostSecp256k1": "1000"
+ }
+}
+```
+
+### REST
+
+A user can query the `auth` module using REST endpoints.
+
+#### Account
+
+The `account` endpoint allows users to query for an account by its address.
+
+```bash
+/cosmos/auth/v1beta1/account?address={address}
+```
+
+#### Accounts
+
+The `accounts` endpoint allows users to query all available accounts.
+
+```bash
+/cosmos/auth/v1beta1/accounts
+```
+
+#### Params
+
+The `params` endpoint allows users to query the current auth parameters.
+
+```bash
+/cosmos/auth/v1beta1/params
+```
diff --git a/copy-of-sdk-docs/build/modules/authz/README.md b/copy-of-sdk-docs/build/modules/authz/README.md
new file mode 100644
index 00000000..3ec3c366
--- /dev/null
+++ b/copy-of-sdk-docs/build/modules/authz/README.md
@@ -0,0 +1,377 @@
+---
+sidebar_position: 1
+---
+
+# `x/authz`
+
+## Abstract
+
+`x/authz` is an implementation of a Cosmos SDK module, per [ADR 30](https://github.com/cosmos/cosmos-sdk/blob/main/docs/architecture/adr-030-authz-module.md), that allows
+granting arbitrary privileges from one account (the granter) to another account (the grantee). Authorizations must be granted for a particular Msg service method one by one using an implementation of the `Authorization` interface.
+
+## Contents
+
+* [Concepts](#concepts)
+ * [Authorization and Grant](#authorization-and-grant)
+ * [Built-in Authorizations](#built-in-authorizations)
+ * [Gas](#gas)
+* [State](#state)
+ * [Grant](#grant)
+ * [GrantQueue](#grantqueue)
+* [Messages](#messages)
+ * [MsgGrant](#msggrant)
+ * [MsgRevoke](#msgrevoke)
+ * [MsgExec](#msgexec)
+* [Events](#events)
+* [Client](#client)
+ * [CLI](#cli)
+ * [gRPC](#grpc)
+ * [REST](#rest)
+
+## Concepts
+
+### Authorization and Grant
+
+The `x/authz` module defines interfaces and messages that grant authorizations to perform actions
+on behalf of one account to other accounts. The design is defined in the [ADR 030](https://github.com/cosmos/cosmos-sdk/blob/main/docs/architecture/adr-030-authz-module.md).
+
+A *grant* is an allowance to execute a Msg by the grantee on behalf of the granter.
+Authorization is an interface that must be implemented by a concrete authorization logic to validate and execute grants. Authorizations are extensible and can be defined for any Msg service method even outside of the module where the Msg method is defined. See the `SendAuthorization` example in the next section for more details.
+
+**Note:** The authz module is different from the [auth (authentication)](../auth/) module that is responsible for specifying the base transaction and account types.
+
+```go reference
+https://github.com/cosmos/cosmos-sdk/blob/v0.47.0-rc1/x/authz/authorizations.go#L11-L25
+```
+
+### Built-in Authorizations
+
+The Cosmos SDK `x/authz` module comes with the following authorization types:
+
+#### GenericAuthorization
+
+`GenericAuthorization` implements the `Authorization` interface, giving unrestricted permission to execute the provided Msg on behalf of the granter's account.
+
+```protobuf reference
+https://github.com/cosmos/cosmos-sdk/blob/v0.47.0-rc1/proto/cosmos/authz/v1beta1/authz.proto#L14-L22
+```
+
+```go reference
+https://github.com/cosmos/cosmos-sdk/blob/v0.47.0-rc1/x/authz/generic_authorization.go#L16-L29
+```
+
+* `msg` stores Msg type URL.
+
+#### SendAuthorization
+
+`SendAuthorization` implements the `Authorization` interface for the `cosmos.bank.v1beta1.MsgSend` Msg.
+
+* It takes a (positive) `SpendLimit` that specifies the maximum amount of tokens the grantee can spend. The `SpendLimit` is updated as the tokens are spent.
+* It takes an (optional) `AllowList` that specifies to which addresses a grantee can send tokens.
+
+```protobuf reference
+https://github.com/cosmos/cosmos-sdk/blob/v0.47.0-rc1/proto/cosmos/bank/v1beta1/authz.proto#L11-L30
+```
+
+```go reference
+https://github.com/cosmos/cosmos-sdk/blob/v0.47.0-rc1/x/bank/types/send_authorization.go#L29-L62
+```
+
+* `spend_limit` keeps track of how many coins are left in the authorization.
+* `allow_list` specifies an optional list of addresses to whom the grantee can send tokens on behalf of the granter.
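+
+The spend-limit bookkeeping can be sketched as follows (a simplified single-denom illustration; in the actual module, accepting a send also deletes the grant once the limit is exhausted — the field and method names below are not the SDK's):
+
+```go
+package main
+
+import "fmt"
+
+type sendAuthorization struct {
+	spendLimit uint64 // remaining allowance, single denom for simplicity
+}
+
+// accept returns the updated authorization, whether the grant should be
+// deleted (limit exhausted), and an error when the send exceeds the limit.
+func (a sendAuthorization) accept(amount uint64) (sendAuthorization, bool, error) {
+	if amount > a.spendLimit {
+		return a, false, fmt.Errorf("requested amount is more than spend limit")
+	}
+	a.spendLimit -= amount
+	return a, a.spendLimit == 0, nil
+}
+
+func main() {
+	auth := sendAuthorization{spendLimit: 100}
+	auth, del, _ := auth.accept(40)
+	fmt.Println(auth.spendLimit, del) // 60 false
+	auth, del, _ = auth.accept(60)
+	fmt.Println(auth.spendLimit, del) // 0 true
+}
+```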
+
+#### StakeAuthorization
+
+`StakeAuthorization` implements the `Authorization` interface for messages in the [staking module](https://docs.cosmos.network/v0.53/build/modules/staking). It takes an `AuthorizationType` to specify whether you want to authorize delegating, undelegating, or redelegating (i.e. these have to be authorized separately). It also takes an optional `MaxTokens` that keeps track of a limit to the amount of tokens that can be delegated/undelegated/redelegated. If left empty, the amount is unlimited. Additionally, this Msg takes an `AllowList` or a `DenyList`, which allows you to select which validators you allow or deny grantees to stake with.
+
+```protobuf reference
+https://github.com/cosmos/cosmos-sdk/blob/v0.47.0-rc1/proto/cosmos/staking/v1beta1/authz.proto#L11-L35
+```
+
+```go reference
+https://github.com/cosmos/cosmos-sdk/blob/v0.47.0-rc1/x/staking/types/authz.go#L15-L35
+```
+
+### Gas
+
+In order to prevent DoS attacks, granting `StakeAuthorization`s with `x/authz` incurs gas. `StakeAuthorization` allows you to authorize another account to delegate, undelegate, or redelegate to validators. The authorizer can define a list of validators they allow or deny delegations to. The Cosmos SDK iterates over these lists and charges 10 gas for each validator in both of the lists.
+
+Since the state maintains a list per (granter, grantee) pair with the same expiration, the module iterates over that list to remove a grant (in the case of a revoke of a particular `msgType`), charging 20 gas per iteration.
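+
+The documented charges can be sketched as follows (illustrative constants and function names only, not SDK identifiers):
+
+```go
+package main
+
+import "fmt"
+
+const (
+	gasPerValidator = 10 // charged per validator in the allow and deny lists
+	gasPerIteration = 20 // charged per iteration when scanning grants on revoke
+)
+
+// grantGas is the gas charged when granting a StakeAuthorization.
+func grantGas(allowList, denyList []string) uint64 {
+	return uint64(len(allowList)+len(denyList)) * gasPerValidator
+}
+
+// revokeScanGas is the gas charged while iterating a grant list on revoke.
+func revokeScanGas(iterations int) uint64 {
+	return uint64(iterations) * gasPerIteration
+}
+
+func main() {
+	fmt.Println(grantGas([]string{"valA", "valB"}, []string{"valC"})) // 30
+	fmt.Println(revokeScanGas(4))                                    // 80
+}
+```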
+
+## State
+
+### Grant
+
+Grants are identified by combining granter address (the address bytes of the granter), grantee address (the address bytes of the grantee) and Authorization type (its type URL). Hence we only allow one grant for the (granter, grantee, Authorization) triple.
+
+* Grant: `0x01 | granter_address_len (1 byte) | granter_address_bytes | grantee_address_len (1 byte) | grantee_address_bytes | msgType_bytes -> ProtocolBuffer(AuthorizationGrant)`
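+
+The key layout above can be illustrated in Go (a sketch of the documented byte layout; the SDK builds this key in `x/authz/keeper/keys.go`, and `grantStoreKey` here is an illustrative name):
+
+```go
+package main
+
+import "fmt"
+
+// grantStoreKey builds
+// 0x01 | granter_len (1 byte) | granter | grantee_len (1 byte) | grantee | msgType.
+func grantStoreKey(granter, grantee []byte, msgType string) []byte {
+	key := []byte{0x01}
+	key = append(key, byte(len(granter)))
+	key = append(key, granter...)
+	key = append(key, byte(len(grantee)))
+	key = append(key, grantee...)
+	key = append(key, []byte(msgType)...)
+	return key
+}
+
+func main() {
+	k := grantStoreKey([]byte{0xaa}, []byte{0xbb}, "/cosmos.bank.v1beta1.MsgSend")
+	fmt.Printf("%x\n", k[:5]) // 0101aa01bb
+}
+```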
+
+The grant object encapsulates an `Authorization` type and an expiration timestamp:
+
+```protobuf reference
+https://github.com/cosmos/cosmos-sdk/blob/v0.47.0-rc1/proto/cosmos/authz/v1beta1/authz.proto#L24-L32
+```
+
+### GrantQueue
+
+We maintain a queue for authz pruning. Whenever a grant is created, an item is added to the `GrantQueue` with a key composed of the expiration, granter, and grantee.
+
+In `EndBlock` (which runs for every block), expired grants are continuously checked and pruned: a prefix key is formed from the current block time, all `GrantQueue` records whose stored expiration has passed are matched, and those records are deleted from both the `GrantQueue` and the `Grant`s store.
+
+```go reference
+https://github.com/cosmos/cosmos-sdk/blob/5f4ddc6f80f9707320eec42182184207fff3833a/x/authz/keeper/keeper.go#L378-L403
+```
+
+* GrantQueue: `0x02 | expiration_bytes | granter_address_len (1 byte) | granter_address_bytes | grantee_address_len (1 byte) | grantee_address_bytes -> ProtocolBuffer(GrantQueueItem)`
+
+The `expiration_bytes` are the expiration date in UTC with the format `"2006-01-02T15:04:05.000000000"`.
+
+```go reference
+https://github.com/cosmos/cosmos-sdk/blob/v0.47.0-rc1/x/authz/keeper/keys.go#L77-L93
+```
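+
+Assuming Go's reference-time layout quoted above, formatting a UTC expiration yields a fixed-width string whose lexicographic order matches chronological order, which is what makes prefix-based pruning in `EndBlock` possible (`expirationKeySegment` is an illustrative name):
+
+```go
+package main
+
+import (
+	"fmt"
+	"time"
+)
+
+// expirationKeySegment renders an expiration as the fixed-width, sortable
+// string used in the GrantQueue key.
+func expirationKeySegment(t time.Time) string {
+	return t.UTC().Format("2006-01-02T15:04:05.000000000")
+}
+
+func main() {
+	exp := time.Date(2022, 1, 1, 0, 0, 0, 0, time.UTC)
+	fmt.Println(expirationKeySegment(exp)) // 2022-01-01T00:00:00.000000000
+}
+```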
+
+The `GrantQueueItem` object contains the list of message type URLs between granter and grantee that expire at the time indicated in the key.
+
+## Messages
+
+In this section we describe the processing of messages for the authz module.
+
+### MsgGrant
+
+An authorization grant is created using the `MsgGrant` message.
+If there is already a grant for the `(granter, grantee, Authorization)` triple, then the new grant overwrites the previous one. To update or extend an existing grant, a new grant with the same `(granter, grantee, Authorization)` triple should be created.
+
+```protobuf reference
+https://github.com/cosmos/cosmos-sdk/blob/v0.47.0-rc1/proto/cosmos/authz/v1beta1/tx.proto#L35-L45
+```
+
+The message handling should fail if:
+
+* both granter and grantee have the same address.
+* provided `Expiration` time is less than the current unix timestamp (but a grant will be created if no `expiration` time is provided, since `expiration` is optional).
+* provided `Grant.Authorization` is not implemented.
+* `Authorization.MsgTypeURL()` is not defined in the router (there is no defined handler in the app router to handle that Msg type).
+
+### MsgRevoke
+
+A grant can be removed with the `MsgRevoke` message.
+
+```protobuf reference
+https://github.com/cosmos/cosmos-sdk/blob/v0.47.0-rc1/proto/cosmos/authz/v1beta1/tx.proto#L69-L78
+```
+
+The message handling should fail if:
+
+* both granter and grantee have the same address.
+* provided `MsgTypeUrl` is empty.
+
+NOTE: The `MsgExec` message removes a grant if the grant has expired.
+
+### MsgExec
+
+When a grantee wants to execute a transaction on behalf of a granter, they must send `MsgExec`.
+
+```protobuf reference
+https://github.com/cosmos/cosmos-sdk/blob/v0.47.0-rc1/proto/cosmos/authz/v1beta1/tx.proto#L52-L63
+```
+
+The message handling should fail if:
+
+* provided `Authorization` is not implemented.
+* grantee doesn't have permission to run the transaction.
+* the granted authorization has expired.
+
+## Events
+
+The authz module emits proto events defined in [the Protobuf reference](https://buf.build/cosmos/cosmos-sdk/docs/main/cosmos.authz.v1beta1#cosmos.authz.v1beta1.EventGrant).
+
+## Client
+
+### CLI
+
+A user can query and interact with the `authz` module using the CLI.
+
+#### Query
+
+The `query` commands allow users to query `authz` state.
+
+```bash
+simd query authz --help
+```
+
+##### grants
+
+The `grants` command allows users to query grants for a granter-grantee pair. If the message type URL is set, it selects grants only for that message type.
+
+```bash
+simd query authz grants [granter-addr] [grantee-addr] [msg-type-url]? [flags]
+```
+
+Example:
+
+```bash
+simd query authz grants cosmos1.. cosmos1.. /cosmos.bank.v1beta1.MsgSend
+```
+
+Example Output:
+
+```bash
+grants:
+- authorization:
+ '@type': /cosmos.bank.v1beta1.SendAuthorization
+ spend_limit:
+ - amount: "100"
+ denom: stake
+ expiration: "2022-01-01T00:00:00Z"
+pagination: null
+```
+
+#### Transactions
+
+The `tx` commands allow users to interact with the `authz` module.
+
+```bash
+simd tx authz --help
+```
+
+##### exec
+
+The `exec` command allows a grantee to execute a transaction on behalf of the granter.
+
+```bash
+simd tx authz exec [tx-json-file] --from [grantee] [flags]
+```
+
+Example:
+
+```bash
+simd tx authz exec tx.json --from=cosmos1..
+```
+
+##### grant
+
+The `grant` command allows a granter to grant an authorization to a grantee.
+
+```bash
+simd tx authz grant --from [flags]
+```
+
+* The `send` authorization_type refers to the built-in `SendAuthorization` type. The custom flags available are `spend-limit` (required) and `allow-list` (optional), documented [here](#sendauthorization).
+
+Example:
+
+```bash
+simd tx authz grant cosmos1.. send --spend-limit=100stake --allow-list=cosmos1...,cosmos2... --from=cosmos1..
+```
+
+* The `generic` authorization_type refers to the built-in `GenericAuthorization` type. The custom flag available is `msg-type` (required) documented [here](#genericauthorization).
+
+> Note: `msg-type` is any valid Cosmos SDK `Msg` type url.
+
+Example:
+
+```bash
+simd tx authz grant cosmos1.. generic --msg-type=/cosmos.bank.v1beta1.MsgSend --from=cosmos1..
+```
+
+* The `delegate`, `unbond`, and `redelegate` authorization_types refer to the built-in `StakeAuthorization` type. The custom flags available are `spend-limit` (optional), `allowed-validators` (optional), and `deny-validators` (optional), documented [here](#stakeauthorization).
+
+> Note: `allowed-validators` and `deny-validators` cannot both be empty. `spend-limit` represents the `MaxTokens`
+
+Example:
+
+```bash
+simd tx authz grant cosmos1.. delegate --spend-limit=100stake --allowed-validators=cosmos...,cosmos... --deny-validators=cosmos... --from=cosmos1..
+```
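+
+As an illustrative sketch (hypothetical helper, not the SDK's actual implementation), the validator checks implied by these flags can be thought of as: a validator is rejected if it appears in the deny list, and must appear in the allow list whenever one is provided.
+
+```go
+package main
+
+import "fmt"
+
+// validatorAllowed sketches the StakeAuthorization validator check implied by
+// the allowed-validators and deny-validators flags. Illustrative only.
+func validatorAllowed(val string, allowed, denied []string) bool {
+    for _, d := range denied {
+        if d == val {
+            return false
+        }
+    }
+    if len(allowed) == 0 {
+        return true
+    }
+    for _, a := range allowed {
+        if a == val {
+            return true
+        }
+    }
+    return false
+}
+
+func main() {
+    fmt.Println(validatorAllowed("cosmosvaloper1..", []string{"cosmosvaloper1.."}, nil))
+}
+```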
+
+##### revoke
+
+The `revoke` command allows a granter to revoke an authorization from a grantee.
+
+```bash
+simd tx authz revoke [grantee] [msg-type-url] --from=[granter] [flags]
+```
+
+Example:
+
+```bash
+simd tx authz revoke cosmos1.. /cosmos.bank.v1beta1.MsgSend --from=cosmos1..
+```
+
+### gRPC
+
+A user can query the `authz` module using gRPC endpoints.
+
+#### Grants
+
+The `Grants` endpoint allows users to query grants for a granter-grantee pair. If the message type URL is set, it selects grants only for that message type.
+
+```bash
+cosmos.authz.v1beta1.Query/Grants
+```
+
+Example:
+
+```bash
+grpcurl -plaintext \
+ -d '{"granter":"cosmos1..","grantee":"cosmos1..","msg_type_url":"/cosmos.bank.v1beta1.MsgSend"}' \
+ localhost:9090 \
+ cosmos.authz.v1beta1.Query/Grants
+```
+
+Example Output:
+
+```json
+{
+ "grants": [
+ {
+ "authorization": {
+ "@type": "/cosmos.bank.v1beta1.SendAuthorization",
+ "spendLimit": [
+ {
+ "denom":"stake",
+ "amount":"100"
+ }
+ ]
+ },
+ "expiration": "2022-01-01T00:00:00Z"
+ }
+ ]
+}
+```
+
+### REST
+
+A user can query the `authz` module using REST endpoints.
+
+```bash
+/cosmos/authz/v1beta1/grants
+```
+
+Example:
+
+```bash
+curl "localhost:1317/cosmos/authz/v1beta1/grants?granter=cosmos1..&grantee=cosmos1..&msg_type_url=/cosmos.bank.v1beta1.MsgSend"
+```
+
+Example Output:
+
+```json
+{
+ "grants": [
+ {
+ "authorization": {
+ "@type": "/cosmos.bank.v1beta1.SendAuthorization",
+ "spend_limit": [
+ {
+ "denom": "stake",
+ "amount": "100"
+ }
+ ]
+ },
+ "expiration": "2022-01-01T00:00:00Z"
+ }
+ ],
+ "pagination": null
+}
+```
diff --git a/copy-of-sdk-docs/build/modules/bank/README.md b/copy-of-sdk-docs/build/modules/bank/README.md
new file mode 100644
index 00000000..62a781da
--- /dev/null
+++ b/copy-of-sdk-docs/build/modules/bank/README.md
@@ -0,0 +1,1039 @@
+---
+sidebar_position: 1
+---
+
+# `x/bank`
+
+## Abstract
+
+This document specifies the bank module of the Cosmos SDK.
+
+The bank module is responsible for handling multi-asset coin transfers between
+accounts and tracking special-case pseudo-transfers which must work differently
+with particular kinds of accounts (notably delegating/undelegating for vesting
+accounts). It exposes several interfaces with varying capabilities for secure
+interaction with other modules which must alter user balances.
+
+In addition, the bank module tracks and provides query support for the total
+supply of all assets used in the application.
+
+This module is used in the Cosmos Hub.
+
+## Contents
+
+* [Supply](#supply)
+ * [Total Supply](#total-supply)
+* [Module Accounts](#module-accounts)
+ * [Permissions](#permissions)
+* [State](#state)
+* [Params](#params)
+* [Keepers](#keepers)
+* [Messages](#messages)
+* [Events](#events)
+ * [Message Events](#message-events)
+ * [Keeper Events](#keeper-events)
+* [Parameters](#parameters)
+ * [SendEnabled](#sendenabled)
+ * [DefaultSendEnabled](#defaultsendenabled)
+* [Client](#client)
+ * [CLI](#cli)
+ * [Query](#query)
+ * [Transactions](#transactions)
+* [gRPC](#grpc)
+
+## Supply
+
+The `supply` functionality:
+
+* passively tracks the total supply of coins within a chain,
+* provides a pattern for modules to hold/interact with `Coins`, and
+* introduces the invariant check to verify a chain's total supply.
+
+### Total Supply
+
+The total `Supply` of the network is equal to the sum of all coins from all
+accounts. The total supply is updated every time a `Coin` is minted (e.g. as part
+of the inflation mechanism) or burned (e.g. due to slashing or if a governance
+proposal is vetoed).
+
+## Module Accounts
+
+The supply functionality introduces a new type of `auth.Account` which can be used by
+modules to allocate tokens and in special cases mint or burn tokens. At a base
+level these module accounts are capable of sending/receiving tokens to and from
+`auth.Account`s and other module accounts. This design replaces previous
+alternative designs where, to hold tokens, modules would burn the incoming
+tokens from the sender account, and then track those tokens internally. Later,
+in order to send tokens, the module would need to effectively mint tokens
+within a destination account. The new design removes duplicate logic between
+modules to perform this accounting.
+
+The `ModuleAccount` interface is defined as follows:
+
+```go
+type ModuleAccount interface {
+ auth.Account // same methods as the Account interface
+
+ GetName() string // name of the module; used to obtain the address
+ GetPermissions() []string // permissions of module account
+ HasPermission(string) bool
+}
+```
+
+> **WARNING!**
+> Any module or message handler that allows either direct or indirect sending of funds must explicitly guarantee those funds cannot be sent to module accounts (unless allowed).
+
+The supply `Keeper` also introduces new wrapper functions for the auth `Keeper`
+and the bank `Keeper` that are related to `ModuleAccount`s in order to be able
+to:
+
+* Get and set `ModuleAccount`s by providing the `Name`.
+* Send coins from and to other `ModuleAccount`s or standard `Account`s
+ (`BaseAccount` or `VestingAccount`) by passing only the `Name`.
+* `Mint` or `Burn` coins for a `ModuleAccount` (restricted to its permissions).
+
+### Permissions
+
+Each `ModuleAccount` has a different set of permissions that provide different
+object capabilities to perform certain actions. Permissions need to be
+registered upon the creation of the supply `Keeper` so that every time a
+`ModuleAccount` calls the allowed functions, the `Keeper` can lookup the
+permissions to that specific account and perform or not perform the action.
+
+The available permissions are:
+
+* `Minter`: allows for a module to mint a specific amount of coins.
+* `Burner`: allows for a module to burn a specific amount of coins.
+* `Staking`: allows for a module to delegate and undelegate a specific amount of coins.
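+
+As a minimal sketch (hypothetical names and data, not the SDK's implementation), a permission check amounts to looking up the set registered for a module account when the keeper was created:
+
+```go
+package main
+
+import "fmt"
+
+// perms stands in for the permissions registered per module account at
+// keeper creation time. Hypothetical data for illustration.
+var perms = map[string][]string{
+    "mint":    {"Minter"},
+    "staking": {"Staking"},
+}
+
+// hasPermission mirrors the shape of ModuleAccount.HasPermission.
+func hasPermission(module, perm string) bool {
+    for _, p := range perms[module] {
+        if p == perm {
+            return true
+        }
+    }
+    return false
+}
+
+func main() {
+    fmt.Println(hasPermission("mint", "Minter"), hasPermission("mint", "Burner"))
+}
+```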
+
+## State
+
+The `x/bank` module keeps state of the following primary objects:
+
+1. Account balances
+2. Denomination metadata
+3. The total supply of all balances
+4. Information on which denominations are allowed to be sent.
+
+In addition, the `x/bank` module keeps the following indexes to manage the
+aforementioned state:
+
+* Supply Index: `0x0 | byte(denom) -> byte(amount)`
+* Denom Metadata Index: `0x1 | byte(denom) -> ProtocolBuffer(Metadata)`
+* Balances Index: `0x2 | byte(address length) | []byte(address) | []byte(balance.Denom) -> ProtocolBuffer(balance)`
+* Reverse Denomination to Address Index: `0x03 | byte(denom) | 0x00 | []byte(address) -> 0`
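+
+For illustration, a balances index key from the layout above can be assembled as follows (a sketch; the SDK builds these keys with its own prefix helpers):
+
+```go
+package main
+
+import "fmt"
+
+// balanceKey assembles a balances index key:
+// 0x2 | byte(address length) | address bytes | denom bytes.
+func balanceKey(addr []byte, denom string) []byte {
+    key := []byte{0x2, byte(len(addr))}
+    key = append(key, addr...)
+    key = append(key, denom...)
+    return key
+}
+
+func main() {
+    addr := make([]byte, 20) // a 20-byte account address
+    fmt.Printf("%x\n", balanceKey(addr, "stake"))
+}
+```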
+
+## Params
+
+The bank module stores its params in state with the prefix of `0x05`.
+They can be updated via governance or by the address with authority.
+
+* Params: `0x05 | ProtocolBuffer(Params)`
+
+```protobuf reference
+https://github.com/cosmos/cosmos-sdk/blob/v0.47.0-rc1/proto/cosmos/bank/v1beta1/bank.proto#L12-L23
+```
+
+## Keepers
+
+The bank module provides these exported keeper interfaces that can be
+passed to other modules that read or update account balances. Modules
+should use the least-permissive interface that provides the functionality they
+require.
+
+Best practices dictate careful review of `bank` module code to ensure that
+permissions are limited in the way that you expect.
+
+### Denied Addresses
+
+The `x/bank` module accepts a map of addresses that are considered blocklisted
+from directly and explicitly receiving funds through means such as `MsgSend` and
+`MsgMultiSend` and direct API calls like `SendCoinsFromModuleToAccount`.
+
+Typically, these addresses are module accounts. If these addresses receive funds
+outside the expected rules of the state machine, invariants are likely to be
+broken and could result in a halted network.
+
+By providing the `x/bank` module with a blocklisted set of addresses, an error occurs for the operation if a user or client attempts to directly or indirectly send funds to a blocklisted account, for example, by using [IBC](https://ibc.cosmos.network).
+
+### Common Types
+
+#### Input
+
+An input of a multiparty transfer.
+
+```protobuf
+// Input models transaction input.
+message Input {
+ string address = 1;
+ repeated cosmos.base.v1beta1.Coin coins = 2;
+}
+```
+
+#### Output
+
+An output of a multiparty transfer.
+
+```protobuf
+// Output models transaction outputs.
+message Output {
+ string address = 1;
+ repeated cosmos.base.v1beta1.Coin coins = 2;
+}
+```
+
+### BaseKeeper
+
+The base keeper provides full-permission access: the ability to arbitrarily modify any account's balance and to mint or burn coins.
+
+Restricted per-module minting permission can be achieved by wrapping the base keeper with `WithMintCoinsRestriction`, which applies specific restrictions to minting (e.g. only minting certain denoms).
+
+```go
+// Keeper defines a module interface that facilitates the transfer of coins
+// between accounts.
+type Keeper interface {
+ SendKeeper
+ WithMintCoinsRestriction(MintingRestrictionFn) BaseKeeper
+
+ InitGenesis(context.Context, *types.GenesisState)
+ ExportGenesis(context.Context) *types.GenesisState
+
+ GetSupply(ctx context.Context, denom string) sdk.Coin
+ HasSupply(ctx context.Context, denom string) bool
+ GetPaginatedTotalSupply(ctx context.Context, pagination *query.PageRequest) (sdk.Coins, *query.PageResponse, error)
+ IterateTotalSupply(ctx context.Context, cb func(sdk.Coin) bool)
+ GetDenomMetaData(ctx context.Context, denom string) (types.Metadata, bool)
+ HasDenomMetaData(ctx context.Context, denom string) bool
+ SetDenomMetaData(ctx context.Context, denomMetaData types.Metadata)
+ IterateAllDenomMetaData(ctx context.Context, cb func(types.Metadata) bool)
+
+ SendCoinsFromModuleToAccount(ctx context.Context, senderModule string, recipientAddr sdk.AccAddress, amt sdk.Coins) error
+ SendCoinsFromModuleToModule(ctx context.Context, senderModule, recipientModule string, amt sdk.Coins) error
+ SendCoinsFromAccountToModule(ctx context.Context, senderAddr sdk.AccAddress, recipientModule string, amt sdk.Coins) error
+ DelegateCoinsFromAccountToModule(ctx context.Context, senderAddr sdk.AccAddress, recipientModule string, amt sdk.Coins) error
+ UndelegateCoinsFromModuleToAccount(ctx context.Context, senderModule string, recipientAddr sdk.AccAddress, amt sdk.Coins) error
+ MintCoins(ctx context.Context, moduleName string, amt sdk.Coins) error
+ BurnCoins(ctx context.Context, moduleName string, amt sdk.Coins) error
+
+ DelegateCoins(ctx context.Context, delegatorAddr, moduleAccAddr sdk.AccAddress, amt sdk.Coins) error
+ UndelegateCoins(ctx context.Context, moduleAccAddr, delegatorAddr sdk.AccAddress, amt sdk.Coins) error
+
+ // GetAuthority gets the address capable of executing governance proposal messages. Usually the gov module account.
+ GetAuthority() string
+
+ types.QueryServer
+}
+```
+
+### SendKeeper
+
+The send keeper provides access to account balances and the ability to transfer coins between
+accounts. The send keeper does not alter the total supply (mint or burn coins).
+
+```go
+// SendKeeper defines a module interface that facilitates the transfer of coins
+// between accounts without the possibility of creating coins.
+type SendKeeper interface {
+ ViewKeeper
+
+ AppendSendRestriction(restriction SendRestrictionFn)
+ PrependSendRestriction(restriction SendRestrictionFn)
+ ClearSendRestriction()
+
+ InputOutputCoins(ctx context.Context, input types.Input, outputs []types.Output) error
+ SendCoins(ctx context.Context, fromAddr, toAddr sdk.AccAddress, amt sdk.Coins) error
+
+ GetParams(ctx context.Context) types.Params
+ SetParams(ctx context.Context, params types.Params) error
+
+ IsSendEnabledDenom(ctx context.Context, denom string) bool
+ SetSendEnabled(ctx context.Context, denom string, value bool)
+ SetAllSendEnabled(ctx context.Context, sendEnableds []*types.SendEnabled)
+ DeleteSendEnabled(ctx context.Context, denom string)
+ IterateSendEnabledEntries(ctx context.Context, cb func(denom string, sendEnabled bool) (stop bool))
+ GetAllSendEnabledEntries(ctx context.Context) []types.SendEnabled
+
+ IsSendEnabledCoin(ctx context.Context, coin sdk.Coin) bool
+ IsSendEnabledCoins(ctx context.Context, coins ...sdk.Coin) error
+
+ BlockedAddr(addr sdk.AccAddress) bool
+}
+```
+
+#### Send Restrictions
+
+The `SendKeeper` applies a `SendRestrictionFn` before each transfer of funds.
+
+```golang
+// A SendRestrictionFn can restrict sends and/or provide a new receiver address.
+type SendRestrictionFn func(ctx context.Context, fromAddr, toAddr sdk.AccAddress, amt sdk.Coins) (newToAddr sdk.AccAddress, err error)
+```
+
+After the `SendKeeper` (or `BaseKeeper`) has been created, send restrictions can be added to it using the `AppendSendRestriction` or `PrependSendRestriction` functions.
+Both functions compose the provided restriction with any previously provided restrictions.
+`AppendSendRestriction` adds the provided restriction to be run after any previously provided send restrictions.
+`PrependSendRestriction` adds the restriction to be run before any previously provided send restrictions.
+The composition will short-circuit when an error is encountered. I.e. if the first one returns an error, the second is not run.
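+
+The composition and short-circuit behavior can be sketched in plain Go (with the `sdk` types replaced by strings; a hypothetical helper, not the SDK's internal code):
+
+```go
+package main
+
+import (
+    "context"
+    "errors"
+    "fmt"
+)
+
+// restrictionFn mirrors the shape of SendRestrictionFn with string addresses.
+type restrictionFn func(ctx context.Context, from, to, amt string) (string, error)
+
+// then composes two restrictions: next runs only if first succeeds, and the
+// (possibly rewritten) receiver address is threaded through.
+func then(first, next restrictionFn) restrictionFn {
+    return func(ctx context.Context, from, to, amt string) (string, error) {
+        newTo, err := first(ctx, from, to, amt)
+        if err != nil {
+            return "", err // short-circuit: next is not run
+        }
+        return next(ctx, from, newTo, amt)
+    }
+}
+
+func main() {
+    reroute := func(ctx context.Context, from, to, amt string) (string, error) { return "escrow", nil }
+    deny := func(ctx context.Context, from, to, amt string) (string, error) { return "", errors.New("denied") }
+    _, err := then(reroute, deny)(context.Background(), "a", "b", "1stake")
+    fmt.Println(err)
+}
+```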
+
+During `SendCoins`, the send restriction is applied before the coins are removed from the from address and added to the to address.
+During `InputOutputCoins`, the send restriction is applied after the input coins are removed, and once for each output before the funds are added.
+
+A send restriction function should make use of a custom value in the context to allow bypassing that specific restriction.
+
+Send restrictions are not applied to `ModuleToAccount` or `ModuleToModule` transfers, for several reasons. First, modules need to move funds to user accounts and to other module accounts, and the state machine should be able to do so without restriction; leaving these transfers unrestricted keeps the state machine flexible.
+
+Secondly, restricting these transfers would limit the state machine's usage even for itself: users would not be able to receive rewards, and modules would not be able to move funds between module accounts. If a user sends funds to the community pool and a governance proposal is later used to return those tokens to the user's account, how to handle that is left to the discretion of the app chain developer; we cannot make strong assumptions here.
+
+Thirdly, restricting these transfers could lead to a chain halt if a token is disabled and then moved during BeginBlock/EndBlock. For these reasons, restricting module transfers was judged more damaging than beneficial for users.
+
+For example, in your module's keeper package, you'd define the send restriction function:
+
+```golang
+var _ banktypes.SendRestrictionFn = Keeper{}.SendRestrictionFn
+
+func (k Keeper) SendRestrictionFn(ctx context.Context, fromAddr, toAddr sdk.AccAddress, amt sdk.Coins) (sdk.AccAddress, error) {
+ // Bypass if the context says to.
+ if mymodule.HasBypass(ctx) {
+ return toAddr, nil
+ }
+
+ // Your custom send restriction logic goes here.
+ return nil, errors.New("not implemented")
+}
+```
+
+The bank keeper should be provided to your keeper's constructor so the send restriction can be added to it:
+
+```golang
+func NewKeeper(cdc codec.BinaryCodec, storeKey storetypes.StoreKey, bankKeeper mymodule.BankKeeper) Keeper {
+ rv := Keeper{/*...*/}
+ bankKeeper.AppendSendRestriction(rv.SendRestrictionFn)
+ return rv
+}
+```
+
+Then, in the `mymodule` package, define the context helpers:
+
+```golang
+const bypassKey = "bypass-mymodule-restriction"
+
+// WithBypass returns a new context that will cause the mymodule bank send restriction to be skipped.
+func WithBypass(ctx context.Context) context.Context {
+ return sdk.UnwrapSDKContext(ctx).WithValue(bypassKey, true)
+}
+
+// WithoutBypass returns a new context that will cause the mymodule bank send restriction to not be skipped.
+func WithoutBypass(ctx context.Context) context.Context {
+ return sdk.UnwrapSDKContext(ctx).WithValue(bypassKey, false)
+}
+
+// HasBypass checks the context to see if the mymodule bank send restriction should be skipped.
+func HasBypass(ctx context.Context) bool {
+ bypassValue := ctx.Value(bypassKey)
+ if bypassValue == nil {
+ return false
+ }
+ bypass, isBool := bypassValue.(bool)
+ return isBool && bypass
+}
+```
+
+Now, anywhere where you want to use `SendCoins` or `InputOutputCoins`, but you don't want your send restriction applied:
+
+```golang
+func (k Keeper) DoThing(ctx context.Context, fromAddr, toAddr sdk.AccAddress, amt sdk.Coins) error {
+ return k.bankKeeper.SendCoins(mymodule.WithBypass(ctx), fromAddr, toAddr, amt)
+}
+```
+
+### ViewKeeper
+
+The view keeper provides read-only access to account balances. The view keeper does not have balance alteration functionality. All balance lookups are `O(1)`.
+
+```go
+// ViewKeeper defines a module interface that facilitates read only access to
+// account balances.
+type ViewKeeper interface {
+ ValidateBalance(ctx context.Context, addr sdk.AccAddress) error
+ HasBalance(ctx context.Context, addr sdk.AccAddress, amt sdk.Coin) bool
+
+ GetAllBalances(ctx context.Context, addr sdk.AccAddress) sdk.Coins
+ GetAccountsBalances(ctx context.Context) []types.Balance
+ GetBalance(ctx context.Context, addr sdk.AccAddress, denom string) sdk.Coin
+ LockedCoins(ctx context.Context, addr sdk.AccAddress) sdk.Coins
+ SpendableCoins(ctx context.Context, addr sdk.AccAddress) sdk.Coins
+ SpendableCoin(ctx context.Context, addr sdk.AccAddress, denom string) sdk.Coin
+
+ IterateAccountBalances(ctx context.Context, addr sdk.AccAddress, cb func(coin sdk.Coin) (stop bool))
+ IterateAllBalances(ctx context.Context, cb func(address sdk.AccAddress, coin sdk.Coin) (stop bool))
+}
+```
+
+## Messages
+
+### MsgSend
+
+Send coins from one address to another.
+
+```protobuf reference
+https://github.com/cosmos/cosmos-sdk/blob/v0.47.0-rc1/proto/cosmos/bank/v1beta1/tx.proto#L38-L53
+```
+
+The message will fail under the following conditions:
+
+* The coins do not have sending enabled
+* The `to` address is restricted
+
+### MsgMultiSend
+
+Send coins from one sender to a series of different addresses. If any of the receiving addresses do not correspond to an existing account, a new account is created.
+
+```protobuf reference
+https://github.com/cosmos/cosmos-sdk/blob/v0.47.0-rc1/proto/cosmos/bank/v1beta1/tx.proto#L58-L69
+```
+
+The message will fail under the following conditions:
+
+* Any of the coins do not have sending enabled
+* Any of the `to` addresses are restricted
+* Any of the coins are locked
+* The inputs and outputs do not correctly correspond to one another
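+
+The last condition — inputs and outputs corresponding to one another — can be sketched as a simple sum check (illustrative only; the SDK performs this with `sdk.Coins` arithmetic):
+
+```go
+package main
+
+import (
+    "errors"
+    "fmt"
+)
+
+// coins maps denom -> amount, a simplified stand-in for sdk.Coins.
+type coins map[string]int64
+
+// validateMultiSend checks that the output coins sum to exactly the input coins.
+func validateMultiSend(input coins, outputs []coins) error {
+    total := coins{}
+    for _, out := range outputs {
+        for denom, amt := range out {
+            total[denom] += amt
+        }
+    }
+    if len(total) != len(input) {
+        return errors.New("inputs and outputs do not correspond")
+    }
+    for denom, amt := range input {
+        if total[denom] != amt {
+            return errors.New("inputs and outputs do not correspond")
+        }
+    }
+    return nil
+}
+
+func main() {
+    in := coins{"stake": 100}
+    fmt.Println(validateMultiSend(in, []coins{{"stake": 60}, {"stake": 40}}))
+}
+```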
+
+### MsgUpdateParams
+
+The `bank` module params can be updated through `MsgUpdateParams`, which can be done via a governance proposal. The signer will always be the `gov` module account address.
+
+```protobuf reference
+https://github.com/cosmos/cosmos-sdk/blob/v0.47.0-rc1/proto/cosmos/bank/v1beta1/tx.proto#L74-L88
+```
+
+The message handling can fail if:
+
+* signer is not the gov module account address.
+
+### MsgSetSendEnabled
+
+Used with the x/gov module to create or edit SendEnabled entries.
+
+```protobuf reference
+https://github.com/cosmos/cosmos-sdk/blob/v0.47.0-rc1/proto/cosmos/bank/v1beta1/tx.proto#L96-L117
+```
+
+The message will fail under the following conditions:
+
+* The authority is not a bech32 address.
+* The authority is not x/gov module's address.
+* There are multiple SendEnabled entries with the same Denom.
+* One or more SendEnabled entries has an invalid Denom.
+
+## Events
+
+The bank module emits the following events:
+
+### Message Events
+
+#### MsgSend
+
+| Type | Attribute Key | Attribute Value |
+| -------- | ------------- | ------------------ |
+| transfer | recipient | {recipientAddress} |
+| transfer | amount | {amount} |
+| message | module | bank |
+| message | action | send |
+| message | sender | {senderAddress} |
+
+#### MsgMultiSend
+
+| Type | Attribute Key | Attribute Value |
+| -------- | ------------- | ------------------ |
+| transfer | recipient | {recipientAddress} |
+| transfer | amount | {amount} |
+| message | module | bank |
+| message | action | multisend |
+| message | sender | {senderAddress} |
+
+### Keeper Events
+
+In addition to message events, the bank keeper will produce events when the following methods are called (or any method which ends up calling them).
+
+#### MintCoins
+
+```json
+{
+ "type": "coinbase",
+ "attributes": [
+ {
+ "key": "minter",
+ "value": "{{sdk.AccAddress of the module minting coins}}",
+ "index": true
+ },
+ {
+ "key": "amount",
+ "value": "{{sdk.Coins being minted}}",
+ "index": true
+ }
+ ]
+}
+```
+
+```json
+{
+ "type": "coin_received",
+ "attributes": [
+ {
+ "key": "receiver",
+ "value": "{{sdk.AccAddress of the module minting coins}}",
+ "index": true
+ },
+ {
+ "key": "amount",
+ "value": "{{sdk.Coins being received}}",
+ "index": true
+ }
+ ]
+}
+```
+
+#### BurnCoins
+
+```json
+{
+ "type": "burn",
+ "attributes": [
+ {
+ "key": "burner",
+ "value": "{{sdk.AccAddress of the module burning coins}}",
+ "index": true
+ },
+ {
+ "key": "amount",
+ "value": "{{sdk.Coins being burned}}",
+ "index": true
+ }
+ ]
+}
+```
+
+```json
+{
+ "type": "coin_spent",
+ "attributes": [
+ {
+ "key": "spender",
+ "value": "{{sdk.AccAddress of the module burning coins}}",
+ "index": true
+ },
+ {
+ "key": "amount",
+ "value": "{{sdk.Coins being burned}}",
+ "index": true
+ }
+ ]
+}
+```
+
+#### addCoins
+
+```json
+{
+ "type": "coin_received",
+ "attributes": [
+ {
+ "key": "receiver",
+ "value": "{{sdk.AccAddress of the address beneficiary of the coins}}",
+ "index": true
+ },
+ {
+ "key": "amount",
+ "value": "{{sdk.Coins being received}}",
+ "index": true
+ }
+ ]
+}
+```
+
+#### subUnlockedCoins/DelegateCoins
+
+```json
+{
+ "type": "coin_spent",
+ "attributes": [
+ {
+ "key": "spender",
+ "value": "{{sdk.AccAddress of the address which is spending coins}}",
+ "index": true
+ },
+ {
+ "key": "amount",
+ "value": "{{sdk.Coins being spent}}",
+ "index": true
+ }
+ ]
+}
+```
+
+## Parameters
+
+The bank module contains the following parameters
+
+### SendEnabled
+
+The SendEnabled parameter is now deprecated and not to be used. It is replaced
+with state store records.
+
+### DefaultSendEnabled
+
+The default send enabled value controls send transfer capability for all
+coin denominations unless specifically included in the array of `SendEnabled`
+parameters.
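+
+A sketch of how the effective flag is resolved for a denom (hypothetical helper; the SDK reads `SendEnabled` records from the state store):
+
+```go
+package main
+
+import "fmt"
+
+// sendEnabled resolves the effective send-enabled flag for a denom: an
+// explicit SendEnabled entry wins; otherwise DefaultSendEnabled applies.
+func sendEnabled(entries map[string]bool, defaultSendEnabled bool, denom string) bool {
+    if enabled, ok := entries[denom]; ok {
+        return enabled
+    }
+    return defaultSendEnabled
+}
+
+func main() {
+    entries := map[string]bool{"foocoin": true, "barcoin": false}
+    fmt.Println(sendEnabled(entries, true, "barcoin"), sendEnabled(entries, true, "stake"))
+}
+```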
+
+## Client
+
+### CLI
+
+A user can query and interact with the `bank` module using the CLI.
+
+#### Query
+
+The `query` commands allow users to query `bank` state.
+
+```shell
+simd query bank --help
+```
+
+##### balances
+
+The `balances` command allows users to query account balances by address.
+
+```shell
+simd query bank balances [address] [flags]
+```
+
+Example:
+
+```shell
+simd query bank balances cosmos1..
+```
+
+Example Output:
+
+```yml
+balances:
+- amount: "1000000000"
+ denom: stake
+pagination:
+ next_key: null
+ total: "0"
+```
+
+##### denom-metadata
+
+The `denom-metadata` command allows users to query metadata for coin denominations. A user can query metadata for a single denomination using the `--denom` flag or all denominations without it.
+
+```shell
+simd query bank denom-metadata [flags]
+```
+
+Example:
+
+```shell
+simd query bank denom-metadata --denom stake
+```
+
+Example Output:
+
+```yml
+metadata:
+ base: stake
+ denom_units:
+ - aliases:
+ - STAKE
+ denom: stake
+ description: native staking token of simulation app
+ display: stake
+ name: SimApp Token
+ symbol: STK
+```
+
+##### total
+
+The `total` command allows users to query the total supply of coins. A user can query the total supply for a single coin using the `--denom` flag or all coins without it.
+
+```shell
+simd query bank total [flags]
+```
+
+Example:
+
+```shell
+simd query bank total --denom stake
+```
+
+Example Output:
+
+```yml
+amount: "10000000000"
+denom: stake
+```
+
+##### send-enabled
+
+The `send-enabled` command allows users to query for all or some SendEnabled entries.
+
+```shell
+simd query bank send-enabled [denom1 ...] [flags]
+```
+
+Example:
+
+```shell
+simd query bank send-enabled
+```
+
+Example output:
+
+```yml
+send_enabled:
+- denom: foocoin
+ enabled: true
+- denom: barcoin
+pagination:
+ next-key: null
+ total: 2
+```
+
+#### Transactions
+
+The `tx` commands allow users to interact with the `bank` module.
+
+```shell
+simd tx bank --help
+```
+
+##### send
+
+The `send` command allows users to send funds from one account to another.
+
+```shell
+simd tx bank send [from_key_or_address] [to_address] [amount] [flags]
+```
+
+Example:
+
+```shell
+simd tx bank send cosmos1.. cosmos1.. 100stake
+```
+
+## gRPC
+
+A user can query the `bank` module using gRPC endpoints.
+
+### Balance
+
+The `Balance` endpoint allows users to query account balance by address for a given denomination.
+
+```shell
+cosmos.bank.v1beta1.Query/Balance
+```
+
+Example:
+
+```shell
+grpcurl -plaintext \
+ -d '{"address":"cosmos1..","denom":"stake"}' \
+ localhost:9090 \
+ cosmos.bank.v1beta1.Query/Balance
+```
+
+Example Output:
+
+```json
+{
+ "balance": {
+ "denom": "stake",
+ "amount": "1000000000"
+ }
+}
+```
+
+### AllBalances
+
+The `AllBalances` endpoint allows users to query account balance by address for all denominations.
+
+```shell
+cosmos.bank.v1beta1.Query/AllBalances
+```
+
+Example:
+
+```shell
+grpcurl -plaintext \
+ -d '{"address":"cosmos1.."}' \
+ localhost:9090 \
+ cosmos.bank.v1beta1.Query/AllBalances
+```
+
+Example Output:
+
+```json
+{
+ "balances": [
+ {
+ "denom": "stake",
+ "amount": "1000000000"
+ }
+ ],
+ "pagination": {
+ "total": "1"
+ }
+}
+```
+
+### DenomMetadata
+
+The `DenomMetadata` endpoint allows users to query metadata for a single coin denomination.
+
+```shell
+cosmos.bank.v1beta1.Query/DenomMetadata
+```
+
+Example:
+
+```shell
+grpcurl -plaintext \
+ -d '{"denom":"stake"}' \
+ localhost:9090 \
+ cosmos.bank.v1beta1.Query/DenomMetadata
+```
+
+Example Output:
+
+```json
+{
+ "metadata": {
+ "description": "native staking token of simulation app",
+ "denomUnits": [
+ {
+ "denom": "stake",
+ "aliases": [
+ "STAKE"
+ ]
+ }
+ ],
+ "base": "stake",
+ "display": "stake",
+ "name": "SimApp Token",
+ "symbol": "STK"
+ }
+}
+```
+
+### DenomsMetadata
+
+The `DenomsMetadata` endpoint allows users to query metadata for all coin denominations.
+
+```shell
+cosmos.bank.v1beta1.Query/DenomsMetadata
+```
+
+Example:
+
+```shell
+grpcurl -plaintext \
+ localhost:9090 \
+ cosmos.bank.v1beta1.Query/DenomsMetadata
+```
+
+Example Output:
+
+```json
+{
+ "metadatas": [
+ {
+ "description": "native staking token of simulation app",
+ "denomUnits": [
+ {
+ "denom": "stake",
+ "aliases": [
+ "STAKE"
+ ]
+ }
+ ],
+ "base": "stake",
+ "display": "stake",
+ "name": "SimApp Token",
+ "symbol": "STK"
+ }
+ ],
+ "pagination": {
+ "total": "1"
+ }
+}
+```
+
+### DenomOwners
+
+The `DenomOwners` endpoint allows users to query the account addresses that hold a balance of a given denomination, together with those balances.
+
+```shell
+cosmos.bank.v1beta1.Query/DenomOwners
+```
+
+Example:
+
+```shell
+grpcurl -plaintext \
+ -d '{"denom":"stake"}' \
+ localhost:9090 \
+ cosmos.bank.v1beta1.Query/DenomOwners
+```
+
+Example Output:
+
+```json
+{
+ "denomOwners": [
+ {
+ "address": "cosmos1..",
+ "balance": {
+ "denom": "stake",
+ "amount": "5000000000"
+ }
+ },
+ {
+ "address": "cosmos1..",
+ "balance": {
+ "denom": "stake",
+ "amount": "5000000000"
+ }
+ }
+ ],
+ "pagination": {
+ "total": "2"
+ }
+}
+```
+
+### TotalSupply
+
+The `TotalSupply` endpoint allows users to query the total supply of all coins.
+
+```shell
+cosmos.bank.v1beta1.Query/TotalSupply
+```
+
+Example:
+
+```shell
+grpcurl -plaintext \
+ localhost:9090 \
+ cosmos.bank.v1beta1.Query/TotalSupply
+```
+
+Example Output:
+
+```json
+{
+ "supply": [
+ {
+ "denom": "stake",
+ "amount": "10000000000"
+ }
+ ],
+ "pagination": {
+ "total": "1"
+ }
+}
+```
+
+### SupplyOf
+
+The `SupplyOf` endpoint allows users to query the total supply of a single coin.
+
+```shell
+cosmos.bank.v1beta1.Query/SupplyOf
+```
+
+Example:
+
+```shell
+grpcurl -plaintext \
+ -d '{"denom":"stake"}' \
+ localhost:9090 \
+ cosmos.bank.v1beta1.Query/SupplyOf
+```
+
+Example Output:
+
+```json
+{
+ "amount": {
+ "denom": "stake",
+ "amount": "10000000000"
+ }
+}
+```
+
+### Params
+
+The `Params` endpoint allows users to query the parameters of the `bank` module.
+
+```shell
+cosmos.bank.v1beta1.Query/Params
+```
+
+Example:
+
+```shell
+grpcurl -plaintext \
+ localhost:9090 \
+ cosmos.bank.v1beta1.Query/Params
+```
+
+Example Output:
+
+```json
+{
+ "params": {
+ "defaultSendEnabled": true
+ }
+}
+```
+
+### SendEnabled
+
+The `SendEnabled` endpoint allows users to query the SendEnabled entries of the `bank` module.
+
+Any denomination NOT returned uses the `Params.DefaultSendEnabled` value.
+
+```shell
+cosmos.bank.v1beta1.Query/SendEnabled
+```
+
+Example:
+
+```shell
+grpcurl -plaintext \
+ localhost:9090 \
+ cosmos.bank.v1beta1.Query/SendEnabled
+```
+
+Example Output:
+
+```json
+{
+ "send_enabled": [
+ {
+ "denom": "foocoin",
+ "enabled": true
+ },
+ {
+ "denom": "barcoin"
+ }
+ ],
+ "pagination": {
+ "next-key": null,
+ "total": 2
+ }
+}
+```
diff --git a/copy-of-sdk-docs/build/modules/circuit/README.md b/copy-of-sdk-docs/build/modules/circuit/README.md
new file mode 100644
index 00000000..253ca497
--- /dev/null
+++ b/copy-of-sdk-docs/build/modules/circuit/README.md
@@ -0,0 +1,259 @@
+# `x/circuit`
+
+## Concepts
+
+Circuit Breaker is a module that is meant to avoid a chain needing to halt/shut down in the presence of a vulnerability; instead, the module allows specific messages, or all messages, to be disabled. When operating an app-specific chain, a halt of the chain is less detrimental, but if there are applications built on top of the chain, halting is expensive due to the disturbance to those applications.
+
+Circuit Breaker works with the idea that an address or set of addresses has the right to block messages from being executed and/or included in the mempool. Any address with the appropriate permission is able to reset the circuit breaker for the message.
+
+The transactions are checked and can be rejected at two points:
+
+* In `CircuitBreakerDecorator` [ante handler](https://docs.cosmos.network/main/learn/advanced/baseapp#antehandler):
+
+```go reference
+https://github.com/cosmos/cosmos-sdk/blob/x/circuit/v0.1.0/x/circuit/ante/circuit.go#L27-L41
+```
+
+* With a [message router check](https://docs.cosmos.network/main/learn/advanced/baseapp#msg-service-router):
+
+```go reference
+https://github.com/cosmos/cosmos-sdk/blob/v0.50.1/baseapp/msg_service_router.go#L104-L115
+```
+
+:::note
+The `CircuitBreakerDecorator` works for most use cases, but [does not check the inner messages of a transaction](https://docs.cosmos.network/main/learn/beginner/tx-lifecycle#antehandler). This means some transactions (such as `x/authz` transactions or some `x/gov` transactions) may pass the ante handler. **This does not affect the circuit breaker** as the message router check will still fail the transaction.
+This tradeoff is to avoid introducing more dependencies in the `x/circuit` module. Chains can re-define the `CircuitBreakerDecorator` to check for inner messages if they wish to do so.
+:::
+
+## State
+
+### Accounts
+
+* AccountPermissions `0x1 | account_address -> ProtocolBuffer(CircuitBreakerPermissions)`
+
+```go
+type Level int32
+
+const (
+	// LEVEL_NONE_UNSPECIFIED indicates that the account will have no circuit
+	// breaker permissions.
+	LEVEL_NONE_UNSPECIFIED Level = iota
+	// LEVEL_SOME_MSGS indicates that the account will have permission to
+	// trip or reset the circuit breaker for some Msg type URLs. If this level
+	// is chosen, a non-empty list of Msg type URLs must be provided in
+	// limit_type_urls.
+	LEVEL_SOME_MSGS
+	// LEVEL_ALL_MSGS indicates that the account can trip or reset the circuit
+	// breaker for Msg's of all type URLs.
+	LEVEL_ALL_MSGS
+	// LEVEL_SUPER_ADMIN indicates that the account can take all circuit breaker
+	// actions and can grant permissions to other accounts.
+	LEVEL_SUPER_ADMIN
+)
+
+type Access struct {
+	level Level
+	msgs  []string // if full permission, msgs can be empty
+}
+```
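+
+Under these definitions, the permission check for tripping a given type URL can be sketched as below (a hypothetical helper, not the module's actual keeper code):
+
+```go
+package main
+
+import "fmt"
+
+type Level int32
+
+const (
+	LEVEL_NONE_UNSPECIFIED Level = iota
+	LEVEL_SOME_MSGS
+	LEVEL_ALL_MSGS
+	LEVEL_SUPER_ADMIN
+)
+
+type Access struct {
+	level Level
+	msgs  []string
+}
+
+// canTrip reports whether an account with the given Access may trip or
+// reset the circuit for msgURL.
+func canTrip(a Access, msgURL string) bool {
+	switch a.level {
+	case LEVEL_ALL_MSGS, LEVEL_SUPER_ADMIN:
+		return true
+	case LEVEL_SOME_MSGS:
+		for _, m := range a.msgs {
+			if m == msgURL {
+				return true
+			}
+		}
+	}
+	return false
+}
+
+func main() {
+	a := Access{level: LEVEL_SOME_MSGS, msgs: []string{"/cosmos.bank.v1beta1.MsgSend"}}
+	fmt.Println(canTrip(a, "/cosmos.bank.v1beta1.MsgSend"))        // true
+	fmt.Println(canTrip(a, "/cosmos.staking.v1beta1.MsgDelegate")) // false
+}
+```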
+
+
+### Disable List
+
+List of message type URLs that are disabled.
+
+* DisableList `0x2 | msg_type_url -> []byte{}`
+
+## State Transitions
+
+### Authorize
+
+Authorize is called by the module authority (default governance module account) or any account with `LEVEL_SUPER_ADMIN` to grant another account permission to disable/enable messages. There are three levels of permissions that can be granted: `LEVEL_SOME_MSGS` limits the grantee to tripping or resetting a specific set of messages; `LEVEL_ALL_MSGS` permits all messages to be disabled; `LEVEL_SUPER_ADMIN` allows an account to take all circuit breaker actions, including authorizing and deauthorizing other accounts.
+
+```protobuf
+ // AuthorizeCircuitBreaker allows a super-admin to grant (or revoke) another
+ // account's circuit breaker permissions.
+ rpc AuthorizeCircuitBreaker(MsgAuthorizeCircuitBreaker) returns (MsgAuthorizeCircuitBreakerResponse);
+```
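+
+The granting rules above can be sketched as a standalone validation check (hypothetical helper names; the real logic lives in the module's msg server):
+
+```go
+package main
+
+import (
+	"errors"
+	"fmt"
+)
+
+type Level int32
+
+const (
+	LEVEL_NONE_UNSPECIFIED Level = iota
+	LEVEL_SOME_MSGS
+	LEVEL_ALL_MSGS
+	LEVEL_SUPER_ADMIN
+)
+
+// validateAuthorize mirrors the rules above: only the module authority or a
+// super-admin may grant, and LEVEL_SOME_MSGS requires a non-empty URL list.
+func validateAuthorize(granterIsAuthority bool, granterLevel, grantLevel Level, limitTypeURLs []string) error {
+	if !granterIsAuthority && granterLevel != LEVEL_SUPER_ADMIN {
+		return errors.New("only the authority or a super-admin can grant permissions")
+	}
+	if grantLevel == LEVEL_SOME_MSGS && len(limitTypeURLs) == 0 {
+		return errors.New("LEVEL_SOME_MSGS requires a non-empty limit_type_urls list")
+	}
+	return nil
+}
+
+func main() {
+	err := validateAuthorize(false, LEVEL_ALL_MSGS, LEVEL_SOME_MSGS, nil)
+	fmt.Println(err) // rejected: granter is neither the authority nor a super-admin
+}
+```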
+
+### Trip
+
+Trip is called by an authorized account to disable message execution for the given message type URLs. If the list of type URLs is empty, all messages will be disabled.
+
+```protobuf
+ // TripCircuitBreaker pauses processing of Msg's in the state machine.
+ rpc TripCircuitBreaker(MsgTripCircuitBreaker) returns (MsgTripCircuitBreakerResponse);
+```
+
+### Reset
+
+Reset is called by an authorized account to re-enable execution for previously disabled message type URLs. If the list of type URLs is empty, all disabled messages will be re-enabled.
+
+```protobuf
+ // ResetCircuitBreaker resumes processing of Msg's in the state machine that
+ // have been paused using TripCircuitBreaker.
+ rpc ResetCircuitBreaker(MsgResetCircuitBreaker) returns (MsgResetCircuitBreakerResponse);
+```
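+
+The trip/reset semantics can be sketched against an in-memory stand-in for the disable list (illustrative names only; the real module stores the list under the `0x2` prefix):
+
+```go
+package main
+
+import "fmt"
+
+// disableList stands in for the module's DisableList state (0x2 | msg_type_url).
+type disableList map[string]struct{}
+
+// trip disables the given type URLs; with an empty list, the supplied
+// allMsgs set is disabled instead.
+func (d disableList) trip(urls, allMsgs []string) {
+	if len(urls) == 0 {
+		urls = allMsgs
+	}
+	for _, u := range urls {
+		d[u] = struct{}{}
+	}
+}
+
+// reset re-enables the given type URLs; with an empty list, everything
+// currently disabled is re-enabled.
+func (d disableList) reset(urls []string) {
+	if len(urls) == 0 {
+		for u := range d {
+			delete(d, u)
+		}
+		return
+	}
+	for _, u := range urls {
+		delete(d, u)
+	}
+}
+
+func main() {
+	d := disableList{}
+	d.trip([]string{"/cosmos.bank.v1beta1.MsgSend"}, nil)
+	fmt.Println(len(d)) // 1
+	d.reset(nil)
+	fmt.Println(len(d)) // 0
+}
+```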
+
+## Messages
+
+### MsgAuthorizeCircuitBreaker
+
+```protobuf reference
+https://github.com/cosmos/cosmos-sdk/blob/main/proto/cosmos/circuit/v1/tx.proto#L25-L75
+```
+
+This message is expected to fail if:
+
+* the granter is not an account with permission level `LEVEL_SUPER_ADMIN` or the module authority
+
+### MsgTripCircuitBreaker
+
+```protobuf reference
+https://github.com/cosmos/cosmos-sdk/blob/main/proto/cosmos/circuit/v1/tx.proto#L77-L93
+```
+
+This message is expected to fail if:
+
+* the signer does not have a permission level that allows disabling the specified message type URL
+
+### MsgResetCircuitBreaker
+
+```protobuf reference
+https://github.com/cosmos/cosmos-sdk/blob/main/proto/cosmos/circuit/v1/tx.proto#L95-L109
+```
+
+This message is expected to fail if:
+
+* the message type URL is not disabled
+
+## Events
+
+The circuit module emits the following events:
+
+### Message Events
+
+#### MsgAuthorizeCircuitBreaker
+
+| Type | Attribute Key | Attribute Value |
+|---------|---------------|---------------------------|
+| string | granter | {granterAddress} |
+| string | grantee | {granteeAddress} |
+| string | permission | {granteePermissions} |
+| message | module | circuit |
+| message | action | authorize_circuit_breaker |
+
+#### MsgTripCircuitBreaker
+
+| Type | Attribute Key | Attribute Value |
+|----------|---------------|--------------------|
+| string | authority | {authorityAddress} |
+| []string | msg_urls | []string{msg_urls} |
+| message | module | circuit |
+| message | action | trip_circuit_breaker |
+
+#### MsgResetCircuitBreaker
+
+| Type | Attribute Key | Attribute Value |
+|----------|---------------|--------------------|
+| string | authority | {authorityAddress} |
+| []string | msg_urls | []string{msg_urls} |
+| message | module | circuit |
+| message | action | reset_circuit_breaker |
+
+
+## Keys
+
+* `AccountPermissionPrefix` - `0x01`
+* `DisableListPrefix` - `0x02`
+
+## Client
+
+## Examples: Using Circuit Breaker CLI Commands
+
+This section provides practical examples for using the Circuit Breaker module through the command-line interface (CLI). These examples demonstrate how to authorize accounts, disable (trip) specific message types, and re-enable (reset) them when needed.
+
+### Querying Circuit Breaker Permissions
+
+Check an account's current circuit breaker permissions:
+
+```bash
+# Query permissions for a specific account
+simd query circuit account-permissions [address]
+
+# Example:
+simd query circuit account-permissions cosmos1...
+```
+
+Check which message types are currently disabled:
+
+```bash
+# Query all disabled message types
+simd query circuit disabled-list
+
+# Example:
+simd query circuit disabled-list
+```
+
+### Authorizing an Account as Circuit Breaker
+
+Only a super-admin or the module authority (typically the governance module account) can grant circuit breaker permissions to other accounts:
+
+```bash
+# Grant LEVEL_ALL_MSGS permission (can disable any message type)
+simd tx circuit authorize [grantee-address] --level=ALL_MSGS --from=[keyname or address] --gas=auto --gas-adjustment=1.5
+
+# Grant LEVEL_SOME_MSGS permission (can only disable specific message types)
+simd tx circuit authorize [grantee-address] --level=SOME_MSGS --limit-type-urls="/cosmos.bank.v1beta1.MsgSend,/cosmos.staking.v1beta1.MsgDelegate" --from=[keyname or address] --gas=auto --gas-adjustment=1.5
+
+# Grant LEVEL_SUPER_ADMIN permission (can disable messages and authorize other accounts)
+simd tx circuit authorize [grantee-address] --level=SUPER_ADMIN --from=[keyname or address] --gas=auto --gas-adjustment=1.5
+```
+
+### Disabling Message Processing (Trip)
+
+Disable specific message types to prevent their execution (requires authorization):
+
+```bash
+# Disable a single message type
+simd tx circuit trip --type-urls="/cosmos.bank.v1beta1.MsgSend" --from=[keyname or address] --gas=auto --gas-adjustment=1.5
+
+# Disable multiple message types
+simd tx circuit trip --type-urls="/cosmos.bank.v1beta1.MsgSend,/cosmos.staking.v1beta1.MsgDelegate" --from=[keyname or address] --gas=auto --gas-adjustment=1.5
+
+# Disable all message types (emergency measure)
+simd tx circuit trip --from=[keyname or address] --gas=auto --gas-adjustment=1.5
+```
+
+### Re-enabling Message Processing (Reset)
+
+Re-enable previously disabled message types (requires authorization):
+
+```bash
+# Re-enable a single message type
+simd tx circuit reset --type-urls="/cosmos.bank.v1beta1.MsgSend" --from=[keyname or address] --gas=auto --gas-adjustment=1.5
+
+# Re-enable multiple message types
+simd tx circuit reset --type-urls="/cosmos.bank.v1beta1.MsgSend,/cosmos.staking.v1beta1.MsgDelegate" --from=[keyname or address] --gas=auto --gas-adjustment=1.5
+
+# Re-enable all disabled message types
+simd tx circuit reset --from=[keyname or address] --gas=auto --gas-adjustment=1.5
+```
+
+### Usage in Emergency Scenarios
+
+In case of a critical vulnerability in a specific message type:
+
+1. Quickly disable the vulnerable message type:
+
+ ```bash
+ simd tx circuit trip --type-urls="/cosmos.vulnerable.v1beta1.MsgVulnerable" --from=[keyname or address] --gas=auto --gas-adjustment=1.5
+ ```
+
+2. After a fix is deployed, re-enable the message type:
+
+ ```bash
+ simd tx circuit reset --type-urls="/cosmos.vulnerable.v1beta1.MsgVulnerable" --from=[keyname or address] --gas=auto --gas-adjustment=1.5
+ ```
+
+This allows chains to surgically disable problematic functionality without halting the entire chain, providing time for developers to implement and deploy fixes.
diff --git a/copy-of-sdk-docs/build/modules/consensus/README.md b/copy-of-sdk-docs/build/modules/consensus/README.md
new file mode 100644
index 00000000..902280a6
--- /dev/null
+++ b/copy-of-sdk-docs/build/modules/consensus/README.md
@@ -0,0 +1,7 @@
+---
+sidebar_position: 1
+---
+
+# `x/consensus`
+
+Functionality to modify CometBFT's ABCI consensus params.
diff --git a/copy-of-sdk-docs/build/modules/crisis/README.md b/copy-of-sdk-docs/build/modules/crisis/README.md
new file mode 100644
index 00000000..631f9d85
--- /dev/null
+++ b/copy-of-sdk-docs/build/modules/crisis/README.md
@@ -0,0 +1,112 @@
+---
+sidebar_position: 1
+---
+
+# `x/crisis`
+
+NOTE: `x/crisis` is deprecated as of Cosmos SDK v0.53 and will be removed in the next release.
+
+## Overview
+
+The crisis module halts the blockchain under the circumstance that a blockchain
+invariant is broken. Invariants can be registered with the application during the
+application initialization process.
+
+## Contents
+
+* [State](#state)
+* [Messages](#messages)
+* [Events](#events)
+* [Parameters](#parameters)
+* [Client](#client)
+ * [CLI](#cli)
+
+## State
+
+### ConstantFee
+
+Due to the anticipated large gas cost requirement to verify an invariant (and
+potential to exceed the maximum allowable block gas limit) a constant fee is
+used instead of the standard gas consumption method. The constant fee is
+intended to be larger than the anticipated gas cost of running the invariant
+with the standard gas consumption method.
+
+The ConstantFee param is stored in the module params state under the prefix `0x01`.
+It can be updated via governance or by the address with authority.
+
+* ConstantFee: `0x01 -> legacy_amino(sdk.Coin)`
+
+## Messages
+
+In this section we describe the processing of the crisis messages and the
+corresponding updates to the state.
+
+### MsgVerifyInvariant
+
+Blockchain invariants can be checked using the `MsgVerifyInvariant` message.
+
+```protobuf reference
+https://github.com/cosmos/cosmos-sdk/blob/v0.47.0-rc1/proto/cosmos/crisis/v1beta1/tx.proto#L26-L42
+```
+
+This message is expected to fail if:
+
+* the sender does not have enough coins for the constant fee
+* the invariant route is not registered
+
+This message checks the invariant provided, and if the invariant is broken it
+panics, halting the blockchain. If the invariant is broken, the constant fee is
+never deducted as the transaction is never committed to a block (equivalent to
+being refunded). However, if the invariant is not broken, the constant fee will
+not be refunded.
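+
+The handler logic described above can be sketched as follows (simplified and hypothetical; the real keeper charges the fee through the bank keeper and dispatches to registered invariant routes):
+
+```go
+package main
+
+import (
+	"errors"
+	"fmt"
+)
+
+// Invariant reports a human-readable message and whether the invariant is broken.
+type Invariant func() (msg string, broken bool)
+
+// verifyInvariant mimics MsgVerifyInvariant handling: charge the constant
+// fee, run the invariant, and panic (halting the chain) if it is broken.
+// On panic the tx is never committed, so the fee is effectively refunded;
+// when the invariant holds, the fee stays deducted.
+func verifyInvariant(balance, constantFee int64, routes map[string]Invariant, route string) (int64, error) {
+	inv, ok := routes[route]
+	if !ok {
+		return balance, errors.New("invariant route not registered")
+	}
+	if balance < constantFee {
+		return balance, errors.New("insufficient funds for constant fee")
+	}
+	balance -= constantFee // fee is NOT refunded when the invariant holds
+	if msg, broken := inv(); broken {
+		panic("invariant broken: " + msg) // halts the chain
+	}
+	return balance, nil
+}
+
+func main() {
+	routes := map[string]Invariant{
+		"bank/total-supply": func() (string, bool) { return "", false },
+	}
+	bal, err := verifyInvariant(1000, 100, routes, "bank/total-supply")
+	fmt.Println(bal, err) // 900 <nil>
+}
+```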
+
+## Events
+
+The crisis module emits the following events:
+
+### Handlers
+
+#### MsgVerifyInvariant
+
+| Type | Attribute Key | Attribute Value |
+|-----------|---------------|------------------|
+| invariant | route | {invariantRoute} |
+| message | module | crisis |
+| message | action | verify_invariant |
+| message | sender | {senderAddress} |
+
+## Parameters
+
+The crisis module contains the following parameters:
+
+| Key | Type | Example |
+|-------------|---------------|-----------------------------------|
+| ConstantFee | object (coin) | {"denom":"uatom","amount":"1000"} |
+
+## Client
+
+### CLI
+
+A user can query and interact with the `crisis` module using the CLI.
+
+#### Transactions
+
+The `tx` commands allow users to interact with the `crisis` module.
+
+```bash
+simd tx crisis --help
+```
+
+##### invariant-broken
+
+The `invariant-broken` command submits proof that an invariant was broken in order to halt the chain.
+
+```bash
+simd tx crisis invariant-broken [module-name] [invariant-route] [flags]
+```
+
+Example:
+
+```bash
+simd tx crisis invariant-broken bank total-supply --from=[keyname or address]
+```
diff --git a/copy-of-sdk-docs/build/modules/distribution/README.md b/copy-of-sdk-docs/build/modules/distribution/README.md
new file mode 100644
index 00000000..0563c5d7
--- /dev/null
+++ b/copy-of-sdk-docs/build/modules/distribution/README.md
@@ -0,0 +1,1128 @@
+---
+sidebar_position: 1
+---
+
+# `x/distribution`
+
+## Overview
+
+This _simple_ distribution mechanism describes a functional way to passively
+distribute rewards between validators and delegators. Note that this mechanism does
+not distribute funds as precisely as active reward distribution mechanisms and
+will therefore be upgraded in the future.
+
+The mechanism operates as follows. Collected rewards are pooled globally and
+divided out passively to validators and delegators. Each validator has the
+opportunity to charge commission to the delegators on the rewards collected on
+behalf of the delegators. Fees are collected directly into a global reward pool
+and validator proposer-reward pool. Due to the nature of passive accounting,
+whenever changes to parameters which affect the rate of reward distribution
+occur, withdrawal of rewards must also occur.
+
+* Whenever withdrawing, one must withdraw the maximum amount they are entitled
+ to, leaving nothing in the pool.
+* Whenever bonding, unbonding, or re-delegating tokens to an existing account, a
+ full withdrawal of the rewards must occur (as the rules for lazy accounting
+ change).
+* Whenever a validator chooses to change the commission on rewards, all accumulated
+ commission rewards must be simultaneously withdrawn.
+
+The above scenarios are covered in `hooks.md`.
+
+The distribution mechanism outlined herein is used to lazily distribute the
+following rewards between validators and associated delegators:
+
+* multi-token fees to be socially distributed
+* inflated staked asset provisions
+* validator commission on all rewards earned by their delegators' stake
+
+Fees are pooled within a global pool. The mechanisms used allow for validators
+and delegators to independently and lazily withdraw their rewards.
+
+## Shortcomings
+
+As a part of the lazy computations, each delegator holds an accumulation term
+specific to each validator which is used to estimate what their approximate
+fair portion of tokens held in the global fee pool is owed to them.
+
+```text
+entitlement = delegator-accumulation / all-delegators-accumulation
+```
+
+Under the circumstance that there was constant and equal flow of incoming
+reward tokens every block, this distribution mechanism would be equal to the
+active distribution (distribute individually to all delegators each block).
+However, this is unrealistic so deviations from the active distribution will
+occur based on fluctuations of incoming reward tokens as well as timing of
+reward withdrawal by other delegators.
+
+If you happen to know that incoming rewards are about to significantly increase,
+you are incentivized to not withdraw until after this event, increasing the
+worth of your existing _accum_. See [#2764](https://github.com/cosmos/cosmos-sdk/issues/2764)
+for further details.
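+
+A minimal numeric sketch of the entitlement formula above (the names and the accumulation term are illustrative, not the SDK's actual implementation):
+
+```go
+package main
+
+import "fmt"
+
+// accum is the simplest possible accumulation term: stake multiplied by the
+// number of blocks it has been bonded.
+func accum(stake, blocks float64) float64 { return stake * blocks }
+
+// entitlement computes a delegator's approximate share of the global fee
+// pool, per the formula above.
+func entitlement(delegatorAccum, allDelegatorsAccum float64) float64 {
+	if allDelegatorsAccum == 0 {
+		return 0
+	}
+	return delegatorAccum / allDelegatorsAccum
+}
+
+func main() {
+	mine := accum(100, 50)      // 100 stake bonded for 50 blocks
+	everyone := accum(1000, 50) // total bonded stake over the same window
+	fmt.Println(entitlement(mine, everyone)) // 0.1
+}
+```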
+
+## Effect on Staking
+
+Charging commission on Atom provisions while also allowing for Atom-provisions
+to be auto-bonded (distributed directly to the validators bonded stake) is
+problematic within BPoS. Fundamentally, these two mechanisms are mutually
+exclusive. If both commission and auto-bonding mechanisms are simultaneously
+applied to the staking-token then the distribution of staking-tokens between
+any validator and its delegators will change with each block. This then
+necessitates a calculation for each delegation record for each block -
+which is considered computationally expensive.
+
+In conclusion, we can only have Atom commission and unbonded atoms
+provisions or bonded atom provisions with no Atom commission, and we elect to
+implement the former. Stakeholders wishing to rebond their provisions may elect
+to set up a script to periodically withdraw and rebond rewards.
+
+## Contents
+
+* [Concepts](#concepts)
+* [State](#state)
+ * [FeePool](#feepool)
+ * [Validator Distribution](#validator-distribution)
+ * [Delegation Distribution](#delegation-distribution)
+ * [Params](#params)
+* [Begin Block](#begin-block)
+* [Messages](#messages)
+* [Hooks](#hooks)
+* [Events](#events)
+* [Parameters](#parameters)
+* [Client](#client)
+ * [CLI](#cli)
+ * [gRPC](#grpc)
+
+## Concepts
+
+In Proof of Stake (PoS) blockchains, rewards gained from transaction fees are paid to validators. The fee distribution module fairly distributes the rewards to the validators' constituent delegators.
+
+Rewards are calculated per period. The period is updated each time a validator's delegation changes, for example, when the validator receives a new delegation.
+The rewards for a single validator can then be calculated by taking the total rewards for the period before the delegation started, minus the current total rewards.
+To learn more, see the [F1 Fee Distribution paper](https://github.com/cosmos/cosmos-sdk/tree/main/docs/spec/fee_distribution/f1_fee_distr.pdf).
+
+The commission to the validator is paid when the validator is removed or when the validator requests a withdrawal.
+The commission is calculated and incremented at every `BeginBlock` operation to update accumulated fee amounts.
+
+The rewards to a delegator are distributed when the delegation is changed or removed, or a withdrawal is requested.
+Before rewards are distributed, all slashes to the validator that occurred during the current delegation are applied.
+
+### Reference Counting in F1 Fee Distribution
+
+In F1 fee distribution, the rewards a delegator receives are calculated when their delegation is withdrawn. This calculation must read the terms of the summation of rewards divided by the share of tokens from the period which they ended when they delegated, and the final period that was created for the withdrawal.
+
+Additionally, as slashes change the amount of tokens a delegation will have (but we calculate this lazily,
+only when a delegator un-delegates), we must calculate rewards in separate periods before / after any slashes
+which occurred in between when a delegator delegated and when they withdrew their rewards. Thus slashes, like
+delegations, reference the period which was ended by the slash event.
+
+All stored historical rewards records for periods which are no longer referenced by any delegations
+or any slashes can thus be safely removed, as they will never be read (future delegations and future
+slashes will always reference future periods). This is implemented by tracking a `ReferenceCount`
+along with each historical reward storage entry. Each time a new object (delegation or slash)
+is created which might need to reference the historical record, the reference count is incremented.
+Each time one object which previously needed to reference the historical record is deleted, the reference
+count is decremented. If the reference count hits zero, the historical record is deleted.
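+
+The pruning rule can be sketched with a reference-counted map standing in for the stored historical reward records (illustrative names, not the keeper's actual storage code):
+
+```go
+package main
+
+import "fmt"
+
+// historical maps period -> reference count, standing in for the stored
+// historical reward records described above.
+type historical map[uint64]int
+
+// incRef is called when a new delegation or slash references a period.
+func (h historical) incRef(period uint64) { h[period]++ }
+
+// decRef is called when a referencing object is deleted; the record is
+// pruned once nothing references it.
+func (h historical) decRef(period uint64) {
+	h[period]--
+	if h[period] <= 0 {
+		delete(h, period)
+	}
+}
+
+func main() {
+	h := historical{}
+	h.incRef(7) // a new delegation starts at period 7
+	h.incRef(7) // a slash also references period 7
+	h.decRef(7) // the delegation is withdrawn
+	fmt.Println(len(h)) // 1: the slash still references period 7
+	h.decRef(7)
+	fmt.Println(len(h)) // 0: record pruned
+}
+```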
+
+### External Community Pool Keepers
+
+An external pool community keeper is defined as:
+
+```go
+// ExternalCommunityPoolKeeper is the interface that an external community pool module keeper must fulfill
+// for x/distribution to properly accept it as a community pool fund destination.
+type ExternalCommunityPoolKeeper interface {
+ // GetCommunityPoolModule gets the module name that funds should be sent to for the community pool.
+ // This is the address that x/distribution will send funds to for external management.
+ GetCommunityPoolModule() string
+ // FundCommunityPool allows an account to directly fund the community fund pool.
+ FundCommunityPool(ctx sdk.Context, amount sdk.Coins, senderAddr sdk.AccAddress) error
+ // DistributeFromCommunityPool distributes funds from the community pool module account to
+ // a receiver address.
+ DistributeFromCommunityPool(ctx sdk.Context, amount sdk.Coins, receiveAddr sdk.AccAddress) error
+}
+```
+
+By default, the distribution module uses an internal community pool implementation. An external community pool
+can be provided to the module, in which case funds are diverted to it instead of the internal implementation. The reference
+external community pool maintained by the Cosmos SDK is [`x/protocolpool`](../protocolpool/README.md).
+
+## State
+
+### FeePool
+
+All globally tracked parameters for distribution are stored within
+`FeePool`. Rewards are collected and added to the reward pool and
+distributed to validators/delegators from here.
+
+Note that the reward pool holds decimal coins (`DecCoins`) to allow
+for fractions of coins to be received from operations like inflation.
+When coins are distributed from the pool they are truncated back to
+`sdk.Coins` which are non-decimal.
+
+* FeePool: `0x00 -> ProtocolBuffer(FeePool)`
+
+```go
+// coins with decimal
+type DecCoins []DecCoin
+
+type DecCoin struct {
+ Amount math.LegacyDec
+ Denom string
+}
+```
+
+```protobuf reference
+https://github.com/cosmos/cosmos-sdk/blob/v0.47.0-rc1/proto/cosmos/distribution/v1beta1/distribution.proto#L116-L123
+```
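+
+The truncation step can be illustrated with Go's arbitrary-precision rationals (a standalone sketch; the SDK uses its own `DecCoins` truncation helpers):
+
+```go
+package main
+
+import (
+	"fmt"
+	"math/big"
+)
+
+// truncate converts a decimal reward amount into the integer amount actually
+// paid out; the fractional remainder stays behind (e.g. for the community pool).
+func truncate(amount *big.Rat) (paid *big.Int, remainder *big.Rat) {
+	paid = new(big.Int).Quo(amount.Num(), amount.Denom())
+	remainder = new(big.Rat).Sub(amount, new(big.Rat).SetInt(paid))
+	return paid, remainder
+}
+
+func main() {
+	amt := big.NewRat(12345, 100) // 123.45 in decimal coins
+	paid, rem := truncate(amt)
+	fmt.Println(paid, rem) // 123 9/20
+}
+```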
+
+### Validator Distribution
+
+Validator distribution information for the relevant validator is updated each time:
+
+1. delegation amount to a validator is updated,
+2. any delegator withdraws from a validator, or
+3. the validator withdraws its commission.
+
+* ValidatorDistInfo: `0x02 | ValOperatorAddrLen (1 byte) | ValOperatorAddr -> ProtocolBuffer(validatorDistribution)`
+
+```go
+type ValidatorDistInfo struct {
+ OperatorAddress sdk.AccAddress
+ SelfBondRewards sdkmath.DecCoins
+ ValidatorCommission types.ValidatorAccumulatedCommission
+}
+```
+
+### Delegation Distribution
+
+Each delegation distribution only needs to record the height at which it last
+withdrew fees. Because a delegation must withdraw fees each time it's
+properties change (aka bonded tokens etc.) its properties will remain constant
+and the delegator's _accumulation_ factor can be calculated passively knowing
+only the height of the last withdrawal and its current properties.
+
+* DelegationDistInfo: `0x02 | DelegatorAddrLen (1 byte) | DelegatorAddr | ValOperatorAddrLen (1 byte) | ValOperatorAddr -> ProtocolBuffer(delegatorDist)`
+
+```go
+type DelegationDistInfo struct {
+ WithdrawalHeight int64 // last time this delegation withdrew rewards
+}
+```
+
+### Params
+
+The distribution module stores its params in state under the prefix `0x09`.
+They can be updated via governance or by the address with authority.
+
+* Params: `0x09 | ProtocolBuffer(Params)`
+
+```protobuf reference
+https://github.com/cosmos/cosmos-sdk/blob/v0.47.0-rc1/proto/cosmos/distribution/v1beta1/distribution.proto#L12-L42
+```
+
+## Begin Block
+
+At each `BeginBlock`, all fees received in the previous block are transferred to
+the distribution `ModuleAccount` account. When a delegator or validator
+withdraws their rewards, they are taken out of the `ModuleAccount`. During begin
+block, the different claims on the fees collected are updated as follows:
+
+* The reserve community tax is charged.
+* The remainder is distributed proportionally by voting power to all bonded validators.
+
+### The Distribution Scheme
+
+See [params](#params) for description of parameters.
+
+Let `fees` be the total fees collected in the previous block, including
+inflationary rewards to the stake. All fees are collected in a specific module
+account during the block. During `BeginBlock`, they are sent to the
+`"distribution"` `ModuleAccount`. No other sending of tokens occurs. Instead, the
+rewards each account is entitled to are stored, and withdrawals can be triggered
+through the messages `FundCommunityPool`, `WithdrawValidatorCommission` and
+`WithdrawDelegatorReward`.
+
+#### Reward to the Community Pool
+
+The community pool gets `community_tax * fees`, plus any remaining dust after
+validators get their rewards, which are always rounded down to the nearest
+integer value.
+
+#### Using an External Community Pool
+
+Starting with Cosmos SDK v0.53.0, an external community pool, such as `x/protocolpool`, can be used in place of the `x/distribution` managed community pool.
+
+Please view the warning in the next section before deciding to use an external community pool.
+
+```go
+// ExternalCommunityPoolKeeper is the interface that an external community pool module keeper must fulfill
+// for x/distribution to properly accept it as a community pool fund destination.
+type ExternalCommunityPoolKeeper interface {
+ // GetCommunityPoolModule gets the module name that funds should be sent to for the community pool.
+ // This is the address that x/distribution will send funds to for external management.
+ GetCommunityPoolModule() string
+ // FundCommunityPool allows an account to directly fund the community fund pool.
+ FundCommunityPool(ctx sdk.Context, amount sdk.Coins, senderAddr sdk.AccAddress) error
+ // DistributeFromCommunityPool distributes funds from the community pool module account to
+ // a receiver address.
+ DistributeFromCommunityPool(ctx sdk.Context, amount sdk.Coins, receiveAddr sdk.AccAddress) error
+}
+```
+
+To wire an external community pool, pass the keeper option when constructing the distribution keeper (here `app.ProtocolPoolKeeper` is the chain's external pool keeper):
+
+```go
+app.DistrKeeper = distrkeeper.NewKeeper(
+ appCodec,
+ runtime.NewKVStoreService(keys[distrtypes.StoreKey]),
+ app.AccountKeeper,
+ app.BankKeeper,
+ app.StakingKeeper,
+ authtypes.FeeCollectorName,
+ authtypes.NewModuleAddress(govtypes.ModuleName).String(),
+ distrkeeper.WithExternalCommunityPool(app.ProtocolPoolKeeper), // New option.
+)
+```
+
+#### External Community Pool Usage Warning
+
+When using an external community pool with `x/distribution`, the following handlers will return an error:
+
+**QueryService**
+
+* `CommunityPool`
+
+**MsgService**
+
+* `CommunityPoolSpend`
+* `FundCommunityPool`
+
+If you have services that rely on this functionality from `x/distribution`, please update them to use the `x/protocolpool` equivalents.
+
+#### Reward To the Validators
+
+The proposer receives no extra rewards. All fees are distributed among all the
+bonded validators, including the proposer, in proportion to their consensus power.
+
+```text
+powFrac = validator power / total bonded validator power
+voteMul = 1 - community_tax
+```
+
+All validators receive `fees * voteMul * powFrac`.
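+
+The split can be sketched numerically (a standalone illustration of the formula above, not the keeper's allocation code):
+
+```go
+package main
+
+import "fmt"
+
+// validatorReward computes a validator's share of the block fees under the
+// scheme above: fees * (1 - communityTax) * (power / totalPower).
+func validatorReward(fees, communityTax, power, totalPower float64) float64 {
+	powFrac := power / totalPower
+	voteMul := 1 - communityTax
+	return fees * voteMul * powFrac
+}
+
+func main() {
+	// 1000 tokens of fees, 2% community tax, validator holds 10% of bonded power.
+	fmt.Println(validatorReward(1000, 0.02, 10, 100)) // ~98
+}
+```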
+
+#### Rewards to Delegators
+
+Each validator's rewards are distributed to its delegators. The validator also
+has a self-delegation that is treated like a regular delegation in
+distribution calculations.
+
+The validator sets a commission rate. The commission rate is flexible, but each
+validator sets a maximum rate and a maximum daily increase. These maximums cannot be exceeded, protecting delegators from sudden large increases in a validator's commission rate that would otherwise capture all of their rewards.
+
+The outstanding rewards that the operator is entitled to are stored in
+`ValidatorAccumulatedCommission`, while the rewards the delegators are entitled
+to are stored in `ValidatorCurrentRewards`. The [F1 fee distribution scheme](#concepts) is used to calculate the rewards per delegator as they
+withdraw or update their delegation, and is thus not handled in `BeginBlock`.
+
+#### Example Distribution
+
+For this example distribution, the underlying consensus engine selects block proposers in
+proportion to their power relative to the entire bonded power.
+
+Assume all validators are equally performant at including pre-commits in their proposed
+blocks. Then hold `(pre_commits included) / (total bonded validator power)`
+constant so that the amortized block reward for the validator is `( validator power / total bonded power) * (1 - community tax rate)` of
+the total rewards. Consequently, the reward for a single delegator is:
+
+```text
+(delegator proportion of the validator power / validator power) * (validator power / total bonded power)
+ * (1 - community tax rate) * (1 - validator commission rate)
+= (delegator proportion of the validator power / total bonded power) * (1 -
+community tax rate) * (1 - validator commission rate)
+```
+
+## Messages
+
+### MsgSetWithdrawAddress
+
+By default, the withdraw address is the delegator address. To change its withdraw address, a delegator must send a `MsgSetWithdrawAddress` message.
+Changing the withdraw address is possible only if the parameter `WithdrawAddrEnabled` is set to `true`.
+
+The withdraw address cannot be any of the module accounts. These accounts are blocked from being withdraw addresses by being added to the distribution keeper's `blockedAddrs` array at initialization.
+
+Response:
+
+```protobuf reference
+https://github.com/cosmos/cosmos-sdk/blob/v0.47.0-rc1/proto/cosmos/distribution/v1beta1/tx.proto#L49-L60
+```
+
+```go
+func (k Keeper) SetWithdrawAddr(ctx context.Context, delegatorAddr sdk.AccAddress, withdrawAddr sdk.AccAddress) error
+ if k.blockedAddrs[withdrawAddr.String()] {
+ fail with "`{withdrawAddr}` is not allowed to receive external funds"
+ }
+
+ if !k.GetWithdrawAddrEnabled(ctx) {
+ fail with `ErrSetWithdrawAddrDisabled`
+ }
+
+ k.SetDelegatorWithdrawAddr(ctx, delegatorAddr, withdrawAddr)
+```
+
+### MsgWithdrawDelegatorReward
+
+A delegator can withdraw its rewards.
+Internally in the distribution module, this transaction simultaneously removes the previous delegation with associated rewards, the same as if the delegator simply started a new delegation of the same value.
+The rewards are sent immediately from the distribution `ModuleAccount` to the withdraw address.
+Any remainder (truncated decimals) are sent to the community pool.
+The starting height of the delegation is set to the current validator period, and the reference count for the previous period is decremented.
+The amount withdrawn is deducted from the `ValidatorOutstandingRewards` variable for the validator.
+
+In the F1 distribution, the total rewards are calculated per validator period, and a delegator receives a piece of those rewards in proportion to their stake in the validator.
+In basic F1, the total rewards that all the delegators are entitled to between two periods are calculated the following way.
+Let `R(X)` be the total accumulated rewards up to period `X` divided by the tokens staked at that time. The delegator allocation is `R(X) * delegator_stake`.
+Then the rewards for all the delegators for staking between periods `A` and `B` are `(R(B) - R(A)) * total stake`.
+However, these calculated rewards don't account for slashing.
+
+Taking the slashes into account requires iteration.
+Let `F(X)` be the fraction a validator is to be slashed for a slashing event that happened at period `X`.
+If the validator was slashed at periods `P1, ..., PN`, where `A < P1`, `PN < B`, the distribution module calculates the individual delegator's rewards, `T(A, B)`, as follows:
+
+```go
+stake := initial stake
+rewards := 0
+previous := A
+for P in P1, ..., PN:
+    rewards = rewards + (R(P) - R(previous)) * stake
+    stake = stake * F(P)
+    previous = P
+rewards = rewards + (R(B) - R(PN)) * stake
+```
+
+The historical rewards are calculated retroactively by playing back all the slashes and then attenuating the delegator's stake at each step.
+The final calculated stake is equivalent to the actual staked coins in the delegation with a margin of error due to rounding errors.
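+
+The loop above can be turned into a small runnable sketch (`R` and `F` are supplied as lookup functions; all names are illustrative):
+
+```go
+package main
+
+import "fmt"
+
+// f1Rewards implements the slash-aware reward loop above. R maps a period to
+// cumulative rewards-per-token, F maps a slash period to the remaining stake
+// fraction (1 - slash fraction), and slashes lists P1..PN in order.
+func f1Rewards(stake float64, a, b uint64, slashes []uint64, R, F func(uint64) float64) float64 {
+	rewards := 0.0
+	previous := a
+	for _, p := range slashes {
+		rewards += (R(p) - R(previous)) * stake
+		stake *= F(p)
+		previous = p
+	}
+	rewards += (R(b) - R(previous)) * stake
+	return rewards
+}
+
+func main() {
+	// Cumulative rewards-per-token grow by 1 per period; one 50% slash at period 5.
+	R := func(p uint64) float64 { return float64(p) }
+	F := func(p uint64) float64 { return 0.5 }
+	// Stake of 100 delegated from period 0 to 10.
+	fmt.Println(f1Rewards(100, 0, 10, []uint64{5}, R, F)) // 500 + 250 = 750
+}
+```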
+
+Response:
+
+```protobuf reference
+https://github.com/cosmos/cosmos-sdk/blob/v0.47.0-rc1/proto/cosmos/distribution/v1beta1/tx.proto#L66-L77
+```
+
+### WithdrawValidatorCommission
+
+The validator can send the WithdrawValidatorCommission message to withdraw their accumulated commission.
+The commission is calculated in every block during `BeginBlock`, so no iteration is required to withdraw.
+The amount withdrawn is deducted from the `ValidatorOutstandingRewards` variable for the validator.
+Only integer amounts can be sent. If the accumulated rewards have decimals, the amount is truncated before the withdrawal is sent, and the remainder is left to be withdrawn later.
+
+### FundCommunityPool
+
+:::warning
+
+This handler will return an error if an `ExternalCommunityPool` is used.
+
+:::
+
+This message sends coins directly from the sender to the community pool.
+
+The transaction fails if the amount cannot be transferred from the sender to the distribution module account.
+
+```go
+func (k Keeper) FundCommunityPool(ctx context.Context, amount sdk.Coins, sender sdk.AccAddress) error {
+ if err := k.bankKeeper.SendCoinsFromAccountToModule(ctx, sender, types.ModuleName, amount); err != nil {
+ return err
+ }
+
+ feePool, err := k.FeePool.Get(ctx)
+ if err != nil {
+ return err
+ }
+
+ feePool.CommunityPool = feePool.CommunityPool.Add(sdk.NewDecCoinsFromCoins(amount...)...)
+
+ if err := k.FeePool.Set(ctx, feePool); err != nil {
+ return err
+ }
+
+ return nil
+}
+```
+
+### Common distribution operations
+
+These operations take place during many different messages.
+
+#### Initialize delegation
+
+Each time a delegation is changed, the rewards are withdrawn and the delegation is reinitialized.
+Initializing a delegation increments the validator period and keeps track of the starting period of the delegation.
+
+```go
+// initialize starting info for a new delegation
+func (k Keeper) initializeDelegation(ctx context.Context, val sdk.ValAddress, del sdk.AccAddress) {
+ // period has already been incremented - we want to store the period ended by this delegation action
+ previousPeriod := k.GetValidatorCurrentRewards(ctx, val).Period - 1
+
+ // increment reference count for the period we're going to track
+ k.incrementReferenceCount(ctx, val, previousPeriod)
+
+ validator := k.stakingKeeper.Validator(ctx, val)
+ delegation := k.stakingKeeper.Delegation(ctx, del, val)
+
+ // calculate delegation stake in tokens
+ // we don't store directly, so multiply delegation shares * (tokens per share)
+ // note: necessary to truncate so we don't allow withdrawing more rewards than owed
+ stake := validator.TokensFromSharesTruncated(delegation.GetShares())
+ k.SetDelegatorStartingInfo(ctx, val, del, types.NewDelegatorStartingInfo(previousPeriod, stake, uint64(ctx.BlockHeight())))
+}
+```
+
+### MsgUpdateParams
+
+Distribution module params can be updated through `MsgUpdateParams`, which is done via a governance proposal. The signer is always the gov module account address.
+
+```protobuf reference
+https://github.com/cosmos/cosmos-sdk/blob/v0.47.0-rc1/proto/cosmos/distribution/v1beta1/tx.proto#L133-L147
+```
+
+The message handling can fail if:
+
+* the signer is not the gov module account address.
+
+## Hooks
+
+Available hooks that can be called by and from this module.
+
+### Create or modify delegation distribution
+
+* triggered-by: `staking.MsgDelegate`, `staking.MsgBeginRedelegate`, `staking.MsgUndelegate`
+
+#### Before
+
+* The delegation rewards are withdrawn to the withdraw address of the delegator.
+ The rewards include the current period and exclude the starting period.
+* The validator period is incremented.
+ The validator period is incremented because the validator's power and share distribution might have changed.
+* The reference count for the delegator's starting period is decremented.
+
+#### After
+
+The starting height of the delegation is set to the previous period.
+Because of the `Before`-hook, this period is the last period for which the delegator was rewarded.
+
+### Validator created
+
+* triggered-by: `staking.MsgCreateValidator`
+
+When a validator is created, the following validator variables are initialized:
+
+* Historical rewards
+* Current accumulated rewards
+* Accumulated commission
+* Total outstanding rewards
+* Period
+
+By default, all values are set to `0`, except the period, which is set to `1`.
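+
+As a rough sketch (an illustrative struct, not the SDK's actual types), the initialization looks like:
+
+```go
+package main
+
+import "fmt"
+
+// validatorDistribution mirrors the documented defaults: all reward
+// accumulators start at zero and the period counter starts at 1.
+type validatorDistribution struct {
+	HistoricalRewards     float64
+	CurrentRewards        float64
+	AccumulatedCommission float64
+	OutstandingRewards    float64
+	Period                uint64
+}
+
+func newValidatorDistribution() validatorDistribution {
+	return validatorDistribution{Period: 1}
+}
+
+func main() {
+	v := newValidatorDistribution()
+	fmt.Println(v.Period, v.CurrentRewards) // 1 0
+}
+```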
+
+### Validator removed
+
+* triggered-by: `staking.RemoveValidator`
+
+Outstanding commission is sent to the validator's self-delegation withdrawal address.
+Remaining delegator rewards get sent to the community fee pool.
+
+Note: The validator gets removed only when it has no remaining delegations.
+At that time, all outstanding delegator rewards will have been withdrawn.
+Any remaining rewards are dust amounts.
+
+### Validator is slashed
+
+* triggered-by: `staking.Slash`
+* The current validator period reference count is incremented.
+ The reference count is incremented because the slash event has created a reference to it.
+* The validator period is incremented.
+* The slash event is stored for later use.
+ The slash event will be referenced when calculating delegator rewards.
+
+## Events
+
+The distribution module emits the following events:
+
+### BeginBlocker
+
+| Type | Attribute Key | Attribute Value |
+|-----------------|---------------|--------------------|
+| proposer_reward | validator | {validatorAddress} |
+| proposer_reward | reward | {proposerReward} |
+| commission | amount | {commissionAmount} |
+| commission | validator | {validatorAddress} |
+| rewards | amount | {rewardAmount} |
+| rewards | validator | {validatorAddress} |
+
+### Handlers
+
+#### MsgSetWithdrawAddress
+
+| Type | Attribute Key | Attribute Value |
+|----------------------|------------------|----------------------|
+| set_withdraw_address | withdraw_address | {withdrawAddress} |
+| message | module | distribution |
+| message | action | set_withdraw_address |
+| message | sender | {senderAddress} |
+
+#### MsgWithdrawDelegatorReward
+
+| Type | Attribute Key | Attribute Value |
+|---------|---------------|---------------------------|
+| withdraw_rewards | amount | {rewardAmount} |
+| withdraw_rewards | validator | {validatorAddress} |
+| message | module | distribution |
+| message | action | withdraw_delegator_reward |
+| message | sender | {senderAddress} |
+
+#### MsgWithdrawValidatorCommission
+
+| Type | Attribute Key | Attribute Value |
+|------------|---------------|-------------------------------|
+| withdraw_commission | amount | {commissionAmount} |
+| message | module | distribution |
+| message | action | withdraw_validator_commission |
+| message | sender | {senderAddress} |
+
+## Parameters
+
+The distribution module contains the following parameters:
+
+| Key | Type | Example |
+| ------------------- | ------------ | -------------------------- |
+| communitytax | string (dec) | "0.020000000000000000" [0] |
+| withdrawaddrenabled | bool | true |
+
+* [0] `communitytax` must be positive and cannot exceed 1.00.
+* `baseproposerreward` and `bonusproposerreward` were deprecated in v0.47 and are no longer used.
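+
+A sketch of the `communitytax` bounds check described above (a hypothetical helper, not the SDK's actual validation code):
+
+```go
+package main
+
+import (
+	"errors"
+	"fmt"
+	"strconv"
+)
+
+// validateCommunityTax checks the documented bounds: the decimal must be
+// non-negative and cannot exceed 1.00.
+func validateCommunityTax(dec string) error {
+	v, err := strconv.ParseFloat(dec, 64)
+	if err != nil {
+		return err
+	}
+	if v < 0 {
+		return errors.New("community tax must be positive")
+	}
+	if v > 1 {
+		return errors.New("community tax must not exceed 1.00")
+	}
+	return nil
+}
+
+func main() {
+	fmt.Println(validateCommunityTax("0.020000000000000000")) // <nil>
+	fmt.Println(validateCommunityTax("1.5"))                  // error
+}
+```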
+
+:::note
+The reserve pool is the pool of collected funds for use by governance taken via the `CommunityTax`.
+Currently with the Cosmos SDK, tokens collected by the CommunityTax are accounted for but unspendable.
+:::
+
+## Client
+
+### CLI
+
+A user can query and interact with the `distribution` module using the CLI.
+
+#### Query
+
+The `query` commands allow users to query `distribution` state.
+
+```shell
+simd query distribution --help
+```
+
+##### commission
+
+The `commission` command allows users to query validator commission rewards by address.
+
+```shell
+simd query distribution commission [address] [flags]
+```
+
+Example:
+
+```shell
+simd query distribution commission cosmosvaloper1...
+```
+
+Example Output:
+
+```yml
+commission:
+- amount: "1000000.000000000000000000"
+ denom: stake
+```
+
+##### community-pool
+
+The `community-pool` command allows users to query all coin balances within the community pool.
+
+```shell
+simd query distribution community-pool [flags]
+```
+
+Example:
+
+```shell
+simd query distribution community-pool
+```
+
+Example Output:
+
+```yml
+pool:
+- amount: "1000000.000000000000000000"
+ denom: stake
+```
+
+##### params
+
+The `params` command allows users to query the parameters of the `distribution` module.
+
+```shell
+simd query distribution params [flags]
+```
+
+Example:
+
+```shell
+simd query distribution params
+```
+
+Example Output:
+
+```yml
+base_proposer_reward: "0.000000000000000000"
+bonus_proposer_reward: "0.000000000000000000"
+community_tax: "0.020000000000000000"
+withdraw_addr_enabled: true
+```
+
+##### rewards
+
+The `rewards` command allows users to query delegator rewards. Users can optionally include the validator address to query rewards earned from a specific validator.
+
+```shell
+simd query distribution rewards [delegator-addr] [validator-addr] [flags]
+```
+
+Example:
+
+```shell
+simd query distribution rewards cosmos1...
+```
+
+Example Output:
+
+```yml
+rewards:
+- reward:
+ - amount: "1000000.000000000000000000"
+ denom: stake
+ validator_address: cosmosvaloper1..
+total:
+- amount: "1000000.000000000000000000"
+ denom: stake
+```
+
+##### slashes
+
+The `slashes` command allows users to query all slashes for a given block range.
+
+```shell
+simd query distribution slashes [validator] [start-height] [end-height] [flags]
+```
+
+Example:
+
+```shell
+simd query distribution slashes cosmosvaloper1... 1 1000
+```
+
+Example Output:
+
+```yml
+pagination:
+ next_key: null
+ total: "0"
+slashes:
+- validator_period: 20
+ fraction: "0.009999999999999999"
+```
+
+##### validator-outstanding-rewards
+
+The `validator-outstanding-rewards` command allows users to query all outstanding (un-withdrawn) rewards for a validator and all their delegations.
+
+```shell
+simd query distribution validator-outstanding-rewards [validator] [flags]
+```
+
+Example:
+
+```shell
+simd query distribution validator-outstanding-rewards cosmosvaloper1...
+```
+
+Example Output:
+
+```yml
+rewards:
+- amount: "1000000.000000000000000000"
+ denom: stake
+```
+
+##### validator-distribution-info
+
+The `validator-distribution-info` command allows users to query validator commission and self-delegation rewards for a validator.
+
+```shell
+simd query distribution validator-distribution-info cosmosvaloper1...
+```
+
+Example Output:
+
+```yml
+commission:
+- amount: "100000.000000000000000000"
+ denom: stake
+operator_address: cosmosvaloper1...
+self_bond_rewards:
+- amount: "100000.000000000000000000"
+ denom: stake
+```
+
+#### Transactions
+
+The `tx` commands allow users to interact with the `distribution` module.
+
+```shell
+simd tx distribution --help
+```
+
+##### fund-community-pool
+
+The `fund-community-pool` command allows users to send funds to the community pool.
+
+```shell
+simd tx distribution fund-community-pool [amount] [flags]
+```
+
+Example:
+
+```shell
+simd tx distribution fund-community-pool 100stake --from cosmos1...
+```
+
+##### set-withdraw-addr
+
+The `set-withdraw-addr` command allows users to set the withdraw address for rewards associated with a delegator address.
+
+```shell
+simd tx distribution set-withdraw-addr [withdraw-addr] [flags]
+```
+
+Example:
+
+```shell
+simd tx distribution set-withdraw-addr cosmos1... --from cosmos1...
+```
+
+##### withdraw-all-rewards
+
+The `withdraw-all-rewards` command allows users to withdraw all rewards for a delegator.
+
+```shell
+simd tx distribution withdraw-all-rewards [flags]
+```
+
+Example:
+
+```shell
+simd tx distribution withdraw-all-rewards --from cosmos1...
+```
+
+##### withdraw-rewards
+
+The `withdraw-rewards` command allows users to withdraw all rewards from a given delegation address,
+and optionally withdraw validator commission if the delegation address given is a validator operator and the user provides the `--commission` flag.
+
+```shell
+simd tx distribution withdraw-rewards [validator-addr] [flags]
+```
+
+Example:
+
+```shell
+simd tx distribution withdraw-rewards cosmosvaloper1... --from cosmos1... --commission
+```
+
+### gRPC
+
+A user can query the `distribution` module using gRPC endpoints.
+
+#### Params
+
+The `Params` endpoint allows users to query parameters of the `distribution` module.
+
+Example:
+
+```shell
+grpcurl -plaintext \
+ localhost:9090 \
+ cosmos.distribution.v1beta1.Query/Params
+```
+
+Example Output:
+
+```json
+{
+ "params": {
+ "communityTax": "20000000000000000",
+ "baseProposerReward": "00000000000000000",
+ "bonusProposerReward": "00000000000000000",
+ "withdrawAddrEnabled": true
+ }
+}
+```
+
+#### ValidatorDistributionInfo
+
+The `ValidatorDistributionInfo` endpoint allows users to query validator commission and self-delegation rewards for a validator.
+
+Example:
+
+```shell
+grpcurl -plaintext \
+ -d '{"validator_address":"cosmosvalop1..."}' \
+ localhost:9090 \
+ cosmos.distribution.v1beta1.Query/ValidatorDistributionInfo
+```
+
+Example Output:
+
+```json
+{
+ "commission": {
+ "commission": [
+ {
+ "denom": "stake",
+ "amount": "1000000000000000"
+ }
+ ]
+ },
+ "self_bond_rewards": [
+ {
+ "denom": "stake",
+ "amount": "1000000000000000"
+ }
+ ],
+ "validator_address": "cosmosvalop1..."
+}
+```
+
+#### ValidatorOutstandingRewards
+
+The `ValidatorOutstandingRewards` endpoint allows users to query rewards of a validator address.
+
+Example:
+
+```shell
+grpcurl -plaintext \
+ -d '{"validator_address":"cosmosvalop1.."}' \
+ localhost:9090 \
+ cosmos.distribution.v1beta1.Query/ValidatorOutstandingRewards
+```
+
+Example Output:
+
+```json
+{
+ "rewards": {
+ "rewards": [
+ {
+ "denom": "stake",
+ "amount": "1000000000000000"
+ }
+ ]
+ }
+}
+```
+
+#### ValidatorCommission
+
+The `ValidatorCommission` endpoint allows users to query accumulated commission for a validator.
+
+Example:
+
+```shell
+grpcurl -plaintext \
+ -d '{"validator_address":"cosmosvalop1.."}' \
+ localhost:9090 \
+ cosmos.distribution.v1beta1.Query/ValidatorCommission
+```
+
+Example Output:
+
+```json
+{
+ "commission": {
+ "commission": [
+ {
+ "denom": "stake",
+ "amount": "1000000000000000"
+ }
+ ]
+ }
+}
+```
+
+#### ValidatorSlashes
+
+The `ValidatorSlashes` endpoint allows users to query slash events of a validator.
+
+Example:
+
+```shell
+grpcurl -plaintext \
+ -d '{"validator_address":"cosmosvalop1.."}' \
+ localhost:9090 \
+ cosmos.distribution.v1beta1.Query/ValidatorSlashes
+```
+
+Example Output:
+
+```json
+{
+ "slashes": [
+ {
+ "validator_period": "20",
+ "fraction": "0.009999999999999999"
+ }
+ ],
+ "pagination": {
+ "total": "1"
+ }
+}
+```
+
+#### DelegationRewards
+
+The `DelegationRewards` endpoint allows users to query the total rewards accrued by a delegation.
+
+Example:
+
+```shell
+grpcurl -plaintext \
+ -d '{"delegator_address":"cosmos1...","validator_address":"cosmosvalop1..."}' \
+ localhost:9090 \
+ cosmos.distribution.v1beta1.Query/DelegationRewards
+```
+
+Example Output:
+
+```json
+{
+ "rewards": [
+ {
+ "denom": "stake",
+ "amount": "1000000000000000"
+ }
+ ]
+}
+```
+
+#### DelegationTotalRewards
+
+The `DelegationTotalRewards` endpoint allows users to query the total rewards accrued by each validator.
+
+Example:
+
+```shell
+grpcurl -plaintext \
+ -d '{"delegator_address":"cosmos1..."}' \
+ localhost:9090 \
+ cosmos.distribution.v1beta1.Query/DelegationTotalRewards
+```
+
+Example Output:
+
+```json
+{
+ "rewards": [
+ {
+ "validatorAddress": "cosmosvaloper1...",
+ "reward": [
+ {
+ "denom": "stake",
+ "amount": "1000000000000000"
+ }
+ ]
+ }
+ ],
+ "total": [
+ {
+ "denom": "stake",
+ "amount": "1000000000000000"
+ }
+ ]
+}
+```
+
+#### DelegatorValidators
+
+The `DelegatorValidators` endpoint allows users to query all validators for a given delegator.
+
+Example:
+
+```shell
+grpcurl -plaintext \
+ -d '{"delegator_address":"cosmos1..."}' \
+ localhost:9090 \
+ cosmos.distribution.v1beta1.Query/DelegatorValidators
+```
+
+Example Output:
+
+```json
+{
+ "validators": ["cosmosvaloper1..."]
+}
+```
+
+#### DelegatorWithdrawAddress
+
+The `DelegatorWithdrawAddress` endpoint allows users to query the withdraw address of a delegator.
+
+Example:
+
+```shell
+grpcurl -plaintext \
+ -d '{"delegator_address":"cosmos1..."}' \
+ localhost:9090 \
+ cosmos.distribution.v1beta1.Query/DelegatorWithdrawAddress
+```
+
+Example Output:
+
+```json
+{
+ "withdrawAddress": "cosmos1..."
+}
+```
+
+#### CommunityPool
+
+The `CommunityPool` endpoint allows users to query the community pool coins.
+
+Example:
+
+```shell
+grpcurl -plaintext \
+ localhost:9090 \
+ cosmos.distribution.v1beta1.Query/CommunityPool
+```
+
+Example Output:
+
+```json
+{
+ "pool": [
+ {
+ "denom": "stake",
+ "amount": "1000000000000000000"
+ }
+ ]
+}
+```
diff --git a/copy-of-sdk-docs/build/modules/epochs/README.md b/copy-of-sdk-docs/build/modules/epochs/README.md
new file mode 100644
index 00000000..d5697066
--- /dev/null
+++ b/copy-of-sdk-docs/build/modules/epochs/README.md
@@ -0,0 +1,177 @@
+---
+sidebar_position: 1
+---
+
+# `x/epochs`
+
+## Abstract
+
+Often in the SDK, we would like to run certain code every so often. The
+purpose of the `epochs` module is to allow other modules to register to be
+signaled once every period. For example, another module can specify that
+it wants to execute code once a week, starting at UTC-time = x.
+`epochs` exposes a generalized epoch interface to other modules so that
+they can easily be signaled upon such events.
+
+## Contents
+
+1. **[Concepts](#concepts)**
+2. **[State](#state)**
+3. **[Events](#events)**
+4. **[Keeper](#keepers)**
+5. **[Hooks](#hooks)**
+6. **[Queries](#queries)**
+
+## Concepts
+
+The epochs module defines on-chain timers that execute at fixed time intervals.
+Other SDK modules can then register logic to be executed at the timer ticks.
+We refer to the period in between two timer ticks as an "epoch".
+
+Every timer has a unique identifier.
+Every epoch will have a start time, and an end time, where `end time = start time + timer interval`.
+On mainnet, we only utilize one identifier, with a time interval of `one day`.
+
+The timer will tick at the first block whose block time is greater than the timer end time,
+and set the start as the prior timer end time. (Notably, it's not set to the block time!)
+This means that if the chain has been down for a while, you will get one timer tick per block,
+until the timer has caught up.
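+
+The catch-up behavior can be sketched with plain `time` values (a hypothetical helper, not the module's actual code). The new start is the prior end time, not the block time, so a chain that was down catches up one epoch per block:
+
+```go
+package main
+
+import (
+	"fmt"
+	"time"
+)
+
+// tick advances the epoch if the block time is past the current epoch's end,
+// setting the new start to the prior end time rather than the block time.
+func tick(start time.Time, interval time.Duration, blockTime time.Time) (time.Time, bool) {
+	end := start.Add(interval)
+	if blockTime.After(end) {
+		return end, true
+	}
+	return start, false
+}
+
+func main() {
+	start := time.Date(2021, 6, 18, 17, 0, 0, 0, time.UTC)
+	day := 24 * time.Hour
+	// A block arrives three days later: three consecutive blocks each tick once.
+	blockTime := start.Add(3*day + time.Hour)
+	for i := 0; i < 3; i++ {
+		var ticked bool
+		start, ticked = tick(start, day, blockTime)
+		fmt.Println(start.Format(time.RFC3339), ticked)
+	}
+}
+```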
+
+## State
+
+The Epochs module keeps a single `EpochInfo` per identifier.
+This contains the current state of the timer with the corresponding identifier.
+Its fields are modified at every timer tick.
+EpochInfos are initialized as part of genesis initialization or upgrade logic,
+and are only modified on begin blockers.
+
+## Events
+
+The `epochs` module emits the following events:
+
+### BeginBlocker
+
+| Type | Attribute Key | Attribute Value |
+| ----------- | ------------- | --------------- |
+| epoch_start | epoch_number | {epoch_number} |
+| epoch_start | start_time | {start_time} |
+
+### EndBlocker
+
+| Type | Attribute Key | Attribute Value |
+| --------- | ------------- | --------------- |
+| epoch_end | epoch_number | {epoch_number} |
+
+## Keepers
+
+### Keeper functions
+
+The epochs module keeper provides utility functions to manage epochs.
+
+## Hooks
+
+```go
+ // the first block whose timestamp is after the duration is counted as the end of the epoch
+ AfterEpochEnd(ctx sdk.Context, epochIdentifier string, epochNumber int64)
+ // new epoch is next block of epoch end block
+ BeforeEpochStart(ctx sdk.Context, epochIdentifier string, epochNumber int64)
+```
+
+### How modules receive hooks
+
+In their hook receiver functions, other modules need to filter on
+`epochIdentifier` and only execute their logic for specific identifiers.
+The identifiers to act on can be stored in the other module's `Params` so
+that they can be modified by governance.
+
+The standard pattern looks like this:
+
+```golang
+func (k MyModuleKeeper) AfterEpochEnd(ctx sdk.Context, epochIdentifier string, epochNumber int64) {
+ params := k.GetParams(ctx)
+ if epochIdentifier == params.DistrEpochIdentifier {
+ // my logic
+ }
+}
+```
+
+### Panic isolation
+
+If a given epoch hook panics, its state update is reverted, but we keep
+proceeding through the remaining hooks. This allows more advanced epoch
+logic to be used, without concern over state machine halting, or halting
+subsequent modules.
+
+This does mean that if there is behavior you expect from a prior epoch
+hook, and that epoch hook reverted, your hook may also have an issue. So
+do keep in mind "what if a prior hook didn't get executed" in the safety
+checks you consider for a new epoch hook.
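+
+The isolation can be sketched with `defer`/`recover` around each hook (a simplified model; the real module additionally discards the panicking hook's state writes via a cached context):
+
+```go
+package main
+
+import "fmt"
+
+type hook func()
+
+// runHooks executes every hook, recovering from panics so one failing hook
+// cannot prevent the remaining hooks from running.
+func runHooks(hooks []hook) (failures int) {
+	for _, h := range hooks {
+		func() {
+			defer func() {
+				if r := recover(); r != nil {
+					failures++ // record the failure, keep going
+				}
+			}()
+			h()
+		}()
+	}
+	return failures
+}
+
+func main() {
+	ran := 0
+	hooks := []hook{
+		func() { ran++ },
+		func() { panic("bad hook") },
+		func() { ran++ },
+	}
+	fmt.Println(runHooks(hooks), ran) // 1 2
+}
+```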
+
+## Queries
+
+The Epochs module provides the following queries to check the module's state.
+
+```protobuf
+service Query {
+ // EpochInfos provide running epochInfos
+ rpc EpochInfos(QueryEpochsInfoRequest) returns (QueryEpochsInfoResponse) {}
+ // CurrentEpoch provide current epoch of specified identifier
+ rpc CurrentEpoch(QueryCurrentEpochRequest) returns (QueryCurrentEpochResponse) {}
+}
+```
+
+### Epoch Infos
+
+Query the currently running epochInfos
+
+```sh
+ query epochs epoch-infos
+```
+
+:::details Example
+
+An example output:
+
+```sh
+epochs:
+- current_epoch: "183"
+ current_epoch_start_height: "2438409"
+ current_epoch_start_time: "2021-12-18T17:16:09.898160996Z"
+ duration: 86400s
+ epoch_counting_started: true
+ identifier: day
+ start_time: "2021-06-18T17:00:00Z"
+- current_epoch: "26"
+ current_epoch_start_height: "2424854"
+ current_epoch_start_time: "2021-12-17T17:02:07.229632445Z"
+ duration: 604800s
+ epoch_counting_started: true
+ identifier: week
+ start_time: "2021-06-18T17:00:00Z"
+```
+
+:::
+
+### Current Epoch
+
+Query the current epoch by the specified identifier
+
+```sh
+ query epochs current-epoch [identifier]
+```
+
+:::details Example
+
+Query the current `day` epoch:
+
+```sh
+ query epochs current-epoch day
+```
+
+Which in this example outputs:
+
+```sh
+current_epoch: "183"
+```
+
+:::
diff --git a/copy-of-sdk-docs/build/modules/evidence/README.md b/copy-of-sdk-docs/build/modules/evidence/README.md
new file mode 100644
index 00000000..aba2e10e
--- /dev/null
+++ b/copy-of-sdk-docs/build/modules/evidence/README.md
@@ -0,0 +1,440 @@
+---
+sidebar_position: 1
+---
+
+# `x/evidence`
+
+* [Concepts](#concepts)
+* [State](#state)
+* [Messages](#messages)
+* [Events](#events)
+* [Parameters](#parameters)
+* [BeginBlock](#beginblock)
+* [Client](#client)
+ * [CLI](#cli)
+ * [REST](#rest)
+ * [gRPC](#grpc)
+
+## Abstract
+
+`x/evidence` is an implementation of a Cosmos SDK module, per [ADR 009](https://github.com/cosmos/cosmos-sdk/blob/main/docs/architecture/adr-009-evidence-module.md),
+that allows for the submission and handling of arbitrary evidence of misbehavior such
+as equivocation and counterfactual signing.
+
+The evidence module differs from standard evidence handling, which typically expects the
+underlying consensus engine, e.g. CometBFT, to automatically submit evidence when
+it is discovered, by allowing clients and foreign chains to submit more complex evidence
+directly.
+
+All concrete evidence types must implement the `Evidence` interface contract. Submitted
+`Evidence` is first routed through the evidence module's `Router` in which it attempts
+to find a corresponding registered `Handler` for that specific `Evidence` type.
+Each `Evidence` type must have a `Handler` registered with the evidence module's
+keeper in order for it to be successfully routed and executed.
+
+Each corresponding handler must also fulfill the `Handler` interface contract. The
+`Handler` for a given `Evidence` type can perform any arbitrary state transitions
+such as slashing, jailing, and tombstoning.
+
+## Concepts
+
+### Evidence
+
+Any concrete type of evidence submitted to the `x/evidence` module must fulfill the
+`Evidence` contract outlined below. Not all concrete types of evidence will fulfill
+this contract in the same way and some data may be entirely irrelevant to certain
+types of evidence. An additional `ValidatorEvidence`, which extends `Evidence`,
+has also been created to define a contract for evidence against malicious validators.
+
+```go
+// Evidence defines the contract which concrete evidence types of misbehavior
+// must implement.
+type Evidence interface {
+ proto.Message
+
+ Route() string
+ String() string
+ Hash() []byte
+ ValidateBasic() error
+
+ // Height at which the infraction occurred
+ GetHeight() int64
+}
+
+// ValidatorEvidence extends Evidence interface to define contract
+// for evidence against malicious validators
+type ValidatorEvidence interface {
+ Evidence
+
+ // The consensus address of the malicious validator at time of infraction
+ GetConsensusAddress() sdk.ConsAddress
+
+ // The total power of the malicious validator at time of infraction
+ GetValidatorPower() int64
+
+ // The total validator set power at time of infraction
+ GetTotalPower() int64
+}
+```
+
+### Registration & Handling
+
+The `x/evidence` module must first know about all types of evidence it is expected
+to handle. This is accomplished by registering the `Route` method in the `Evidence`
+contract with what is known as a `Router` (defined below). The `Router` accepts
+`Evidence` and attempts to find the corresponding `Handler` for the `Evidence`
+via the `Route` method.
+
+```go
+type Router interface {
+ AddRoute(r string, h Handler) Router
+ HasRoute(r string) bool
+ GetRoute(path string) Handler
+ Seal()
+ Sealed() bool
+}
+```
+
+The `Handler` (defined below) is responsible for executing the entirety of the
+business logic for handling `Evidence`. This typically includes validating the
+evidence, both stateless checks via `ValidateBasic` and stateful checks via any
+keepers provided to the `Handler`. In addition, the `Handler` may also perform
+capabilities such as slashing and jailing a validator. All `Evidence` handled
+by the `Handler` should be persisted.
+
+```go
+// Handler defines an agnostic Evidence handler. The handler is responsible
+// for executing all corresponding business logic necessary for verifying the
+// evidence as valid. In addition, the Handler may execute any necessary
+// slashing and potential jailing.
+type Handler func(context.Context, Evidence) error
+```
+
+
+## State
+
+Currently the `x/evidence` module only stores valid submitted `Evidence` in state.
+The evidence state is also stored and exported in the `x/evidence` module's `GenesisState`.
+
+```protobuf
+// GenesisState defines the evidence module's genesis state.
+message GenesisState {
+ // evidence defines all the evidence at genesis.
+ repeated google.protobuf.Any evidence = 1;
+}
+
+```
+
+All `Evidence` is retrieved and stored via a prefix `KVStore` using prefix `0x00` (`KeyPrefixEvidence`).
+
+
+## Messages
+
+### MsgSubmitEvidence
+
+Evidence is submitted through a `MsgSubmitEvidence` message:
+
+```protobuf
+// MsgSubmitEvidence represents a message that supports submitting arbitrary
+// Evidence of misbehavior such as equivocation or counterfactual signing.
+message MsgSubmitEvidence {
+ string submitter = 1;
+ google.protobuf.Any evidence = 2;
+}
+```
+
+Note, the `Evidence` of a `MsgSubmitEvidence` message must have a corresponding
+`Handler` registered with the `x/evidence` module's `Router` in order to be processed
+and routed correctly.
+
+Given the `Evidence` is registered with a corresponding `Handler`, it is processed
+as follows:
+
+```go
+func SubmitEvidence(ctx Context, evidence Evidence) error {
+ if _, err := GetEvidence(ctx, evidence.Hash()); err == nil {
+ return errorsmod.Wrap(types.ErrEvidenceExists, strings.ToUpper(hex.EncodeToString(evidence.Hash())))
+ }
+ if !router.HasRoute(evidence.Route()) {
+ return errorsmod.Wrap(types.ErrNoEvidenceHandlerExists, evidence.Route())
+ }
+
+ handler := router.GetRoute(evidence.Route())
+ if err := handler(ctx, evidence); err != nil {
+ return errorsmod.Wrap(types.ErrInvalidEvidence, err.Error())
+ }
+
+ ctx.EventManager().EmitEvent(
+ sdk.NewEvent(
+ types.EventTypeSubmitEvidence,
+ sdk.NewAttribute(types.AttributeKeyEvidenceHash, strings.ToUpper(hex.EncodeToString(evidence.Hash()))),
+ ),
+ )
+
+ SetEvidence(ctx, evidence)
+ return nil
+}
+```
+
+First, there must not already exist valid submitted `Evidence` of the exact same
+type. Secondly, the `Evidence` is routed to the `Handler` and executed. Finally,
+if there is no error in handling the `Evidence`, an event is emitted and it is persisted to state.
+
+
+## Events
+
+The `x/evidence` module emits the following events:
+
+### Handlers
+
+#### MsgSubmitEvidence
+
+| Type | Attribute Key | Attribute Value |
+| --------------- | ------------- | --------------- |
+| submit_evidence | evidence_hash | {evidenceHash} |
+| message | module | evidence |
+| message | sender | {senderAddress} |
+| message | action | submit_evidence |
+
+
+## Parameters
+
+The evidence module does not contain any parameters.
+
+
+## BeginBlock
+
+### Evidence Handling
+
+CometBFT blocks can include
+[Evidence](https://github.com/cometbft/cometbft/blob/main/spec/abci/abci%2B%2B_basic_concepts.md#evidence) that indicates if a validator committed malicious behavior. The relevant information is forwarded to the application as ABCI Evidence in `abci.RequestBeginBlock` so that the validator can be punished accordingly.
+
+#### Equivocation
+
+The Cosmos SDK handles two types of evidence inside the ABCI `BeginBlock`:
+
+* `DuplicateVoteEvidence`,
+* `LightClientAttackEvidence`.
+
+The evidence module handles these two evidence types the same way. First, the Cosmos SDK converts the CometBFT concrete evidence type to an SDK `Evidence` interface using `Equivocation` as the concrete type.
+
+```protobuf reference
+https://github.com/cosmos/cosmos-sdk/tree/release/v0.50.x/proto/cosmos/evidence/v1beta1/evidence.proto#L12-L32
+```
+
+For some `Equivocation` submitted in `block` to be valid, it must satisfy:
+
+`Evidence.Timestamp >= block.Timestamp - MaxEvidenceAge`
+
+Where:
+
+* `Evidence.Timestamp` is the timestamp in the block at height `Evidence.Height`
+* `block.Timestamp` is the current block timestamp.
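+
+The condition can be checked directly with `time` values (an illustrative helper; consensus parameters may also bound evidence age in blocks):
+
+```go
+package main
+
+import (
+	"fmt"
+	"time"
+)
+
+// isEvidenceValid reports whether the evidence is young enough to act on:
+// Evidence.Timestamp >= block.Timestamp - MaxEvidenceAge.
+func isEvidenceValid(evidenceTime, blockTime time.Time, maxAge time.Duration) bool {
+	return !evidenceTime.Before(blockTime.Add(-maxAge))
+}
+
+func main() {
+	now := time.Date(2021, 10, 20, 16, 0, 0, 0, time.UTC)
+	maxAge := 48 * time.Hour
+	fmt.Println(isEvidenceValid(now.Add(-24*time.Hour), now, maxAge)) // true
+	fmt.Println(isEvidenceValid(now.Add(-72*time.Hour), now, maxAge)) // false
+}
+```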
+
+If valid `Equivocation` evidence is included in a block, the validator's stake is
+reduced (slashed) by `SlashFractionDoubleSign` as defined by the `x/slashing` module
+of what their stake was when the infraction occurred, rather than when the evidence was discovered.
+We want to "follow the stake", i.e., the stake that contributed to the infraction
+should be slashed, even if it has since been redelegated or started unbonding.
+
+In addition, the validator is permanently jailed and tombstoned to make it impossible for that
+validator to ever re-enter the validator set.
+
+The `Equivocation` evidence is handled as follows:
+
+```go reference
+https://github.com/cosmos/cosmos-sdk/tree/release/v0.50.x/x/evidence/keeper/infraction.go#L26-L140
+```
+
+**Note:** The slashing, jailing, and tombstoning calls are delegated through the `x/slashing` module
+that emits informative events and finally delegates calls to the `x/staking` module. See documentation
+on slashing and jailing in [State Transitions](../staking/README.md#state-transitions).
+
+## Client
+
+### CLI
+
+A user can query and interact with the `evidence` module using the CLI.
+
+#### Query
+
+The `query` commands allow users to query `evidence` state.
+
+```bash
+simd query evidence --help
+```
+
+#### evidence
+
+The `evidence` command allows users to list all evidence or evidence by hash.
+
+Usage:
+
+```bash
+simd query evidence [flags]
+```
+
+To query evidence by hash:
+
+Example:
+
+```bash
+simd query evidence evidence "DF0C23E8634E480F84B9D5674A7CDC9816466DEC28A3358F73260F68D28D7660"
+```
+
+Example Output:
+
+```yml
+evidence:
+ consensus_address: cosmosvalcons1ntk8eualewuprz0gamh8hnvcem2nrcdsgz563h
+ height: 11
+ power: 100
+ time: "2021-10-20T16:08:38.194017624Z"
+```
+
+To get all evidence:
+
+Example:
+
+```bash
+simd query evidence list
+```
+
+Example Output:
+
+```yml
+evidence:
+ consensus_address: cosmosvalcons1ntk8eualewuprz0gamh8hnvcem2nrcdsgz563h
+ height: 11
+ power: 100
+ time: "2021-10-20T16:08:38.194017624Z"
+pagination:
+ next_key: null
+ total: "1"
+```
+
+### REST
+
+A user can query the `evidence` module using REST endpoints.
+
+#### Evidence
+
+Get evidence by hash
+
+```bash
+/cosmos/evidence/v1beta1/evidence/{hash}
+```
+
+Example:
+
+```bash
+curl -X GET "http://localhost:1317/cosmos/evidence/v1beta1/evidence/DF0C23E8634E480F84B9D5674A7CDC9816466DEC28A3358F73260F68D28D7660"
+```
+
+Example Output:
+
+```json
+{
+ "evidence": {
+ "consensus_address": "cosmosvalcons1ntk8eualewuprz0gamh8hnvcem2nrcdsgz563h",
+ "height": "11",
+ "power": "100",
+ "time": "2021-10-20T16:08:38.194017624Z"
+ }
+}
+```
+
+#### All evidence
+
+Get all evidence
+
+```bash
+/cosmos/evidence/v1beta1/evidence
+```
+
+Example:
+
+```bash
+curl -X GET "http://localhost:1317/cosmos/evidence/v1beta1/evidence"
+```
+
+Example Output:
+
+```json
+{
+ "evidence": [
+ {
+ "consensus_address": "cosmosvalcons1ntk8eualewuprz0gamh8hnvcem2nrcdsgz563h",
+ "height": "11",
+ "power": "100",
+ "time": "2021-10-20T16:08:38.194017624Z"
+ }
+ ],
+ "pagination": {
+ "total": "1"
+ }
+}
+```
+
+### gRPC
+
+A user can query the `evidence` module using gRPC endpoints.
+
+#### Evidence
+
+Get evidence by hash
+
+```bash
+cosmos.evidence.v1beta1.Query/Evidence
+```
+
+Example:
+
+```bash
+grpcurl -plaintext -d '{"evidence_hash":"DF0C23E8634E480F84B9D5674A7CDC9816466DEC28A3358F73260F68D28D7660"}' localhost:9090 cosmos.evidence.v1beta1.Query/Evidence
+```
+
+Example Output:
+
+```json
+{
+ "evidence": {
+ "consensus_address": "cosmosvalcons1ntk8eualewuprz0gamh8hnvcem2nrcdsgz563h",
+ "height": "11",
+ "power": "100",
+ "time": "2021-10-20T16:08:38.194017624Z"
+ }
+}
+```
+
+#### All evidence
+
+Get all evidence
+
+```bash
+cosmos.evidence.v1beta1.Query/AllEvidence
+```
+
+Example:
+
+```bash
+grpcurl -plaintext localhost:9090 cosmos.evidence.v1beta1.Query/AllEvidence
+```
+
+Example Output:
+
+```json
+{
+ "evidence": [
+ {
+ "consensus_address": "cosmosvalcons1ntk8eualewuprz0gamh8hnvcem2nrcdsgz563h",
+ "height": "11",
+ "power": "100",
+ "time": "2021-10-20T16:08:38.194017624Z"
+ }
+ ],
+ "pagination": {
+ "total": "1"
+ }
+}
+```
diff --git a/copy-of-sdk-docs/build/modules/feegrant/README.md b/copy-of-sdk-docs/build/modules/feegrant/README.md
new file mode 100644
index 00000000..0ac1c298
--- /dev/null
+++ b/copy-of-sdk-docs/build/modules/feegrant/README.md
@@ -0,0 +1,396 @@
+---
+sidebar_position: 1
+---
+
+# `x/feegrant`
+
+## Abstract
+
+This document specifies the fee grant module. For the full ADR, please see [Fee Grant ADR-029](https://github.com/cosmos/cosmos-sdk/blob/main/docs/architecture/adr-029-fee-grant-module.md).
+
+This module allows accounts to grant fee allowances so that other accounts can pay transaction fees from the granter's account. Grantees can then execute transactions without needing to maintain a sufficient fee balance themselves.
+
+## Contents
+
+* [Concepts](#concepts)
+* [State](#state)
+ * [FeeAllowance](#feeallowance)
+ * [FeeAllowanceQueue](#feeallowancequeue)
+* [Messages](#messages)
+ * [Msg/GrantAllowance](#msggrantallowance)
+ * [Msg/RevokeAllowance](#msgrevokeallowance)
+* [Events](#events)
+* [Msg Server](#msg-server)
+ * [MsgGrantAllowance](#msggrantallowance-1)
+ * [MsgRevokeAllowance](#msgrevokeallowance-1)
+ * [Exec fee allowance](#exec-fee-allowance)
+* [Client](#client)
+ * [CLI](#cli)
+ * [gRPC](#grpc)
+
+## Concepts
+
+### Grant
+
+`Grant` is stored in the KVStore to record a grant with full context. Every grant contains `granter`, `grantee` and the kind of `allowance` granted. `granter` is the account address giving permission to `grantee` (the beneficiary account address) to pay for some or all of `grantee`'s transaction fees. `allowance` defines what kind of fee allowance (`BasicAllowance` or `PeriodicAllowance`, see below) is granted to `grantee`. `allowance` accepts an interface which implements `FeeAllowanceI`, encoded as an `Any` type. Only one fee grant can exist per `grantee`/`granter` pair; self-grants are not allowed.
+
+```protobuf reference
+https://github.com/cosmos/cosmos-sdk/blob/v0.47.0-rc1/proto/cosmos/feegrant/v1beta1/feegrant.proto#L83-L93
+```
+
+`FeeAllowanceI` looks like:
+
+```go reference
+https://github.com/cosmos/cosmos-sdk/blob/v0.47.0-rc1/x/feegrant/fees.go#L9-L32
+```
+
+### Fee Allowance types
+
+There are three types of fee allowances at present:
+
+* `BasicAllowance`
+* `PeriodicAllowance`
+* `AllowedMsgAllowance`
+
+### BasicAllowance
+
+`BasicAllowance` is permission for `grantee` to use fees from the `granter`'s account. If either the `spend_limit` is exhausted or the `expiration` is reached, the grant is removed from state.
+
+```protobuf reference
+https://github.com/cosmos/cosmos-sdk/blob/v0.47.0-rc1/proto/cosmos/feegrant/v1beta1/feegrant.proto#L15-L28
+```
+
+* `spend_limit` is the limit of coins that are allowed to be used from the `granter` account. If it is empty, there is no spend limit and `grantee` can use any number of available coins from the `granter` account address before the expiration.
+
+* `expiration` specifies an optional time when this allowance expires. If the value is left empty, there is no expiry for the grant.
+
+* When a grant is created with empty values for `spend_limit` and `expiration`, it is still a valid grant. It won't restrict the `grantee` to use any number of coins from `granter` and it won't have any expiration. The only way to restrict the `grantee` is by revoking the grant.
+
+### PeriodicAllowance
+
+`PeriodicAllowance` is a repeating fee allowance for a specified period. It defines when the grant expires, when each period resets, and the maximum number of coins that can be spent within a given period.
+
+```protobuf reference
+https://github.com/cosmos/cosmos-sdk/blob/v0.47.0-rc1/proto/cosmos/feegrant/v1beta1/feegrant.proto#L34-L68
+```
+
+* `basic` is the instance of `BasicAllowance` which is optional for periodic fee allowance. If empty, the grant will have no `expiration` and no `spend_limit`.
+
+* `period` is the specific period of time, after each period passes, `period_can_spend` will be reset.
+
+* `period_spend_limit` specifies the maximum number of coins that can be spent in the period.
+
+* `period_can_spend` is the number of coins left to be spent before the period_reset time.
+
+* `period_reset` keeps track of when a next period reset should happen.
+
+### AllowedMsgAllowance
+
+`AllowedMsgAllowance` is a fee allowance (either a `BasicAllowance` or a `PeriodicAllowance`) that is restricted to the allowed messages specified by the granter.
+
+```protobuf reference
+https://github.com/cosmos/cosmos-sdk/blob/v0.47.0-rc1/proto/cosmos/feegrant/v1beta1/feegrant.proto#L70-L81
+```
+
+* `allowance` is either `BasicAllowance` or `PeriodicAllowance`.
+
+* `allowed_messages` is an array of messages allowed to be executed with the given allowance.
+
+### FeeGranter flag
+
+The `feegrant` module introduces a `FeeGranter` flag to the CLI for executing transactions with a fee granter. When this flag is set, `clientCtx` appends the granter's account address to transactions generated through the CLI.
+
+```go reference
+https://github.com/cosmos/cosmos-sdk/blob/v0.47.0-rc1/client/cmd.go#L249-L260
+```
+
+```go reference
+https://github.com/cosmos/cosmos-sdk/blob/v0.47.0-rc1/client/tx/tx.go#L109-L109
+```
+
+```go reference
+https://github.com/cosmos/cosmos-sdk/blob/v0.47.0-rc1/x/auth/tx/builder.go#L275-L284
+```
+
+```protobuf reference
+https://github.com/cosmos/cosmos-sdk/blob/v0.47.0-rc1/proto/cosmos/tx/v1beta1/tx.proto#L203-L224
+```
+
+Example command:
+
+```bash
+./simd tx gov submit-proposal --title="Test Proposal" --description="My awesome proposal" --type="Text" --from validator-key --fee-granter=cosmos1xh44hxt7spr67hqaa7nyx5gnutrz5fraw6grxn --chain-id=testnet --fees="10stake"
+```
+
+### Granted Fee Deductions
+
+Fees are deducted from grants in the `x/auth` ante handler. To learn more about how ante handlers work, read the [Auth Module AnteHandlers Guide](../auth/README.md#antehandlers).
+
+### Gas
+
+In order to prevent DoS attacks, using a filtered `x/feegrant` incurs gas. The SDK must ensure that the `grantee`'s transactions all conform to the filter set by the `granter`. The SDK does this by iterating over the allowed messages in the filter and charging 10 gas per filtered message. The SDK will then iterate over the messages being sent by the `grantee` to ensure the messages adhere to the filter, also charging 10 gas per message. The SDK will stop iterating and fail the transaction if it finds a message that does not conform to the filter.
+
+**WARNING**: The gas is charged against the granted allowance. Ensure your messages conform to the filter, if any, before sending transactions using your allowance.
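The filtered-allowance check can be sketched roughly as below. This is a hypothetical simplification (the function name and the plain-counter gas meter are illustrative); the real implementation in `AllowedMsgAllowance` consumes gas from the context's gas meter:

```go
package main

import "fmt"

const gasCostPerIteration = 10 // gas charged per message inspected

// allMsgTypesAllowed mirrors the filter check described above: while verifying
// that each message type URL is in the granter's allowed set, it charges
// gasCostPerIteration per message and stops at the first disallowed message.
func allMsgTypesAllowed(gasUsed *uint64, allowed map[string]bool, msgTypeURLs []string) bool {
	for _, url := range msgTypeURLs {
		*gasUsed += gasCostPerIteration
		if !allowed[url] {
			return false // fail the transaction: message not in the filter
		}
	}
	return true
}

func main() {
	var gasUsed uint64
	allowed := map[string]bool{"/cosmos.bank.v1beta1.MsgSend": true}
	msgs := []string{"/cosmos.bank.v1beta1.MsgSend", "/cosmos.gov.v1.MsgVote"}
	fmt.Println(allMsgTypesAllowed(&gasUsed, allowed, msgs), gasUsed) // false 20
}
```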
+
+### Pruning
+
+A queue is maintained in state, keyed by the expiration time of grants. At every `EndBlock`, the queue is checked against the current block time and expired grants are pruned.
+
+## State
+
+### FeeAllowance
+
+Fee Allowances are identified by combining `Grantee` (the account address of fee allowance grantee) with the `Granter` (the account address of fee allowance granter).
+
+Fee allowance grants are stored in the state as follows:
+
+* Grant: `0x00 | grantee_addr_len (1 byte) | grantee_addr_bytes | granter_addr_len (1 byte) | granter_addr_bytes -> ProtocolBuffer(Grant)`
+
+```go reference
+https://github.com/cosmos/cosmos-sdk/blob/v0.47.0-rc1/x/feegrant/feegrant.pb.go#L222-L230
+```
+
+### FeeAllowanceQueue
+
+Fee allowance queue items are identified by combining the `FeeAllowancePrefixQueue` (i.e., `0x01`), `expiration`, `grantee` (the account address of the fee allowance grantee), and `granter` (the account address of the fee allowance granter). The `EndBlocker` checks the `FeeAllowanceQueue` state for expired grants and prunes any that it finds from `FeeAllowance`.
+
+Fee allowance queue keys are stored in the state as follows:
+
+* Grant: `0x01 | expiration_bytes | grantee_addr_len (1 byte) | grantee_addr_bytes | granter_addr_len (1 byte) | granter_addr_bytes -> EmptyBytes`
+
+## Messages
+
+### Msg/GrantAllowance
+
+A fee allowance grant will be created with the `MsgGrantAllowance` message.
+
+```protobuf reference
+https://github.com/cosmos/cosmos-sdk/blob/v0.47.0-rc1/proto/cosmos/feegrant/v1beta1/tx.proto#L25-L39
+```
+
+### Msg/RevokeAllowance
+
+An allowed grant fee allowance can be removed with the `MsgRevokeAllowance` message.
+
+```protobuf reference
+https://github.com/cosmos/cosmos-sdk/blob/v0.47.0-rc1/proto/cosmos/feegrant/v1beta1/tx.proto#L41-L54
+```
+
+## Events
+
+The feegrant module emits the following events:
+
+## Msg Server
+
+### MsgGrantAllowance
+
+| Type | Attribute Key | Attribute Value |
+| ------- | ------------- | ---------------- |
+| message | action | set_feegrant |
+| message | granter | {granterAddress} |
+| message | grantee | {granteeAddress} |
+
+### MsgRevokeAllowance
+
+| Type | Attribute Key | Attribute Value |
+| ------- | ------------- | ---------------- |
+| message | action | revoke_feegrant |
+| message | granter | {granterAddress} |
+| message | grantee | {granteeAddress} |
+
+### Exec fee allowance
+
+| Type | Attribute Key | Attribute Value |
+| ------- | ------------- | ---------------- |
+| message | action | use_feegrant |
+| message | granter | {granterAddress} |
+| message | grantee | {granteeAddress} |
+
+### Prune fee allowances
+
+| Type | Attribute Key | Attribute Value |
+| ------- | ------------- | ---------------- |
+| message | action | prune_feegrant |
+| message | pruner | {prunerAddress} |
+
+
+## Client
+
+### CLI
+
+A user can query and interact with the `feegrant` module using the CLI.
+
+#### Query
+
+The `query` commands allow users to query `feegrant` state.
+
+```shell
+simd query feegrant --help
+```
+
+##### grant
+
+The `grant` command allows users to query a grant for a given granter-grantee pair.
+
+```shell
+simd query feegrant grant [granter] [grantee] [flags]
+```
+
+Example:
+
+```shell
+simd query feegrant grant cosmos1.. cosmos1..
+```
+
+Example Output:
+
+```yml
+allowance:
+ '@type': /cosmos.feegrant.v1beta1.BasicAllowance
+ expiration: null
+ spend_limit:
+ - amount: "100"
+ denom: stake
+grantee: cosmos1..
+granter: cosmos1..
+```
+
+##### grants
+
+The `grants` command allows users to query all grants for a given grantee.
+
+```shell
+simd query feegrant grants [grantee] [flags]
+```
+
+Example:
+
+```shell
+simd query feegrant grants cosmos1..
+```
+
+Example Output:
+
+```yml
+allowances:
+- allowance:
+ '@type': /cosmos.feegrant.v1beta1.BasicAllowance
+ expiration: null
+ spend_limit:
+ - amount: "100"
+ denom: stake
+ grantee: cosmos1..
+ granter: cosmos1..
+pagination:
+ next_key: null
+ total: "0"
+```
+
+#### Transactions
+
+The `tx` commands allow users to interact with the `feegrant` module.
+
+```shell
+simd tx feegrant --help
+```
+
+##### grant
+
+The `grant` command allows users to grant fee allowances to another account. The fee allowance can have an expiration date, a total spend limit, and/or a periodic spend limit.
+
+```shell
+simd tx feegrant grant [granter] [grantee] [flags]
+```
+
+Example (one-time spend limit):
+
+```shell
+simd tx feegrant grant cosmos1.. cosmos1.. --spend-limit 100stake
+```
+
+Example (periodic spend limit):
+
+```shell
+simd tx feegrant grant cosmos1.. cosmos1.. --period 3600 --period-limit 10stake
+```
+
+##### revoke
+
+The `revoke` command allows users to revoke a granted fee allowance.
+
+```shell
+simd tx feegrant revoke [granter] [grantee] [flags]
+```
+
+Example:
+
+```shell
+simd tx feegrant revoke cosmos1.. cosmos1..
+```
+
+### gRPC
+
+A user can query the `feegrant` module using gRPC endpoints.
+
+#### Allowance
+
+The `Allowance` endpoint allows users to query a granted fee allowance.
+
+```shell
+cosmos.feegrant.v1beta1.Query/Allowance
+```
+
+Example:
+
+```shell
+grpcurl -plaintext \
+ -d '{"grantee":"cosmos1..","granter":"cosmos1.."}' \
+ localhost:9090 \
+ cosmos.feegrant.v1beta1.Query/Allowance
+```
+
+Example Output:
+
+```json
+{
+ "allowance": {
+ "granter": "cosmos1..",
+ "grantee": "cosmos1..",
+ "allowance": {"@type":"/cosmos.feegrant.v1beta1.BasicAllowance","spendLimit":[{"denom":"stake","amount":"100"}]}
+ }
+}
+```
+
+#### Allowances
+
+The `Allowances` endpoint allows users to query all granted fee allowances for a given grantee.
+
+```shell
+cosmos.feegrant.v1beta1.Query/Allowances
+```
+
+Example:
+
+```shell
+grpcurl -plaintext \
+ -d '{"address":"cosmos1.."}' \
+ localhost:9090 \
+ cosmos.feegrant.v1beta1.Query/Allowances
+```
+
+Example Output:
+
+```json
+{
+ "allowances": [
+ {
+ "granter": "cosmos1..",
+ "grantee": "cosmos1..",
+ "allowance": {"@type":"/cosmos.feegrant.v1beta1.BasicAllowance","spendLimit":[{"denom":"stake","amount":"100"}]}
+ }
+ ],
+ "pagination": {
+ "total": "1"
+ }
+}
+```
diff --git a/copy-of-sdk-docs/build/modules/genutil/README.md b/copy-of-sdk-docs/build/modules/genutil/README.md
new file mode 100644
index 00000000..34bc79d5
--- /dev/null
+++ b/copy-of-sdk-docs/build/modules/genutil/README.md
@@ -0,0 +1,89 @@
+# `x/genutil`
+
+## Concepts
+
+The `genutil` package contains a variety of genesis utility functionalities for usage within a blockchain application. Namely:
+
+* Genesis transactions related (gentx)
+* Commands for collection and creation of gentxs
+* `InitChain` processing of gentxs
+* Genesis file creation
+* Genesis file validation
+* Genesis file migration
+* CometBFT related initialization
+ * Translation of an app genesis to a CometBFT genesis
+
+## Genesis
+
+Genutil contains the data structure that defines an application genesis.
+An application genesis consists of a consensus genesis (e.g. a CometBFT genesis) and application-related genesis data.
+
+```go reference
+https://github.com/cosmos/cosmos-sdk/blob/v0.50.0-rc.0/x/genutil/types/genesis.go#L24-L34
+```
+
+The application genesis can then be translated into the format expected by the consensus engine:
+
+```go reference
+https://github.com/cosmos/cosmos-sdk/blob/v0.50.0-rc.0/x/genutil/types/genesis.go#L126-L136
+```
+
+```go reference
+https://github.com/cosmos/cosmos-sdk/blob/v0.50.0-rc.0/server/start.go#L397-L407
+```
+
+## Client
+
+### CLI
+
+The genutil commands are available under the `genesis` subcommand.
+
+#### add-genesis-account
+
+Add a genesis account to `genesis.json`. Learn more [here](https://docs.cosmos.network/main/run-node/run-node#adding-genesis-accounts).
+
+#### collect-gentxs
+
+Collect genesis txs and output a `genesis.json` file.
+
+```shell
+simd genesis collect-gentxs
+```
+
+This will create a new `genesis.json` file that includes data from all the validators (we sometimes call it the "super genesis file" to distinguish it from single-validator genesis files).
+
+#### gentx
+
+Generate a genesis tx carrying a self delegation.
+
+```shell
+simd genesis gentx [key_name] [amount] --chain-id [chain-id]
+```
+
+This will create the genesis transaction for your new chain. Here `amount` should be at least `1000000000stake`.
+If you provide too much or too little, you will encounter an error when starting a node.
+
+#### migrate
+
+Migrate genesis to a specified target (SDK) version.
+
+```shell
+simd genesis migrate [target-version]
+```
+
+:::tip
+The `migrate` command is extensible and takes a `MigrationMap`. This map is a mapping of target versions to genesis migration functions.
+When not using the default `MigrationMap`, it is recommended to still call the default `MigrationMap` corresponding to the SDK version of the chain and prepend/append your own genesis migrations.
+:::
+
+#### validate-genesis
+
+Validates the genesis file at the default location or at the location passed as an argument.
+
+```shell
+simd genesis validate-genesis
+```
+
+:::warning
+The `validate-genesis` command only validates that the genesis file is valid for the **current application binary**. To validate a genesis file from a previous version of the application, first use the `migrate` command to migrate it to the current version.
+:::
diff --git a/copy-of-sdk-docs/build/modules/gov/README.md b/copy-of-sdk-docs/build/modules/gov/README.md
new file mode 100644
index 00000000..7b10700c
--- /dev/null
+++ b/copy-of-sdk-docs/build/modules/gov/README.md
@@ -0,0 +1,2588 @@
+---
+sidebar_position: 1
+---
+
+# `x/gov`
+
+## Abstract
+
+This paper specifies the Governance module of the Cosmos SDK, which was first
+described in the [Cosmos Whitepaper](https://cosmos.network/about/whitepaper) in
+June 2016.
+
+The module enables a Cosmos SDK based blockchain to support an on-chain governance
+system. In this system, holders of the native staking token of the chain can vote
+on proposals on a 1 token 1 vote basis. The following is a list of features the module
+currently supports:
+
+* **Proposal submission:** Users can submit proposals with a deposit. Once the
+minimum deposit is reached, the proposal enters voting period. The minimum deposit can be reached by collecting deposits from different users (including proposer) within deposit period.
+* **Vote:** Participants can vote on proposals that reached MinDeposit and entered voting period.
+* **Inheritance and penalties:** Delegators inherit their validator's vote if
+they don't vote themselves.
+* **Claiming deposit:** Users that deposited on proposals can recover their
+deposits if the proposal was accepted or rejected. If the proposal was vetoed, or never entered voting period (minimum deposit not reached within deposit period), the deposit is burned.
+
+This module is in use on the Cosmos Hub (a.k.a [gaia](https://github.com/cosmos/gaia)).
+Features that may be added in the future are described in [Future Improvements](#future-improvements).
+
+## Contents
+
+The following specification uses *ATOM* as the native staking token. The module
+can be adapted to any Proof-Of-Stake blockchain by replacing *ATOM* with the native
+staking token of the chain.
+
+* [Concepts](#concepts)
+ * [Proposal submission](#proposal-submission)
+ * [Deposit](#deposit)
+ * [Vote](#vote)
+ * [Software Upgrade](#software-upgrade)
+* [State](#state)
+ * [Proposals](#proposals)
+ * [Parameters and base types](#parameters-and-base-types)
+ * [Deposit](#deposit-1)
+ * [ValidatorGovInfo](#validatorgovinfo)
+ * [Stores](#stores)
+ * [Proposal Processing Queue](#proposal-processing-queue)
+ * [Legacy Proposal](#legacy-proposal)
+* [Messages](#messages)
+ * [Proposal Submission](#proposal-submission-1)
+ * [Deposit](#deposit-2)
+ * [Vote](#vote-1)
+* [Events](#events)
+ * [EndBlocker](#endblocker)
+ * [Handlers](#handlers)
+* [Parameters](#parameters)
+* [Client](#client)
+ * [CLI](#cli)
+ * [gRPC](#grpc)
+ * [REST](#rest)
+* [Metadata](#metadata)
+ * [Proposal](#proposal-3)
+ * [Vote](#vote-5)
+* [Future Improvements](#future-improvements)
+
+## Concepts
+
+*Disclaimer: This is work in progress. Mechanisms are susceptible to change.*
+
+The governance process is divided in a few steps that are outlined below:
+
+* **Proposal submission:** Proposal is submitted to the blockchain with a
+ deposit.
+* **Vote:** Once deposit reaches a certain value (`MinDeposit`), proposal is
+ confirmed and vote opens. Bonded Atom holders can then send `TxGovVote`
+ transactions to vote on the proposal.
+* **Execution:** After a period of time, the votes are tallied and, depending
+  on the result, the messages in the proposal will be executed.
+
+### Proposal submission
+
+#### Right to submit a proposal
+
+Every account can submit proposals by sending a `MsgSubmitProposal` transaction.
+Once a proposal is submitted, it is identified by its unique `proposalID`.
+
+#### Proposal Messages
+
+A proposal includes an array of `sdk.Msg`s which are executed automatically if the
+proposal passes. The messages are executed by the governance `ModuleAccount` itself. Modules
+such as `x/upgrade` that want certain messages to be executable only by governance
+should add a whitelist within the respective msg server, granting the governance
+module the right to execute the message once a quorum has been reached. The governance
+module uses the `MsgServiceRouter` to check that these messages are correctly constructed
+and have a respective path to execute on, but it does not perform a full validity check.
+
+### Deposit
+
+To prevent spam, proposals must be submitted with a deposit in the coins defined by
+the `MinDeposit` param.
+
+When a proposal is submitted, it has to be accompanied with a deposit that must be
+strictly positive but can be less than `MinDeposit`. The submitter doesn't need
+to pay for the entire deposit on their own. The newly created proposal is stored in
+an *inactive proposal queue* and stays there until its deposit passes the `MinDeposit`.
+Other token holders can increase the proposal's deposit by sending a `Deposit`
+transaction. If a proposal doesn't pass the `MinDeposit` before the deposit end time
+(the time when deposits are no longer accepted), the proposal will be destroyed: the
+proposal will be removed from state and the deposit will be burned (see x/gov `EndBlocker`).
+When a proposal deposit passes the `MinDeposit` threshold (even during the proposal
+submission) before the deposit end time, the proposal will be moved into the
+*active proposal queue* and the voting period will begin.
+
+The deposit is kept in escrow and held by the governance `ModuleAccount` until the
+proposal is finalized (passed or rejected).
+
+#### Deposit refund and burn
+
+When a proposal is finalized, the coins from the deposit are either refunded or burned
+according to the final tally of the proposal:
+
+* If the proposal is approved or rejected but *not* vetoed, each deposit will be
+ automatically refunded to its respective depositor (transferred from the governance
+ `ModuleAccount`).
+* If the proposal is vetoed with greater than 1/3 of `NoWithVeto` votes, deposits will be burned from the
+ governance `ModuleAccount` and the proposal information along with its deposit
+ information will be removed from state.
+* All refunded or burned deposits are removed from the state. Events are issued when
+ burning or refunding a deposit.
+
+### Vote
+
+#### Participants
+
+*Participants* are users that have the right to vote on proposals. On the
+Cosmos Hub, participants are bonded Atom holders. Unbonded Atom holders and
+other users do not get the right to participate in governance. However, they
+can submit and deposit on proposals.
+
+Note that when *participants* have bonded and unbonded Atoms, their voting power is calculated from their bonded Atom holdings only.
+
+#### Voting period
+
+Once a proposal reaches `MinDeposit`, it immediately enters `Voting period`. We
+define `Voting period` as the interval between the moment the vote opens and
+the moment the vote closes. The initial value of `Voting period` is 2 weeks.
+
+#### Option set
+
+The option set of a proposal refers to the set of choices a participant can
+choose from when casting their vote.
+
+The initial option set includes the following options:
+
+* `Yes`
+* `No`
+* `NoWithVeto`
+* `Abstain`
+
+`NoWithVeto` counts as `No` but also adds a `Veto` vote. `Abstain` option
+allows voters to signal that they do not intend to vote in favor or against the
+proposal but accept the result of the vote.
+
+*Note: from the UI, for urgent proposals we should maybe add a ‘Not Urgent’ option that casts a `NoWithVeto` vote.*
+
+#### Weighted Votes
+
+[ADR-037](https://github.com/cosmos/cosmos-sdk/blob/main/docs/architecture/adr-037-gov-split-vote.md) introduces the weighted vote feature, which allows a staker to split their votes into several voting options. For example, a staker could use 70% of their voting power to vote `Yes` and 30% of their voting power to vote `No`.
+
+Oftentimes, the entity owning a given address is not a single individual. For example, a company might have different stakeholders who want to vote differently, so it makes sense to allow them to split their voting power. Currently, it is not possible for them to do "passthrough voting" and give their users voting rights over their tokens. With this system, however, exchanges can poll their users for voting preferences and then vote on-chain proportionally to the results of the poll.
+
+To represent a weighted vote on chain, we use the following Protobuf message.
+
+```protobuf reference
+https://github.com/cosmos/cosmos-sdk/blob/v0.47.0-rc1/proto/cosmos/gov/v1beta1/gov.proto#L34-L47
+```
+
+```protobuf reference
+https://github.com/cosmos/cosmos-sdk/blob/v0.47.0-rc1/proto/cosmos/gov/v1beta1/gov.proto#L181-L201
+```
+
+For a weighted vote to be valid, the `options` field must not contain duplicate vote options, and the sum of weights of all options must be equal to 1.
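A sketch of those two validity rules is below. The `weightedOption` type is hypothetical; the SDK represents weights as decimal strings (`math.LegacyDec`) inside `WeightedVoteOption`, and `big.Rat` is used here only to get exact decimal arithmetic:

```go
package main

import (
	"fmt"
	"math/big"
)

// weightedOption mirrors one entry of a weighted vote:
// a vote option plus a decimal weight string.
type weightedOption struct {
	Option string // e.g. "VOTE_OPTION_YES"
	Weight string // decimal string, e.g. "0.7"
}

// validWeightedVote checks the two rules described above:
// no duplicate vote options, and weights summing to exactly 1.
func validWeightedVote(opts []weightedOption) bool {
	seen := make(map[string]bool)
	sum := new(big.Rat)
	for _, o := range opts {
		if seen[o.Option] {
			return false // duplicate vote option
		}
		seen[o.Option] = true
		w, ok := new(big.Rat).SetString(o.Weight)
		if !ok || w.Sign() <= 0 {
			return false // weights must be positive decimals
		}
		sum.Add(sum, w)
	}
	return sum.Cmp(big.NewRat(1, 1)) == 0
}

func main() {
	split := []weightedOption{{"VOTE_OPTION_YES", "0.7"}, {"VOTE_OPTION_NO", "0.3"}}
	fmt.Println(validWeightedVote(split)) // true
}
```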
+
+#### Custom Vote Calculation
+
+Cosmos SDK v0.53.0 introduced an option for developers to define a custom vote result and voting power calculation function.
+
+```go reference
+https://github.com/cosmos/cosmos-sdk/blob/main/x/gov/keeper/tally.go#L15-L24
+```
+
+This gives developers a more expressive way to handle governance on their appchains.
+Developers can now build systems with:
+
+* Quadratic Voting
+* Time-weighted Voting
+* Reputation-Based voting
+
+##### Example
+
+```go
+func myCustomVotingFunction(
+ ctx context.Context,
+ k Keeper,
+ proposal v1.Proposal,
+ validators map[string]v1.ValidatorGovInfo,
+) (totalVoterPower math.LegacyDec, results map[v1.VoteOption]math.LegacyDec, err error) {
+ // ... tally logic
+}
+
+govKeeper := govkeeper.NewKeeper(
+ appCodec,
+ runtime.NewKVStoreService(keys[govtypes.StoreKey]),
+ app.AccountKeeper,
+ app.BankKeeper,
+ app.StakingKeeper,
+ app.DistrKeeper,
+ app.MsgServiceRouter(),
+ govConfig,
+ authtypes.NewModuleAddress(govtypes.ModuleName).String(),
+ govkeeper.WithCustomCalculateVoteResultsAndVotingPowerFn(myCustomVotingFunction),
+)
+```
+
+### Quorum
+
+Quorum is defined as the minimum percentage of voting power that needs to be
+cast on a proposal for the result to be valid.
+
+### Expedited Proposals
+
+A proposal can be expedited, in which case it uses a shorter voting duration and a higher tally threshold by default. If an expedited proposal fails to meet the threshold within the shorter voting duration, it is converted to a regular proposal and restarts voting under regular voting conditions.
+
+#### Threshold
+
+Threshold is defined as the minimum proportion of `Yes` votes (excluding
+`Abstain` votes) for the proposal to be accepted.
+
+Initially, the threshold is set at 50% of `Yes` votes, excluding `Abstain`
+votes. A possibility to veto exists if more than 1/3rd of all votes are
+`NoWithVeto` votes. Note, both of these values are derived from the `TallyParams`
+on-chain parameter, which is modifiable by governance.
+This means that proposals are accepted iff:
+
+* There exist bonded tokens.
+* Quorum has been achieved.
+* The proportion of `Abstain` votes is less than 1.
+* The proportion of `NoWithVeto` votes is less than 1/3, including
+  `Abstain` votes.
+* The proportion of `Yes` votes, excluding `Abstain` votes, at the end of
+  the voting period is greater than 1/2.
+
+For expedited proposals, the default threshold is higher than for a *normal proposal*, namely 66.7%.
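Putting the quorum, veto and threshold rules together, the acceptance check can be sketched as follows. This is a simplification using `float64` and a hypothetical `proposalPasses` helper; the SDK tallies with `math.LegacyDec` and reads quorum, threshold and veto threshold from the on-chain `TallyParams`:

```go
package main

import "fmt"

// tally holds the final vote totals for a proposal.
type tally struct {
	yes, abstain, no, noWithVeto float64
}

// proposalPasses applies the acceptance rules above: bonded tokens exist,
// quorum is reached, the NoWithVeto proportion (of all votes cast, including
// Abstain) stays at or below the veto threshold, not everyone abstained, and
// Yes exceeds the threshold of non-abstaining votes.
func proposalPasses(t tally, bondedTokens, quorum, vetoThreshold, threshold float64) bool {
	total := t.yes + t.abstain + t.no + t.noWithVeto
	if bondedTokens == 0 || total/bondedTokens < quorum {
		return false // no bonded tokens, or quorum not reached
	}
	if t.noWithVeto/total > vetoThreshold {
		return false // vetoed
	}
	if total == t.abstain {
		return false // everyone abstained
	}
	return t.yes/(total-t.abstain) > threshold
}

func main() {
	t := tally{yes: 60, abstain: 10, no: 20, noWithVeto: 10}
	// quorum 33.4%, veto threshold 1/3, pass threshold 50%
	fmt.Println(proposalPasses(t, 200, 0.334, 1.0/3, 0.5)) // true
}
```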
+
+#### Inheritance
+
+If a delegator does not vote, it will inherit its validator vote.
+
+* If the delegator votes before its validator, it will not inherit from the
+ validator's vote.
+* If the delegator votes after its validator, it will override its validator's
+  vote with its own. If the proposal is urgent, it is possible
+  that the vote will close before delegators have a chance to react and
+  override their validator's vote. This is not a problem, as proposals require more than 2/3 of the total voting power to pass when tallied at the end of the voting period. Because as little as 1/3 + 1 of the voting power could collude to censor transactions, non-collusion is already assumed for ranges exceeding this threshold.
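The inheritance rule can be sketched like this (hypothetical types and function; the SDK instead implements it during the tally by deducting the shares of delegators who voted from their validator's voting power):

```go
package main

import "fmt"

// delegation is a delegator's stake behind a single validator.
type delegation struct {
	delegator string
	power     float64
}

// tallyWithInheritance applies the rule above for one validator: delegators
// who voted use their own option; everyone else inherits validatorVote.
func tallyWithInheritance(validatorVote string, delegations []delegation, delegatorVotes map[string]string) map[string]float64 {
	results := make(map[string]float64)
	for _, d := range delegations {
		option, voted := delegatorVotes[d.delegator]
		if !voted {
			option = validatorVote // inherit the validator's vote
		}
		results[option] += d.power
	}
	return results
}

func main() {
	dels := []delegation{{"alice", 30}, {"bob", 70}}
	// bob voted "no" himself; alice inherits the validator's "yes"
	res := tallyWithInheritance("yes", dels, map[string]string{"bob": "no"})
	fmt.Println(res["yes"], res["no"]) // 30 70
}
```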
+
+#### Validator’s punishment for non-voting
+
+At present, validators are not punished for failing to vote.
+
+#### Governance address
+
+Later, we may add permissioned keys that could only sign txs from certain modules. For the MVP, the `Governance address` will be the main validator address generated at account creation. This address corresponds to a different PrivKey than the CometBFT PrivKey which is responsible for signing consensus messages. Validators thus do not have to sign governance transactions with the sensitive CometBFT PrivKey.
+
+#### Burnable Params
+
+There are three parameters that define if the deposit of a proposal should be burned or returned to the depositors.
+
+* `BurnVoteVeto` burns the proposal deposit if the proposal gets vetoed.
+* `BurnVoteQuorum` burns the proposal deposit if the vote does not reach quorum.
+* `BurnProposalDepositPrevote` burns the proposal deposit if it does not enter the voting phase.
+
+> Note: These parameters are modifiable via governance.
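As a sketch, the three parameters decide the deposit's fate roughly as below (`shouldBurnDeposit` is a hypothetical helper; the real checks are spread across the gov module's `EndBlocker`):

```go
package main

import "fmt"

// shouldBurnDeposit applies the three burn parameters above to a finished proposal.
func shouldBurnDeposit(burnVoteVeto, burnVoteQuorum, burnProposalDepositPrevote bool,
	vetoed, quorumReached, enteredVotingPeriod bool) bool {
	switch {
	case !enteredVotingPeriod:
		// deposit period ended without reaching MinDeposit
		return burnProposalDepositPrevote
	case vetoed:
		return burnVoteVeto
	case !quorumReached:
		return burnVoteQuorum
	default:
		return false // passed or rejected normally: refund depositors
	}
}

func main() {
	// A vetoed proposal with BurnVoteVeto enabled burns the deposit.
	fmt.Println(shouldBurnDeposit(true, false, false, true, true, true)) // true
}
```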
+
+## State
+
+### Constitution
+
+`Constitution` is found in the genesis state. It is a string field intended to be used to describe the purpose of a particular blockchain, and its expected norms. A few examples of how the constitution field can be used:
+
+* define the purpose of the chain, laying a foundation for its future development
+* set expectations for delegators
+* set expectations for validators
+* define the chain's relationship to "meatspace" entities, like a foundation or corporation
+
+Since this is more of a social feature than a technical feature, we'll now go over some items that may be useful to include in a genesis constitution:
+
+* What limitations on governance exist, if any?
+ * is it okay for the community to slash the wallet of a whale that they no longer feel that they want around? (viz: Juno Proposal 4 and 16)
+ * can governance "socially slash" a validator who is using unapproved MEV? (viz: commonwealth.im/osmosis)
+ * In the event of an economic emergency, what should validators do?
+    * The Terra crash of May 2022 saw validators choose to run a new binary with code that had not been approved by governance, because the governance token had been inflated to nothing.
+* What is the purpose of the chain, specifically?
+  * The best example of this is the Cosmos Hub, where different founding groups have different interpretations of the purpose of the network.
+
+This genesis entry, "constitution", hasn't been designed for existing chains, which should likely just ratify a constitution using their governance system. Instead, it is intended for new chains. It allows validators to have a much clearer idea of the purpose of the chain and the expectations placed on them while operating their nodes. Likewise, for community members, the constitution gives some idea of what to expect from both the "chain team" and the validators.
+
+This constitution is designed to be immutable and placed only in genesis, though that could change over time via a pull request to the cosmos-sdk that allows the constitution to be changed by governance. Communities wishing to make amendments to their original constitution should use the governance mechanism and a "signaling proposal" to do exactly that.
+
+**Ideal use scenario for a cosmos chain constitution**
+
+As a chain developer, you decide that you'd like to provide clarity to your key user groups:
+
+* validators
+* token holders
+* developers (yourself)
+
+You use the constitution to immutably store some Markdown in genesis, so that when difficult questions come up, the constitution can provide guidance to the community.
+
+### Proposals
+
`Proposal` objects are used to tally votes and generally track the proposal's state.
They contain an array of arbitrary `sdk.Msg`s which the governance module will attempt
to resolve and then execute if the proposal passes. `Proposal`s are identified by a
unique id and contain a series of timestamps: `submit_time`, `deposit_end_time`,
`voting_start_time`, and `voting_end_time`, which track the lifecycle of a proposal.
+
+```protobuf reference
+https://github.com/cosmos/cosmos-sdk/blob/v0.47.0-rc1/proto/cosmos/gov/v1/gov.proto#L51-L99
+```
+
A proposal usually requires more than a set of messages to explain its purpose:
it needs a broader justification and a means for interested participants
to discuss and debate it.
+In most cases, **it is encouraged to have an off-chain system that supports the on-chain governance process**.
+To accommodate for this, a proposal contains a special **`metadata`** field, a string,
+which can be used to add context to the proposal. The `metadata` field allows custom use for networks,
+however, it is expected that the field contains a URL or some form of CID using a system such as
+[IPFS](https://docs.ipfs.io/concepts/content-addressing/). To support the case of
+interoperability across networks, the SDK recommends that the `metadata` represents
+the following `JSON` template:
+
+```json
+{
+ "title": "...",
+ "description": "...",
+ "forum": "...", // a link to the discussion platform (i.e. Discord)
+ "other": "..." // any extra data that doesn't correspond to the other fields
+}
+```
+
+This makes it far easier for clients to support multiple networks.
+
+The metadata has a maximum length that is chosen by the app developer, and
+passed into the gov keeper as a config. The default maximum length in the SDK is 255 characters.
+
+#### Writing a module that uses governance
+
There are many aspects of a chain, or of individual modules, that you may want to
change through governance, such as various parameters. This is very simple
to do. First, write out your message types and `MsgServer` implementation. Add an
`authority` field to the keeper, which is populated in the constructor with the
governance module account: `govKeeper.GetGovernanceAccount().GetAddress()`. Then, for
the methods in `msg_server.go`, check that the message's signer
matches the `authority`. This prevents any user other than the governance module from executing that message.
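A minimal sketch of that authority check, with illustrative stand-in types (a real keeper receives the gov module address through its constructor):

```go
package main

import "fmt"

// Keeper and MsgUpdateParams are illustrative stand-ins for your module's
// types; in practice the authority is the gov module account address.
type Keeper struct {
	authority string
}

type MsgUpdateParams struct {
	Authority string // must be the gov module account for the message to pass
	NewValue  uint64
}

// UpdateParams performs the msg_server.go-style check: reject any signer
// other than the configured authority before applying the change.
func (k Keeper) UpdateParams(msg MsgUpdateParams) error {
	if msg.Authority != k.authority {
		return fmt.Errorf("invalid authority; expected %s, got %s", k.authority, msg.Authority)
	}
	// ...apply the parameter change here...
	return nil
}

func main() {
	k := Keeper{authority: "cosmos1govmoduleaddress"} // illustrative address
	fmt.Println(k.UpdateParams(MsgUpdateParams{Authority: "cosmos1someuser"})) // rejected
	fmt.Println(k.UpdateParams(MsgUpdateParams{Authority: k.authority}))       // <nil>: accepted
}
```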
+
+### Parameters and base types
+
+`Parameters` define the rules according to which votes are run. There can only
+be one active parameter set at any given time. If governance wants to change a
+parameter set, either to modify a value or add/remove a parameter field, a new
+parameter set has to be created and the previous one rendered inactive.
+
+#### DepositParams
+
+```protobuf reference
+https://github.com/cosmos/cosmos-sdk/blob/v0.47.0-rc1/proto/cosmos/gov/v1/gov.proto#L152-L162
+```
+
+#### VotingParams
+
+```protobuf reference
+https://github.com/cosmos/cosmos-sdk/blob/v0.47.0-rc1/proto/cosmos/gov/v1/gov.proto#L164-L168
+```
+
+#### TallyParams
+
+```protobuf reference
+https://github.com/cosmos/cosmos-sdk/blob/v0.47.0-rc1/proto/cosmos/gov/v1/gov.proto#L170-L182
+```
+
+Parameters are stored in a global `GlobalParams` KVStore.
+
+Additionally, we introduce some basic types:
+
+```go
type Vote byte

const (
    VoteYes        Vote = 0x1
    VoteNo         Vote = 0x2
    VoteNoWithVeto Vote = 0x3
    VoteAbstain    Vote = 0x4
)

type ProposalType string

const (
    ProposalTypePlainText       ProposalType = "Text"
    ProposalTypeSoftwareUpgrade ProposalType = "SoftwareUpgrade"
)

type ProposalStatus byte

const (
    StatusNil           ProposalStatus = 0x00
    StatusDepositPeriod ProposalStatus = 0x01 // Proposal is submitted. Participants can deposit on it but not vote
    StatusVotingPeriod  ProposalStatus = 0x02 // MinDeposit is reached, participants can vote
    StatusPassed        ProposalStatus = 0x03 // Proposal passed and successfully executed
    StatusRejected      ProposalStatus = 0x04 // Proposal has been rejected
    StatusFailed        ProposalStatus = 0x05 // Proposal passed but failed execution
)
+```
+
+### Deposit
+
+```protobuf reference
+https://github.com/cosmos/cosmos-sdk/blob/v0.47.0-rc1/proto/cosmos/gov/v1/gov.proto#L38-L49
+```
+
+### ValidatorGovInfo
+
This type is used in a temporary map when tallying votes.
+
```go
type ValidatorGovInfo struct {
    Minus sdk.Dec
    Vote  Vote
}
```
+
+## Stores
+
+:::note
Stores are KVStores in the multi-store. The key to find the store is the first parameter in the list.
+:::
+
+We will use one KVStore `Governance` to store four mappings:
+
+* A mapping from `proposalID|'proposal'` to `Proposal`.
+* A mapping from `proposalID|'addresses'|address` to `Vote`. This mapping allows
+ us to query all addresses that voted on the proposal along with their vote by
+ doing a range query on `proposalID:addresses`.
* A mapping from `ParamsKey|'Params'` to `Params`. This mapping allows querying all
  x/gov params.
+* A mapping from `VotingPeriodProposalKeyPrefix|proposalID` to a single byte. This allows
+ us to know if a proposal is in the voting period or not with very low gas cost.
+
For pseudocode purposes, here are the two functions we will use to read from and write to stores:
+
+* `load(StoreKey, Key)`: Retrieve item stored at key `Key` in store found at key `StoreKey` in the multistore
* `store(StoreKey, Key, Value)`: Write value `Value` at key `Key` in store found at key `StoreKey` in the multistore
+
+### Proposal Processing Queue
+
+**Store:**
+
+* `ProposalProcessingQueue`: A queue `queue[proposalID]` containing all the
+ `ProposalIDs` of proposals that reached `MinDeposit`. During each `EndBlock`,
+ all the proposals that have reached the end of their voting period are processed.
+ To process a finished proposal, the application tallies the votes, computes the
+ votes of each validator and checks if every validator in the validator set has
+ voted. If the proposal is accepted, deposits are refunded. Finally, the proposal
+ content `Handler` is executed.
+
+And the pseudocode for the `ProposalProcessingQueue`:
+
+```go
+ in EndBlock do
+
+ for finishedProposalID in GetAllFinishedProposalIDs(block.Time)
      proposal = load(Governance, <proposalID|'proposal'>) // proposal is a const key, proposalID is variable
+
+ validators = Keeper.getAllValidators()
+ tmpValMap := map(sdk.AccAddress)ValidatorGovInfo
+
+ // Initiate mapping at 0. This is the amount of shares of the validator's vote that will be overridden by their delegator's votes
+ for each validator in validators
+ tmpValMap(validator.OperatorAddr).Minus = 0
+
+ // Tally
      voterIterator = rangeQuery(Governance, <proposalID|'addresses'>) // return all the addresses that voted on the proposal
+ for each (voterAddress, vote) in voterIterator
+ delegations = stakingKeeper.getDelegations(voterAddress) // get all delegations for current voter
+
+ for each delegation in delegations
+ // make sure delegation.Shares does NOT include shares being unbonded
+ tmpValMap(delegation.ValidatorAddr).Minus += delegation.Shares
+ proposal.updateTally(vote, delegation.Shares)
+
+ _, isVal = stakingKeeper.getValidator(voterAddress)
+ if (isVal)
+ tmpValMap(voterAddress).Vote = vote
+
+ tallyingParam = load(GlobalParams, 'TallyingParam')
+
+ // Update tally if validator voted
+ for each validator in validators
+ if tmpValMap(validator).HasVoted
+ proposal.updateTally(tmpValMap(validator).Vote, (validator.TotalShares - tmpValMap(validator).Minus))
+
+ // Check if proposal is accepted or rejected
+ totalNonAbstain := proposal.YesVotes + proposal.NoVotes + proposal.NoWithVetoVotes
+ if (proposal.Votes.YesVotes/totalNonAbstain > tallyingParam.Threshold AND proposal.Votes.NoWithVetoVotes/totalNonAbstain < tallyingParam.Veto)
+ // proposal was accepted at the end of the voting period
+ // refund deposits (non-voters already punished)
+ for each (amount, depositor) in proposal.Deposits
+ depositor.AtomBalance += amount
+
+ stateWriter, err := proposal.Handler()
+ if err != nil
+ // proposal passed but failed during state execution
+ proposal.CurrentStatus = ProposalStatusFailed
+ else
+ // proposal pass and state is persisted
+ proposal.CurrentStatus = ProposalStatusAccepted
+ stateWriter.save()
+ else
+ // proposal was rejected
+ proposal.CurrentStatus = ProposalStatusRejected
+
      store(Governance, <proposalID|'proposal'>, proposal)
+```
+
+### Legacy Proposal
+
+:::warning
+Legacy proposals are deprecated. Use the new proposal flow by granting the governance module the right to execute the message.
+:::
+
A legacy proposal is the old implementation of governance proposals.
Contrary to proposals, which can contain any messages, a legacy proposal allows submitting only a set of pre-defined proposal types.
These proposals are defined by their types and handled by handlers that are registered in the gov v1beta1 router.
+
+More information on how to submit proposals in the [client section](#client).
+
+## Messages
+
+### Proposal Submission
+
+Proposals can be submitted by any account via a `MsgSubmitProposal` transaction.
+
+```protobuf reference
+https://github.com/cosmos/cosmos-sdk/blob/v0.47.0-rc1/proto/cosmos/gov/v1/tx.proto#L42-L69
+```
+
All `sdk.Msg`s passed into the `messages` field of a `MsgSubmitProposal` message
must be registered in the app's `MsgServiceRouter`. Each of these messages must
have exactly one signer, namely the gov module account. Finally, the metadata length
must not exceed the `maxMetadataLen` config passed into the gov keeper,
and the `initialDeposit` must be strictly positive and conform to the accepted denom of the `MinDeposit` param.
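A minimal sketch of those submission checks, with an illustrative function name and signature (the SDK validates against full `sdk.Coins`, not a single amount):

```go
package main

import (
	"errors"
	"fmt"
)

// validateSubmitProposal mirrors the MsgSubmitProposal checks described
// above: metadata length, strictly positive initial deposit, accepted denom.
func validateSubmitProposal(metadata string, maxMetadataLen int, initialDeposit int64, denom, acceptedDenom string) error {
	if len(metadata) > maxMetadataLen {
		return errors.New("metadata exceeds maxMetadataLen")
	}
	if initialDeposit <= 0 {
		return errors.New("initial deposit must be strictly positive")
	}
	if denom != acceptedDenom {
		return errors.New("initial deposit denom not accepted by MinDeposit param")
	}
	return nil
}

func main() {
	fmt.Println(validateSubmitProposal("ipfs://...", 255, 10, "stake", "stake")) // <nil>: valid
	fmt.Println(validateSubmitProposal("ipfs://...", 255, 0, "stake", "stake"))  // rejected: zero deposit
}
```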
+
+**State modifications:**
+
+* Generate new `proposalID`
+* Create new `Proposal`
+* Initialise `Proposal`'s attributes
+* Decrease balance of sender by `InitialDeposit`
+* If `MinDeposit` is reached:
+ * Push `proposalID` in `ProposalProcessingQueue`
+* Transfer `InitialDeposit` from the `Proposer` to the governance `ModuleAccount`
+
+### Deposit
+
+Once a proposal is submitted, if `Proposal.TotalDeposit < ActiveParam.MinDeposit`, Atom holders can send
+`MsgDeposit` transactions to increase the proposal's deposit.
+
+A deposit is accepted iff:
+
+* The proposal exists
+* The proposal is not in the voting period
* The deposited coins conform to the accepted denom from the `MinDeposit` param
+
+```protobuf reference
+https://github.com/cosmos/cosmos-sdk/blob/v0.47.0-rc1/proto/cosmos/gov/v1/tx.proto#L134-L147
+```
+
+**State modifications:**
+
+* Decrease balance of sender by `deposit`
+* Add `deposit` of sender in `proposal.Deposits`
+* Increase `proposal.TotalDeposit` by sender's `deposit`
+* If `MinDeposit` is reached:
  * Push `proposalID` in `ProposalProcessingQueue`
* Transfer `Deposit` from the depositor to the governance `ModuleAccount`
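The state modifications above can be sketched as follows; the `Proposal` type and `minDeposit` constant here are illustrative stand-ins, not SDK definitions.

```go
package main

import "fmt"

// Proposal is a minimal stand-in tracking only what this sketch needs.
type Proposal struct {
	ID           uint64
	TotalDeposit int64
	InVoting     bool
}

const minDeposit = 10_000_000 // illustrative MinDeposit value

// addDeposit credits the proposal's total deposit and, once MinDeposit is
// reached, activates the voting period (i.e. pushes the proposalID into
// ProposalProcessingQueue in the real module).
func addDeposit(p *Proposal, amount int64) {
	p.TotalDeposit += amount // the coins move to the gov module account
	if !p.InVoting && p.TotalDeposit >= minDeposit {
		p.InVoting = true
	}
}

func main() {
	p := &Proposal{ID: 1}
	addDeposit(p, 4_000_000)
	fmt.Println(p.InVoting) // false: still in deposit period
	addDeposit(p, 6_000_000)
	fmt.Println(p.InVoting) // true: voting period starts
}
```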
+
+### Vote
+
Once `ActiveParam.MinDeposit` is reached, the voting period starts. From there,
+bonded Atom holders are able to send `MsgVote` transactions to cast their
+vote on the proposal.
+
+```protobuf reference
+https://github.com/cosmos/cosmos-sdk/blob/v0.47.0-rc1/proto/cosmos/gov/v1/tx.proto#L92-L108
+```
+
+**State modifications:**
+
+* Record `Vote` of sender
+
+:::note
+Gas cost for this message has to take into account the future tallying of the vote in EndBlocker.
+:::
+
+## Events
+
+The governance module emits the following events:
+
+### EndBlocker
+
+| Type | Attribute Key | Attribute Value |
+|-------------------|-----------------|------------------|
+| inactive_proposal | proposal_id | {proposalID} |
+| inactive_proposal | proposal_result | {proposalResult} |
+| active_proposal | proposal_id | {proposalID} |
+| active_proposal | proposal_result | {proposalResult} |
+
+### Handlers
+
+#### MsgSubmitProposal
+
+| Type | Attribute Key | Attribute Value |
+|---------------------|---------------------|-----------------|
+| submit_proposal | proposal_id | {proposalID} |
+| submit_proposal [0] | voting_period_start | {proposalID} |
+| proposal_deposit | amount | {depositAmount} |
+| proposal_deposit | proposal_id | {proposalID} |
+| message | module | governance |
+| message | action | submit_proposal |
+| message | sender | {senderAddress} |
+
+* [0] Event only emitted if the voting period starts during the submission.
+
+#### MsgVote
+
+| Type | Attribute Key | Attribute Value |
+|---------------|---------------|-----------------|
+| proposal_vote | option | {voteOption} |
+| proposal_vote | proposal_id | {proposalID} |
+| message | module | governance |
+| message | action | vote |
+| message | sender | {senderAddress} |
+
+#### MsgVoteWeighted
+
+| Type | Attribute Key | Attribute Value |
+|---------------|---------------|-----------------------|
+| proposal_vote | option | {weightedVoteOptions} |
+| proposal_vote | proposal_id | {proposalID} |
+| message | module | governance |
+| message | action | vote |
+| message | sender | {senderAddress} |
+
+#### MsgDeposit
+
+| Type | Attribute Key | Attribute Value |
+|----------------------|---------------------|-----------------|
+| proposal_deposit | amount | {depositAmount} |
+| proposal_deposit | proposal_id | {proposalID} |
+| proposal_deposit [0] | voting_period_start | {proposalID} |
+| message | module | governance |
+| message | action | deposit |
+| message | sender | {senderAddress} |
+
+* [0] Event only emitted if the voting period starts during the submission.
+
+## Parameters
+
+The governance module contains the following parameters:
+
+| Key | Type | Example |
+|-------------------------------|------------------|-----------------------------------------|
+| min_deposit | array (coins) | [{"denom":"uatom","amount":"10000000"}] |
| max_deposit_period            | string (time ns) | "172800000000000" (172800s)             |
| voting_period                 | string (time ns) | "172800000000000" (172800s)             |
| quorum                        | string (dec)     | "0.334000000000000000"                  |
| threshold                     | string (dec)     | "0.500000000000000000"                  |
| veto                          | string (dec)     | "0.334000000000000000"                  |
| expedited_threshold           | string (dec)     | "0.667000000000000000"                  |
| expedited_voting_period       | string (time ns) | "86400000000000" (86400s)               |
+| expedited_min_deposit | array (coins) | [{"denom":"uatom","amount":"50000000"}] |
+| burn_proposal_deposit_prevote | bool | false |
+| burn_vote_quorum | bool | false |
+| burn_vote_veto | bool | true |
+| min_initial_deposit_ratio | string | "0.1" |
+
+
**NOTE**: Unlike other modules, the governance module's parameters are objects.
If only a subset of parameters needs to be changed, only those need to be
included, not the entire parameter object structure.
+
+## Client
+
+### CLI
+
+A user can query and interact with the `gov` module using the CLI.
+
+#### Query
+
+The `query` commands allow users to query `gov` state.
+
+```bash
+simd query gov --help
+```
+
+##### deposit
+
+The `deposit` command allows users to query a deposit for a given proposal from a given depositor.
+
+```bash
+simd query gov deposit [proposal-id] [depositor-addr] [flags]
+```
+
+Example:
+
+```bash
+simd query gov deposit 1 cosmos1..
+```
+
+Example Output:
+
+```bash
+amount:
+- amount: "100"
+ denom: stake
+depositor: cosmos1..
+proposal_id: "1"
+```
+
+##### deposits
+
+The `deposits` command allows users to query all deposits for a given proposal.
+
+```bash
+simd query gov deposits [proposal-id] [flags]
+```
+
+Example:
+
+```bash
+simd query gov deposits 1
+```
+
+Example Output:
+
+```bash
+deposits:
+- amount:
+ - amount: "100"
+ denom: stake
+ depositor: cosmos1..
+ proposal_id: "1"
+pagination:
+ next_key: null
+ total: "0"
+```
+
+##### param
+
+The `param` command allows users to query a given parameter for the `gov` module.
+
+```bash
+simd query gov param [param-type] [flags]
+```
+
+Example:
+
+```bash
+simd query gov param voting
+```
+
+Example Output:
+
+```bash
+voting_period: "172800000000000"
+```
+
+##### params
+
+The `params` command allows users to query all parameters for the `gov` module.
+
+```bash
+simd query gov params [flags]
+```
+
+Example:
+
+```bash
+simd query gov params
+```
+
+Example Output:
+
+```bash
+deposit_params:
+ max_deposit_period: 172800s
+ min_deposit:
+ - amount: "10000000"
+ denom: stake
+params:
+ expedited_min_deposit:
+ - amount: "50000000"
+ denom: stake
+ expedited_threshold: "0.670000000000000000"
+ expedited_voting_period: 86400s
+ max_deposit_period: 172800s
+ min_deposit:
+ - amount: "10000000"
+ denom: stake
+ min_initial_deposit_ratio: "0.000000000000000000"
+ proposal_cancel_burn_rate: "0.500000000000000000"
+ quorum: "0.334000000000000000"
+ threshold: "0.500000000000000000"
+ veto_threshold: "0.334000000000000000"
+ voting_period: 172800s
+tally_params:
+ quorum: "0.334000000000000000"
+ threshold: "0.500000000000000000"
+ veto_threshold: "0.334000000000000000"
+voting_params:
+ voting_period: 172800s
+```
+
+##### proposal
+
+The `proposal` command allows users to query a given proposal.
+
+```bash
+simd query gov proposal [proposal-id] [flags]
+```
+
+Example:
+
+```bash
+simd query gov proposal 1
+```
+
+Example Output:
+
+```bash
+deposit_end_time: "2022-03-30T11:50:20.819676256Z"
+final_tally_result:
+ abstain_count: "0"
+ no_count: "0"
+ no_with_veto_count: "0"
+ yes_count: "0"
+id: "1"
+messages:
+- '@type': /cosmos.bank.v1beta1.MsgSend
+ amount:
+ - amount: "10"
+ denom: stake
+ from_address: cosmos1..
+ to_address: cosmos1..
+metadata: AQ==
+status: PROPOSAL_STATUS_DEPOSIT_PERIOD
+submit_time: "2022-03-28T11:50:20.819676256Z"
+total_deposit:
+- amount: "10"
+ denom: stake
+voting_end_time: null
+voting_start_time: null
+```
+
+##### proposals
+
+The `proposals` command allows users to query all proposals with optional filters.
+
+```bash
+simd query gov proposals [flags]
+```
+
+Example:
+
+```bash
+simd query gov proposals
+```
+
+Example Output:
+
+```bash
+pagination:
+ next_key: null
+ total: "0"
+proposals:
+- deposit_end_time: "2022-03-30T11:50:20.819676256Z"
+ final_tally_result:
+ abstain_count: "0"
+ no_count: "0"
+ no_with_veto_count: "0"
+ yes_count: "0"
+ id: "1"
+ messages:
+ - '@type': /cosmos.bank.v1beta1.MsgSend
+ amount:
+ - amount: "10"
+ denom: stake
+ from_address: cosmos1..
+ to_address: cosmos1..
+ metadata: AQ==
+ status: PROPOSAL_STATUS_DEPOSIT_PERIOD
+ submit_time: "2022-03-28T11:50:20.819676256Z"
+ total_deposit:
+ - amount: "10"
+ denom: stake
+ voting_end_time: null
+ voting_start_time: null
+- deposit_end_time: "2022-03-30T14:02:41.165025015Z"
+ final_tally_result:
+ abstain_count: "0"
+ no_count: "0"
+ no_with_veto_count: "0"
+ yes_count: "0"
+ id: "2"
+ messages:
+ - '@type': /cosmos.bank.v1beta1.MsgSend
+ amount:
+ - amount: "10"
+ denom: stake
+ from_address: cosmos1..
+ to_address: cosmos1..
+ metadata: AQ==
+ status: PROPOSAL_STATUS_DEPOSIT_PERIOD
+ submit_time: "2022-03-28T14:02:41.165025015Z"
+ total_deposit:
+ - amount: "10"
+ denom: stake
+ voting_end_time: null
+ voting_start_time: null
+```
+
+##### proposer
+
+The `proposer` command allows users to query the proposer for a given proposal.
+
+```bash
+simd query gov proposer [proposal-id] [flags]
+```
+
+Example:
+
+```bash
+simd query gov proposer 1
+```
+
+Example Output:
+
+```bash
+proposal_id: "1"
+proposer: cosmos1..
+```
+
+##### tally
+
+The `tally` command allows users to query the tally of a given proposal vote.
+
+```bash
+simd query gov tally [proposal-id] [flags]
+```
+
+Example:
+
+```bash
+simd query gov tally 1
+```
+
+Example Output:
+
+```bash
+abstain: "0"
+"no": "0"
+no_with_veto: "0"
+"yes": "1"
+```
+
+##### vote
+
+The `vote` command allows users to query a vote for a given proposal.
+
+```bash
+simd query gov vote [proposal-id] [voter-addr] [flags]
+```
+
+Example:
+
+```bash
+simd query gov vote 1 cosmos1..
+```
+
+Example Output:
+
+```bash
+option: VOTE_OPTION_YES
+options:
+- option: VOTE_OPTION_YES
+ weight: "1.000000000000000000"
+proposal_id: "1"
+voter: cosmos1..
+```
+
+##### votes
+
+The `votes` command allows users to query all votes for a given proposal.
+
+```bash
+simd query gov votes [proposal-id] [flags]
+```
+
+Example:
+
+```bash
+simd query gov votes 1
+```
+
+Example Output:
+
+```bash
+pagination:
+ next_key: null
+ total: "0"
+votes:
+- option: VOTE_OPTION_YES
+ options:
+ - option: VOTE_OPTION_YES
+ weight: "1.000000000000000000"
+ proposal_id: "1"
+ voter: cosmos1..
+```
+
+#### Transactions
+
+The `tx` commands allow users to interact with the `gov` module.
+
+```bash
+simd tx gov --help
+```
+
+##### deposit
+
+The `deposit` command allows users to deposit tokens for a given proposal.
+
+```bash
+simd tx gov deposit [proposal-id] [deposit] [flags]
+```
+
+Example:
+
+```bash
+simd tx gov deposit 1 10000000stake --from cosmos1..
+```
+
+##### draft-proposal
+
+The `draft-proposal` command allows users to draft any type of proposal.
The command returns a `draft_proposal.json` to be completed and then used by `submit-proposal`.
The accompanying `draft_metadata.json` is meant to be uploaded to [IPFS](#metadata).
+
+```bash
+simd tx gov draft-proposal
+```
+
+##### submit-proposal
+
+The `submit-proposal` command allows users to submit a governance proposal along with some messages and metadata.
+Messages, metadata and deposit are defined in a JSON file.
+
+```bash
+simd tx gov submit-proposal [path-to-proposal-json] [flags]
+```
+
+Example:
+
+```bash
+simd tx gov submit-proposal /path/to/proposal.json --from cosmos1..
+```
+
+where `proposal.json` contains:
+
+```json
+{
+ "messages": [
+ {
+ "@type": "/cosmos.bank.v1beta1.MsgSend",
+ "from_address": "cosmos1...", // The gov module address
+ "to_address": "cosmos1...",
+ "amount":[{"denom": "stake","amount": "10"}]
+ }
+ ],
+ "metadata": "AQ==",
+ "deposit": "10stake",
+ "title": "Proposal Title",
+ "summary": "Proposal Summary"
+}
+```
+
:::note
By default, the metadata, summary and title are each limited to 255 characters; this can be overridden by the application developer.
:::

:::tip
When metadata is not specified, the title is limited to 255 characters and the summary to 40x the title length.
:::
+
+##### submit-legacy-proposal
+
+The `submit-legacy-proposal` command allows users to submit a governance legacy proposal along with an initial deposit.
+
+```bash
+simd tx gov submit-legacy-proposal [command] [flags]
+```
+
+Example:
+
+```bash
+simd tx gov submit-legacy-proposal --title="Test Proposal" --description="testing" --type="Text" --deposit="100000000stake" --from cosmos1..
+```
+
+Example (`param-change`):
+
+```bash
+simd tx gov submit-legacy-proposal param-change proposal.json --from cosmos1..
+```
+
+```json
+{
+ "title": "Test Proposal",
+ "description": "testing, testing, 1, 2, 3",
+ "changes": [
+ {
+ "subspace": "staking",
+ "key": "MaxValidators",
+ "value": 100
+ }
+ ],
+ "deposit": "10000000stake"
+}
+```
+
##### cancel-proposal

Once a proposal is canceled, `deposits * proposal_cancel_ratio` of the proposal's deposits are burned or sent to the `ProposalCancelDest` address; if `ProposalCancelDest` is empty, those deposits are burned. The remaining deposits are returned to the depositors.
+
+```bash
+simd tx gov cancel-proposal [proposal-id] [flags]
+```
+
+Example:
+
+```bash
+simd tx gov cancel-proposal 1 --from cosmos1...
+```
+
+##### vote
+
+The `vote` command allows users to submit a vote for a given governance proposal.
+
+```bash
+simd tx gov vote [command] [flags]
+```
+
+Example:
+
+```bash
+simd tx gov vote 1 yes --from cosmos1..
+```
+
+##### weighted-vote
+
+The `weighted-vote` command allows users to submit a weighted vote for a given governance proposal.
+
+```bash
+simd tx gov weighted-vote [proposal-id] [weighted-options] [flags]
+```
+
+Example:
+
+```bash
+simd tx gov weighted-vote 1 yes=0.5,no=0.5 --from cosmos1..
+```
+
+### gRPC
+
+A user can query the `gov` module using gRPC endpoints.
+
+#### Proposal
+
+The `Proposal` endpoint allows users to query a given proposal.
+
+Using legacy v1beta1:
+
+```bash
+cosmos.gov.v1beta1.Query/Proposal
+```
+
+Example:
+
+```bash
+grpcurl -plaintext \
+ -d '{"proposal_id":"1"}' \
+ localhost:9090 \
+ cosmos.gov.v1beta1.Query/Proposal
+```
+
+Example Output:
+
+```bash
+{
+ "proposal": {
+ "proposalId": "1",
+ "content": {"@type":"/cosmos.gov.v1beta1.TextProposal","description":"testing, testing, 1, 2, 3","title":"Test Proposal"},
+ "status": "PROPOSAL_STATUS_VOTING_PERIOD",
+ "finalTallyResult": {
+ "yes": "0",
+ "abstain": "0",
+ "no": "0",
+ "noWithVeto": "0"
+ },
+ "submitTime": "2021-09-16T19:40:08.712440474Z",
+ "depositEndTime": "2021-09-18T19:40:08.712440474Z",
+ "totalDeposit": [
+ {
+ "denom": "stake",
+ "amount": "10000000"
+ }
+ ],
+ "votingStartTime": "2021-09-16T19:40:08.712440474Z",
+ "votingEndTime": "2021-09-18T19:40:08.712440474Z",
+ "title": "Test Proposal",
+ "summary": "testing, testing, 1, 2, 3"
+ }
+}
+```
+
+Using v1:
+
+```bash
+cosmos.gov.v1.Query/Proposal
+```
+
+Example:
+
+```bash
+grpcurl -plaintext \
+ -d '{"proposal_id":"1"}' \
+ localhost:9090 \
+ cosmos.gov.v1.Query/Proposal
+```
+
+Example Output:
+
+```bash
+{
+ "proposal": {
+ "id": "1",
+ "messages": [
+ {"@type":"/cosmos.bank.v1beta1.MsgSend","amount":[{"denom":"stake","amount":"10"}],"fromAddress":"cosmos1..","toAddress":"cosmos1.."}
+ ],
+ "status": "PROPOSAL_STATUS_VOTING_PERIOD",
+ "finalTallyResult": {
+ "yesCount": "0",
+ "abstainCount": "0",
+ "noCount": "0",
+ "noWithVetoCount": "0"
+ },
+ "submitTime": "2022-03-28T11:50:20.819676256Z",
+ "depositEndTime": "2022-03-30T11:50:20.819676256Z",
+ "totalDeposit": [
+ {
+ "denom": "stake",
+ "amount": "10000000"
+ }
+ ],
+ "votingStartTime": "2022-03-28T14:25:26.644857113Z",
+ "votingEndTime": "2022-03-30T14:25:26.644857113Z",
+ "metadata": "AQ==",
+ "title": "Test Proposal",
+ "summary": "testing, testing, 1, 2, 3"
+ }
+}
+```
+
+#### Proposals
+
+The `Proposals` endpoint allows users to query all proposals with optional filters.
+
+Using legacy v1beta1:
+
+```bash
+cosmos.gov.v1beta1.Query/Proposals
+```
+
+Example:
+
+```bash
+grpcurl -plaintext \
+ localhost:9090 \
+ cosmos.gov.v1beta1.Query/Proposals
+```
+
+Example Output:
+
+```bash
+{
+ "proposals": [
+ {
+ "proposalId": "1",
+ "status": "PROPOSAL_STATUS_VOTING_PERIOD",
+ "finalTallyResult": {
+ "yes": "0",
+ "abstain": "0",
+ "no": "0",
+ "noWithVeto": "0"
+ },
+ "submitTime": "2022-03-28T11:50:20.819676256Z",
+ "depositEndTime": "2022-03-30T11:50:20.819676256Z",
+ "totalDeposit": [
+ {
+ "denom": "stake",
+ "amount": "10000000010"
+ }
+ ],
+ "votingStartTime": "2022-03-28T14:25:26.644857113Z",
+ "votingEndTime": "2022-03-30T14:25:26.644857113Z"
+ },
+ {
+ "proposalId": "2",
+ "status": "PROPOSAL_STATUS_DEPOSIT_PERIOD",
+ "finalTallyResult": {
+ "yes": "0",
+ "abstain": "0",
+ "no": "0",
+ "noWithVeto": "0"
+ },
+ "submitTime": "2022-03-28T14:02:41.165025015Z",
+ "depositEndTime": "2022-03-30T14:02:41.165025015Z",
+ "totalDeposit": [
+ {
+ "denom": "stake",
+ "amount": "10"
+ }
+ ],
+ "votingStartTime": "0001-01-01T00:00:00Z",
+ "votingEndTime": "0001-01-01T00:00:00Z"
+ }
+ ],
+ "pagination": {
+ "total": "2"
+ }
+}
+```
+
+Using v1:
+
+```bash
+cosmos.gov.v1.Query/Proposals
+```
+
+Example:
+
+```bash
+grpcurl -plaintext \
+ localhost:9090 \
+ cosmos.gov.v1.Query/Proposals
+```
+
+Example Output:
+
+```bash
+{
+ "proposals": [
+ {
+ "id": "1",
+ "messages": [
+ {"@type":"/cosmos.bank.v1beta1.MsgSend","amount":[{"denom":"stake","amount":"10"}],"fromAddress":"cosmos1..","toAddress":"cosmos1.."}
+ ],
+ "status": "PROPOSAL_STATUS_VOTING_PERIOD",
+ "finalTallyResult": {
+ "yesCount": "0",
+ "abstainCount": "0",
+ "noCount": "0",
+ "noWithVetoCount": "0"
+ },
+ "submitTime": "2022-03-28T11:50:20.819676256Z",
+ "depositEndTime": "2022-03-30T11:50:20.819676256Z",
+ "totalDeposit": [
+ {
+ "denom": "stake",
+ "amount": "10000000010"
+ }
+ ],
+ "votingStartTime": "2022-03-28T14:25:26.644857113Z",
+ "votingEndTime": "2022-03-30T14:25:26.644857113Z",
+ "metadata": "AQ==",
+ "title": "Proposal Title",
+ "summary": "Proposal Summary"
+ },
+ {
+ "id": "2",
+ "messages": [
+ {"@type":"/cosmos.bank.v1beta1.MsgSend","amount":[{"denom":"stake","amount":"10"}],"fromAddress":"cosmos1..","toAddress":"cosmos1.."}
+ ],
+ "status": "PROPOSAL_STATUS_DEPOSIT_PERIOD",
+ "finalTallyResult": {
+ "yesCount": "0",
+ "abstainCount": "0",
+ "noCount": "0",
+ "noWithVetoCount": "0"
+ },
+ "submitTime": "2022-03-28T14:02:41.165025015Z",
+ "depositEndTime": "2022-03-30T14:02:41.165025015Z",
+ "totalDeposit": [
+ {
+ "denom": "stake",
+ "amount": "10"
+ }
+ ],
+ "metadata": "AQ==",
+ "title": "Proposal Title",
+ "summary": "Proposal Summary"
+ }
+ ],
+ "pagination": {
+ "total": "2"
+ }
+}
+```
+
+#### Vote
+
+The `Vote` endpoint allows users to query a vote for a given proposal.
+
+Using legacy v1beta1:
+
+```bash
+cosmos.gov.v1beta1.Query/Vote
+```
+
+Example:
+
+```bash
+grpcurl -plaintext \
+ -d '{"proposal_id":"1","voter":"cosmos1.."}' \
+ localhost:9090 \
+ cosmos.gov.v1beta1.Query/Vote
+```
+
+Example Output:
+
+```bash
+{
+ "vote": {
+ "proposalId": "1",
+ "voter": "cosmos1..",
+ "option": "VOTE_OPTION_YES",
+ "options": [
+ {
+ "option": "VOTE_OPTION_YES",
+ "weight": "1000000000000000000"
+ }
+ ]
+ }
+}
+```
+
+Using v1:
+
+```bash
+cosmos.gov.v1.Query/Vote
+```
+
+Example:
+
+```bash
+grpcurl -plaintext \
+ -d '{"proposal_id":"1","voter":"cosmos1.."}' \
+ localhost:9090 \
+ cosmos.gov.v1.Query/Vote
+```
+
+Example Output:
+
+```bash
+{
+ "vote": {
+ "proposalId": "1",
+ "voter": "cosmos1..",
+ "option": "VOTE_OPTION_YES",
+ "options": [
+ {
+ "option": "VOTE_OPTION_YES",
+ "weight": "1.000000000000000000"
+ }
+ ]
+ }
+}
+```
+
+#### Votes
+
+The `Votes` endpoint allows users to query all votes for a given proposal.
+
+Using legacy v1beta1:
+
+```bash
+cosmos.gov.v1beta1.Query/Votes
+```
+
+Example:
+
+```bash
+grpcurl -plaintext \
+ -d '{"proposal_id":"1"}' \
+ localhost:9090 \
+ cosmos.gov.v1beta1.Query/Votes
+```
+
+Example Output:
+
+```bash
+{
+ "votes": [
+ {
+ "proposalId": "1",
+ "voter": "cosmos1..",
+ "options": [
+ {
+ "option": "VOTE_OPTION_YES",
+ "weight": "1000000000000000000"
+ }
+ ]
+ }
+ ],
+ "pagination": {
+ "total": "1"
+ }
+}
+```
+
+Using v1:
+
+```bash
+cosmos.gov.v1.Query/Votes
+```
+
+Example:
+
+```bash
+grpcurl -plaintext \
+ -d '{"proposal_id":"1"}' \
+ localhost:9090 \
+ cosmos.gov.v1.Query/Votes
+```
+
+Example Output:
+
+```bash
+{
+ "votes": [
+ {
+ "proposalId": "1",
+ "voter": "cosmos1..",
+ "options": [
+ {
+ "option": "VOTE_OPTION_YES",
+ "weight": "1.000000000000000000"
+ }
+ ]
+ }
+ ],
+ "pagination": {
+ "total": "1"
+ }
+}
+```
+
+#### Params
+
+The `Params` endpoint allows users to query all parameters for the `gov` module.
+
+Using legacy v1beta1:
+
+```bash
+cosmos.gov.v1beta1.Query/Params
+```
+
+Example:
+
+```bash
+grpcurl -plaintext \
+ -d '{"params_type":"voting"}' \
+ localhost:9090 \
+ cosmos.gov.v1beta1.Query/Params
+```
+
+Example Output:
+
+```bash
+{
+ "votingParams": {
+ "votingPeriod": "172800s"
+ },
+ "depositParams": {
+ "maxDepositPeriod": "0s"
+ },
+ "tallyParams": {
+ "quorum": "MA==",
+ "threshold": "MA==",
+ "vetoThreshold": "MA=="
+ }
+}
+```
+
+Using v1:
+
+```bash
+cosmos.gov.v1.Query/Params
+```
+
+Example:
+
+```bash
+grpcurl -plaintext \
+ -d '{"params_type":"voting"}' \
+ localhost:9090 \
+ cosmos.gov.v1.Query/Params
+```
+
+Example Output:
+
+```bash
+{
+ "votingParams": {
+ "votingPeriod": "172800s"
+ }
+}
+```
+
+#### Deposit
+
+The `Deposit` endpoint allows users to query a deposit for a given proposal from a given depositor.
+
+Using legacy v1beta1:
+
+```bash
+cosmos.gov.v1beta1.Query/Deposit
+```
+
+Example:
+
+```bash
+grpcurl -plaintext \
  -d '{"proposal_id":"1","depositor":"cosmos1.."}' \
+ localhost:9090 \
+ cosmos.gov.v1beta1.Query/Deposit
+```
+
+Example Output:
+
+```bash
+{
+ "deposit": {
+ "proposalId": "1",
+ "depositor": "cosmos1..",
+ "amount": [
+ {
+ "denom": "stake",
+ "amount": "10000000"
+ }
+ ]
+ }
+}
+```
+
+Using v1:
+
+```bash
+cosmos.gov.v1.Query/Deposit
+```
+
+Example:
+
+```bash
+grpcurl -plaintext \
+ -d '{"proposal_id":"1","depositor":"cosmos1.."}' \
+ localhost:9090 \
+ cosmos.gov.v1.Query/Deposit
+```
+
+Example Output:
+
+```json
+{
+ "deposit": {
+ "proposalId": "1",
+ "depositor": "cosmos1..",
+ "amount": [
+ {
+ "denom": "stake",
+ "amount": "10000000"
+ }
+ ]
+ }
+}
+```
+
+#### Deposits
+
+The `Deposits` endpoint allows users to query all deposits for a given proposal.
+
+Using legacy v1beta1:
+
+```bash
+cosmos.gov.v1beta1.Query/Deposits
+```
+
+Example:
+
+```bash
+grpcurl -plaintext \
+ -d '{"proposal_id":"1"}' \
+ localhost:9090 \
+ cosmos.gov.v1beta1.Query/Deposits
+```
+
+Example Output:
+
+```json
+{
+ "deposits": [
+ {
+ "proposalId": "1",
+ "depositor": "cosmos1..",
+ "amount": [
+ {
+ "denom": "stake",
+ "amount": "10000000"
+ }
+ ]
+ }
+ ],
+ "pagination": {
+ "total": "1"
+ }
+}
+```
+
+Using v1:
+
+```bash
+cosmos.gov.v1.Query/Deposits
+```
+
+Example:
+
+```bash
+grpcurl -plaintext \
+ -d '{"proposal_id":"1"}' \
+ localhost:9090 \
+ cosmos.gov.v1.Query/Deposits
+```
+
+Example Output:
+
+```json
+{
+ "deposits": [
+ {
+ "proposalId": "1",
+ "depositor": "cosmos1..",
+ "amount": [
+ {
+ "denom": "stake",
+ "amount": "10000000"
+ }
+ ]
+ }
+ ],
+ "pagination": {
+ "total": "1"
+ }
+}
+```
+
+#### TallyResult
+
+The `TallyResult` endpoint allows users to query the tally of a given proposal.
+
+Using legacy v1beta1:
+
+```bash
+cosmos.gov.v1beta1.Query/TallyResult
+```
+
+Example:
+
+```bash
+grpcurl -plaintext \
+ -d '{"proposal_id":"1"}' \
+ localhost:9090 \
+ cosmos.gov.v1beta1.Query/TallyResult
+```
+
+Example Output:
+
+```json
+{
+ "tally": {
+ "yes": "1000000",
+ "abstain": "0",
+ "no": "0",
+ "noWithVeto": "0"
+ }
+}
+```
+
+Using v1:
+
+```bash
+cosmos.gov.v1.Query/TallyResult
+```
+
+Example:
+
+```bash
+grpcurl -plaintext \
+ -d '{"proposal_id":"1"}' \
+ localhost:9090 \
+ cosmos.gov.v1.Query/TallyResult
+```
+
+Example Output:
+
+```json
+{
+ "tally": {
+ "yes": "1000000",
+ "abstain": "0",
+ "no": "0",
+ "noWithVeto": "0"
+ }
+}
+```
+
+### REST
+
+A user can query the `gov` module using REST endpoints.
+
+#### proposal
+
+The `proposals` endpoint allows users to query a given proposal.
+
+Using legacy v1beta1:
+
+```bash
+/cosmos/gov/v1beta1/proposals/{proposal_id}
+```
+
+Example:
+
+```bash
+curl localhost:1317/cosmos/gov/v1beta1/proposals/1
+```
+
+Example Output:
+
+```json
+{
+ "proposal": {
+ "proposal_id": "1",
+ "content": null,
+ "status": "PROPOSAL_STATUS_VOTING_PERIOD",
+ "final_tally_result": {
+ "yes": "0",
+ "abstain": "0",
+ "no": "0",
+ "no_with_veto": "0"
+ },
+ "submit_time": "2022-03-28T11:50:20.819676256Z",
+ "deposit_end_time": "2022-03-30T11:50:20.819676256Z",
+ "total_deposit": [
+ {
+ "denom": "stake",
+ "amount": "10000000010"
+ }
+ ],
+ "voting_start_time": "2022-03-28T14:25:26.644857113Z",
+ "voting_end_time": "2022-03-30T14:25:26.644857113Z"
+ }
+}
+```
+
+Using v1:
+
+```bash
+/cosmos/gov/v1/proposals/{proposal_id}
+```
+
+Example:
+
+```bash
+curl localhost:1317/cosmos/gov/v1/proposals/1
+```
+
+Example Output:
+
+```json
+{
+ "proposal": {
+ "id": "1",
+ "messages": [
+ {
+ "@type": "/cosmos.bank.v1beta1.MsgSend",
+ "from_address": "cosmos1..",
+ "to_address": "cosmos1..",
+ "amount": [
+ {
+ "denom": "stake",
+ "amount": "10"
+ }
+ ]
+ }
+ ],
+ "status": "PROPOSAL_STATUS_VOTING_PERIOD",
+ "final_tally_result": {
+ "yes_count": "0",
+ "abstain_count": "0",
+ "no_count": "0",
+ "no_with_veto_count": "0"
+ },
+ "submit_time": "2022-03-28T11:50:20.819676256Z",
+ "deposit_end_time": "2022-03-30T11:50:20.819676256Z",
+ "total_deposit": [
+ {
+ "denom": "stake",
+ "amount": "10000000"
+ }
+ ],
+ "voting_start_time": "2022-03-28T14:25:26.644857113Z",
+ "voting_end_time": "2022-03-30T14:25:26.644857113Z",
+ "metadata": "AQ==",
+ "title": "Proposal Title",
+ "summary": "Proposal Summary"
+ }
+}
+```
+
+#### proposals
+
+The `proposals` endpoint also allows users to query all proposals with optional filters.
+
+Using legacy v1beta1:
+
+```bash
+/cosmos/gov/v1beta1/proposals
+```
+
+Example:
+
+```bash
+curl localhost:1317/cosmos/gov/v1beta1/proposals
+```
+
+Example Output:
+
+```json
+{
+ "proposals": [
+ {
+ "proposal_id": "1",
+ "content": null,
+ "status": "PROPOSAL_STATUS_VOTING_PERIOD",
+ "final_tally_result": {
+ "yes": "0",
+ "abstain": "0",
+ "no": "0",
+ "no_with_veto": "0"
+ },
+ "submit_time": "2022-03-28T11:50:20.819676256Z",
+ "deposit_end_time": "2022-03-30T11:50:20.819676256Z",
+ "total_deposit": [
+ {
+ "denom": "stake",
+ "amount": "10000000"
+ }
+ ],
+ "voting_start_time": "2022-03-28T14:25:26.644857113Z",
+ "voting_end_time": "2022-03-30T14:25:26.644857113Z"
+ },
+ {
+ "proposal_id": "2",
+ "content": null,
+ "status": "PROPOSAL_STATUS_DEPOSIT_PERIOD",
+ "final_tally_result": {
+ "yes": "0",
+ "abstain": "0",
+ "no": "0",
+ "no_with_veto": "0"
+ },
+ "submit_time": "2022-03-28T14:02:41.165025015Z",
+ "deposit_end_time": "2022-03-30T14:02:41.165025015Z",
+ "total_deposit": [
+ {
+ "denom": "stake",
+ "amount": "10"
+ }
+ ],
+ "voting_start_time": "0001-01-01T00:00:00Z",
+ "voting_end_time": "0001-01-01T00:00:00Z"
+ }
+ ],
+ "pagination": {
+ "next_key": null,
+ "total": "2"
+ }
+}
+```
+
+Using v1:
+
+```bash
+/cosmos/gov/v1/proposals
+```
+
+Example:
+
+```bash
+curl localhost:1317/cosmos/gov/v1/proposals
+```
+
+Example Output:
+
+```json
+{
+ "proposals": [
+ {
+ "id": "1",
+ "messages": [
+ {
+ "@type": "/cosmos.bank.v1beta1.MsgSend",
+ "from_address": "cosmos1..",
+ "to_address": "cosmos1..",
+ "amount": [
+ {
+ "denom": "stake",
+ "amount": "10"
+ }
+ ]
+ }
+ ],
+ "status": "PROPOSAL_STATUS_VOTING_PERIOD",
+ "final_tally_result": {
+ "yes_count": "0",
+ "abstain_count": "0",
+ "no_count": "0",
+ "no_with_veto_count": "0"
+ },
+ "submit_time": "2022-03-28T11:50:20.819676256Z",
+ "deposit_end_time": "2022-03-30T11:50:20.819676256Z",
+ "total_deposit": [
+ {
+ "denom": "stake",
+ "amount": "10000000010"
+ }
+ ],
+ "voting_start_time": "2022-03-28T14:25:26.644857113Z",
+ "voting_end_time": "2022-03-30T14:25:26.644857113Z",
+ "metadata": "AQ==",
+ "title": "Proposal Title",
+ "summary": "Proposal Summary"
+ },
+ {
+ "id": "2",
+ "messages": [
+ {
+ "@type": "/cosmos.bank.v1beta1.MsgSend",
+ "from_address": "cosmos1..",
+ "to_address": "cosmos1..",
+ "amount": [
+ {
+ "denom": "stake",
+ "amount": "10"
+ }
+ ]
+ }
+ ],
+ "status": "PROPOSAL_STATUS_DEPOSIT_PERIOD",
+ "final_tally_result": {
+ "yes_count": "0",
+ "abstain_count": "0",
+ "no_count": "0",
+ "no_with_veto_count": "0"
+ },
+ "submit_time": "2022-03-28T14:02:41.165025015Z",
+ "deposit_end_time": "2022-03-30T14:02:41.165025015Z",
+ "total_deposit": [
+ {
+ "denom": "stake",
+ "amount": "10"
+ }
+ ],
+ "voting_start_time": null,
+ "voting_end_time": null,
+ "metadata": "AQ==",
+ "title": "Proposal Title",
+ "summary": "Proposal Summary"
+ }
+ ],
+ "pagination": {
+ "next_key": null,
+ "total": "2"
+ }
+}
+```
+
+#### voter vote
+
+The `votes` endpoint allows users to query a vote for a given proposal.
+
+Using legacy v1beta1:
+
+```bash
+/cosmos/gov/v1beta1/proposals/{proposal_id}/votes/{voter}
+```
+
+Example:
+
+```bash
+curl localhost:1317/cosmos/gov/v1beta1/proposals/1/votes/cosmos1..
+```
+
+Example Output:
+
+```json
+{
+ "vote": {
+ "proposal_id": "1",
+ "voter": "cosmos1..",
+ "option": "VOTE_OPTION_YES",
+ "options": [
+ {
+ "option": "VOTE_OPTION_YES",
+ "weight": "1.000000000000000000"
+ }
+ ]
+ }
+}
+```
+
+Using v1:
+
+```bash
+/cosmos/gov/v1/proposals/{proposal_id}/votes/{voter}
+```
+
+Example:
+
+```bash
+curl localhost:1317/cosmos/gov/v1/proposals/1/votes/cosmos1..
+```
+
+Example Output:
+
+```json
+{
+ "vote": {
+ "proposal_id": "1",
+ "voter": "cosmos1..",
+ "options": [
+ {
+ "option": "VOTE_OPTION_YES",
+ "weight": "1.000000000000000000"
+ }
+ ],
+ "metadata": ""
+ }
+}
+```
+
+#### votes
+
+The `votes` endpoint allows users to query all votes for a given proposal.
+
+Using legacy v1beta1:
+
+```bash
+/cosmos/gov/v1beta1/proposals/{proposal_id}/votes
+```
+
+Example:
+
+```bash
+curl localhost:1317/cosmos/gov/v1beta1/proposals/1/votes
+```
+
+Example Output:
+
+```json
+{
+ "votes": [
+ {
+ "proposal_id": "1",
+ "voter": "cosmos1..",
+ "option": "VOTE_OPTION_YES",
+ "options": [
+ {
+ "option": "VOTE_OPTION_YES",
+ "weight": "1.000000000000000000"
+ }
+ ]
+ }
+ ],
+ "pagination": {
+ "next_key": null,
+ "total": "1"
+ }
+}
+```
+
+Using v1:
+
+```bash
+/cosmos/gov/v1/proposals/{proposal_id}/votes
+```
+
+Example:
+
+```bash
+curl localhost:1317/cosmos/gov/v1/proposals/1/votes
+```
+
+Example Output:
+
+```json
+{
+ "votes": [
+ {
+ "proposal_id": "1",
+ "voter": "cosmos1..",
+ "options": [
+ {
+ "option": "VOTE_OPTION_YES",
+ "weight": "1.000000000000000000"
+ }
+ ],
+ "metadata": ""
+ }
+ ],
+ "pagination": {
+ "next_key": null,
+ "total": "1"
+ }
+}
+```
+
+#### params
+
+The `params` endpoint allows users to query all parameters for the `gov` module.
+
+Using legacy v1beta1:
+
+```bash
+/cosmos/gov/v1beta1/params/{params_type}
+```
+
+Example:
+
+```bash
+curl localhost:1317/cosmos/gov/v1beta1/params/voting
+```
+
+Example Output:
+
+```json
+{
+ "voting_params": {
+ "voting_period": "172800s"
+ },
+ "deposit_params": {
+ "min_deposit": [],
+ "max_deposit_period": "0s"
+ },
+ "tally_params": {
+ "quorum": "0.000000000000000000",
+ "threshold": "0.000000000000000000",
+ "veto_threshold": "0.000000000000000000"
+ }
+}
+```
+
+Using v1:
+
+```bash
+/cosmos/gov/v1/params/{params_type}
+```
+
+Example:
+
+```bash
+curl localhost:1317/cosmos/gov/v1/params/voting
+```
+
+Example Output:
+
+```json
+{
+ "voting_params": {
+ "voting_period": "172800s"
+ },
+ "deposit_params": {
+ "min_deposit": [],
+ "max_deposit_period": "0s"
+ },
+ "tally_params": {
+ "quorum": "0.000000000000000000",
+ "threshold": "0.000000000000000000",
+ "veto_threshold": "0.000000000000000000"
+ }
+}
+```
+
+#### deposits
+
+The `deposits` endpoint allows users to query a deposit for a given proposal from a given depositor.
+
+Using legacy v1beta1:
+
+```bash
+/cosmos/gov/v1beta1/proposals/{proposal_id}/deposits/{depositor}
+```
+
+Example:
+
+```bash
+curl localhost:1317/cosmos/gov/v1beta1/proposals/1/deposits/cosmos1..
+```
+
+Example Output:
+
+```json
+{
+ "deposit": {
+ "proposal_id": "1",
+ "depositor": "cosmos1..",
+ "amount": [
+ {
+ "denom": "stake",
+ "amount": "10000000"
+ }
+ ]
+ }
+}
+```
+
+Using v1:
+
+```bash
+/cosmos/gov/v1/proposals/{proposal_id}/deposits/{depositor}
+```
+
+Example:
+
+```bash
+curl localhost:1317/cosmos/gov/v1/proposals/1/deposits/cosmos1..
+```
+
+Example Output:
+
+```json
+{
+ "deposit": {
+ "proposal_id": "1",
+ "depositor": "cosmos1..",
+ "amount": [
+ {
+ "denom": "stake",
+ "amount": "10000000"
+ }
+ ]
+ }
+}
+```
+
+#### proposal deposits
+
+The `deposits` endpoint allows users to query all deposits for a given proposal.
+
+Using legacy v1beta1:
+
+```bash
+/cosmos/gov/v1beta1/proposals/{proposal_id}/deposits
+```
+
+Example:
+
+```bash
+curl localhost:1317/cosmos/gov/v1beta1/proposals/1/deposits
+```
+
+Example Output:
+
+```json
+{
+ "deposits": [
+ {
+ "proposal_id": "1",
+ "depositor": "cosmos1..",
+ "amount": [
+ {
+ "denom": "stake",
+ "amount": "10000000"
+ }
+ ]
+ }
+ ],
+ "pagination": {
+ "next_key": null,
+ "total": "1"
+ }
+}
+```
+
+Using v1:
+
+```bash
+/cosmos/gov/v1/proposals/{proposal_id}/deposits
+```
+
+Example:
+
+```bash
+curl localhost:1317/cosmos/gov/v1/proposals/1/deposits
+```
+
+Example Output:
+
+```json
+{
+ "deposits": [
+ {
+ "proposal_id": "1",
+ "depositor": "cosmos1..",
+ "amount": [
+ {
+ "denom": "stake",
+ "amount": "10000000"
+ }
+ ]
+ }
+ ],
+ "pagination": {
+ "next_key": null,
+ "total": "1"
+ }
+}
+```
+
+#### tally
+
+The `tally` endpoint allows users to query the tally of a given proposal.
+
+Using legacy v1beta1:
+
+```bash
+/cosmos/gov/v1beta1/proposals/{proposal_id}/tally
+```
+
+Example:
+
+```bash
+curl localhost:1317/cosmos/gov/v1beta1/proposals/1/tally
+```
+
+Example Output:
+
+```json
+{
+ "tally": {
+ "yes": "1000000",
+ "abstain": "0",
+ "no": "0",
+ "no_with_veto": "0"
+ }
+}
+```
+
+Using v1:
+
+```bash
+/cosmos/gov/v1/proposals/{proposal_id}/tally
+```
+
+Example:
+
+```bash
+curl localhost:1317/cosmos/gov/v1/proposals/1/tally
+```
+
+Example Output:
+
+```json
+{
+ "tally": {
+ "yes": "1000000",
+ "abstain": "0",
+ "no": "0",
+ "no_with_veto": "0"
+ }
+}
+```
+
+## Metadata
+
+The gov module has two locations for metadata where users can provide further context about the on-chain actions they are taking. By default, all metadata fields have a 255 character limit, so metadata can be stored in JSON format, either on-chain or off-chain depending on the amount of data required. Here we provide a recommendation for the JSON structure and where the data should be stored. There are two important factors in making these recommendations. First, the gov and group modules should be consistent with one another; note that the number of proposals made by all groups may be quite large. Second, client applications such as block explorers and governance interfaces should have confidence in the consistency of metadata structure across chains.
+
+### Proposal
+
+Location: off-chain as a JSON object stored on IPFS (mirrors [group proposal](../group/README.md#metadata))
+
+```json
+{
+ "title": "",
+ "authors": [""],
+ "summary": "",
+ "details": "",
+ "proposal_forum_url": "",
+ "vote_option_context": ""
+}
+```
+
+:::note
+The `authors` field is an array of strings; this allows multiple authors to be listed in the metadata.
+In v0.46, the `authors` field is a comma-separated string. Frontends are encouraged to support both formats for backwards compatibility.
+:::
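+
+Since frontends may encounter either format, a client can normalize the field before display. Below is a minimal sketch in Go; the `parseAuthors` helper is hypothetical and not part of the SDK:
+
+```go
+package main
+
+import (
+	"encoding/json"
+	"fmt"
+	"strings"
+)
+
+// parseAuthors normalizes the `authors` metadata field, which may be a
+// JSON array of strings (v0.47+) or a comma-separated string (v0.46).
+func parseAuthors(raw json.RawMessage) ([]string, error) {
+	var list []string
+	if err := json.Unmarshal(raw, &list); err == nil {
+		return list, nil
+	}
+	var s string
+	if err := json.Unmarshal(raw, &s); err != nil {
+		return nil, fmt.Errorf("authors is neither an array nor a string: %w", err)
+	}
+	parts := strings.Split(s, ",")
+	for i := range parts {
+		parts[i] = strings.TrimSpace(parts[i])
+	}
+	return parts, nil
+}
+
+func main() {
+	a, _ := parseAuthors(json.RawMessage(`["alice","bob"]`))
+	b, _ := parseAuthors(json.RawMessage(`"alice, bob"`))
+	fmt.Println(a, b) // both normalize to [alice bob]
+}
+```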
+
+### Vote
+
+Location: on-chain as JSON within the 255 character limit (mirrors [group vote](../group/README.md#metadata))
+
+```json
+{
+ "justification": ""
+}
+```
+
+## Future Improvements
+
+The current documentation only describes the minimum viable product for the
+governance module. Future improvements may include:
+
+* **`BountyProposals`:** If accepted, a `BountyProposal` creates an open
+ bounty. The `BountyProposal` specifies how many Atoms will be given upon
+ completion. These Atoms will be taken from the `reserve pool`. After a
+ `BountyProposal` is accepted by governance, anybody can submit a
+ `SoftwareUpgradeProposal` with the code to claim the bounty. Note that once a
+ `BountyProposal` is accepted, the corresponding funds in the `reserve pool`
+ are locked so that payment can always be honored. In order to link a
+ `SoftwareUpgradeProposal` to an open bounty, the submitter of the
+ `SoftwareUpgradeProposal` will use the `Proposal.LinkedProposal` attribute.
+ If a `SoftwareUpgradeProposal` linked to an open bounty is accepted by
+ governance, the funds that were reserved are automatically transferred to the
+ submitter.
+* **Complex delegation:** Delegators could choose other representatives than
+ their validators. Ultimately, the chain of representatives would always end
+ up to a validator, but delegators could inherit the vote of their chosen
+ representative before they inherit the vote of their validator. In other
+ words, they would only inherit the vote of their validator if their other
+ appointed representative did not vote.
+* **Better process for proposal review:** There would be two parts to
+ `proposal.Deposit`, one for anti-spam (same as in MVP) and another one to
+ reward third party auditors.
diff --git a/copy-of-sdk-docs/build/modules/group/README.md b/copy-of-sdk-docs/build/modules/group/README.md
new file mode 100644
index 00000000..98fd7ba9
--- /dev/null
+++ b/copy-of-sdk-docs/build/modules/group/README.md
@@ -0,0 +1,2168 @@
+---
+sidebar_position: 1
+---
+
+# `x/group`
+
+⚠️ **DEPRECATED**: This package is deprecated and will be removed in the next major release. The `x/group` module will be moved to a separate repo `github.com/cosmos/cosmos-sdk-legacy`.
+
+## Abstract
+
+The following documents specify the group module.
+
+This module allows the creation and management of on-chain multisig accounts and enables voting for message execution based on configurable decision policies.
+
+## Contents
+
+* [Concepts](#concepts)
+ * [Group](#group)
+ * [Group Policy](#group-policy)
+ * [Decision Policy](#decision-policy)
+ * [Proposal](#proposal)
+ * [Pruning](#pruning)
+* [State](#state)
+ * [Group Table](#group-table)
+ * [Group Member Table](#group-member-table)
+ * [Group Policy Table](#group-policy-table)
+ * [Proposal Table](#proposal-table)
+ * [Vote Table](#vote-table)
+* [Msg Service](#msg-service)
+ * [Msg/CreateGroup](#msgcreategroup)
+ * [Msg/UpdateGroupMembers](#msgupdategroupmembers)
+ * [Msg/UpdateGroupAdmin](#msgupdategroupadmin)
+ * [Msg/UpdateGroupMetadata](#msgupdategroupmetadata)
+ * [Msg/CreateGroupPolicy](#msgcreategrouppolicy)
+ * [Msg/CreateGroupWithPolicy](#msgcreategroupwithpolicy)
+ * [Msg/UpdateGroupPolicyAdmin](#msgupdategrouppolicyadmin)
+ * [Msg/UpdateGroupPolicyDecisionPolicy](#msgupdategrouppolicydecisionpolicy)
+ * [Msg/UpdateGroupPolicyMetadata](#msgupdategrouppolicymetadata)
+ * [Msg/SubmitProposal](#msgsubmitproposal)
+ * [Msg/WithdrawProposal](#msgwithdrawproposal)
+ * [Msg/Vote](#msgvote)
+ * [Msg/Exec](#msgexec)
+ * [Msg/LeaveGroup](#msgleavegroup)
+* [Events](#events)
+ * [EventCreateGroup](#eventcreategroup)
+ * [EventUpdateGroup](#eventupdategroup)
+ * [EventCreateGroupPolicy](#eventcreategrouppolicy)
+ * [EventUpdateGroupPolicy](#eventupdategrouppolicy)
+ * [EventCreateProposal](#eventcreateproposal)
+ * [EventWithdrawProposal](#eventwithdrawproposal)
+ * [EventVote](#eventvote)
+ * [EventExec](#eventexec)
+ * [EventLeaveGroup](#eventleavegroup)
+ * [EventProposalPruned](#eventproposalpruned)
+* [Client](#client)
+ * [CLI](#cli)
+ * [gRPC](#grpc)
+ * [REST](#rest)
+* [Metadata](#metadata)
+
+## Concepts
+
+### Group
+
+A group is simply an aggregation of accounts with associated weights. It is not
+an account and doesn't have a balance. It doesn't in and of itself have any
+sort of voting or decision weight. It does have an "administrator" which has
+the ability to add, remove and update members in the group. Note that a
+group policy account could be an administrator of a group, and that the
+administrator doesn't necessarily have to be a member of the group.
+
+### Group Policy
+
+A group policy is an account associated with a group and a decision policy.
+Group policies are abstracted from groups because a single group may have
+multiple decision policies for different types of actions. Managing group
+membership separately from decision policies results in the least overhead
+and keeps membership consistent across different policies. The recommended
+pattern is to have a single master group policy for a given group,
+and then to create separate group policies with different decision policies
+and delegate the desired permissions from the master account to
+those "sub-accounts" using the `x/authz` module.
+
+### Decision Policy
+
+A decision policy is the mechanism by which members of a group can vote on
+proposals, as well as the rules that dictate whether a proposal should pass
+or not based on its tally outcome.
+
+All decision policies generally have a minimum execution period and a
+maximum voting window. The minimum execution period is the minimum amount of time
+that must pass after submission in order for a proposal to potentially be executed, and it may
+be set to 0. The maximum voting window is the maximum time after submission that a proposal may
+be voted on before it is tallied.
+
+The chain developer also defines an app-wide maximum execution period, which is
+the maximum amount of time after a proposal's voting period end during which
+users are allowed to execute a proposal.
+
+The current group module comes shipped with two decision policies: threshold
+and percentage. Any chain developer can extend upon these two, by creating
+custom decision policies, as long as they adhere to the `DecisionPolicy`
+interface:
+
+```go reference
+https://github.com/cosmos/cosmos-sdk/tree/release/v0.50.x/x/group/types.go#L27-L45
+```
+
+#### Threshold decision policy
+
+A threshold decision policy defines a threshold of yes votes (based on a tally
+of voter weights) that must be achieved in order for a proposal to pass. For
+this decision policy, abstain and veto are simply treated as no's.
+
+This decision policy also has a VotingPeriod window and a MinExecutionPeriod
+window. The former defines the duration after proposal submission where members
+are allowed to vote, after which tallying is performed. The latter specifies
+the minimum duration after proposal submission where the proposal can be
+executed. If set to 0, then the proposal is allowed to be executed immediately
+on submission (using the `TRY_EXEC` option). Note that MinExecutionPeriod
+cannot be greater than VotingPeriod+MaxExecutionPeriod (where MaxExecutionPeriod
+is the app-defined duration that specifies the window after voting ends in which
+a proposal can be executed).
+
+#### Percentage decision policy
+
+A percentage decision policy is similar to a threshold decision policy, except
+that the threshold is not defined as a constant weight, but as a percentage.
+It's more suited for groups where the group members' weights can be updated, as
+the percentage threshold stays the same, and doesn't depend on how those member
+weights get updated.
+
+Same as the Threshold decision policy, the percentage decision policy has the
+two VotingPeriod and MinExecutionPeriod parameters.
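+
+As a rough illustration of the difference between the two shipped policies, their pass conditions can be sketched as follows. This is a simplification with hypothetical function names and integer weights; the SDK operates on decimal strings and also checks whether an outcome is still reachable mid-voting:
+
+```go
+package main
+
+import "fmt"
+
+// TallyResult holds summed voter weights per option (simplified to
+// integers; the SDK uses decimal strings).
+type TallyResult struct {
+	Yes, No, Abstain, Veto int64
+}
+
+// thresholdAllows: the summed yes weights must reach a constant threshold.
+// Abstain and veto simply count as not-yes.
+func thresholdAllows(t TallyResult, threshold int64) bool {
+	return t.Yes >= threshold
+}
+
+// percentageAllows: the yes weights must reach a percentage of the total
+// group weight, so the rule survives member-weight updates.
+func percentageAllows(t TallyResult, totalWeight int64, pct float64) bool {
+	return float64(t.Yes) >= pct*float64(totalWeight)
+}
+
+func main() {
+	t := TallyResult{Yes: 3, No: 1, Abstain: 2}
+	fmt.Println(thresholdAllows(t, 3))        // yes weight 3 meets threshold 3
+	fmt.Println(percentageAllows(t, 10, 0.5)) // 3 is less than 50% of 10
+}
+```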
+
+### Proposal
+
+Any member(s) of a group can submit a proposal for a group policy account to decide upon.
+A proposal consists of a set of messages that will be executed if the proposal
+passes as well as any metadata associated with the proposal.
+
+#### Voting
+
+There are four voting options: yes, no, abstain and veto. Not all decision
+policies take all four options into account. Votes can contain some optional metadata.
+In the current implementation, the voting window begins as soon as a proposal
+is submitted, and the end is defined by the group policy's decision policy.
+
+#### Withdrawing Proposals
+
+Proposals can be withdrawn any time before the voting period end, either by the
+admin of the group policy or by one of the proposers. Once withdrawn, a proposal
+is marked as `PROPOSAL_STATUS_WITHDRAWN`, and no more voting or execution is
+allowed on it.
+
+#### Aborted Proposals
+
+If the group policy is updated during the voting period of the proposal, then
+the proposal is marked as `PROPOSAL_STATUS_ABORTED`, and no more voting or
+execution is allowed on it. This is because the group policy defines the rules
+of proposal voting and execution, so if those rules change during the lifecycle
+of a proposal, then the proposal should be marked as stale.
+
+#### Tallying
+
+Tallying is the counting of all votes on a proposal. It happens only once in
+the lifecycle of a proposal, but can be triggered by two factors, whichever
+happens first:
+
+* either someone tries to execute the proposal (see next section), which can
+ happen on a `Msg/Exec` transaction, or a `Msg/{SubmitProposal,Vote}`
+ transaction with the `Exec` field set. When a proposal execution is attempted,
+ a tally is done first to make sure the proposal passes.
+* or on `EndBlock` when the proposal's voting period end just passed.
+
+If the tally result passes the decision policy's rules, then the proposal is
+marked as `PROPOSAL_STATUS_ACCEPTED`, or else it is marked as
+`PROPOSAL_STATUS_REJECTED`. In either case, no more voting is allowed, and the tally
+result is persisted to state in the proposal's `FinalTallyResult`.
+
+#### Executing Proposals
+
+Proposals are executed only when the tallying is done, and the group account's
+decision policy allows the proposal to pass based on the tally outcome. They
+are marked by the status `PROPOSAL_STATUS_ACCEPTED`. Execution must happen
+before a duration of `MaxExecutionPeriod` (set by the chain developer) after
+each proposal's voting period end.
+
+Proposals will not be automatically executed by the chain in the current design,
+but rather a user must submit a `Msg/Exec` transaction to attempt to execute the
+proposal based on the current votes and decision policy. Any user (not only the
+group members) can execute proposals that have been accepted, and execution fees are
+paid by the proposal executor.
+It's also possible to try to execute a proposal immediately on creation or on
+new votes using the `Exec` field of `Msg/SubmitProposal` and `Msg/Vote` requests.
+In the former case, the proposers' signatures are considered as yes votes.
+In these cases, if the proposal can't be executed (i.e. it didn't pass the
+decision policy's rules), it remains open for new votes and
+could be tallied and executed later on.
+
+A successful proposal execution will have its `ExecutorResult` marked as
+`PROPOSAL_EXECUTOR_RESULT_SUCCESS`. The proposal will be automatically pruned
+after execution. On the other hand, a failed proposal execution will be marked
+as `PROPOSAL_EXECUTOR_RESULT_FAILURE`. Such a proposal can be re-executed
+multiple times, until it expires after `MaxExecutionPeriod` after voting period
+end.
+
+### Pruning
+
+Proposals and votes are automatically pruned to avoid state bloat.
+
+Votes are pruned:
+
+* either after a successful tally, i.e. a tally whose result passes the decision
+ policy's rules, which can be triggered by a `Msg/Exec` or a
+ `Msg/{SubmitProposal,Vote}` with the `Exec` field set,
+* or on `EndBlock` right after the proposal's voting period end. This applies to proposals with status `aborted` or `withdrawn` too.
+
+whichever happens first.
+
+Proposals are pruned:
+
+* on `EndBlock` if the proposal's status is `withdrawn` or `aborted` when its voting period ends, before tallying,
+* and either after a successful proposal execution,
+* or on `EndBlock` right after the proposal's `voting_period_end` +
+ `max_execution_period` (defined as an app-wide configuration) is passed,
+
+whichever happens first.
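+
+The proposal-pruning rules above can be sketched as a single check. The `Status` type and helper name below are hypothetical; the SDK's actual pruning is driven by the ORM indexes described in the next section:
+
+```go
+package main
+
+import (
+	"fmt"
+	"time"
+)
+
+// Status is an illustrative proposal status enum.
+type Status int
+
+const (
+	StatusSubmitted Status = iota
+	StatusWithdrawn
+	StatusAborted
+	StatusAccepted
+)
+
+// shouldPruneAtEndBlock reports whether a proposal is pruned at an
+// EndBlock occurring at time now, given its status and timing parameters.
+func shouldPruneAtEndBlock(s Status, votingPeriodEnd time.Time, maxExecutionPeriod time.Duration, now time.Time) bool {
+	// withdrawn/aborted proposals are pruned once the voting period ends
+	if (s == StatusWithdrawn || s == StatusAborted) && !now.Before(votingPeriodEnd) {
+		return true
+	}
+	// all remaining proposals are pruned once
+	// voting_period_end + max_execution_period has passed
+	return !now.Before(votingPeriodEnd.Add(maxExecutionPeriod))
+}
+
+func main() {
+	end := time.Date(2024, 1, 1, 0, 0, 0, 0, time.UTC)
+	fmt.Println(shouldPruneAtEndBlock(StatusAborted, end, 48*time.Hour, end))                  // pruned at voting period end
+	fmt.Println(shouldPruneAtEndBlock(StatusSubmitted, end, 48*time.Hour, end.Add(time.Hour))) // still within execution window
+}
+```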
+
+## State
+
+The `group` module uses the `orm` package which provides table storage with support for
+primary keys and secondary indexes. `orm` also defines `Sequence`, a persistent unique key generator based on a counter that can be used along with `Table`s.
+
+Here's the list of tables and associated sequences and indexes stored as part of the `group` module.
+
+### Group Table
+
+The `groupTable` stores `GroupInfo`: `0x0 | BigEndian(GroupId) -> ProtocolBuffer(GroupInfo)`.
+
+#### groupSeq
+
+The value of `groupSeq` is incremented when creating a new group and corresponds to the new `GroupId`: `0x1 | 0x1 -> BigEndian`.
+
+The second `0x1` corresponds to the ORM `sequenceStorageKey`.
+
+#### groupByAdminIndex
+
+`groupByAdminIndex` allows retrieving groups by admin address:
+`0x2 | len([]byte(group.Admin)) | []byte(group.Admin) | BigEndian(GroupId) -> []byte()`.
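+
+The byte layout above can be made concrete with a short sketch. The `groupByAdminIndexKey` helper is hypothetical; real keys are produced by the SDK's `orm` package:
+
+```go
+package main
+
+import (
+	"encoding/binary"
+	"fmt"
+)
+
+// groupByAdminIndexKey builds the layout described above:
+// 0x2 | len(admin) | admin | BigEndian(GroupId).
+func groupByAdminIndexKey(admin string, groupID uint64) []byte {
+	key := []byte{0x2}
+	key = append(key, byte(len(admin))) // length prefix for the admin address
+	key = append(key, admin...)
+	idBytes := make([]byte, 8)
+	binary.BigEndian.PutUint64(idBytes, groupID) // fixed-width big-endian group id
+	return append(key, idBytes...)
+}
+
+func main() {
+	k := groupByAdminIndexKey("cosmos1admin", 7)
+	fmt.Printf("%x\n", k) // prefix 02, length 0c, admin bytes, 8-byte big-endian id
+}
+```
+
+The length prefix keeps keys for different admin addresses from colliding, and the fixed-width big-endian id makes groups of the same admin iterate in id order.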
+
+### Group Member Table
+
+The `groupMemberTable` stores `GroupMember`s: `0x10 | BigEndian(GroupId) | []byte(member.Address) -> ProtocolBuffer(GroupMember)`.
+
+The `groupMemberTable` is a primary key table and its `PrimaryKey` is given by
+`BigEndian(GroupId) | []byte(member.Address)` which is used by the following indexes.
+
+#### groupMemberByGroupIndex
+
+`groupMemberByGroupIndex` allows retrieving group members by group id:
+`0x11 | BigEndian(GroupId) | PrimaryKey -> []byte()`.
+
+#### groupMemberByMemberIndex
+
+`groupMemberByMemberIndex` allows retrieving group members by member address:
+`0x12 | len([]byte(member.Address)) | []byte(member.Address) | PrimaryKey -> []byte()`.
+
+### Group Policy Table
+
+The `groupPolicyTable` stores `GroupPolicyInfo`: `0x20 | len([]byte(Address)) | []byte(Address) -> ProtocolBuffer(GroupPolicyInfo)`.
+
+The `groupPolicyTable` is a primary key table and its `PrimaryKey` is given by
+`len([]byte(Address)) | []byte(Address)` which is used by the following indexes.
+
+#### groupPolicySeq
+
+The value of `groupPolicySeq` is incremented when creating a new group policy and is used to generate the new group policy account `Address`:
+`0x21 | 0x1 -> BigEndian`.
+
+The second `0x1` corresponds to the ORM `sequenceStorageKey`.
+
+#### groupPolicyByGroupIndex
+
+`groupPolicyByGroupIndex` allows retrieving group policies by group id:
+`0x22 | BigEndian(GroupId) | PrimaryKey -> []byte()`.
+
+#### groupPolicyByAdminIndex
+
+`groupPolicyByAdminIndex` allows retrieving group policies by admin address:
+`0x23 | len([]byte(Address)) | []byte(Address) | PrimaryKey -> []byte()`.
+
+### Proposal Table
+
+The `proposalTable` stores `Proposal`s: `0x30 | BigEndian(ProposalId) -> ProtocolBuffer(Proposal)`.
+
+#### proposalSeq
+
+The value of `proposalSeq` is incremented when creating a new proposal and corresponds to the new `ProposalId`: `0x31 | 0x1 -> BigEndian`.
+
+The second `0x1` corresponds to the ORM `sequenceStorageKey`.
+
+#### proposalByGroupPolicyIndex
+
+`proposalByGroupPolicyIndex` allows retrieving proposals by group policy account address:
+`0x32 | len([]byte(account.Address)) | []byte(account.Address) | BigEndian(ProposalId) -> []byte()`.
+
+#### proposalsByVotingPeriodEndIndex
+
+`proposalsByVotingPeriodEndIndex` allows retrieving proposals sorted by chronological `voting_period_end`:
+`0x33 | sdk.FormatTimeBytes(proposal.VotingPeriodEnd) | BigEndian(ProposalId) -> []byte()`.
+
+This index is used when tallying the proposal votes at the end of the voting period, and for pruning proposals at `VotingPeriodEnd + MaxExecutionPeriod`.
+
+### Vote Table
+
+The `voteTable` stores `Vote`s: `0x40 | BigEndian(ProposalId) | []byte(voter.Address) -> ProtocolBuffer(Vote)`.
+
+The `voteTable` is a primary key table and its `PrimaryKey` is given by
+`BigEndian(ProposalId) | []byte(voter.Address)` which is used by the following indexes.
+
+#### voteByProposalIndex
+
+`voteByProposalIndex` allows retrieving votes by proposal id:
+`0x41 | BigEndian(ProposalId) | PrimaryKey -> []byte()`.
+
+#### voteByVoterIndex
+
+`voteByVoterIndex` allows retrieving votes by voter address:
+`0x42 | len([]byte(voter.Address)) | []byte(voter.Address) | PrimaryKey -> []byte()`.
+
+## Msg Service
+
+### Msg/CreateGroup
+
+A new group can be created with the `MsgCreateGroup`, which has an admin address, a list of members and some optional metadata.
+
+The metadata has a maximum length that is chosen by the app developer, and
+passed into the group keeper as a config.
+
+```go reference
+https://github.com/cosmos/cosmos-sdk/tree/release/v0.50.x/proto/cosmos/group/v1/tx.proto#L67-L80
+```
+
+It's expected to fail if:
+
+* metadata length is greater than `MaxMetadataLen` config
+* members are not correctly set (e.g. wrong address format, duplicates, or with 0 weight).
+
+### Msg/UpdateGroupMembers
+
+Group members can be updated with the `UpdateGroupMembers`.
+
+```go reference
+https://github.com/cosmos/cosmos-sdk/tree/release/v0.50.x/proto/cosmos/group/v1/tx.proto#L88-L102
+```
+
+In the list of `MemberUpdates`, an existing member can be removed by setting its weight to 0.
+
+It's expected to fail if:
+
+* the signer is not the admin of the group.
+* for any one of the associated group policies, if its decision policy's `Validate()` method fails against the updated group.
+
+### Msg/UpdateGroupAdmin
+
+The `UpdateGroupAdmin` can be used to update a group admin.
+
+```go reference
+https://github.com/cosmos/cosmos-sdk/tree/release/v0.50.x/proto/cosmos/group/v1/tx.proto#L107-L120
+```
+
+It's expected to fail if the signer is not the admin of the group.
+
+### Msg/UpdateGroupMetadata
+
+The `UpdateGroupMetadata` can be used to update a group's metadata.
+
+```go reference
+https://github.com/cosmos/cosmos-sdk/tree/release/v0.50.x/proto/cosmos/group/v1/tx.proto#L125-L138
+```
+
+It's expected to fail if:
+
+* new metadata length is greater than `MaxMetadataLen` config.
+* the signer is not the admin of the group.
+
+### Msg/CreateGroupPolicy
+
+A new group policy can be created with the `MsgCreateGroupPolicy`, which has an admin address, a group id, a decision policy and some optional metadata.
+
+```go reference
+https://github.com/cosmos/cosmos-sdk/tree/release/v0.50.x/proto/cosmos/group/v1/tx.proto#L147-L165
+```
+
+It's expected to fail if:
+
+* the signer is not the admin of the group.
+* metadata length is greater than `MaxMetadataLen` config.
+* the decision policy's `Validate()` method doesn't pass against the group.
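+
+One of the checks the decision policy's `Validate()` can perform against the group is sketched below (simplified and hypothetical: weights are plain integers here, whereas the SDK works with decimal strings):
+
+```go
+package main
+
+import (
+	"errors"
+	"fmt"
+)
+
+// thresholdValidate sketches a ThresholdDecisionPolicy-style check: a
+// threshold larger than the group's total voting weight could never be
+// reached, so such a policy is rejected as unsatisfiable.
+func thresholdValidate(threshold, groupTotalWeight uint64) error {
+	if threshold == 0 {
+		return errors.New("threshold must be positive")
+	}
+	if threshold > groupTotalWeight {
+		return errors.New("threshold exceeds group total weight")
+	}
+	return nil
+}
+
+func main() {
+	// a threshold of 5 on a group with total weight 3 is unsatisfiable
+	fmt.Println(thresholdValidate(5, 3))
+}
+```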
+
+### Msg/CreateGroupWithPolicy
+
+A new group with policy can be created with the `MsgCreateGroupWithPolicy`, which has an admin address, a list of members, a decision policy, a `group_policy_as_admin` field to optionally set the group policy address as the admin of both the group and the group policy, and some optional metadata for the group and the group policy.
+
+```go reference
+https://github.com/cosmos/cosmos-sdk/tree/release/v0.50.x/proto/cosmos/group/v1/tx.proto#L191-L215
+```
+
+It's expected to fail for the same reasons as `Msg/CreateGroup` and `Msg/CreateGroupPolicy`.
+
+### Msg/UpdateGroupPolicyAdmin
+
+The `MsgUpdateGroupPolicyAdmin` can be used to update a group policy's admin.
+
+```go reference
+https://github.com/cosmos/cosmos-sdk/tree/release/v0.50.x/proto/cosmos/group/v1/tx.proto#L173-L186
+```
+
+It's expected to fail if the signer is not the admin of the group policy.
+
+### Msg/UpdateGroupPolicyDecisionPolicy
+
+The `MsgUpdateGroupPolicyDecisionPolicy` can be used to update a group policy's decision policy.
+
+```go reference
+https://github.com/cosmos/cosmos-sdk/tree/release/v0.50.x/proto/cosmos/group/v1/tx.proto#L226-L241
+```
+
+It's expected to fail if:
+
+* the signer is not the admin of the group policy.
+* the new decision policy's `Validate()` method doesn't pass against the group.
+
+### Msg/UpdateGroupPolicyMetadata
+
+The `MsgUpdateGroupPolicyMetadata` can be used to update a group policy's metadata.
+
+```go reference
+https://github.com/cosmos/cosmos-sdk/tree/release/v0.50.x/proto/cosmos/group/v1/tx.proto#L246-L259
+```
+
+It's expected to fail if:
+
+* new metadata length is greater than `MaxMetadataLen` config.
+* the signer is not the admin of the group policy.
+
+### Msg/SubmitProposal
+
+A new proposal can be created with the `MsgSubmitProposal`, which has a group policy account address, a list of proposer addresses, a list of messages to execute if the proposal is accepted, and some optional metadata.
+An optional `Exec` value can be provided to try to execute the proposal immediately after its creation; in that case, the proposers' signatures are counted as yes votes.
+
+```go reference
+https://github.com/cosmos/cosmos-sdk/tree/release/v0.50.x/proto/cosmos/group/v1/tx.proto#L281-L315
+```
+
+It's expected to fail if:
+
+* metadata, title, or summary length is greater than `MaxMetadataLen` config.
+* any of the proposers is not a group member.
+
+### Msg/WithdrawProposal
+
+A proposal can be withdrawn using `MsgWithdrawProposal`, which has an `address` (either a proposer or the group policy admin) and the `proposal_id` of the proposal to withdraw.
+
+```go reference
+https://github.com/cosmos/cosmos-sdk/tree/release/v0.50.x/proto/cosmos/group/v1/tx.proto#L323-L333
+```
+
+It's expected to fail if:
+
+* the signer is neither the group policy admin nor proposer of the proposal.
+* the proposal is already closed or aborted.
+
+### Msg/Vote
+
+A new vote can be created with the `MsgVote`, given a proposal id, a voter address, a choice (yes, no, veto or abstain) and some optional metadata.
+An optional `Exec` value can be provided to try to execute the proposal immediately after voting.
+
+```go reference
+https://github.com/cosmos/cosmos-sdk/tree/release/v0.50.x/proto/cosmos/group/v1/tx.proto#L338-L358
+```
+
+It's expected to fail if:
+
+* metadata length is greater than `MaxMetadataLen` config.
+* the proposal is not in voting period anymore.
+
+### Msg/Exec
+
+A proposal can be executed with the `MsgExec`.
+
+```go reference
+https://github.com/cosmos/cosmos-sdk/tree/release/v0.50.x/proto/cosmos/group/v1/tx.proto#L363-L373
+```
+
+The messages that are part of this proposal won't be executed if:
+
+* the proposal has not been accepted by the group policy.
+* the proposal has already been successfully executed.
+
+### Msg/LeaveGroup
+
+The `MsgLeaveGroup` allows a group member to leave a group.
+
+```go reference
+https://github.com/cosmos/cosmos-sdk/tree/release/v0.50.x/proto/cosmos/group/v1/tx.proto#L381-L391
+```
+
+It's expected to fail if:
+
+* the group member is not part of the group.
+* the decision policy of any associated group policy fails its `Validate()` check against the updated group.
+
+## Events
+
+The group module emits the following events:
+
+### EventCreateGroup
+
+| Type | Attribute Key | Attribute Value |
+| -------------------------------- | ------------- | -------------------------------- |
+| message | action | /cosmos.group.v1.Msg/CreateGroup |
+| cosmos.group.v1.EventCreateGroup | group_id | {groupId} |
+
+### EventUpdateGroup
+
+| Type | Attribute Key | Attribute Value |
+| -------------------------------- | ------------- | ---------------------------------------------------------- |
+| message | action | /cosmos.group.v1.Msg/UpdateGroup{Admin\|Metadata\|Members} |
+| cosmos.group.v1.EventUpdateGroup | group_id | {groupId} |
+
+### EventCreateGroupPolicy
+
+| Type | Attribute Key | Attribute Value |
+| -------------------------------------- | ------------- | -------------------------------------- |
+| message | action | /cosmos.group.v1.Msg/CreateGroupPolicy |
+| cosmos.group.v1.EventCreateGroupPolicy | address | {groupPolicyAddress} |
+
+### EventUpdateGroupPolicy
+
+| Type | Attribute Key | Attribute Value |
+| -------------------------------------- | ------------- | ----------------------------------------------------------------------- |
+| message | action | /cosmos.group.v1.Msg/UpdateGroupPolicy{Admin\|Metadata\|DecisionPolicy} |
+| cosmos.group.v1.EventUpdateGroupPolicy | address | {groupPolicyAddress} |
+
+### EventCreateProposal
+
+| Type | Attribute Key | Attribute Value |
+| ----------------------------------- | ------------- | ----------------------------------- |
+| message | action | /cosmos.group.v1.Msg/CreateProposal |
+| cosmos.group.v1.EventCreateProposal | proposal_id | {proposalId} |
+
+### EventWithdrawProposal
+
+| Type | Attribute Key | Attribute Value |
+| ------------------------------------- | ------------- | ------------------------------------- |
+| message | action | /cosmos.group.v1.Msg/WithdrawProposal |
+| cosmos.group.v1.EventWithdrawProposal | proposal_id | {proposalId} |
+
+### EventVote
+
+| Type | Attribute Key | Attribute Value |
+| ------------------------- | ------------- | ------------------------- |
+| message | action | /cosmos.group.v1.Msg/Vote |
+| cosmos.group.v1.EventVote | proposal_id | {proposalId} |
+
+### EventExec
+
+| Type | Attribute Key | Attribute Value |
+| ------------------------- | ------------- | ------------------------- |
+| message | action | /cosmos.group.v1.Msg/Exec |
+| cosmos.group.v1.EventExec | proposal_id | {proposalId} |
+| cosmos.group.v1.EventExec | logs | {logs_string} |
+
+### EventLeaveGroup
+
+| Type | Attribute Key | Attribute Value |
+| ------------------------------- | ------------- | ------------------------------- |
+| message | action | /cosmos.group.v1.Msg/LeaveGroup |
+| cosmos.group.v1.EventLeaveGroup | group_id      | {groupId}                       |
+| cosmos.group.v1.EventLeaveGroup | address | {address} |
+
+### EventProposalPruned
+
+| Type | Attribute Key | Attribute Value |
+| ----------------------------------- | ------------- | ------------------------------- |
+| message | action | /cosmos.group.v1.Msg/LeaveGroup |
+| cosmos.group.v1.EventProposalPruned | proposal_id | {proposalId} |
+| cosmos.group.v1.EventProposalPruned | status | {ProposalStatus} |
+| cosmos.group.v1.EventProposalPruned | tally_result | {TallyResult} |
+
+## Client
+
+### CLI
+
+A user can query and interact with the `group` module using the CLI.
+
+#### Query
+
+The `query` commands allow users to query `group` state.
+
+```bash
+simd query group --help
+```
+
+##### group-info
+
+The `group-info` command allows users to query for group info by a given group id.
+
+```bash
+simd query group group-info [id] [flags]
+```
+
+Example:
+
+```bash
+simd query group group-info 1
+```
+
+Example Output:
+
+```bash
+admin: cosmos1..
+group_id: "1"
+metadata: AQ==
+total_weight: "3"
+version: "1"
+```
+
+##### group-policy-info
+
+The `group-policy-info` command allows users to query for group policy info by the account address of the group policy.
+
+```bash
+simd query group group-policy-info [group-policy-account] [flags]
+```
+
+Example:
+
+```bash
+simd query group group-policy-info cosmos1..
+```
+
+Example Output:
+
+```bash
+address: cosmos1..
+admin: cosmos1..
+decision_policy:
+ '@type': /cosmos.group.v1.ThresholdDecisionPolicy
+ threshold: "1"
+ windows:
+ min_execution_period: 0s
+ voting_period: 432000s
+group_id: "1"
+metadata: AQ==
+version: "1"
+```
+
+##### group-members
+
+The `group-members` command allows users to query for group members by group id with pagination flags.
+
+```bash
+simd query group group-members [id] [flags]
+```
+
+Example:
+
+```bash
+simd query group group-members 1
+```
+
+Example Output:
+
+```bash
+members:
+- group_id: "1"
+ member:
+ address: cosmos1..
+ metadata: AQ==
+ weight: "2"
+- group_id: "1"
+ member:
+ address: cosmos1..
+ metadata: AQ==
+ weight: "1"
+pagination:
+ next_key: null
+ total: "2"
+```
+
+##### groups-by-admin
+
+The `groups-by-admin` command allows users to query for groups by admin account address with pagination flags.
+
+```bash
+simd query group groups-by-admin [admin] [flags]
+```
+
+Example:
+
+```bash
+simd query group groups-by-admin cosmos1..
+```
+
+Example Output:
+
+```bash
+groups:
+- admin: cosmos1..
+ group_id: "1"
+ metadata: AQ==
+ total_weight: "3"
+ version: "1"
+- admin: cosmos1..
+ group_id: "2"
+ metadata: AQ==
+ total_weight: "3"
+ version: "1"
+pagination:
+ next_key: null
+ total: "2"
+```
+
+##### group-policies-by-group
+
+The `group-policies-by-group` command allows users to query for group policies by group id with pagination flags.
+
+```bash
+simd query group group-policies-by-group [group-id] [flags]
+```
+
+Example:
+
+```bash
+simd query group group-policies-by-group 1
+```
+
+Example Output:
+
+```bash
+group_policies:
+- address: cosmos1..
+ admin: cosmos1..
+ decision_policy:
+ '@type': /cosmos.group.v1.ThresholdDecisionPolicy
+ threshold: "1"
+ windows:
+ min_execution_period: 0s
+ voting_period: 432000s
+ group_id: "1"
+ metadata: AQ==
+ version: "1"
+- address: cosmos1..
+ admin: cosmos1..
+ decision_policy:
+ '@type': /cosmos.group.v1.ThresholdDecisionPolicy
+ threshold: "1"
+ windows:
+ min_execution_period: 0s
+ voting_period: 432000s
+ group_id: "1"
+ metadata: AQ==
+ version: "1"
+pagination:
+ next_key: null
+ total: "2"
+```
+
+##### group-policies-by-admin
+
+The `group-policies-by-admin` command allows users to query for group policies by admin account address with pagination flags.
+
+```bash
+simd query group group-policies-by-admin [admin] [flags]
+```
+
+Example:
+
+```bash
+simd query group group-policies-by-admin cosmos1..
+```
+
+Example Output:
+
+```bash
+group_policies:
+- address: cosmos1..
+ admin: cosmos1..
+ decision_policy:
+ '@type': /cosmos.group.v1.ThresholdDecisionPolicy
+ threshold: "1"
+ windows:
+ min_execution_period: 0s
+ voting_period: 432000s
+ group_id: "1"
+ metadata: AQ==
+ version: "1"
+- address: cosmos1..
+ admin: cosmos1..
+ decision_policy:
+ '@type': /cosmos.group.v1.ThresholdDecisionPolicy
+ threshold: "1"
+ windows:
+ min_execution_period: 0s
+ voting_period: 432000s
+ group_id: "1"
+ metadata: AQ==
+ version: "1"
+pagination:
+ next_key: null
+ total: "2"
+```
+
+##### proposal
+
+The `proposal` command allows users to query for a proposal by id.
+
+```bash
+simd query group proposal [id] [flags]
+```
+
+Example:
+
+```bash
+simd query group proposal 1
+```
+
+Example Output:
+
+```bash
+proposal:
+ address: cosmos1..
+ executor_result: EXECUTOR_RESULT_NOT_RUN
+ group_policy_version: "1"
+ group_version: "1"
+ metadata: AQ==
+ msgs:
+ - '@type': /cosmos.bank.v1beta1.MsgSend
+ amount:
+ - amount: "100000000"
+ denom: stake
+ from_address: cosmos1..
+ to_address: cosmos1..
+ proposal_id: "1"
+ proposers:
+ - cosmos1..
+ result: RESULT_UNFINALIZED
+ status: STATUS_SUBMITTED
+ submitted_at: "2021-12-17T07:06:26.310638964Z"
+ windows:
+ min_execution_period: 0s
+ voting_period: 432000s
+ vote_state:
+ abstain_count: "0"
+ no_count: "0"
+ veto_count: "0"
+ yes_count: "0"
+ summary: "Summary"
+ title: "Title"
+```
+
+##### proposals-by-group-policy
+
+The `proposals-by-group-policy` command allows users to query for proposals by the account address of the group policy with pagination flags.
+
+```bash
+simd query group proposals-by-group-policy [group-policy-account] [flags]
+```
+
+Example:
+
+```bash
+simd query group proposals-by-group-policy cosmos1..
+```
+
+Example Output:
+
+```bash
+pagination:
+ next_key: null
+ total: "1"
+proposals:
+- address: cosmos1..
+ executor_result: EXECUTOR_RESULT_NOT_RUN
+ group_policy_version: "1"
+ group_version: "1"
+ metadata: AQ==
+ msgs:
+ - '@type': /cosmos.bank.v1beta1.MsgSend
+ amount:
+ - amount: "100000000"
+ denom: stake
+ from_address: cosmos1..
+ to_address: cosmos1..
+ proposal_id: "1"
+ proposers:
+ - cosmos1..
+ result: RESULT_UNFINALIZED
+ status: STATUS_SUBMITTED
+ submitted_at: "2021-12-17T07:06:26.310638964Z"
+ windows:
+ min_execution_period: 0s
+ voting_period: 432000s
+ vote_state:
+ abstain_count: "0"
+ no_count: "0"
+ veto_count: "0"
+ yes_count: "0"
+ summary: "Summary"
+ title: "Title"
+```
+
+##### vote
+
+The `vote` command allows users to query for a vote by proposal id and voter account address.
+
+```bash
+simd query group vote [proposal-id] [voter] [flags]
+```
+
+Example:
+
+```bash
+simd query group vote 1 cosmos1..
+```
+
+Example Output:
+
+```bash
+vote:
+ choice: CHOICE_YES
+ metadata: AQ==
+ proposal_id: "1"
+ submitted_at: "2021-12-17T08:05:02.490164009Z"
+ voter: cosmos1..
+```
+
+##### votes-by-proposal
+
+The `votes-by-proposal` command allows users to query for votes by proposal id with pagination flags.
+
+```bash
+simd query group votes-by-proposal [proposal-id] [flags]
+```
+
+Example:
+
+```bash
+simd query group votes-by-proposal 1
+```
+
+Example Output:
+
+```bash
+pagination:
+ next_key: null
+ total: "1"
+votes:
+- choice: CHOICE_YES
+ metadata: AQ==
+ proposal_id: "1"
+ submitted_at: "2021-12-17T08:05:02.490164009Z"
+ voter: cosmos1..
+```
+
+##### votes-by-voter
+
+The `votes-by-voter` command allows users to query for votes by voter account address with pagination flags.
+
+```bash
+simd query group votes-by-voter [voter] [flags]
+```
+
+Example:
+
+```bash
+simd query group votes-by-voter cosmos1..
+```
+
+Example Output:
+
+```bash
+pagination:
+ next_key: null
+ total: "1"
+votes:
+- choice: CHOICE_YES
+ metadata: AQ==
+ proposal_id: "1"
+ submitted_at: "2021-12-17T08:05:02.490164009Z"
+ voter: cosmos1..
+```
+
+### Transactions
+
+The `tx` commands allow users to interact with the `group` module.
+
+```bash
+simd tx group --help
+```
+
+#### create-group
+
+The `create-group` command allows users to create a group which is an aggregation of member accounts with associated weights and
+an administrator account.
+
+```bash
+simd tx group create-group [admin] [metadata] [members-json-file]
+```
+
+Example:
+
+```bash
+simd tx group create-group cosmos1.. "AQ==" members.json
+```
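+
+The `members-json-file` contains the group members. A plausible shape is shown below (field names mirror the member fields used elsewhere in this document, but verify against `simd tx group create-group --help` for your SDK version):
+
+```json
+{
+    "members": [
+        {
+            "address": "cosmos1..",
+            "weight": "1",
+            "metadata": "AQ=="
+        },
+        {
+            "address": "cosmos1..",
+            "weight": "2",
+            "metadata": "AQ=="
+        }
+    ]
+}
+```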
+
+#### update-group-admin
+
+The `update-group-admin` command allows users to update a group's admin.
+
+```bash
+simd tx group update-group-admin [admin] [group-id] [new-admin] [flags]
+```
+
+Example:
+
+```bash
+simd tx group update-group-admin cosmos1.. 1 cosmos1..
+```
+
+#### update-group-members
+
+The `update-group-members` command allows users to update a group's members.
+
+```bash
+simd tx group update-group-members [admin] [group-id] [members-json-file] [flags]
+```
+
+Example:
+
+```bash
+simd tx group update-group-members cosmos1.. 1 members.json
+```
+
+#### update-group-metadata
+
+The `update-group-metadata` command allows users to update a group's metadata.
+
+```bash
+simd tx group update-group-metadata [admin] [group-id] [metadata] [flags]
+```
+
+Example:
+
+```bash
+simd tx group update-group-metadata cosmos1.. 1 "AQ=="
+```
+
+#### create-group-policy
+
+The `create-group-policy` command allows users to create a group policy which is an account associated with a group and a decision policy.
+
+```bash
+simd tx group create-group-policy [admin] [group-id] [metadata] [decision-policy] [flags]
+```
+
+Example:
+
+```bash
+simd tx group create-group-policy cosmos1.. 1 "AQ==" '{"@type":"/cosmos.group.v1.ThresholdDecisionPolicy", "threshold":"1", "windows": {"voting_period": "120h", "min_execution_period": "0s"}}'
+```
+
+#### create-group-with-policy
+
+The `create-group-with-policy` command allows users to create a group, which is an aggregation of member accounts with associated weights and an administrator account, together with a decision policy. If the `--group-policy-as-admin` flag is set to `true`, the group policy address becomes the admin of both the group and the group policy.
+
+```bash
+simd tx group create-group-with-policy [admin] [group-metadata] [group-policy-metadata] [members-json-file] [decision-policy] [flags]
+```
+
+Example:
+
+```bash
+simd tx group create-group-with-policy cosmos1.. "AQ==" "AQ==" members.json '{"@type":"/cosmos.group.v1.ThresholdDecisionPolicy", "threshold":"1", "windows": {"voting_period": "120h", "min_execution_period": "0s"}}'
+```
+
+#### update-group-policy-admin
+
+The `update-group-policy-admin` command allows users to update a group policy admin.
+
+```bash
+simd tx group update-group-policy-admin [admin] [group-policy-account] [new-admin] [flags]
+```
+
+Example:
+
+```bash
+simd tx group update-group-policy-admin cosmos1.. cosmos1.. cosmos1..
+```
+
+#### update-group-policy-metadata
+
+The `update-group-policy-metadata` command allows users to update a group policy metadata.
+
+```bash
+simd tx group update-group-policy-metadata [admin] [group-policy-account] [new-metadata] [flags]
+```
+
+Example:
+
+```bash
+simd tx group update-group-policy-metadata cosmos1.. cosmos1.. "AQ=="
+```
+
+#### update-group-policy-decision-policy
+
+The `update-group-policy-decision-policy` command allows users to update a group policy's decision policy.
+
+```bash
+simd tx group update-group-policy-decision-policy [admin] [group-policy-account] [decision-policy] [flags]
+```
+
+Example:
+
+```bash
+simd tx group update-group-policy-decision-policy cosmos1.. cosmos1.. '{"@type":"/cosmos.group.v1.ThresholdDecisionPolicy", "threshold":"2", "windows": {"voting_period": "120h", "min_execution_period": "0s"}}'
+```
+
+#### submit-proposal
+
+The `submit-proposal` command allows users to submit a new proposal.
+
+```bash
+simd tx group submit-proposal [group-policy-account] [proposer[,proposer]*] [msg_tx_json_file] [metadata] [flags]
+```
+
+Example:
+
+```bash
+simd tx group submit-proposal cosmos1.. cosmos1.. msg_tx.json "AQ=="
+```
+
+#### withdraw-proposal
+
+The `withdraw-proposal` command allows users to withdraw a proposal.
+
+```bash
+simd tx group withdraw-proposal [proposal-id] [group-policy-admin-or-proposer]
+```
+
+Example:
+
+```bash
+simd tx group withdraw-proposal 1 cosmos1..
+```
+
+#### vote
+
+The `vote` command allows users to vote on a proposal.
+
+```bash
+simd tx group vote [proposal-id] [voter] [choice] [metadata] [flags]
+```
+
+Example:
+
+```bash
+simd tx group vote 1 cosmos1.. CHOICE_YES "AQ=="
+```
+
+#### exec
+
+The `exec` command allows users to execute a proposal.
+
+```bash
+simd tx group exec [proposal-id] [flags]
+```
+
+Example:
+
+```bash
+simd tx group exec 1
+```
+
+#### leave-group
+
+The `leave-group` command allows a group member to leave a group.
+
+```bash
+simd tx group leave-group [member-address] [group-id]
+```
+
+Example:
+
+```bash
+simd tx group leave-group cosmos1.. 1
+```
+
+### gRPC
+
+A user can query the `group` module using gRPC endpoints.
+
+#### GroupInfo
+
+The `GroupInfo` endpoint allows users to query for group info by a given group id.
+
+```bash
+cosmos.group.v1.Query/GroupInfo
+```
+
+Example:
+
+```bash
+grpcurl -plaintext \
+ -d '{"group_id":1}' localhost:9090 cosmos.group.v1.Query/GroupInfo
+```
+
+Example Output:
+
+```bash
+{
+ "info": {
+ "groupId": "1",
+ "admin": "cosmos1..",
+ "metadata": "AQ==",
+ "version": "1",
+ "totalWeight": "3"
+ }
+}
+```
+
+#### GroupPolicyInfo
+
+The `GroupPolicyInfo` endpoint allows users to query for group policy info by the account address of the group policy.
+
+```bash
+cosmos.group.v1.Query/GroupPolicyInfo
+```
+
+Example:
+
+```bash
+grpcurl -plaintext \
+ -d '{"address":"cosmos1.."}' localhost:9090 cosmos.group.v1.Query/GroupPolicyInfo
+```
+
+Example Output:
+
+```bash
+{
+ "info": {
+ "address": "cosmos1..",
+ "groupId": "1",
+ "admin": "cosmos1..",
+ "version": "1",
+    "decisionPolicy": {"@type":"/cosmos.group.v1.ThresholdDecisionPolicy","threshold":"1","windows": {"voting_period": "120h", "min_execution_period": "0s"}}
+ }
+}
+```
+
+#### GroupMembers
+
+The `GroupMembers` endpoint allows users to query for group members by group id with pagination flags.
+
+```bash
+cosmos.group.v1.Query/GroupMembers
+```
+
+Example:
+
+```bash
+grpcurl -plaintext \
+ -d '{"group_id":"1"}' localhost:9090 cosmos.group.v1.Query/GroupMembers
+```
+
+Example Output:
+
+```bash
+{
+ "members": [
+ {
+ "groupId": "1",
+ "member": {
+ "address": "cosmos1..",
+ "weight": "1"
+ }
+ },
+ {
+ "groupId": "1",
+ "member": {
+ "address": "cosmos1..",
+ "weight": "2"
+ }
+ }
+ ],
+ "pagination": {
+ "total": "2"
+ }
+}
+```
+
+#### GroupsByAdmin
+
+The `GroupsByAdmin` endpoint allows users to query for groups by admin account address with pagination flags.
+
+```bash
+cosmos.group.v1.Query/GroupsByAdmin
+```
+
+Example:
+
+```bash
+grpcurl -plaintext \
+ -d '{"admin":"cosmos1.."}' localhost:9090 cosmos.group.v1.Query/GroupsByAdmin
+```
+
+Example Output:
+
+```bash
+{
+ "groups": [
+ {
+ "groupId": "1",
+ "admin": "cosmos1..",
+ "metadata": "AQ==",
+ "version": "1",
+ "totalWeight": "3"
+ },
+ {
+ "groupId": "2",
+ "admin": "cosmos1..",
+ "metadata": "AQ==",
+ "version": "1",
+ "totalWeight": "3"
+ }
+ ],
+ "pagination": {
+ "total": "2"
+ }
+}
+```
+
+#### GroupPoliciesByGroup
+
+The `GroupPoliciesByGroup` endpoint allows users to query for group policies by group id with pagination flags.
+
+```bash
+cosmos.group.v1.Query/GroupPoliciesByGroup
+```
+
+Example:
+
+```bash
+grpcurl -plaintext \
+ -d '{"group_id":"1"}' localhost:9090 cosmos.group.v1.Query/GroupPoliciesByGroup
+```
+
+Example Output:
+
+```bash
+{
+ "GroupPolicies": [
+ {
+ "address": "cosmos1..",
+ "groupId": "1",
+ "admin": "cosmos1..",
+ "version": "1",
+      "decisionPolicy": {"@type":"/cosmos.group.v1.ThresholdDecisionPolicy","threshold":"1","windows":{"voting_period": "120h", "min_execution_period": "0s"}}
+ },
+ {
+ "address": "cosmos1..",
+ "groupId": "1",
+ "admin": "cosmos1..",
+ "version": "1",
+      "decisionPolicy": {"@type":"/cosmos.group.v1.ThresholdDecisionPolicy","threshold":"1","windows":{"voting_period": "120h", "min_execution_period": "0s"}}
+ }
+ ],
+ "pagination": {
+ "total": "2"
+ }
+}
+```
+
+#### GroupPoliciesByAdmin
+
+The `GroupPoliciesByAdmin` endpoint allows users to query for group policies by admin account address with pagination flags.
+
+```bash
+cosmos.group.v1.Query/GroupPoliciesByAdmin
+```
+
+Example:
+
+```bash
+grpcurl -plaintext \
+ -d '{"admin":"cosmos1.."}' localhost:9090 cosmos.group.v1.Query/GroupPoliciesByAdmin
+```
+
+Example Output:
+
+```bash
+{
+ "GroupPolicies": [
+ {
+ "address": "cosmos1..",
+ "groupId": "1",
+ "admin": "cosmos1..",
+ "version": "1",
+      "decisionPolicy": {"@type":"/cosmos.group.v1.ThresholdDecisionPolicy","threshold":"1","windows":{"voting_period": "120h", "min_execution_period": "0s"}}
+ },
+ {
+ "address": "cosmos1..",
+ "groupId": "1",
+ "admin": "cosmos1..",
+ "version": "1",
+      "decisionPolicy": {"@type":"/cosmos.group.v1.ThresholdDecisionPolicy","threshold":"1","windows":{"voting_period": "120h", "min_execution_period": "0s"}}
+ }
+ ],
+ "pagination": {
+ "total": "2"
+ }
+}
+```
+
+#### Proposal
+
+The `Proposal` endpoint allows users to query for a proposal by id.
+
+```bash
+cosmos.group.v1.Query/Proposal
+```
+
+Example:
+
+```bash
+grpcurl -plaintext \
+ -d '{"proposal_id":"1"}' localhost:9090 cosmos.group.v1.Query/Proposal
+```
+
+Example Output:
+
+```bash
+{
+ "proposal": {
+ "proposalId": "1",
+ "address": "cosmos1..",
+ "proposers": [
+ "cosmos1.."
+ ],
+ "submittedAt": "2021-12-17T07:06:26.310638964Z",
+ "groupVersion": "1",
+    "groupPolicyVersion": "1",
+ "status": "STATUS_SUBMITTED",
+ "result": "RESULT_UNFINALIZED",
+ "voteState": {
+ "yesCount": "0",
+ "noCount": "0",
+ "abstainCount": "0",
+ "vetoCount": "0"
+ },
+ "windows": {
+ "min_execution_period": "0s",
+ "voting_period": "432000s"
+ },
+ "executorResult": "EXECUTOR_RESULT_NOT_RUN",
+ "messages": [
+ {"@type":"/cosmos.bank.v1beta1.MsgSend","amount":[{"denom":"stake","amount":"100000000"}],"fromAddress":"cosmos1..","toAddress":"cosmos1.."}
+ ],
+ "title": "Title",
+    "summary": "Summary"
+ }
+}
+```
+
+#### ProposalsByGroupPolicy
+
+The `ProposalsByGroupPolicy` endpoint allows users to query for proposals by the account address of the group policy with pagination flags.
+
+```bash
+cosmos.group.v1.Query/ProposalsByGroupPolicy
+```
+
+Example:
+
+```bash
+grpcurl -plaintext \
+ -d '{"address":"cosmos1.."}' localhost:9090 cosmos.group.v1.Query/ProposalsByGroupPolicy
+```
+
+Example Output:
+
+```bash
+{
+ "proposals": [
+ {
+ "proposalId": "1",
+ "address": "cosmos1..",
+ "proposers": [
+ "cosmos1.."
+ ],
+ "submittedAt": "2021-12-17T08:03:27.099649352Z",
+ "groupVersion": "1",
+    "groupPolicyVersion": "1",
+ "status": "STATUS_CLOSED",
+ "result": "RESULT_ACCEPTED",
+ "voteState": {
+ "yesCount": "1",
+ "noCount": "0",
+ "abstainCount": "0",
+ "vetoCount": "0"
+ },
+ "windows": {
+ "min_execution_period": "0s",
+ "voting_period": "432000s"
+ },
+ "executorResult": "EXECUTOR_RESULT_NOT_RUN",
+ "messages": [
+ {"@type":"/cosmos.bank.v1beta1.MsgSend","amount":[{"denom":"stake","amount":"100000000"}],"fromAddress":"cosmos1..","toAddress":"cosmos1.."}
+ ],
+ "title": "Title",
+    "summary": "Summary"
+ }
+ ],
+ "pagination": {
+ "total": "1"
+ }
+}
+```
+
+#### VoteByProposalVoter
+
+The `VoteByProposalVoter` endpoint allows users to query for a vote by proposal id and voter account address.
+
+```bash
+cosmos.group.v1.Query/VoteByProposalVoter
+```
+
+Example:
+
+```bash
+grpcurl -plaintext \
+ -d '{"proposal_id":"1","voter":"cosmos1.."}' localhost:9090 cosmos.group.v1.Query/VoteByProposalVoter
+```
+
+Example Output:
+
+```bash
+{
+ "vote": {
+ "proposalId": "1",
+ "voter": "cosmos1..",
+ "choice": "CHOICE_YES",
+ "submittedAt": "2021-12-17T08:05:02.490164009Z"
+ }
+}
+```
+
+#### VotesByProposal
+
+The `VotesByProposal` endpoint allows users to query for votes by proposal id with pagination flags.
+
+```bash
+cosmos.group.v1.Query/VotesByProposal
+```
+
+Example:
+
+```bash
+grpcurl -plaintext \
+ -d '{"proposal_id":"1"}' localhost:9090 cosmos.group.v1.Query/VotesByProposal
+```
+
+Example Output:
+
+```bash
+{
+ "votes": [
+ {
+ "proposalId": "1",
+ "voter": "cosmos1..",
+ "choice": "CHOICE_YES",
+ "submittedAt": "2021-12-17T08:05:02.490164009Z"
+ }
+ ],
+ "pagination": {
+ "total": "1"
+ }
+}
+```
+
+#### VotesByVoter
+
+The `VotesByVoter` endpoint allows users to query for votes by voter account address with pagination flags.
+
+```bash
+cosmos.group.v1.Query/VotesByVoter
+```
+
+Example:
+
+```bash
+grpcurl -plaintext \
+ -d '{"voter":"cosmos1.."}' localhost:9090 cosmos.group.v1.Query/VotesByVoter
+```
+
+Example Output:
+
+```bash
+{
+ "votes": [
+ {
+ "proposalId": "1",
+ "voter": "cosmos1..",
+ "choice": "CHOICE_YES",
+ "submittedAt": "2021-12-17T08:05:02.490164009Z"
+ }
+ ],
+ "pagination": {
+ "total": "1"
+ }
+}
+```
+
+### REST
+
+A user can query the `group` module using REST endpoints.
+
+#### GroupInfo
+
+The `GroupInfo` endpoint allows users to query for group info by a given group id.
+
+```bash
+/cosmos/group/v1/group_info/{group_id}
+```
+
+Example:
+
+```bash
+curl localhost:1317/cosmos/group/v1/group_info/1
+```
+
+Example Output:
+
+```bash
+{
+ "info": {
+ "id": "1",
+ "admin": "cosmos1..",
+ "metadata": "AQ==",
+ "version": "1",
+ "total_weight": "3"
+ }
+}
+```
+
+#### GroupPolicyInfo
+
+The `GroupPolicyInfo` endpoint allows users to query for group policy info by the account address of the group policy.
+
+```bash
+/cosmos/group/v1/group_policy_info/{address}
+```
+
+Example:
+
+```bash
+curl localhost:1317/cosmos/group/v1/group_policy_info/cosmos1..
+```
+
+Example Output:
+
+```bash
+{
+ "info": {
+ "address": "cosmos1..",
+ "group_id": "1",
+ "admin": "cosmos1..",
+ "metadata": "AQ==",
+ "version": "1",
+ "decision_policy": {
+ "@type": "/cosmos.group.v1.ThresholdDecisionPolicy",
+ "threshold": "1",
+ "windows": {
+ "voting_period": "120h",
+ "min_execution_period": "0s"
+ }
+    }
+ }
+}
+```
+
+#### GroupMembers
+
+The `GroupMembers` endpoint allows users to query for group members by group id with pagination flags.
+
+```bash
+/cosmos/group/v1/group_members/{group_id}
+```
+
+Example:
+
+```bash
+curl localhost:1317/cosmos/group/v1/group_members/1
+```
+
+Example Output:
+
+```bash
+{
+ "members": [
+ {
+ "group_id": "1",
+ "member": {
+ "address": "cosmos1..",
+ "weight": "1",
+ "metadata": "AQ=="
+ }
+ },
+ {
+ "group_id": "1",
+ "member": {
+ "address": "cosmos1..",
+ "weight": "2",
+ "metadata": "AQ=="
+      }
+    }
+  ],
+ "pagination": {
+ "next_key": null,
+ "total": "2"
+ }
+}
+```
+
+#### GroupsByAdmin
+
+The `GroupsByAdmin` endpoint allows users to query for groups by admin account address with pagination flags.
+
+```bash
+/cosmos/group/v1/groups_by_admin/{admin}
+```
+
+Example:
+
+```bash
+curl localhost:1317/cosmos/group/v1/groups_by_admin/cosmos1..
+```
+
+Example Output:
+
+```bash
+{
+ "groups": [
+ {
+ "id": "1",
+ "admin": "cosmos1..",
+ "metadata": "AQ==",
+ "version": "1",
+ "total_weight": "3"
+ },
+ {
+ "id": "2",
+ "admin": "cosmos1..",
+ "metadata": "AQ==",
+ "version": "1",
+ "total_weight": "3"
+ }
+ ],
+ "pagination": {
+ "next_key": null,
+ "total": "2"
+ }
+}
+```
+
+#### GroupPoliciesByGroup
+
+The `GroupPoliciesByGroup` endpoint allows users to query for group policies by group id with pagination flags.
+
+```bash
+/cosmos/group/v1/group_policies_by_group/{group_id}
+```
+
+Example:
+
+```bash
+curl localhost:1317/cosmos/group/v1/group_policies_by_group/1
+```
+
+Example Output:
+
+```bash
+{
+ "group_policies": [
+ {
+ "address": "cosmos1..",
+ "group_id": "1",
+ "admin": "cosmos1..",
+ "metadata": "AQ==",
+ "version": "1",
+ "decision_policy": {
+ "@type": "/cosmos.group.v1.ThresholdDecisionPolicy",
+ "threshold": "1",
+ "windows": {
+ "voting_period": "120h",
+ "min_execution_period": "0s"
+ }
+      }
+ },
+ {
+ "address": "cosmos1..",
+ "group_id": "1",
+ "admin": "cosmos1..",
+ "metadata": "AQ==",
+ "version": "1",
+ "decision_policy": {
+ "@type": "/cosmos.group.v1.ThresholdDecisionPolicy",
+ "threshold": "1",
+ "windows": {
+ "voting_period": "120h",
+ "min_execution_period": "0s"
+ }
+      }
+ }
+ ],
+ "pagination": {
+ "next_key": null,
+ "total": "2"
+ }
+}
+```
+
+#### GroupPoliciesByAdmin
+
+The `GroupPoliciesByAdmin` endpoint allows users to query for group policies by admin account address with pagination flags.
+
+```bash
+/cosmos/group/v1/group_policies_by_admin/{admin}
+```
+
+Example:
+
+```bash
+curl localhost:1317/cosmos/group/v1/group_policies_by_admin/cosmos1..
+```
+
+Example Output:
+
+```bash
+{
+ "group_policies": [
+ {
+ "address": "cosmos1..",
+ "group_id": "1",
+ "admin": "cosmos1..",
+ "metadata": "AQ==",
+ "version": "1",
+ "decision_policy": {
+ "@type": "/cosmos.group.v1.ThresholdDecisionPolicy",
+ "threshold": "1",
+ "windows": {
+ "voting_period": "120h",
+ "min_execution_period": "0s"
+ }
+      }
+ },
+ {
+ "address": "cosmos1..",
+ "group_id": "1",
+ "admin": "cosmos1..",
+ "metadata": "AQ==",
+ "version": "1",
+ "decision_policy": {
+ "@type": "/cosmos.group.v1.ThresholdDecisionPolicy",
+ "threshold": "1",
+ "windows": {
+ "voting_period": "120h",
+ "min_execution_period": "0s"
+ }
+      }
+ }
+ ],
+ "pagination": {
+ "next_key": null,
+ "total": "2"
+ }
+}
+```
+
+#### Proposal
+
+The `Proposal` endpoint allows users to query for proposal by id.
+
+```bash
+/cosmos/group/v1/proposal/{proposal_id}
+```
+
+Example:
+
+```bash
+curl localhost:1317/cosmos/group/v1/proposal/1
+```
+
+Example Output:
+
+```bash
+{
+ "proposal": {
+ "proposal_id": "1",
+ "address": "cosmos1..",
+ "metadata": "AQ==",
+ "proposers": [
+ "cosmos1.."
+ ],
+ "submitted_at": "2021-12-17T07:06:26.310638964Z",
+ "group_version": "1",
+ "group_policy_version": "1",
+ "status": "STATUS_SUBMITTED",
+ "result": "RESULT_UNFINALIZED",
+ "vote_state": {
+ "yes_count": "0",
+ "no_count": "0",
+ "abstain_count": "0",
+ "veto_count": "0"
+ },
+ "windows": {
+ "min_execution_period": "0s",
+ "voting_period": "432000s"
+ },
+ "executor_result": "EXECUTOR_RESULT_NOT_RUN",
+ "messages": [
+ {
+ "@type": "/cosmos.bank.v1beta1.MsgSend",
+ "from_address": "cosmos1..",
+ "to_address": "cosmos1..",
+ "amount": [
+ {
+ "denom": "stake",
+ "amount": "100000000"
+ }
+ ]
+ }
+ ],
+ "title": "Title",
+    "summary": "Summary"
+ }
+}
+```
+
+#### ProposalsByGroupPolicy
+
+The `ProposalsByGroupPolicy` endpoint allows users to query for proposals by account address of group policy with pagination flags.
+
+```bash
+/cosmos/group/v1/proposals_by_group_policy/{address}
+```
+
+Example:
+
+```bash
+curl localhost:1317/cosmos/group/v1/proposals_by_group_policy/cosmos1..
+```
+
+Example Output:
+
+```bash
+{
+ "proposals": [
+ {
+ "id": "1",
+ "group_policy_address": "cosmos1..",
+ "metadata": "AQ==",
+ "proposers": [
+ "cosmos1.."
+ ],
+ "submit_time": "2021-12-17T08:03:27.099649352Z",
+ "group_version": "1",
+ "group_policy_version": "1",
+ "status": "STATUS_CLOSED",
+ "result": "RESULT_ACCEPTED",
+ "vote_state": {
+ "yes_count": "1",
+ "no_count": "0",
+ "abstain_count": "0",
+ "veto_count": "0"
+ },
+ "windows": {
+ "min_execution_period": "0s",
+ "voting_period": "432000s"
+ },
+ "executor_result": "EXECUTOR_RESULT_NOT_RUN",
+ "messages": [
+ {
+ "@type": "/cosmos.bank.v1beta1.MsgSend",
+ "from_address": "cosmos1..",
+ "to_address": "cosmos1..",
+ "amount": [
+ {
+ "denom": "stake",
+ "amount": "100000000"
+ }
+ ]
+ }
+ ]
+ }
+ ],
+ "pagination": {
+ "next_key": null,
+ "total": "1"
+ }
+}
+```
+
+#### VoteByProposalVoter
+
+The `VoteByProposalVoter` endpoint allows users to query for vote by proposal id and voter account address.
+
+```bash
+/cosmos/group/v1/vote_by_proposal_voter/{proposal_id}/{voter}
+```
+
+Example:
+
+```bash
+curl localhost:1317/cosmos/group/v1/vote_by_proposal_voter/1/cosmos1..
+```
+
+Example Output:
+
+```bash
+{
+ "vote": {
+ "proposal_id": "1",
+ "voter": "cosmos1..",
+ "choice": "CHOICE_YES",
+ "metadata": "AQ==",
+ "submitted_at": "2021-12-17T08:05:02.490164009Z"
+ }
+}
+```
+
+#### VotesByProposal
+
+The `VotesByProposal` endpoint allows users to query for votes by proposal id with pagination flags.
+
+```bash
+/cosmos/group/v1/votes_by_proposal/{proposal_id}
+```
+
+Example:
+
+```bash
+curl localhost:1317/cosmos/group/v1/votes_by_proposal/1
+```
+
+Example Output:
+
+```bash
+{
+ "votes": [
+ {
+ "proposal_id": "1",
+ "voter": "cosmos1..",
+ "option": "CHOICE_YES",
+ "metadata": "AQ==",
+ "submit_time": "2021-12-17T08:05:02.490164009Z"
+ }
+ ],
+ "pagination": {
+ "next_key": null,
+ "total": "1"
+ }
+}
+```
+
+#### VotesByVoter
+
+The `VotesByVoter` endpoint allows users to query for votes by voter account address with pagination flags.
+
+```bash
+/cosmos/group/v1/votes_by_voter/{voter}
+```
+
+Example:
+
+```bash
+curl localhost:1317/cosmos/group/v1/votes_by_voter/cosmos1..
+```
+
+Example Output:
+
+```bash
+{
+ "votes": [
+ {
+ "proposal_id": "1",
+ "voter": "cosmos1..",
+ "choice": "CHOICE_YES",
+ "metadata": "AQ==",
+ "submitted_at": "2021-12-17T08:05:02.490164009Z"
+ }
+ ],
+ "pagination": {
+ "next_key": null,
+ "total": "1"
+ }
+}
+```
+
+## Metadata
+
+The group module has four locations for metadata where users can provide further context about the on-chain actions they are taking. By default, each metadata field is limited to 255 characters, and metadata can be stored in JSON format either on-chain or off-chain, depending on the amount of data required. Below we provide a recommended JSON structure and storage location for each field. Two factors drive these recommendations: first, that the group and gov modules stay consistent with one another (note that the number of proposals made by all groups may be quite large); second, that client applications such as block explorers and governance interfaces can rely on a consistent metadata structure across chains.
+
+### Proposal
+
+Location: off-chain as json object stored on IPFS (mirrors [gov proposal](../gov/README.md#metadata))
+
+```json
+{
+ "title": "",
+ "authors": [""],
+ "summary": "",
+ "details": "",
+ "proposal_forum_url": "",
+  "vote_option_context": ""
+}
+```
+
+:::note
+The `authors` field is an array of strings, this is to allow for multiple authors to be listed in the metadata.
+In v0.46, the `authors` field is a comma-separated string. Frontends are encouraged to support both formats for backwards compatibility.
+:::
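+
+A client that wants to support both formats can normalize the field before display. The sketch below is illustrative only; `parseAuthors` is a hypothetical client-side helper, not part of the SDK:
+
+```go
+package main
+
+import (
+	"fmt"
+	"strings"
+)
+
+// parseAuthors normalizes the "authors" metadata field: v0.46 encodes it
+// as a comma-separated string, later versions as an array of strings.
+// Illustrative client-side helper, not part of the SDK.
+func parseAuthors(raw interface{}) []string {
+	switch v := raw.(type) {
+	case string:
+		parts := strings.Split(v, ",")
+		for i := range parts {
+			parts[i] = strings.TrimSpace(parts[i])
+		}
+		return parts
+	case []interface{}:
+		out := make([]string, 0, len(v))
+		for _, a := range v {
+			if s, ok := a.(string); ok {
+				out = append(out, s)
+			}
+		}
+		return out
+	}
+	return nil
+}
+
+func main() {
+	fmt.Println(parseAuthors("alice, bob"))                  // v0.46 format
+	fmt.Println(parseAuthors([]interface{}{"alice", "bob"})) // current format
+}
+```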
+
+### Vote
+
+Location: on-chain as json within 255 character limit (mirrors [gov vote](../gov/README.md#metadata))
+
+```json
+{
+  "justification": ""
+}
+```
+
+### Group
+
+Location: off-chain as json object stored on IPFS
+
+```json
+{
+ "name": "",
+ "description": "",
+ "group_website_url": "",
+  "group_forum_url": ""
+}
+```
+
+### Decision policy
+
+Location: on-chain as json within 255 character limit
+
+```json
+{
+ "name": "",
+  "description": ""
+}
+```
diff --git a/copy-of-sdk-docs/build/modules/mint/README.md b/copy-of-sdk-docs/build/modules/mint/README.md
new file mode 100644
index 00000000..89dab770
--- /dev/null
+++ b/copy-of-sdk-docs/build/modules/mint/README.md
@@ -0,0 +1,460 @@
+---
+sidebar_position: 1
+---
+
+# `x/mint`
+
+The `x/mint` module handles the regular minting of new tokens in a configurable manner.
+
+## Contents
+
+* [Concepts](#concepts)
+    * [The Minting Mechanism](#the-minting-mechanism)
+    * [Custom Minters](#custom-minters)
+* [State](#state)
+ * [Minter](#minter)
+ * [Params](#params)
+* [Begin-Block](#begin-block)
+ * [NextInflationRate](#nextinflationrate)
+ * [NextAnnualProvisions](#nextannualprovisions)
+ * [BlockProvision](#blockprovision)
+* [Parameters](#parameters)
+* [Events](#events)
+ * [BeginBlocker](#beginblocker)
+* [Client](#client)
+ * [CLI](#cli)
+ * [gRPC](#grpc)
+ * [REST](#rest)
+
+## Concepts
+
+### The Minting Mechanism
+
+The default minting mechanism was designed to:
+
+* allow for a flexible inflation rate determined by market demand targeting a particular bonded-stake ratio
+* effect a balance between market liquidity and staked supply
+
+In order to best determine the appropriate market rate for inflation rewards, a
+moving change rate is used. The moving change rate mechanism ensures that if
+the % bonded is either over or under the goal %-bonded, the inflation rate will
+adjust to further incentivize or disincentivize being bonded, respectively. Setting the goal
+%-bonded at less than 100% encourages the network to maintain some non-staked tokens
+which should help provide some liquidity.
+
+It can be broken down in the following way:
+
+* If the actual percentage of bonded tokens is below the goal %-bonded the inflation rate will
+ increase until a maximum value is reached
+* If the goal % bonded (67% in Cosmos-Hub) is maintained, then the inflation
+ rate will stay constant
+* If the actual percentage of bonded tokens is above the goal %-bonded the inflation rate will
+ decrease until a minimum value is reached
+
+### Custom Minters
+
+As of Cosmos SDK v0.53.0, developers can set a custom `MintFn` for the module for specialized token minting logic.
+
+The function signature that a `MintFn` must implement is as follows:
+
+```go
+// MintFn defines the function that needs to be implemented in order to customize the minting process.
+type MintFn func(ctx sdk.Context, k *Keeper) error
+```
+
+This can be passed to the `Keeper` upon creation with an additional `Option`:
+
+```go
+app.MintKeeper = mintkeeper.NewKeeper(
+ appCodec,
+ runtime.NewKVStoreService(keys[minttypes.StoreKey]),
+ app.StakingKeeper,
+ app.AccountKeeper,
+ app.BankKeeper,
+ authtypes.FeeCollectorName,
+ authtypes.NewModuleAddress(govtypes.ModuleName).String(),
+ // mintkeeper.WithMintFn(CUSTOM_MINT_FN), // custom mintFn can be added here
+ )
+```
+
+#### Custom Minter DI Example
+
+Below is a simple approach to creating a custom mint function with extra dependencies in DI configurations.
+For this basic example, we will make the minter simply double the supply of `foo` coin.
+
+First, we will define a function that takes our required dependencies, and returns a `MintFn`.
+
+```go
+// MyCustomMintFunction is a custom mint function that doubles the supply of `foo` coin.
+func MyCustomMintFunction(bank bankkeeper.BaseKeeper) mintkeeper.MintFn {
+ return func(ctx sdk.Context, k *mintkeeper.Keeper) error {
+ supply := bank.GetSupply(ctx, "foo")
+ err := k.MintCoins(ctx, sdk.NewCoins(supply.Add(supply)))
+ if err != nil {
+ return err
+ }
+ return nil
+ }
+}
+```
+
+Then, pass the function defined above into the `depinject.Supply` function with the required dependencies.
+
+```go
+// NewSimApp returns a reference to an initialized SimApp.
+func NewSimApp(
+ logger log.Logger,
+ db dbm.DB,
+ traceStore io.Writer,
+ loadLatest bool,
+ appOpts servertypes.AppOptions,
+ baseAppOptions ...func(*baseapp.BaseApp),
+) *SimApp {
+ var (
+ app = &SimApp{}
+ appBuilder *runtime.AppBuilder
+ appConfig = depinject.Configs(
+ AppConfig,
+ depinject.Supply(
+ appOpts,
+ logger,
+ // our custom mint function with the necessary dependency passed in.
+ MyCustomMintFunction(app.BankKeeper),
+ ),
+ )
+ )
+ // ...
+}
+```
+
+## State
+
+### Minter
+
+The minter is a space for holding current inflation information.
+
+* Minter: `0x00 -> ProtocolBuffer(minter)`
+
+```protobuf reference
+https://github.com/cosmos/cosmos-sdk/blob/v0.47.0-rc1/proto/cosmos/mint/v1beta1/mint.proto#L10-L24
+```
+
+### Params
+
+The mint module stores its params in state with the prefix of `0x01`,
+it can be updated with governance or the address with authority.
+
+* Params: `mint/params -> legacy_amino(params)`
+
+```protobuf reference
+https://github.com/cosmos/cosmos-sdk/blob/v0.47.0-rc1/proto/cosmos/mint/v1beta1/mint.proto#L26-L59
+```
+
+## Begin-Block
+
+Minting parameters are recalculated and inflation paid at the beginning of each block.
+
+### Inflation rate calculation
+
+Inflation rate is calculated using an "inflation calculation function" that's
+passed to the `NewAppModule` function. If no function is passed, then the SDK's
+default inflation function will be used (`NextInflationRate`). In case a custom
+inflation calculation logic is needed, this can be achieved by defining and
+passing a function that matches `InflationCalculationFn`'s signature.
+
+```go
+type InflationCalculationFn func(ctx sdk.Context, minter Minter, params Params, bondedRatio math.LegacyDec) math.LegacyDec
+```
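+
+For example, a chain that wants a fixed inflation rate could supply a function that ignores the bonded ratio entirely. The sketch below uses plain `float64` for readability; the real `InflationCalculationFn` uses `sdk.Context`, `Minter`, `Params` and `math.LegacyDec` as shown above:
+
+```go
+package main
+
+import "fmt"
+
+// fixedInflation is a simplified custom inflation calculation that
+// ignores the bonded ratio and always returns 10%. Illustrative only.
+func fixedInflation(bondedRatio float64) float64 {
+	return 0.10
+}
+
+// clamp mirrors the min/max capping the default NextInflationRate applies.
+func clamp(inflation, min, max float64) float64 {
+	if inflation > max {
+		return max
+	}
+	if inflation < min {
+		return min
+	}
+	return inflation
+}
+
+func main() {
+	fmt.Println(fixedInflation(0.5))     // always 10%, whatever the ratio
+	fmt.Println(clamp(0.25, 0.07, 0.20)) // capped at the maximum
+}
+```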
+
+#### NextInflationRate
+
+The target annual inflation rate is recalculated each block.
+The inflation is also subject to a rate change (positive or negative)
+depending on the distance from the desired ratio (67%). The maximum rate change
+possible is defined to be 13% per year; however, the annual inflation is capped
+between 7% and 20%.
+
+```go
+NextInflationRate(params Params, bondedRatio math.LegacyDec) (inflation math.LegacyDec) {
+ inflationRateChangePerYear = (1 - bondedRatio/params.GoalBonded) * params.InflationRateChange
+ inflationRateChange = inflationRateChangePerYear/blocksPerYr
+
+ // increase the new annual inflation for this next block
+ inflation += inflationRateChange
+ if inflation > params.InflationMax {
+ inflation = params.InflationMax
+ }
+ if inflation < params.InflationMin {
+ inflation = params.InflationMin
+ }
+
+ return inflation
+}
+```
+
+### NextAnnualProvisions
+
+Calculate the annual provisions based on current total supply and inflation
+rate. This parameter is calculated once per block.
+
+```go
+NextAnnualProvisions(params Params, totalSupply math.LegacyDec) (provisions math.LegacyDec) {
+ return Inflation * totalSupply
+}
+```
+
+### BlockProvision
+
+Calculate the provisions generated for each block based on current annual provisions. The provisions are then minted by the `mint` module's `ModuleMinterAccount` and then transferred to the `auth`'s `FeeCollector` `ModuleAccount`.
+
+```go
+BlockProvision(params Params) sdk.Coin {
+    provisionAmt = AnnualProvisions / params.BlocksPerYear
+    return sdk.NewCoin(params.MintDenom, provisionAmt.Truncate())
+}
+```
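+
+To make the division concrete: with annual provisions of 22,268,504,368,893 base units and the default 6,311,520 blocks per year, each block mints the truncated quotient. This sketch uses `int64` in place of the SDK's `math.LegacyDec`, purely for illustration:
+
+```go
+package main
+
+import "fmt"
+
+// blockProvision truncates annual provisions divided by blocks per year,
+// as the module's BlockProvision does (int64 stands in for math.LegacyDec).
+func blockProvision(annualProvisions, blocksPerYear int64) int64 {
+	return annualProvisions / blocksPerYear // integer division truncates
+}
+
+func main() {
+	fmt.Println(blockProvision(22268504368893, 6311520))
+}
+```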
+
+
+## Parameters
+
+The minting module contains the following parameters:
+
+| Key | Type | Example |
+|---------------------|-----------------|------------------------|
+| MintDenom | string | "uatom" |
+| InflationRateChange | string (dec) | "0.130000000000000000" |
+| InflationMax | string (dec) | "0.200000000000000000" |
+| InflationMin | string (dec) | "0.070000000000000000" |
+| GoalBonded | string (dec) | "0.670000000000000000" |
+| BlocksPerYear | string (uint64) | "6311520" |
+
+
+## Events
+
+The minting module emits the following events:
+
+### BeginBlocker
+
+| Type | Attribute Key | Attribute Value |
+|------|-------------------|--------------------|
+| mint | bonded_ratio | {bondedRatio} |
+| mint | inflation | {inflation} |
+| mint | annual_provisions | {annualProvisions} |
+| mint | amount | {amount} |
+
+
+## Client
+
+### CLI
+
+A user can query and interact with the `mint` module using the CLI.
+
+#### Query
+
+The `query` commands allows users to query `mint` state.
+
+```shell
+simd query mint --help
+```
+
+##### annual-provisions
+
+The `annual-provisions` command allows users to query the current minting annual provisions value.
+
+```shell
+simd query mint annual-provisions [flags]
+```
+
+Example:
+
+```shell
+simd query mint annual-provisions
+```
+
+Example Output:
+
+```shell
+22268504368893.612100895088410693
+```
+
+##### inflation
+
+The `inflation` command allows users to query the current minting inflation value.
+
+```shell
+simd query mint inflation [flags]
+```
+
+Example:
+
+```shell
+simd query mint inflation
+```
+
+Example Output:
+
+```shell
+0.199200302563256955
+```
+
+##### params
+
+The `params` command allows users to query the current minting parameters.
+
+```shell
+simd query mint params [flags]
+```
+
+Example:
+
+```shell
+simd query mint params
+```
+
+Example Output:
+
+```yml
+blocks_per_year: "4360000"
+goal_bonded: "0.670000000000000000"
+inflation_max: "0.200000000000000000"
+inflation_min: "0.070000000000000000"
+inflation_rate_change: "0.130000000000000000"
+mint_denom: stake
+```
+
+### gRPC
+
+A user can query the `mint` module using gRPC endpoints.
+
+#### AnnualProvisions
+
+The `AnnualProvisions` endpoint allows users to query the current minting annual provisions value.
+
+```shell
+/cosmos.mint.v1beta1.Query/AnnualProvisions
+```
+
+Example:
+
+```shell
+grpcurl -plaintext localhost:9090 cosmos.mint.v1beta1.Query/AnnualProvisions
+```
+
+Example Output:
+
+```json
+{
+ "annualProvisions": "1432452520532626265712995618"
+}
+```
+
+#### Inflation
+
+The `Inflation` endpoint allows users to query the current minting inflation value.
+
+```shell
+/cosmos.mint.v1beta1.Query/Inflation
+```
+
+Example:
+
+```shell
+grpcurl -plaintext localhost:9090 cosmos.mint.v1beta1.Query/Inflation
+```
+
+Example Output:
+
+```json
+{
+ "inflation": "130197115720711261"
+}
+```
+
+#### Params
+
+The `Params` endpoint allows users to query the current minting parameters.
+
+```shell
+/cosmos.mint.v1beta1.Query/Params
+```
+
+Example:
+
+```shell
+grpcurl -plaintext localhost:9090 cosmos.mint.v1beta1.Query/Params
+```
+
+Example Output:
+
+```json
+{
+ "params": {
+ "mintDenom": "stake",
+ "inflationRateChange": "130000000000000000",
+ "inflationMax": "200000000000000000",
+ "inflationMin": "70000000000000000",
+ "goalBonded": "670000000000000000",
+ "blocksPerYear": "6311520"
+ }
+}
+```
+
+### REST
+
+A user can query the `mint` module using REST endpoints.
+
+#### annual-provisions
+
+```shell
+/cosmos/mint/v1beta1/annual_provisions
+```
+
+Example:
+
+```shell
+curl "localhost:1317/cosmos/mint/v1beta1/annual_provisions"
+```
+
+Example Output:
+
+```json
+{
+ "annualProvisions": "1432452520532626265712995618"
+}
+```
+
+#### inflation
+
+```shell
+/cosmos/mint/v1beta1/inflation
+```
+
+Example:
+
+```shell
+curl "localhost:1317/cosmos/mint/v1beta1/inflation"
+```
+
+Example Output:
+
+```json
+{
+ "inflation": "130197115720711261"
+}
+```
+
+#### params
+
+```shell
+/cosmos/mint/v1beta1/params
+```
+
+Example:
+
+```shell
+curl "localhost:1317/cosmos/mint/v1beta1/params"
+```
+
+Example Output:
+
+```json
+{
+ "params": {
+ "mintDenom": "stake",
+ "inflationRateChange": "130000000000000000",
+ "inflationMax": "200000000000000000",
+ "inflationMin": "70000000000000000",
+ "goalBonded": "670000000000000000",
+ "blocksPerYear": "6311520"
+ }
+}
+```
diff --git a/copy-of-sdk-docs/build/modules/nft/README.md b/copy-of-sdk-docs/build/modules/nft/README.md
new file mode 100644
index 00000000..4348aaca
--- /dev/null
+++ b/copy-of-sdk-docs/build/modules/nft/README.md
@@ -0,0 +1,91 @@
+---
+sidebar_position: 1
+---
+
+# `x/nft`
+
+⚠️ **DEPRECATED**: This package is deprecated and will be removed in the next major release. The `x/nft` module will be moved to a separate repo `github.com/cosmos/cosmos-sdk-legacy`.
+
+## Abstract
+
+`x/nft` is an implementation of a Cosmos SDK module, per [ADR 43](https://github.com/cosmos/cosmos-sdk/blob/main/docs/architecture/adr-043-nft-module.md), that allows you to create nft classifications and to create, transfer, update, and query nfts. It is fully compatible with the ERC721 specification.
+
+## Contents
+
+* [Concepts](#concepts)
+    * [Class](#class)
+    * [NFT](#nft)
+* [State](#state)
+    * [Class](#class-1)
+    * [NFT](#nft-1)
+    * [NFTOfClassByOwner](#nftofclassbyowner)
+    * [Owner](#owner)
+    * [TotalSupply](#totalsupply)
+* [Messages](#messages)
+    * [MsgSend](#msgsend)
+* [Events](#events)
+
+## Concepts
+
+### Class
+
+The `x/nft` module defines a struct `Class` to describe the common characteristics of a class of nfts. Under a class you can create a variety of nfts, making a class equivalent to an ERC721 contract on Ethereum. The design is defined in [ADR 043](https://github.com/cosmos/cosmos-sdk/blob/main/docs/architecture/adr-043-nft-module.md).
+
+### NFT
+
+NFT stands for Non-Fungible Token. Because NFTs are non-fungible, each one can represent something unique. The nfts implemented by this module are fully compatible with the Ethereum ERC721 standard.
+
+## State
+
+### Class
+
+Class is mainly composed of `id`, `name`, `symbol`, `description`, `uri`, `uri_hash` and `data`, where `id` is the unique identifier of the class, similar to an Ethereum ERC721 contract address; the other fields are optional.
+
+* Class: `0x01 | classID | -> ProtocolBuffer(Class)`
+
+### NFT
+
+NFT is mainly composed of `class_id`, `id`, `uri`, `uri_hash` and `data`. Together, `class_id` and `id` form a two-tuple that uniquely identifies an nft; `uri` and `uri_hash` are optional and identify the off-chain storage location of the nft; `data` is an `Any` type, and chains that integrate the `x/nft` module can customize their nfts by extending this field.
+
+* NFT: `0x02 | classID | 0x00 | nftID |-> ProtocolBuffer(NFT)`
+
+### NFTOfClassByOwner
+
+NFTOfClassByOwner is an index whose sole purpose is to support querying all nfts of a class held by a given owner, keyed by owner and classID.
+
+* NFTOfClassByOwner: `0x03 | owner | 0x00 | classID | 0x00 | nftID |-> 0x01`
+
+### Owner
+
+Since there is no extra field in NFT to indicate the owner of nft, an additional key-value pair is used to save the ownership of nft. With the transfer of nft, the key-value pair is updated synchronously.
+
+* OwnerKey: `0x04 | classID | 0x00 | nftID |-> owner`
+
+### TotalSupply
+
+TotalSupply tracks the number of nfts under a class. A mint operation under a class increases its supply by one, and a burn operation decreases it by one.
+
+* TotalSupply: `0x05 | classID |-> totalSupply`
+
+## Messages
+
+In this section we describe the processing of messages for the NFT module.
+
+:::warning
+The validation of `ClassID` and `NftID` is left to the app developer.
+The SDK does not provide any validation for these fields.
+:::
+
+### MsgSend
+
+You can use the `MsgSend` message to transfer the ownership of nft. This is a function provided by the `x/nft` module. Of course, you can use the `Transfer` method to implement your own transfer logic, but you need to pay extra attention to the transfer permissions.
+
+The message handling should fail if:
+
+* provided `ClassID` does not exist.
+* provided `Id` does not exist.
+* provided `Sender` is not the owner of the nft.
+
+## Events
+
+The nft module emits proto events defined in [the Protobuf reference](https://buf.build/cosmos/cosmos-sdk/docs/main:cosmos.nft.v1beta1).
diff --git a/copy-of-sdk-docs/build/modules/params/README.md b/copy-of-sdk-docs/build/modules/params/README.md
new file mode 100644
index 00000000..10b47da4
--- /dev/null
+++ b/copy-of-sdk-docs/build/modules/params/README.md
@@ -0,0 +1,79 @@
+---
+sidebar_position: 1
+---
+
+# `x/params`
+
+NOTE: `x/params` is deprecated as of Cosmos SDK v0.53 and will be removed in the next release.
+
+## Abstract
+
+Package params provides a globally available parameter store.
+
+There are two main types, Keeper and Subspace. Subspace is an isolated namespace for a
+paramstore, where keys are prefixed by preconfigured spacename. Keeper has a
+permission to access all existing spaces.
+
+Subspace can be used by the individual keepers, which need a private parameter store
+that the other keepers cannot modify. The params Keeper can be used to add a route to `x/gov` router in order to modify any parameter in case a proposal passes.
+
+The following sections explain how to use the params module from both master and user modules.
+
+## Contents
+
+* [Keeper](#keeper)
+* [Subspace](#subspace)
+ * [Key](#key)
+ * [KeyTable](#keytable)
+ * [ParamSet](#paramset)
+
+## Keeper
+
+In the app initialization stage, [subspaces](#subspace) can be allocated for other modules' keeper using `Keeper.Subspace` and are stored in `Keeper.spaces`. Then, those modules can have a reference to their specific parameter store through `Keeper.GetSubspace`.
+
+Example:
+
+```go
+type ExampleKeeper struct {
+ paramSpace paramtypes.Subspace
+}
+
+func (k ExampleKeeper) SetParams(ctx sdk.Context, params types.Params) {
+    k.paramSpace.SetParamSet(ctx, &params)
+}
+```
+
+## Subspace
+
+`Subspace` is a prefixed subspace of the parameter store. Each module which uses the
+parameter store will take a `Subspace` to isolate permission to access.
+
+### Key
+
+Parameter keys are human readable alphanumeric strings. A parameter for the key
+`"ExampleParameter"` is stored under `[]byte("SubspaceName" + "/" + "ExampleParameter")`,
+ where `"SubspaceName"` is the name of the subspace.
+
+Subkeys are secondary parameter keys that are used along with a primary parameter key.
+Subkeys can be used for grouping or dynamic parameter key generation during runtime.
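+
+The key layout described above can be sketched directly; `paramStoreKey` is an illustrative helper, not a params module API:
+
+```go
+package main
+
+import "fmt"
+
+// paramStoreKey shows the layout described above: subspace name, a "/"
+// separator, then the parameter key.
+func paramStoreKey(subspace, key string) []byte {
+	return []byte(subspace + "/" + key)
+}
+
+func main() {
+	fmt.Println(string(paramStoreKey("SubspaceName", "ExampleParameter")))
+}
+```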
+
+### KeyTable
+
+All of the parameter keys that will be used should be registered at compile
+time. `KeyTable` is essentially a `map[string]attribute`, where the `string` is a parameter key.
+
+Currently, `attribute` consists of a `reflect.Type`, which indicates the parameter
+type to check that provided key and value are compatible and registered, as well as a function `ValueValidatorFn` to validate values.
+
+Only primary keys have to be registered on the `KeyTable`. Subkeys inherit the
+attribute of the primary key.
+
+### ParamSet
+
+Modules often define parameters as a proto message. The generated struct can implement
+`ParamSet` interface to be used with the following methods:
+
+* `KeyTable.RegisterParamSet()`: registers all parameters in the struct
+* `Subspace.{Get, Set}ParamSet()`: gets parameters into & sets parameters from the struct
+
+The implementer should be a pointer in order to use `GetParamSet()`.
diff --git a/copy-of-sdk-docs/build/modules/protocolpool/README.md b/copy-of-sdk-docs/build/modules/protocolpool/README.md
new file mode 100644
index 00000000..d88b1ee1
--- /dev/null
+++ b/copy-of-sdk-docs/build/modules/protocolpool/README.md
@@ -0,0 +1,162 @@
+---
+sidebar_position: 1
+---
+
+# `x/protocolpool`
+
+## Concepts
+
+`x/protocolpool` is a supplemental Cosmos SDK module that handles functionality for community pool funds. The module provides a separate module account for the community pool making it easier to track the pool assets. Starting with v0.53 of the Cosmos SDK, community funds can be tracked using this module instead of the `x/distribution` module. Funds are migrated from the `x/distribution` module's community pool to `x/protocolpool`'s module account automatically.
+
+This module is `supplemental`; it is not required to run a Cosmos SDK chain. `x/protocolpool` enhances the community pool functionality provided by `x/distribution` and enables custom modules to further extend the community pool.
+
+Note: _as long as an external community pool keeper (here, `x/protocolpool`) is wired in DI configs, `x/distribution` will automatically use it for its external pool._
+
+## Usage Limitations
+
+The following `x/distribution` handlers will now return an error when the `protocolpool` module is used with `x/distribution`:
+
+**QueryService**
+
+* `CommunityPool`
+
+**MsgService**
+
+* `CommunityPoolSpend`
+* `FundCommunityPool`
+
+If you have services that rely on this functionality from `x/distribution`, please update them to use the `x/protocolpool` equivalents.
+
+## State Transitions
+
+### FundCommunityPool
+
+FundCommunityPool can be called by any valid account to send funds to the `x/protocolpool` module account.
+
+```protobuf
+ // FundCommunityPool defines a method to allow an account to directly
+ // fund the community pool.
+ rpc FundCommunityPool(MsgFundCommunityPool) returns (MsgFundCommunityPoolResponse);
+```
+
+### CommunityPoolSpend
+
+CommunityPoolSpend can be called by the module authority (default governance module account) or any account with authorization to spend funds from the `x/protocolpool` module account to a receiver address.
+
+```protobuf
+ // CommunityPoolSpend defines a governance operation for sending tokens from
+ // the community pool in the x/protocolpool module to another account, which
+ // could be the governance module itself. The authority is defined in the
+ // keeper.
+ rpc CommunityPoolSpend(MsgCommunityPoolSpend) returns (MsgCommunityPoolSpendResponse);
+```
+
+### CreateContinuousFund
+
+CreateContinuousFund is a message used to initiate a continuous fund for a specific recipient. The proposed percentage of funds will be distributed only on withdraw request for the recipient. The fund distribution continues until expiry time is reached or continuous fund request is canceled.
+NOTE: This feature is designed to work with the SDK's default bond denom.
+
+```protobuf
+ // CreateContinuousFund defines a method to distribute a percentage of funds to an address continuously.
+ // This ContinuousFund can be indefinite or run until a given expiry time.
+ // Funds come from validator block rewards from x/distribution, but may also come from
+ // any user who funds the ProtocolPoolEscrow module account directly through x/bank.
+ rpc CreateContinuousFund(MsgCreateContinuousFund) returns (MsgCreateContinuousFundResponse);
+```
+
+### CancelContinuousFund
+
+CancelContinuousFund is a message used to cancel an existing continuous fund proposal for a specific recipient. Cancelling a continuous fund stops further distribution of funds, and the state object is removed from storage.
+
+```protobuf
+ // CancelContinuousFund defines a method for cancelling continuous fund.
+ rpc CancelContinuousFund(MsgCancelContinuousFund) returns (MsgCancelContinuousFundResponse);
+```
+
+## Messages
+
+### MsgFundCommunityPool
+
+This message sends coins directly from the sender to the community pool.
+
+:::tip
+If you know the `x/protocolpool` module account address, you can directly use bank `send` transaction instead.
+:::
+
+```protobuf reference
+https://github.com/cosmos/cosmos-sdk/blob/release/v0.53.x/proto/cosmos/protocolpool/v1/tx.proto#L43-L53
+```
+
+* The msg will fail if the amount cannot be transferred from the sender to the `x/protocolpool` module account.
+
+```go
+func (k Keeper) FundCommunityPool(ctx context.Context, amount sdk.Coins, sender sdk.AccAddress) error {
+ return k.bankKeeper.SendCoinsFromAccountToModule(ctx, sender, types.ModuleName, amount)
+}
+```
+
+### MsgCommunityPoolSpend
+
+This message distributes funds from the `x/protocolpool` module account to the recipient using `DistributeFromCommunityPool` keeper method.
+
+```protobuf reference
+https://github.com/cosmos/cosmos-sdk/blob/release/v0.53.x/proto/cosmos/protocolpool/v1/tx.proto#L58-L69
+```
+
+The message will fail under the following conditions:
+
+* The amount cannot be transferred to the recipient from the `x/protocolpool` module account.
+* The `recipient` address is restricted
+
+```go
+func (k Keeper) DistributeFromCommunityPool(ctx context.Context, amount sdk.Coins, receiveAddr sdk.AccAddress) error {
+ return k.bankKeeper.SendCoinsFromModuleToAccount(ctx, types.ModuleName, receiveAddr, amount)
+}
+```
+
+### MsgCreateContinuousFund
+
+This message is used to create a continuous fund for a specific recipient. The proposed percentage of funds will be distributed only on withdraw request for the recipient. This fund distribution continues until expiry time is reached or continuous fund request is canceled.
+
+```protobuf reference
+https://github.com/cosmos/cosmos-sdk/blob/release/v0.53.x/proto/cosmos/protocolpool/v1/tx.proto#L114-L130
+```
+
+The message will fail under the following conditions:
+
+* The recipient address is empty or restricted.
+* The percentage is zero/negative/greater than one.
+* The Expiry time is less than the current block time.
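+
+The failure conditions above can be sketched as a validation routine. This is illustrative only: the real keeper uses `math.LegacyDec` for the percentage and SDK address types for the recipient.
+
+```go
+package main
+
+import (
+	"errors"
+	"fmt"
+	"time"
+)
+
+// validateContinuousFund mirrors the failure conditions listed above
+// (hypothetical helper; float64 stands in for math.LegacyDec).
+func validateContinuousFund(recipient string, percentage float64, expiry, now time.Time) error {
+	if recipient == "" {
+		return errors.New("recipient cannot be empty")
+	}
+	if percentage <= 0 || percentage > 1 {
+		return errors.New("percentage must be in (0, 1]")
+	}
+	if !expiry.IsZero() && expiry.Before(now) {
+		return errors.New("expiry cannot be before the current block time")
+	}
+	return nil
+}
+
+func main() {
+	now := time.Now()
+	fmt.Println(validateContinuousFund("cosmos1..", 0.5, now.Add(time.Hour), now))
+	fmt.Println(validateContinuousFund("cosmos1..", 1.5, now.Add(time.Hour), now))
+}
+```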
+
+:::warning
+If two continuous fund proposals to the same address are created, the previous ContinuousFund will be updated with the new ContinuousFund.
+:::
+
+```go reference
+https://github.com/cosmos/cosmos-sdk/blob/release/v0.53.x/x/protocolpool/keeper/msg_server.go#L103-L166
+```
+
+### MsgCancelContinuousFund
+
+This message is used to cancel an existing continuous fund proposal for a specific recipient. Once canceled, the continuous fund will no longer distribute funds at each begin block, and the state object will be removed.
+
+```protobuf reference
+https://github.com/cosmos/cosmos-sdk/blob/release/v0.53.x/x/protocolpool/proto/cosmos/protocolpool/v1/tx.proto#L136-L161
+```
+
+The message will fail under the following conditions:
+
+* The recipient address is empty or restricted.
+* The ContinuousFund for the recipient does not exist.
+
+```go reference
+https://github.com/cosmos/cosmos-sdk/blob/release/v0.53.x/x/protocolpool/keeper/msg_server.go#L188-L226
+```
+
+## Client
+
+The module takes advantage of `AutoCLI` for its client commands:
+
+```go reference
+https://github.com/cosmos/cosmos-sdk/blob/release/v0.53.x/x/protocolpool/autocli.go
+```
diff --git a/copy-of-sdk-docs/build/modules/slashing/README.md b/copy-of-sdk-docs/build/modules/slashing/README.md
new file mode 100644
index 00000000..5bf95b80
--- /dev/null
+++ b/copy-of-sdk-docs/build/modules/slashing/README.md
@@ -0,0 +1,813 @@
+---
+sidebar_position: 1
+---
+
+# `x/slashing`
+
+## Abstract
+
+This section specifies the slashing module of the Cosmos SDK, which implements functionality
+first outlined in the [Cosmos Whitepaper](https://cosmos.network/about/whitepaper) in June 2016.
+
+The slashing module enables Cosmos SDK-based blockchains to disincentivize any attributable action
+by a protocol-recognized actor with value at stake by penalizing them ("slashing").
+
+Penalties may include, but are not limited to:
+
+* Burning some amount of their stake
+* Removing their ability to vote on future blocks for a period of time
+
+This module will be used by the Cosmos Hub, the first hub in the Cosmos ecosystem.
+
+## Contents
+
+* [Concepts](#concepts)
+ * [States](#states)
+ * [Tombstone Caps](#tombstone-caps)
+ * [Infraction Timelines](#infraction-timelines)
+* [State](#state)
+ * [Signing Info (Liveness)](#signing-info-liveness)
+ * [Params](#params)
+* [Messages](#messages)
+ * [Unjail](#unjail)
+* [BeginBlock](#beginblock)
+ * [Liveness Tracking](#liveness-tracking)
+* [Hooks](#hooks)
+* [Events](#events)
+* [Staking Tombstone](#staking-tombstone)
+* [Parameters](#parameters)
+* [CLI](#cli)
+ * [Query](#query)
+ * [Transactions](#transactions)
+ * [gRPC](#grpc)
+ * [REST](#rest)
+
+## Concepts
+
+### States
+
+At any given time, there are any number of validators registered in the state
+machine. Each block, the top `MaxValidators` (defined by `x/staking`) validators
+who are not jailed become _bonded_, meaning that they may propose and vote on
+blocks. Validators who are _bonded_ are _at stake_, meaning that part or all of
+their stake and their delegators' stake is at risk if they commit a protocol fault.
+
+For each of these validators we keep a `ValidatorSigningInfo` record that contains
+information pertaining to the validator's liveness and other infraction-related
+attributes.
+
+### Tombstone Caps
+
+In order to mitigate the impact of initially likely categories of non-malicious
+protocol faults, the Cosmos Hub implements for each validator
+a _tombstone_ cap, which only allows a validator to be slashed once for a double
+sign fault. For example, if you misconfigure your HSM and double-sign a bunch of
+old blocks, you'll only be punished for the first double-sign (and then immediately tombstoned). This will still be quite expensive and desirable to avoid, but tombstone caps
+somewhat blunt the economic impact of unintentional misconfiguration.
+
+Liveness faults do not have caps, as they can't stack upon each other. Liveness faults are "detected" as soon as the infraction occurs, and the validators are immediately put in jail, so it is not possible for them to commit multiple liveness faults without unjailing in between.
+
+### Infraction Timelines
+
+To illustrate how the `x/slashing` module handles submitted evidence through
+CometBFT consensus, consider the following examples:
+
+**Definitions**:
+
+* `[` : timeline start
+* `]` : timeline end
+* `Cn` : infraction `n` committed
+* `Dn` : infraction `n` discovered
+* `Vb` : validator bonded
+* `Vu` : validator unbonded
+
+#### Single Double Sign Infraction
+
+\[----------C1----D1,Vu-----\]
+
+A single infraction is committed then later discovered, at which point the
+validator is unbonded and slashed at the full amount for the infraction.
+
+#### Multiple Double Sign Infractions
+
+\[----------C1--C2---C3---D1,D2,D3,Vu-----\]
+
+Multiple infractions are committed and then later discovered, at which point the
+validator is jailed and slashed for only one infraction. Because the validator
+is also tombstoned, they cannot rejoin the validator set.
+
+## State
+
+### Signing Info (Liveness)
+
+Every block includes a set of precommits by the validators for the previous block,
+known as the `LastCommitInfo` provided by CometBFT. A `LastCommitInfo` is valid so
+long as it contains precommits from +2/3 of total voting power.
+
+Proposers are incentivized to include precommits from all validators in the CometBFT `LastCommitInfo`
+by receiving additional fees proportional to the difference between the voting
+power included in the `LastCommitInfo` and +2/3 (see [fee distribution](../distribution/README.md#begin-block)).
+
+```go
+type LastCommitInfo struct {
+ Round int32
+ Votes []VoteInfo
+}
+```
+
+Validators are penalized for failing to be included in the `LastCommitInfo` for some
+number of blocks by being automatically jailed, potentially slashed, and unbonded.
+
+Information about a validator's liveness activity is tracked through `ValidatorSigningInfo`.
+It is indexed in the store as follows:
+
+* ValidatorSigningInfo: `0x01 | ConsAddrLen (1 byte) | ConsAddress -> ProtocolBuffer(ValSigningInfo)`
+* MissedBlocksBitArray: `0x02 | ConsAddrLen (1 byte) | ConsAddress | LittleEndianUint64(signArrayIndex) -> VarInt(didMiss)` (varint is a number encoding format)
+
+The first mapping allows us to easily look up the recent signing info for a
+validator based on the validator's consensus address.
+
+The second mapping (`MissedBlocksBitArray`) acts
+as a bit-array of size `SignedBlocksWindow` that tells us if the validator missed
+the block for a given index in the bit-array. The index in the bit-array is given
+as little endian uint64.
+The result is a `varint` that takes on `0` or `1`, where `0` indicates the
+validator did not miss (did sign) the corresponding block, and `1` indicates
+they missed the block (did not sign).
+
+Note that the `MissedBlocksBitArray` is not explicitly initialized up-front. Keys
+are added as we progress through the first `SignedBlocksWindow` blocks for a newly
+bonded validator. The `SignedBlocksWindow` parameter defines the size
+(number of blocks) of the sliding window used to track validator liveness.
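+
+As an illustrative sketch of the two key layouts above (helper names are hypothetical; the real implementation lives in `x/slashing/types`), the keys could be assembled as:
+
+```go
+package main
+
+import (
+	"encoding/binary"
+	"fmt"
+)
+
+// signingInfoKey builds 0x01 | ConsAddrLen (1 byte) | ConsAddress.
+func signingInfoKey(consAddr []byte) []byte {
+	key := []byte{0x01, byte(len(consAddr))}
+	return append(key, consAddr...)
+}
+
+// missedBlockKey builds 0x02 | ConsAddrLen (1 byte) | ConsAddress | LittleEndianUint64(index).
+func missedBlockKey(consAddr []byte, index uint64) []byte {
+	key := []byte{0x02, byte(len(consAddr))}
+	key = append(key, consAddr...)
+	idx := make([]byte, 8)
+	binary.LittleEndian.PutUint64(idx, index)
+	return append(key, idx...)
+}
+
+func main() {
+	consAddr := make([]byte, 20) // consensus addresses are 20 bytes long
+	fmt.Printf("%x\n", signingInfoKey(consAddr)[:2]) // prefix + length byte: 0114
+	fmt.Println(len(missedBlockKey(consAddr, 42)))   // 1 + 1 + 20 + 8 = 30
+}
+```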
+
+The information stored for tracking validator liveness is as follows:
+
+```protobuf reference
+https://github.com/cosmos/cosmos-sdk/blob/v0.47.0-rc1/proto/cosmos/slashing/v1beta1/slashing.proto#L13-L35
+```
+
+### Params
+
+The slashing module stores its params in state with the prefix of `0x00`;
+they can be updated via governance or by the address with authority.
+
+* Params: `0x00 | ProtocolBuffer(Params)`
+
+```protobuf reference
+https://github.com/cosmos/cosmos-sdk/blob/v0.47.0-rc1/proto/cosmos/slashing/v1beta1/slashing.proto#L37-L59
+```
+
+## Messages
+
+In this section we describe the processing of messages for the `slashing` module.
+
+### Unjail
+
+If a validator was automatically unbonded due to downtime and wishes to come back online &
+possibly rejoin the bonded set, it must send `MsgUnjail`:
+
+```protobuf
+// MsgUnjail is an sdk.Msg used for unjailing a jailed validator, thus returning
+// them into the bonded validator set, so they can begin receiving provisions
+// and rewards again.
+message MsgUnjail {
+ string validator_addr = 1;
+}
+```
+
+Below is a pseudocode of the `MsgSrv/Unjail` RPC:
+
+```go
+unjail(tx MsgUnjail)
+ validator = getValidator(tx.ValidatorAddr)
+ if validator == nil
+ fail with "No validator found"
+
+ if getSelfDelegation(validator) == 0
+ fail with "validator must self delegate before unjailing"
+
+ if !validator.Jailed
+ fail with "Validator not jailed, cannot unjail"
+
+ info = GetValidatorSigningInfo(validator.ConsAddr)
+ if info.Tombstoned
+ fail with "Tombstoned validator cannot be unjailed"
+ if block time < info.JailedUntil
+ fail with "Validator still jailed, cannot unjail until period has expired"
+
+ validator.Jailed = false
+ setValidator(validator)
+
+ return
+```
+
+If the validator has enough stake to be in the top `n = MaxValidators`, it will be automatically rebonded,
+and all delegators still delegated to the validator will be rebonded and begin to again collect
+provisions and rewards.
+
+## BeginBlock
+
+### Liveness Tracking
+
+At the beginning of each block, we update the `ValidatorSigningInfo` for each
+validator and check if they've crossed below the liveness threshold over a
+sliding window. This sliding window is defined by `SignedBlocksWindow` and the
+index in this window is determined by `IndexOffset` found in the validator's
+`ValidatorSigningInfo`. For each block processed, the `IndexOffset` is incremented
+regardless if the validator signed or not. Once the index is determined, the
+`MissedBlocksBitArray` and `MissedBlocksCounter` are updated accordingly.
+
+Finally, in order to determine if a validator crosses below the liveness threshold,
+we fetch the maximum number of blocks missed, `maxMissed`, which is
+`SignedBlocksWindow - (MinSignedPerWindow * SignedBlocksWindow)` and the minimum
+height at which we can determine liveness, `minHeight`. If the current block is
+greater than `minHeight` and the validator's `MissedBlocksCounter` is greater than
+`maxMissed`, they will be slashed by `SlashFractionDowntime`, will be jailed
+for `DowntimeJailDuration`, and have the following values reset:
+`MissedBlocksBitArray`, `MissedBlocksCounter`, and `IndexOffset`.
+
+**Note**: Liveness slashes do **NOT** lead to a tombstoning.
+
+```go
+height := block.Height
+
+for vote in block.LastCommitInfo.Votes {
+ signInfo := GetValidatorSigningInfo(vote.Validator.Address)
+
+ // This is a relative index, so we count blocks the validator SHOULD have
+ // signed. We use the 0-value default signing info if not present, except for
+ // start height.
+ index := signInfo.IndexOffset % SignedBlocksWindow()
+ signInfo.IndexOffset++
+
+ // Update MissedBlocksBitArray and MissedBlocksCounter. The MissedBlocksCounter
+ // just tracks the sum of MissedBlocksBitArray. That way we avoid needing to
+ // read/write the whole array each time.
+ missedPrevious := GetValidatorMissedBlockBitArray(vote.Validator.Address, index)
+ missed := !vote.SignedLastBlock
+
+ switch {
+ case !missedPrevious && missed:
+ // array index has changed from not missed to missed, increment counter
+ SetValidatorMissedBlockBitArray(vote.Validator.Address, index, true)
+ signInfo.MissedBlocksCounter++
+
+ case missedPrevious && !missed:
+ // array index has changed from missed to not missed, decrement counter
+ SetValidatorMissedBlockBitArray(vote.Validator.Address, index, false)
+ signInfo.MissedBlocksCounter--
+
+ default:
+ // array index at this index has not changed; no need to update counter
+ }
+
+ if missed {
+ // emit events...
+ }
+
+ minHeight := signInfo.StartHeight + SignedBlocksWindow()
+ maxMissed := SignedBlocksWindow() - MinSignedPerWindow()
+
+ // If we are past the minimum height and the validator has missed too many
+ // blocks, jail and slash them.
+ if height > minHeight && signInfo.MissedBlocksCounter > maxMissed {
+ validator := ValidatorByConsAddr(vote.Validator.Address)
+
+ // emit events...
+
+ // We need to retrieve the stake distribution which signed the block, so we
+ // subtract ValidatorUpdateDelay from the block height, and subtract an
+ // additional 1 since this is the LastCommit.
+ //
+ // Note, that this CAN result in a negative "distributionHeight" up to
+ // -ValidatorUpdateDelay-1, i.e. at the end of the pre-genesis block (none) = at the beginning of the genesis block.
+ // That's fine since this is just used to filter unbonding delegations & redelegations.
+ distributionHeight := height - sdk.ValidatorUpdateDelay - 1
+
+ SlashWithInfractionReason(vote.Validator.Address, distributionHeight, vote.Validator.Power, SlashFractionDowntime(), stakingtypes.Downtime)
+ Jail(vote.Validator.Address)
+
+ signInfo.JailedUntil = block.Time.Add(DowntimeJailDuration())
+
+ // We need to reset the counter & array so that the validator won't be
+ // immediately slashed for downtime upon rebonding.
+ signInfo.MissedBlocksCounter = 0
+ signInfo.IndexOffset = 0
+ ClearValidatorMissedBlockBitArray(vote.Validator.Address)
+ }
+
+ SetValidatorSigningInfo(vote.Validator.Address, signInfo)
+}
+```
+
+## Hooks
+
+This section contains a description of the module's `hooks`. Hooks are operations that are executed automatically when events are raised.
+
+### Staking hooks
+
+The slashing module implements the `StakingHooks` defined in `x/staking`, which are used for record-keeping of validator information. During app initialization, these hooks should be registered in the staking module struct.
+
+The following hooks impact the slashing state:
+
+* `AfterValidatorBonded` creates a `ValidatorSigningInfo` instance as described in the following section.
+* `AfterValidatorCreated` stores a validator's consensus key.
+* `AfterValidatorRemoved` removes a validator's consensus key.
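+
+The registration pattern can be illustrated with a trimmed-down sketch (interface and method names simplified here; the real `StakingHooks` interface has more methods and `sdk.Context`-based signatures):
+
+```go
+package main
+
+import "fmt"
+
+// StakingHooks is a reduced stand-in for the x/staking hook interface.
+type StakingHooks interface {
+	AfterValidatorBonded(consAddr string)
+}
+
+// SlashingHooks mimics the slashing module's implementation: on first
+// bonding, the real module creates the ValidatorSigningInfo record.
+type SlashingHooks struct {
+	Created []string
+}
+
+func (h *SlashingHooks) AfterValidatorBonded(consAddr string) {
+	h.Created = append(h.Created, consAddr) // record-keeping stand-in
+}
+
+// StakingKeeper holds the registered hooks, as the staking module struct does.
+type StakingKeeper struct{ hooks StakingHooks }
+
+func (k *StakingKeeper) SetHooks(h StakingHooks) { k.hooks = h }
+
+func (k *StakingKeeper) BondValidator(consAddr string) {
+	// ...bonding state transition elided...
+	k.hooks.AfterValidatorBonded(consAddr) // hook fires automatically
+}
+
+func main() {
+	hooks := &SlashingHooks{}
+	keeper := &StakingKeeper{}
+	keeper.SetHooks(hooks)
+	keeper.BondValidator("cosmosvalcons1example")
+	fmt.Println(hooks.Created) // [cosmosvalcons1example]
+}
+```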
+
+### Validator Bonded
+
+Upon successful first-time bonding of a new validator, we create a new `ValidatorSigningInfo` structure for the
+now-bonded validator, with `StartHeight` set to the current block height.
+
+If the validator was out of the validator set and gets bonded again, its new bonded height is set.
+
+```go
+onValidatorBonded(address sdk.ValAddress)
+
+ signingInfo, found = GetValidatorSigningInfo(address)
+ if !found {
+   signingInfo = ValidatorSigningInfo {
+     StartHeight         : CurrentHeight,
+     IndexOffset         : 0,
+     JailedUntil         : time.Unix(0, 0),
+     Tombstoned          : false,
+     MissedBlocksCounter : 0
+   }
+ } else {
+   signingInfo.StartHeight = CurrentHeight
+ }
+
+ setValidatorSigningInfo(signingInfo)
+
+ return
+```
+
+## Events
+
+The slashing module emits the following events:
+
+### MsgServer
+
+#### MsgUnjail
+
+| Type | Attribute Key | Attribute Value |
+| ------- | ------------- | ------------------ |
+| message | module | slashing |
+| message | sender | {validatorAddress} |
+
+### Keeper
+
+#### BeginBlocker: HandleValidatorSignature
+
+| Type | Attribute Key | Attribute Value |
+| ----- | ------------- | --------------------------- |
+| slash | address | {validatorConsensusAddress} |
+| slash | power | {validatorPower} |
+| slash | reason | {slashReason} |
+| slash | jailed [0] | {validatorConsensusAddress} |
+| slash | burned coins | {math.Int} |
+
+* [0] Only included if the validator is jailed.
+
+| Type | Attribute Key | Attribute Value |
+| -------- | ------------- | --------------------------- |
+| liveness | address | {validatorConsensusAddress} |
+| liveness | missed_blocks | {missedBlocksCounter} |
+| liveness | height | {blockHeight} |
+
+#### Slash
+
+* same as `"slash"` event from `HandleValidatorSignature`, but without the `jailed` attribute.
+
+#### Jail
+
+| Type | Attribute Key | Attribute Value |
+| ----- | ------------- | ------------------ |
+| slash | jailed | {validatorAddress} |
+
+## Staking Tombstone
+
+### Abstract
+
+In the current implementation of the `slashing` module, when the consensus engine
+informs the state machine of a validator's consensus fault, the validator is
+partially slashed, and put into a "jail period", a period of time in which they
+are not allowed to rejoin the validator set. However, because of the nature of
+consensus faults and ABCI, there can be a delay between an infraction occurring,
+and evidence of the infraction reaching the state machine (this is one of the
+primary reasons for the existence of the unbonding period).
+
+> Note: The tombstone concept only applies to faults that have a delay between
+> the infraction occurring and evidence reaching the state machine. For example,
+> evidence of a validator double signing may take a while to reach the state machine
+> due to unpredictable evidence gossip layer delays and the ability of validators to
+> selectively reveal double-signatures (e.g. to infrequently-online light clients).
+> Liveness slashing, on the other hand, is detected immediately as soon as the
+> infraction occurs, and therefore no slashing period is needed. A validator is
+> immediately put into jail period, and they cannot commit another liveness fault
+> until they unjail. In the future, there may be other types of byzantine faults
+> that have delays (for example, submitting evidence of an invalid proposal as a transaction).
+> When implemented, it will have to be decided whether these future types of
+> byzantine faults will result in a tombstoning (and if not, the slash amounts
+> will not be capped by a slashing period).
+
+In the current system design, once a validator is put in the jail for a consensus
+fault, after the `JailPeriod` they are allowed to send a transaction to `unjail`
+themselves, and thus rejoin the validator set.
+
+One of the "design desires" of the `slashing` module is that if multiple
+infractions occur before evidence is executed (and a validator is put in jail),
+they should only be punished for the single worst infraction, not cumulatively.
+For example, if the sequence of events is:
+
+1. Validator A commits Infraction 1 (worth 30% slash)
+2. Validator A commits Infraction 2 (worth 40% slash)
+3. Validator A commits Infraction 3 (worth 35% slash)
+4. Evidence for Infraction 1 reaches state machine (and validator is put in jail)
+5. Evidence for Infraction 2 reaches state machine
+6. Evidence for Infraction 3 reaches state machine
+
+Only Infraction 2 should have its slash take effect, as it is the highest. This
+is done so that in the case of the compromise of a validator's consensus key,
+they will only be punished once, even if the hacker double-signs many blocks.
+Because the unjailing has to be done with the validator's operator key, they
+have a chance to re-secure their consensus key, and then signal that they are
+ready using their operator key. We call this period during which we track only
+the max infraction the "slashing period".
+
+Once a validator rejoins by unjailing themselves, we begin a new slashing period;
+if they commit a new infraction after unjailing, it gets slashed cumulatively on
+top of the worst infraction from the previous slashing period.
+
+However, while infractions are grouped based on slashing periods, because
+evidence can be submitted up to an `unbondingPeriod` after the infraction, we
+still have to allow for evidence to be submitted for previous slashing periods.
+For example, if the sequence of events is:
+
+1. Validator A commits Infraction 1 (worth 30% slash)
+2. Validator A commits Infraction 2 (worth 40% slash)
+3. Evidence for Infraction 1 reaches state machine (and Validator A is put in jail)
+4. Validator A unjails
+
+We are now in a new slashing period, however we still have to keep the door open
+for the previous infraction, as the evidence for Infraction 2 may still come in.
+As the number of slashing periods increases, it creates more complexity as we have
+to keep track of the highest infraction amount for every single slashing period.
+
+> Note: Currently, according to the `slashing` module spec, a new slashing period
+> is created every time a validator is unbonded then rebonded. This should probably
+> be changed to jailed/unjailed. See issue [#3205](https://github.com/cosmos/cosmos-sdk/issues/3205)
+> for further details. For the remainder of this, I will assume that we only start
+> a new slashing period when a validator gets unjailed.
+
+The maximum number of slashing periods is `len(UnbondingPeriod) / len(JailPeriod)`.
+The current defaults in Gaia for the `UnbondingPeriod` and `JailPeriod` are 3 weeks
+and 2 days, respectively. This means there could potentially be up to 11 slashing
+periods concurrently being tracked per validator. If we set the `JailPeriod >= UnbondingPeriod`,
+we only have to track 1 slashing period (i.e., not have to track slashing periods).
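+
+The arithmetic behind the "up to 11" figure can be checked directly (durations taken from the Gaia defaults quoted above):
+
+```go
+package main
+
+import (
+	"fmt"
+	"math"
+	"time"
+)
+
+func main() {
+	unbondingPeriod := 21 * 24 * time.Hour // 3 weeks
+	jailPeriod := 2 * 24 * time.Hour       // 2 days
+
+	// Worst case: a new slashing period can start every JailPeriod, and
+	// evidence may still arrive for anything within the UnbondingPeriod.
+	periods := int(math.Ceil(float64(unbondingPeriod) / float64(jailPeriod)))
+	fmt.Println(periods) // 11
+}
+```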
+
+Currently, in the jail period implementation, once a validator unjails, all of
+their delegators who are delegated to them (haven't unbonded / redelegated away),
+stay with them. Given that consensus safety faults are so egregious
+(way more so than liveness faults), it is probably prudent to have delegators not
+"auto-rebond" to the validator.
+
+#### Proposal: infinite jail
+
+We propose setting the "jail time" for a
+validator who commits a consensus safety fault, to `infinite` (i.e. a tombstone state).
+This essentially kicks the validator out of the validator set and does not allow
+them to re-enter the validator set. All of their delegators (including the operator themselves)
+have to either unbond or redelegate away. The validator operator can create a new
+validator if they would like, with a new operator key and consensus key, but they
+have to "re-earn" their delegations back.
+
+Implementing the tombstone system and getting rid of the slashing period tracking
+will make the `slashing` module way simpler, especially because we can remove all
+of the hooks defined in the `slashing` module consumed by the `staking` module
+(the `slashing` module still consumes hooks defined in `staking`).
+
+#### Single slashing amount
+
+Another optimization that can be made is that if we assume that all ABCI faults
+for CometBFT consensus are slashed at the same level, we don't have to keep
+track of "max slash". Once an ABCI fault happens, we don't have to worry about
+comparing potential future ones to find the max.
+
+Currently the only CometBFT ABCI fault is:
+
+* Unjustified precommits (double signs)
+
+It is currently planned to include the following fault in the near future:
+
+* Signing a precommit when you're in unbonding phase (needed to make light client bisection safe)
+
+Given that these faults are both attributable byzantine faults, we will likely
+want to slash them equally, and thus we can enact the above change.
+
+> Note: This change may make sense for current CometBFT consensus, but maybe
+> not for a different consensus algorithm or future versions of CometBFT that
+> may want to punish at different levels (for example, partial slashing).
+
+## Parameters
+
+The slashing module contains the following parameters:
+
+| Key | Type | Example |
+| ----------------------- | -------------- | ---------------------- |
+| SignedBlocksWindow | string (int64) | "100" |
+| MinSignedPerWindow | string (dec) | "0.500000000000000000" |
+| DowntimeJailDuration | string (ns) | "600000000000" |
+| SlashFractionDoubleSign | string (dec) | "0.050000000000000000" |
+| SlashFractionDowntime | string (dec) | "0.010000000000000000" |
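+
+With the example values above, the downtime threshold described in the BeginBlock section works out as follows (a quick arithmetic sketch):
+
+```go
+package main
+
+import "fmt"
+
+func main() {
+	// Example parameter values from the table above.
+	signedBlocksWindow := int64(100)
+	minSignedPerWindow := 0.5 // "0.500000000000000000"
+
+	// Minimum number of blocks that must be signed per window, and the
+	// maximum number that may be missed before the validator is jailed
+	// and slashed by SlashFractionDowntime.
+	minSigned := int64(minSignedPerWindow * float64(signedBlocksWindow))
+	maxMissed := signedBlocksWindow - minSigned
+
+	fmt.Println(minSigned, maxMissed) // 50 50
+}
+```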
+
+## CLI
+
+A user can query and interact with the `slashing` module using the CLI.
+
+### Query
+
+The `query` commands allow users to query `slashing` state.
+
+```shell
+simd query slashing --help
+```
+
+#### params
+
+The `params` command allows users to query genesis parameters for the slashing module.
+
+```shell
+simd query slashing params [flags]
+```
+
+Example:
+
+```shell
+simd query slashing params
+```
+
+Example Output:
+
+```yml
+downtime_jail_duration: 600s
+min_signed_per_window: "0.500000000000000000"
+signed_blocks_window: "100"
+slash_fraction_double_sign: "0.050000000000000000"
+slash_fraction_downtime: "0.010000000000000000"
+```
+
+#### signing-info
+
+The `signing-info` command allows users to query the signing info of a validator using its consensus public key.
+
+```shell
+simd query slashing signing-info [validator-conskey] [flags]
+```
+
+Example:
+
+```shell
+simd query slashing signing-info '{"@type":"/cosmos.crypto.ed25519.PubKey","key":"Auxs3865HpB/EfssYOzfqNhEJjzys6jD5B6tPgC8="}'
+```
+
+Example Output:
+
+```yml
+address: cosmosvalcons1nrqsld3aw6lh6t082frdqc84uwxn0t958c
+index_offset: "2068"
+jailed_until: "1970-01-01T00:00:00Z"
+missed_blocks_counter: "0"
+start_height: "0"
+tombstoned: false
+```
+
+#### signing-infos
+
+The `signing-infos` command allows users to query signing infos of all validators.
+
+```shell
+simd query slashing signing-infos [flags]
+```
+
+Example:
+
+```shell
+simd query slashing signing-infos
+```
+
+Example Output:
+
+```yml
+info:
+- address: cosmosvalcons1nrqsld3aw6lh6t082frdqc84uwxn0t958c
+ index_offset: "2075"
+ jailed_until: "1970-01-01T00:00:00Z"
+ missed_blocks_counter: "0"
+ start_height: "0"
+ tombstoned: false
+pagination:
+ next_key: null
+ total: "0"
+```
+
+### Transactions
+
+The `tx` commands allow users to interact with the `slashing` module.
+
+```bash
+simd tx slashing --help
+```
+
+#### unjail
+
+The `unjail` command allows users to unjail a validator previously jailed for downtime.
+
+```bash
+simd tx slashing unjail --from mykey [flags]
+```
+
+Example:
+
+```bash
+simd tx slashing unjail --from mykey
+```
+
+### gRPC
+
+A user can query the `slashing` module using gRPC endpoints.
+
+#### Params
+
+The `Params` endpoint allows users to query the parameters of the slashing module.
+
+```shell
+cosmos.slashing.v1beta1.Query/Params
+```
+
+Example:
+
+```shell
+grpcurl -plaintext localhost:9090 cosmos.slashing.v1beta1.Query/Params
+```
+
+Example Output:
+
+```json
+{
+ "params": {
+ "signedBlocksWindow": "100",
+ "minSignedPerWindow": "NTAwMDAwMDAwMDAwMDAwMDAw",
+ "downtimeJailDuration": "600s",
+ "slashFractionDoubleSign": "NTAwMDAwMDAwMDAwMDAwMDA=",
+ "slashFractionDowntime": "MTAwMDAwMDAwMDAwMDAwMDA="
+ }
+}
+```
+
+#### SigningInfo
+
+The `SigningInfo` endpoint queries the signing info of a given cons address.
+
+```shell
+cosmos.slashing.v1beta1.Query/SigningInfo
+```
+
+Example:
+
+```shell
+grpcurl -plaintext -d '{"cons_address":"cosmosvalcons1nrqsld3aw6lh6t082frdqc84uwxn0t958c"}' localhost:9090 cosmos.slashing.v1beta1.Query/SigningInfo
+```
+
+Example Output:
+
+```json
+{
+ "valSigningInfo": {
+ "address": "cosmosvalcons1nrqsld3aw6lh6t082frdqc84uwxn0t958c",
+ "indexOffset": "3493",
+ "jailedUntil": "1970-01-01T00:00:00Z"
+ }
+}
+```
+
+#### SigningInfos
+
+The `SigningInfos` endpoint queries the signing info of all validators.
+
+```shell
+cosmos.slashing.v1beta1.Query/SigningInfos
+```
+
+Example:
+
+```shell
+grpcurl -plaintext localhost:9090 cosmos.slashing.v1beta1.Query/SigningInfos
+```
+
+Example Output:
+
+```json
+{
+ "info": [
+ {
+ "address": "cosmosvalcons1nrqslkwd3pz096lh6t082frdqc84uwxn0t958c",
+ "indexOffset": "2467",
+ "jailedUntil": "1970-01-01T00:00:00Z"
+ }
+ ],
+ "pagination": {
+ "total": "1"
+ }
+}
+```
+
+### REST
+
+A user can query the `slashing` module using REST endpoints.
+
+#### Params
+
+```shell
+/cosmos/slashing/v1beta1/params
+```
+
+Example:
+
+```shell
+curl "localhost:1317/cosmos/slashing/v1beta1/params"
+```
+
+Example Output:
+
+```json
+{
+ "params": {
+ "signed_blocks_window": "100",
+ "min_signed_per_window": "0.500000000000000000",
+ "downtime_jail_duration": "600s",
+ "slash_fraction_double_sign": "0.050000000000000000",
+ "slash_fraction_downtime": "0.010000000000000000"
+  }
+}
+```
+
+#### signing_info
+
+```shell
+/cosmos/slashing/v1beta1/signing_infos/%s
+```
+
+Example:
+
+```shell
+curl "localhost:1317/cosmos/slashing/v1beta1/signing_infos/cosmosvalcons1nrqslkwd3pz096lh6t082frdqc84uwxn0t958c"
+```
+
+Example Output:
+
+```json
+{
+ "val_signing_info": {
+ "address": "cosmosvalcons1nrqslkwd3pz096lh6t082frdqc84uwxn0t958c",
+ "start_height": "0",
+ "index_offset": "4184",
+ "jailed_until": "1970-01-01T00:00:00Z",
+ "tombstoned": false,
+ "missed_blocks_counter": "0"
+ }
+}
+```
+
+#### signing_infos
+
+```shell
+/cosmos/slashing/v1beta1/signing_infos
+```
+
+Example:
+
+```shell
+curl "localhost:1317/cosmos/slashing/v1beta1/signing_infos"
+```
+
+Example Output:
+
+```json
+{
+ "info": [
+ {
+ "address": "cosmosvalcons1nrqslkwd3pz096lh6t082frdqc84uwxn0t958c",
+ "start_height": "0",
+ "index_offset": "4169",
+ "jailed_until": "1970-01-01T00:00:00Z",
+ "tombstoned": false,
+ "missed_blocks_counter": "0"
+ }
+ ],
+ "pagination": {
+ "next_key": null,
+ "total": "1"
+ }
+}
+```
diff --git a/copy-of-sdk-docs/build/modules/staking/README.md b/copy-of-sdk-docs/build/modules/staking/README.md
new file mode 100644
index 00000000..afed4bee
--- /dev/null
+++ b/copy-of-sdk-docs/build/modules/staking/README.md
@@ -0,0 +1,3058 @@
+---
+sidebar_position: 1
+---
+
+# `x/staking`
+
+## Abstract
+
+This paper specifies the Staking module of the Cosmos SDK that was first
+described in the [Cosmos Whitepaper](https://cosmos.network/about/whitepaper)
+in June 2016.
+
+The module enables Cosmos SDK-based blockchains to support an advanced
+Proof-of-Stake (PoS) system. In this system, holders of the native staking token of
+the chain can become validators and can delegate tokens to validators,
+ultimately determining the effective validator set for the system.
+
+This module is used in the Cosmos Hub, the first Hub in the Cosmos
+network.
+
+## Contents
+
+* [State](#state)
+ * [Pool](#pool)
+ * [LastTotalPower](#lasttotalpower)
+ * [ValidatorUpdates](#validatorupdates)
+ * [UnbondingID](#unbondingid)
+ * [Params](#params)
+ * [Validator](#validator)
+ * [Delegation](#delegation)
+ * [UnbondingDelegation](#unbondingdelegation)
+ * [Redelegation](#redelegation)
+ * [Queues](#queues)
+ * [HistoricalInfo](#historicalinfo)
+* [State Transitions](#state-transitions)
+ * [Validators](#validators)
+ * [Delegations](#delegations)
+ * [Slashing](#slashing)
+ * [How Shares are calculated](#how-shares-are-calculated)
+* [Messages](#messages)
+ * [MsgCreateValidator](#msgcreatevalidator)
+ * [MsgEditValidator](#msgeditvalidator)
+ * [MsgDelegate](#msgdelegate)
+ * [MsgUndelegate](#msgundelegate)
+ * [MsgCancelUnbondingDelegation](#msgcancelunbondingdelegation)
+ * [MsgBeginRedelegate](#msgbeginredelegate)
+ * [MsgUpdateParams](#msgupdateparams)
+* [Begin-Block](#begin-block)
+ * [Historical Info Tracking](#historical-info-tracking)
+* [End-Block](#end-block)
+ * [Validator Set Changes](#validator-set-changes)
+ * [Queues](#queues-1)
+* [Hooks](#hooks)
+* [Events](#events)
+ * [EndBlocker](#endblocker)
+ * [Msg's](#msgs)
+* [Parameters](#parameters)
+* [Client](#client)
+ * [CLI](#cli)
+ * [gRPC](#grpc)
+ * [REST](#rest)
+
+## State
+
+### Pool
+
+Pool is used for tracking bonded and not-bonded token supply of the bond denomination.
+
+### LastTotalPower
+
+LastTotalPower tracks the total amounts of bonded tokens recorded during the previous end block.
+Store entries prefixed with "Last" must remain unchanged until EndBlock.
+
+* LastTotalPower: `0x12 -> ProtocolBuffer(math.Int)`
+
+### ValidatorUpdates
+
+ValidatorUpdates contains the validator updates returned to ABCI at the end of every block.
+The values are overwritten in every block.
+
+* ValidatorUpdates `0x61 -> []abci.ValidatorUpdate`
+
+### UnbondingID
+
+UnbondingID stores the ID of the latest unbonding operation. It enables creating unique IDs for unbonding operations, i.e., UnbondingID is incremented every time a new unbonding operation (validator unbonding, unbonding delegation, redelegation) is initiated.
+
+* UnbondingID: `0x37 -> uint64`
+
+### Params
+
+The staking module stores its params in state with the prefix of `0x51`;
+they can be updated with governance or the address with authority.
+
+* Params: `0x51 | ProtocolBuffer(Params)`
+
+```protobuf reference
+https://github.com/cosmos/cosmos-sdk/blob/v0.47.0-rc1/proto/cosmos/staking/v1beta1/staking.proto#L310-L333
+```
+
+### Validator
+
+Validators can have one of three statuses:
+
+* `Unbonded`: The validator is not in the active set. They cannot sign blocks and do not earn
+ rewards. They can receive delegations.
+* `Bonded`: Once the validator receives sufficient bonded tokens they automatically join the
+ active set during [`EndBlock`](#validator-set-changes) and their status is updated to `Bonded`.
+ They are signing blocks and receiving rewards. They can receive further delegations.
+ They can be slashed for misbehavior. Delegators to this validator who unbond their delegation
+ must wait the duration of the UnbondingTime, a chain-specific param, during which time
+ they are still slashable for offences of the source validator if those offences were committed
+ during the period of time that the tokens were bonded.
+* `Unbonding`: When a validator leaves the active set, either by choice or due to slashing, jailing or
+ tombstoning, an unbonding of all their delegations begins. All delegations must then wait the UnbondingTime
+ before their tokens are moved to their accounts from the `BondedPool`.
+
+:::warning
+Tombstoning is permanent, once tombstoned a validator's consensus key can not be reused within the chain where the tombstoning happened.
+:::
+
+Validator objects should be primarily stored and accessed by the
+`OperatorAddr`, an SDK validator address for the operator of the validator. Two
+additional indices are maintained per validator object in order to fulfill
+required lookups for slashing and validator-set updates. A third special index
+(`LastValidatorPower`) is also maintained which however remains constant
+throughout each block, unlike the first two indices which mirror the validator
+records within a block.
+
+* Validators: `0x21 | OperatorAddrLen (1 byte) | OperatorAddr -> ProtocolBuffer(validator)`
+* ValidatorsByConsAddr: `0x22 | ConsAddrLen (1 byte) | ConsAddr -> OperatorAddr`
+* ValidatorsByPower: `0x23 | BigEndian(ConsensusPower) | OperatorAddrLen (1 byte) | OperatorAddr -> OperatorAddr`
+* LastValidatorsPower: `0x11 | OperatorAddrLen (1 byte) | OperatorAddr -> ProtocolBuffer(ConsensusPower)`
+* ValidatorsByUnbondingID: `0x38 | UnbondingID -> 0x21 | OperatorAddrLen (1 byte) | OperatorAddr`
+
+`Validators` is the primary index - it ensures that each operator can have only one
+associated validator, where the public key of that validator can change in the
+future. Delegators can refer to the immutable operator of the validator, without
+concern for the changing public key.
+
+`ValidatorsByUnbondingID` is an additional index that enables lookups for
+validators by the unbonding IDs corresponding to their current unbonding.
+
+`ValidatorByConsAddr` is an additional index that enables lookups for slashing.
+When CometBFT reports evidence, it provides the validator address, so this
+map is needed to find the operator. Note that the `ConsAddr` corresponds to the
+address which can be derived from the validator's `ConsPubKey`.
+
+`ValidatorsByPower` is an additional index that provides a sorted list of
+potential validators to quickly determine the current active set. Here
+ConsensusPower is validator.Tokens/10^6 by default. Note that all validators
+where `Jailed` is true are not stored within this index.
+
+`LastValidatorsPower` is a special index that provides a historical list of the
+last-block's bonded validators. This index remains constant during a block but
+is updated during the validator set update process which takes place in [`EndBlock`](#end-block).
+
+Each validator's state is stored in a `Validator` struct:
+
+```protobuf reference
+https://github.com/cosmos/cosmos-sdk/blob/v0.47.0-rc1/proto/cosmos/staking/v1beta1/staking.proto#L82-L138
+```
+
+```protobuf reference
+https://github.com/cosmos/cosmos-sdk/blob/v0.47.0-rc1/proto/cosmos/staking/v1beta1/staking.proto#L26-L80
+```
+
+### Delegation
+
+Delegations are identified by combining `DelegatorAddr` (the address of the delegator)
+with the `ValidatorAddr`. Delegations are indexed in the store as follows:
+
+* Delegation: `0x31 | DelegatorAddrLen (1 byte) | DelegatorAddr | ValidatorAddrLen (1 byte) | ValidatorAddr -> ProtocolBuffer(delegation)`
+
+Stake holders may delegate coins to validators; under this circumstance their
+funds are held in a `Delegation` data structure. It is owned by one
+delegator, and is associated with the shares for one validator. The sender of
+the transaction is the owner of the bond.
+
+```protobuf reference
+https://github.com/cosmos/cosmos-sdk/blob/v0.47.0-rc1/proto/cosmos/staking/v1beta1/staking.proto#L198-L216
+```
+
+#### Delegator Shares
+
+When one delegates tokens to a Validator, they are issued a number of delegator shares based on a
+dynamic exchange rate, calculated as follows from the total number of tokens delegated to the
+validator and the number of shares issued so far:
+
+`Shares per Token = validator.TotalShares() / validator.Tokens()`
+
+Only the number of shares received is stored on the DelegationEntry. When a delegator then
+undelegates, the token amount they receive is calculated from the number of shares they currently
+hold and the inverse exchange rate:
+
+`Tokens per Share = validator.Tokens() / validator.TotalShares()`
+
+These `Shares` are simply an accounting mechanism. They are not a fungible asset. The reason for
+this mechanism is to simplify the accounting around slashing. Rather than iteratively slashing the
+tokens of every delegation entry, instead the Validator's total bonded tokens can be slashed,
+effectively reducing the value of each issued delegator share.
+
+### UnbondingDelegation
+
+Shares in a `Delegation` can be unbonded, but they must for some time exist as
+an `UnbondingDelegation`, where shares can be reduced if Byzantine behavior is
+detected.
+
+`UnbondingDelegation` are indexed in the store as:
+
+* UnbondingDelegation: `0x32 | DelegatorAddrLen (1 byte) | DelegatorAddr | ValidatorAddrLen (1 byte) | ValidatorAddr -> ProtocolBuffer(unbondingDelegation)`
+* UnbondingDelegationsFromValidator: `0x33 | ValidatorAddrLen (1 byte) | ValidatorAddr | DelegatorAddrLen (1 byte) | DelegatorAddr -> nil`
+* UnbondingDelegationByUnbondingId: `0x38 | UnbondingId -> 0x32 | DelegatorAddrLen (1 byte) | DelegatorAddr | ValidatorAddrLen (1 byte) | ValidatorAddr`
+
+`UnbondingDelegation` is used in queries, to lookup all unbonding delegations for
+a given delegator.
+
+`UnbondingDelegationsFromValidator` is used in slashing, to lookup all
+unbonding delegations associated with a given validator that need to be
+slashed.
+
+`UnbondingDelegationByUnbondingId` is an additional index that enables
+lookups for unbonding delegations by the unbonding IDs of the containing
+unbonding delegation entries.
+
+An `UnbondingDelegation` object is created every time an unbonding is initiated.
+
+```protobuf reference
+https://github.com/cosmos/cosmos-sdk/blob/v0.47.0-rc1/proto/cosmos/staking/v1beta1/staking.proto#L218-L261
+```
+
+### Redelegation
+
+The bonded tokens worth of a `Delegation` may be instantly redelegated from a
+source validator to a different validator (destination validator). However when
+this occurs they must be tracked in a `Redelegation` object, whereby their
+shares can be slashed if their tokens have contributed to a Byzantine fault
+committed by the source validator.
+
+`Redelegation` are indexed in the store as:
+
+* Redelegations: `0x34 | DelegatorAddrLen (1 byte) | DelegatorAddr | ValidatorAddrLen (1 byte) | ValidatorSrcAddr | ValidatorDstAddr -> ProtocolBuffer(redelegation)`
+* RedelegationsBySrc: `0x35 | ValidatorSrcAddrLen (1 byte) | ValidatorSrcAddr | ValidatorDstAddrLen (1 byte) | ValidatorDstAddr | DelegatorAddrLen (1 byte) | DelegatorAddr -> nil`
+* RedelegationsByDst: `0x36 | ValidatorDstAddrLen (1 byte) | ValidatorDstAddr | ValidatorSrcAddrLen (1 byte) | ValidatorSrcAddr | DelegatorAddrLen (1 byte) | DelegatorAddr -> nil`
+* RedelegationByUnbondingId: `0x38 | UnbondingId -> 0x34 | DelegatorAddrLen (1 byte) | DelegatorAddr | ValidatorAddrLen (1 byte) | ValidatorSrcAddr | ValidatorDstAddr`
+
+`Redelegations` is used for queries, to lookup all redelegations for a given
+delegator.
+
+`RedelegationsBySrc` is used for slashing based on the `ValidatorSrcAddr`.
+
+`RedelegationsByDst` is used for slashing based on the `ValidatorDstAddr`.
+
+`RedelegationByUnbondingId` is an additional index that enables
+lookups for redelegations by the unbonding IDs of the containing
+redelegation entries.
+
+A redelegation object is created every time a redelegation occurs. To prevent
+"redelegation hopping", redelegations may not occur in the situation where:
+
+* the (re)delegator already has another immature redelegation in progress
+ with a destination to a validator (let's call it `Validator X`)
+* and, the (re)delegator is attempting to create a _new_ redelegation
+ where the source validator for this new redelegation is `Validator X`.
+
+```protobuf reference
+https://github.com/cosmos/cosmos-sdk/blob/v0.47.0-rc1/proto/cosmos/staking/v1beta1/staking.proto#L263-L308
+```
+
+### Queues
+
+All queue objects are sorted by timestamp. The time used within any queue is
+firstly converted to UTC, rounded to the nearest nanosecond then sorted. The sortable time format
+used is a slight modification of the RFC3339Nano and uses the format string
+`"2006-01-02T15:04:05.000000000"`. Notably this format:
+
+* right pads all zeros
+* drops the time zone info (we already use UTC)
+
+In all cases, the stored timestamp represents the maturation time of the queue
+element.
+
+#### UnbondingDelegationQueue
+
+For the purpose of tracking progress of unbonding delegations the unbonding
+delegations queue is kept.
+
+* UnbondingDelegation: `0x41 | format(time) -> []DVPair`
+
+```protobuf reference
+https://github.com/cosmos/cosmos-sdk/blob/v0.47.0-rc1/proto/cosmos/staking/v1beta1/staking.proto#L162-L172
+```
+
+#### RedelegationQueue
+
+For the purpose of tracking progress of redelegations the redelegation queue is
+kept.
+
+* RedelegationQueue: `0x42 | format(time) -> []DVVTriplet`
+
+```protobuf reference
+https://github.com/cosmos/cosmos-sdk/blob/v0.47.0-rc1/proto/cosmos/staking/v1beta1/staking.proto#L179-L191
+```
+
+#### ValidatorQueue
+
+For the purpose of tracking progress of unbonding validators the validator
+queue is kept.
+
+* ValidatorQueueTime: `0x43 | format(time) -> []sdk.ValAddress`
+
+The stored object by each key is an array of validator operator addresses from
+which the validator object can be accessed. Typically it is expected that only
+a single validator record will be associated with a given timestamp; however, it is possible
+that multiple validators exist in the queue at the same location.
+
+### HistoricalInfo
+
+HistoricalInfo objects are stored and pruned at each block such that the staking keeper persists
+the `n` most recent historical info defined by staking module parameter: `HistoricalEntries`.
+
+```protobuf reference
+https://github.com/cosmos/cosmos-sdk/blob/v0.47.0-rc1/proto/cosmos/staking/v1beta1/staking.proto#L17-L24
+```
+
+At each BeginBlock, the staking keeper will persist the current Header and the Validators that committed
+the current block in a `HistoricalInfo` object. The Validators are sorted on their address to ensure that
+they are in a deterministic order.
+The oldest HistoricalEntries will be pruned to ensure that there only exist the parameter-defined number of
+historical entries.
+
+## State Transitions
+
+### Validators
+
+State transitions in validators are performed on every [`EndBlock`](#validator-set-changes)
+in order to check for changes in the active `ValidatorSet`.
+
+A validator can be `Unbonded`, `Unbonding` or `Bonded`. `Unbonded`
+and `Unbonding` are collectively called `Not Bonded`. A validator can move
+directly between all the states, except from `Bonded` to `Unbonded`.
+
+#### Not bonded to Bonded
+
+The following transition occurs when a validator's ranking in the `ValidatorPowerIndex` surpasses
+that of the `LastValidator`.
+
+* set `validator.Status` to `Bonded`
+* send the `validator.Tokens` from the `NotBondedPool` to the `BondedPool` `ModuleAccount`
+* delete the existing record from `ValidatorByPowerIndex`
+* add a new updated record to the `ValidatorByPowerIndex`
+* update the `Validator` object for this validator
+* if it exists, delete any `ValidatorQueue` record for this validator
+
+#### Bonded to Unbonding
+
+When a validator begins the unbonding process the following operations occur:
+
+* send the `validator.Tokens` from the `BondedPool` to the `NotBondedPool` `ModuleAccount`
+* set `validator.Status` to `Unbonding`
+* delete the existing record from `ValidatorByPowerIndex`
+* add a new updated record to the `ValidatorByPowerIndex`
+* update the `Validator` object for this validator
+* insert a new record into the `ValidatorQueue` for this validator
+
+#### Unbonding to Unbonded
+
+A validator moves from unbonding to unbonded when the `ValidatorQueue` object
+moves from bonded to unbonded.
+
+* update the `Validator` object for this validator
+* set `validator.Status` to `Unbonded`
+
+#### Jail/Unjail
+
+When a validator is jailed, it is effectively removed from the CometBFT set.
+This process may also be reversed. The following operations occur:
+
+* set `Validator.Jailed` and update object
+* if jailed delete record from `ValidatorByPowerIndex`
+* if unjailed add record to `ValidatorByPowerIndex`
+
+Jailed validators are not present in any of the following stores:
+
+* the power store (from consensus power to address)
+
+### Delegations
+
+#### Delegate
+
+When a delegation occurs both the validator and the delegation objects are affected
+
+* determine the delegator's shares based on tokens delegated and the validator's exchange rate
+* remove tokens from the sending account
+* add the shares to the existing delegation object, or to a newly created one
+* add new delegator shares and update the `Validator` object
+* transfer the `delegation.Amount` from the delegator's account to the `BondedPool` or the `NotBondedPool` `ModuleAccount` depending if the `validator.Status` is `Bonded` or not
+* delete the existing record from `ValidatorByPowerIndex`
+* add a new updated record to the `ValidatorByPowerIndex`
+
+#### Begin Unbonding
+
+As a part of the Undelegate and Complete Unbonding state transitions Unbond
+Delegation may be called.
+
+* subtract the unbonded shares from the delegator
+* add the unbonded tokens to an `UnbondingDelegationEntry`
+* update the delegation or remove the delegation if there are no more shares
+* if the delegation is the operator of the validator and no more shares exist, then jail the validator
+* update the validator, removing the delegator shares and associated coins
+* if the validator state is `Bonded`, transfer the `Coins` worth of the unbonded
+ shares from the `BondedPool` to the `NotBondedPool` `ModuleAccount`
+* remove the validator if it is unbonded and there are no more delegation shares
+* get a unique `unbondingId` and map it to the `UnbondingDelegationEntry` in `UnbondingDelegationByUnbondingId`
+* call the `AfterUnbondingInitiated(unbondingId)` hook
+* add the unbonding delegation to `UnbondingDelegationQueue` with the completion time set to `UnbondingTime`
+
+#### Cancel an `UnbondingDelegation` Entry
+
+When a `cancel unbond delegation` occurs, the `validator`, the `delegation` and the `UnbondingDelegationQueue` state will be updated.
+
+* if the cancel unbonding delegation amount equals the `UnbondingDelegation` entry `balance`, then the `UnbondingDelegation` entry is deleted from the `UnbondingDelegationQueue`.
+* if the cancel unbonding delegation amount is less than the `UnbondingDelegation` entry `balance`, then the `UnbondingDelegation` entry will be updated with the new balance in the `UnbondingDelegationQueue`.
+* cancel `amount` is [Delegated](#delegations) back to the original `validator`.
+
+#### Complete Unbonding
+
+For undelegations which do not complete immediately, the following operations
+occur when the unbonding delegation queue element matures:
+
+* remove the entry from the `UnbondingDelegation` object
+* transfer the tokens from the `NotBondedPool` `ModuleAccount` to the delegator `Account`
+
+#### Begin Redelegation
+
+Redelegations affect the delegation, source and destination validators.
+
+* perform an `unbond` delegation from the source validator to retrieve the tokens worth of the unbonded shares
+* using the unbonded tokens, `Delegate` them to the destination validator
+* if the `sourceValidator.Status` is `Bonded`, and the `destinationValidator` is not,
+ transfer the newly delegated tokens from the `BondedPool` to the `NotBondedPool` `ModuleAccount`
+* otherwise, if the `sourceValidator.Status` is not `Bonded`, and the `destinationValidator`
+ is `Bonded`, transfer the newly delegated tokens from the `NotBondedPool` to the `BondedPool` `ModuleAccount`
+* record the token amount in a new entry in the relevant `Redelegation`
+
+From when a redelegation begins until it completes, the delegator is in a state of "pseudo-unbonding", and can still be
+slashed for infractions that occurred before the redelegation began.
+
+#### Complete Redelegation
+
+When a redelegation completes, the following occurs:
+
+* remove the entry from the `Redelegation` object
+
+### Slashing
+
+#### Slash Validator
+
+When a Validator is slashed, the following occurs:
+
+* The total `slashAmount` is calculated as the `slashFactor` (a chain parameter) \* `TokensFromConsensusPower`,
+ the total number of tokens bonded to the validator at the time of the infraction.
+* Every unbonding delegation and pseudo-unbonding redelegation such that the infraction occurred before the unbonding or
+ redelegation began from the validator are slashed by the `slashFactor` percentage of the initialBalance.
+* Each amount slashed from redelegations and unbonding delegations is subtracted from the
+ total slash amount.
+* The `remainingSlashAmount` is then slashed from the validator's tokens in the `BondedPool` or
+  `NotBondedPool` depending on the validator's status. This reduces the total supply of tokens.
+
+In the case of a slash due to any infraction that requires evidence to be submitted (for example double-sign), the slash
+occurs at the block where the evidence is included, not at the block where the infraction occurred.
+Put otherwise, validators are not slashed retroactively, only when they are caught.
+
+#### Slash Unbonding Delegation
+
+When a validator is slashed, so are those unbonding delegations from the validator that began unbonding
+after the time of the infraction. Every entry in every unbonding delegation from the validator
+is slashed by `slashFactor`. The amount slashed is calculated from the `InitialBalance` of the
+delegation and is capped to prevent a resulting negative balance. Completed (or mature) unbondings are not slashed.
+
+#### Slash Redelegation
+
+When a validator is slashed, so are all redelegations from the validator that began after the
+infraction. Redelegations are slashed by `slashFactor`.
+Redelegations that began before the infraction are not slashed.
+The amount slashed is calculated from the `InitialBalance` of the delegation and is capped to
+prevent a resulting negative balance.
+Mature redelegations (that have completed pseudo-unbonding) are not slashed.
+
+### How Shares are calculated
+
+At any given point in time, each validator has a number of tokens, `T`, and has a number of shares issued, `S`.
+Each delegator, `i`, holds a number of shares, `S_i`.
+The number of tokens is the sum of all tokens delegated to the validator, plus the rewards, minus the slashes.
+
+The delegator is entitled to a portion of the underlying tokens proportional to their proportion of shares.
+So delegator `i` is entitled to `T * S_i / S` of the validator's tokens.
+
+When a delegator delegates new tokens to the validator, they receive a number of shares proportional to their contribution.
+So when delegator `j` delegates `T_j` tokens, they receive `S_j = S * T_j / T` shares.
+The total number of tokens is now `T + T_j`, and the total number of shares is `S + S_j`.
+`j`'s proportion of the shares is the same as their proportion of the total tokens contributed: `(S + S_j) / S = (T + T_j) / T`.
+
+A special case is the initial delegation, when `T = 0` and `S = 0`, so `T_j / T` is undefined.
+For the initial delegation, delegator `j` who delegates `T_j` tokens receives `S_j = T_j` shares.
+So a validator that hasn't received any rewards and has not been slashed will have `T = S`.
+
+## Messages
+
+In this section we describe the processing of the staking messages and the corresponding updates to the state. All created/modified state objects specified by each message are defined within the [state](#state) section.
+
+### MsgCreateValidator
+
+A validator is created using the `MsgCreateValidator` message.
+The validator must be created with an initial delegation from the operator.
+
+```protobuf reference
+https://github.com/cosmos/cosmos-sdk/blob/v0.47.0-rc1/proto/cosmos/staking/v1beta1/tx.proto#L20-L21
+```
+
+```protobuf reference
+https://github.com/cosmos/cosmos-sdk/blob/v0.47.0-rc1/proto/cosmos/staking/v1beta1/tx.proto#L50-L73
+```
+
+This message is expected to fail if:
+
+* another validator with this operator address is already registered
+* another validator with this pubkey is already registered
+* the initial self-delegation tokens are of a denom not specified as the bonding denom
+* the commission parameters are faulty, namely:
+ * `MaxRate` is either > 1 or < 0
+ * the initial `Rate` is either negative or > `MaxRate`
+ * the initial `MaxChangeRate` is either negative or > `MaxRate`
+* the description fields are too large
+
+This message creates and stores the `Validator` object at appropriate indexes.
+Additionally, a self-delegation is made with the initial tokens as a
+`Delegation`. The validator always starts as unbonded but may be bonded
+in the first end-block.
+
+### MsgEditValidator
+
+The `Description` and `CommissionRate` of a validator can be updated using the
+`MsgEditValidator` message.
+
+```protobuf reference
+https://github.com/cosmos/cosmos-sdk/blob/v0.47.0-rc1/proto/cosmos/staking/v1beta1/tx.proto#L23-L24
+```
+
+```protobuf reference
+https://github.com/cosmos/cosmos-sdk/blob/v0.47.0-rc1/proto/cosmos/staking/v1beta1/tx.proto#L78-L97
+```
+
+This message is expected to fail if:
+
+* the initial `CommissionRate` is either negative or > `MaxRate`
+* the `CommissionRate` has already been updated within the previous 24 hours
+* the `CommissionRate` is > `MaxChangeRate`
+* the description fields are too large
+
+This message stores the updated `Validator` object.
+
+### MsgDelegate
+
+Within this message the delegator provides coins, and in return receives
+some amount of their validator's (newly created) delegator-shares that are
+assigned to `Delegation.Shares`.
+
+```protobuf reference
+https://github.com/cosmos/cosmos-sdk/blob/v0.47.0-rc1/proto/cosmos/staking/v1beta1/tx.proto#L26-L28
+```
+
+```protobuf reference
+https://github.com/cosmos/cosmos-sdk/blob/v0.47.0-rc1/proto/cosmos/staking/v1beta1/tx.proto#L102-L114
+```
+
+This message is expected to fail if:
+
+* the validator does not exist
+* the `Amount` `Coin` has a denomination different than one defined by `params.BondDenom`
+* the exchange rate is invalid, meaning the validator has no tokens (due to slashing) but there are outstanding shares
+* the amount delegated is less than the minimum allowed delegation
+
+If an existing `Delegation` object for provided addresses does not already
+exist then it is created as part of this message otherwise the existing
+`Delegation` is updated to include the newly received shares.
+
+The delegator receives newly minted shares at the current exchange rate.
+The exchange rate is the number of existing shares in the validator divided by
+the number of currently delegated tokens.
+
+The validator is updated in the `ValidatorByPower` index, and the delegation is
+tracked in validator object in the `Validators` index.
+
+It is possible to delegate to a jailed validator, the only difference being it
+will not be added to the power index until it is unjailed.
+
+
+
+### MsgUndelegate
+
+The `MsgUndelegate` message allows delegators to undelegate their tokens from
+a validator.
+
+```protobuf reference
+https://github.com/cosmos/cosmos-sdk/blob/v0.47.0-rc1/proto/cosmos/staking/v1beta1/tx.proto#L34-L36
+```
+
+```protobuf reference
+https://github.com/cosmos/cosmos-sdk/blob/v0.47.0-rc1/proto/cosmos/staking/v1beta1/tx.proto#L140-L152
+```
+
+This message returns a response containing the completion time of the undelegation:
+
+```protobuf reference
+https://github.com/cosmos/cosmos-sdk/blob/v0.47.0-rc1/proto/cosmos/staking/v1beta1/tx.proto#L154-L158
+```
+
+This message is expected to fail if:
+
+* the delegation doesn't exist
+* the validator doesn't exist
+* the delegation has fewer shares than those worth of `Amount`
+* existing `UnbondingDelegation` has maximum entries as defined by `params.MaxEntries`
+* the `Amount` has a denomination different than one defined by `params.BondDenom`
+
+When this message is processed the following actions occur:
+
+* validator's `DelegatorShares` and the delegation's `Shares` are both reduced by the message `SharesAmount`
+* calculate the token worth of the shares and remove that amount of tokens held within the validator
+* with those removed tokens, if the validator is:
+ * `Bonded` - add them to an entry in `UnbondingDelegation` (create `UnbondingDelegation` if it doesn't exist) with a completion time a full unbonding period from the current time. Update pool shares to reduce BondedTokens and increase NotBondedTokens by token worth of the shares.
+ * `Unbonding` - add them to an entry in `UnbondingDelegation` (create `UnbondingDelegation` if it doesn't exist) with the same completion time as the validator (`UnbondingMinTime`).
+ * `Unbonded` - send the coins to the message's `DelegatorAddr`
+* if there are no more `Shares` in the delegation, then the delegation object is removed from the store
+ * under this situation if the delegation is the validator's self-delegation then also jail the validator.
+
+
+
+### MsgCancelUnbondingDelegation
+
+The `MsgCancelUnbondingDelegation` message allows delegators to cancel the `unbondingDelegation` entry and delegate back to a previous validator.
+
+```protobuf reference
+https://github.com/cosmos/cosmos-sdk/blob/v0.47.0-rc1/proto/cosmos/staking/v1beta1/tx.proto#L38-L42
+```
+
+```protobuf reference
+https://github.com/cosmos/cosmos-sdk/blob/v0.47.0-rc1/proto/cosmos/staking/v1beta1/tx.proto#L160-L175
+```
+
+This message is expected to fail if:
+
+* the `unbondingDelegation` entry is already processed.
+* the `cancel unbonding delegation` amount is greater than the `unbondingDelegation` entry balance.
+* the `cancel unbonding delegation` height doesn't exist in the `unbondingDelegationQueue` of the delegator.
+
+When this message is processed the following actions occur:
+
+* if the `unbondingDelegation` entry balance is zero
+  * in this condition the `unbondingDelegation` entry will be removed from the `unbondingDelegationQueue`
+  * otherwise the `unbondingDelegationQueue` will be updated with the new `unbondingDelegation` entry balance and initial balance
+* the validator's `DelegatorShares` and the delegation's `Shares` are both increased by the message `Amount`.
+
+### MsgBeginRedelegate
+
+The redelegation command allows delegators to instantly switch validators. Once
+the unbonding period has passed, the redelegation is automatically completed in
+the EndBlocker.
+
+```protobuf reference
+https://github.com/cosmos/cosmos-sdk/blob/v0.47.0-rc1/proto/cosmos/staking/v1beta1/tx.proto#L30-L32
+```
+
+```protobuf reference
+https://github.com/cosmos/cosmos-sdk/blob/v0.47.0-rc1/proto/cosmos/staking/v1beta1/tx.proto#L119-L132
+```
+
+This message returns a response containing the completion time of the redelegation:
+
+```protobuf reference
+https://github.com/cosmos/cosmos-sdk/blob/v0.47.0-rc1/proto/cosmos/staking/v1beta1/tx.proto#L133-L138
+```
+
+This message is expected to fail if:
+
+* the delegation doesn't exist
+* the source or destination validators don't exist
+* the delegation has fewer shares than those worth of `Amount`
+* the source validator has a receiving redelegation which is not matured (i.e., the redelegation may be transitive)
+* existing `Redelegation` has maximum entries as defined by `params.MaxEntries`
+* the `Amount` `Coin` has a denomination different than one defined by `params.BondDenom`
+
+When this message is processed the following actions occur:
+
+* the source validator's `DelegatorShares` and the delegation's `Shares` are both reduced by the message `SharesAmount`
+* calculate the token worth of the shares and remove that amount of tokens held within the source validator
+* if the source validator is:
+ * `Bonded` - add an entry to the `Redelegation` (create `Redelegation` if it doesn't exist) with a completion time a full unbonding period from the current time. Update pool shares to reduce BondedTokens and increase NotBondedTokens by token worth of the shares (this may be effectively reversed in the next step however).
+ * `Unbonding` - add an entry to the `Redelegation` (create `Redelegation` if it doesn't exist) with the same completion time as the validator (`UnbondingMinTime`).
+ * `Unbonded` - no action required in this step
+* Delegate the token worth to the destination validator, possibly moving tokens back to the bonded state.
+* if there are no more `Shares` in the source delegation, then the source delegation object is removed from the store
+ * under this situation if the delegation is the validator's self-delegation then also jail the validator.
+
+
+
+
+### MsgUpdateParams
+
+The `MsgUpdateParams` message updates the staking module parameters.
+The params are updated through a governance proposal where the signer is the gov module account address.
+
+```protobuf reference
+https://github.com/cosmos/cosmos-sdk/blob/v0.47.0-rc1/proto/cosmos/staking/v1beta1/tx.proto#L182-L195
+```
+
+The message handling can fail if:
+
+* signer is not the authority defined in the staking keeper (usually the gov module account).
+
+## Begin-Block
+
+On each ABCI begin block call, the historical info will get stored and pruned
+according to the `HistoricalEntries` parameter.
+
+### Historical Info Tracking
+
+If the `HistoricalEntries` parameter is 0, then the `BeginBlock` performs a no-op.
+
+Otherwise, the latest historical info is stored under the key `historicalInfoKey|height`, while any entries older than `height - HistoricalEntries` are deleted.
+In most cases, this results in a single entry being pruned per block.
+However, if the parameter `HistoricalEntries` has changed to a lower value there will be multiple entries in the store that must be pruned.
+
+## End-Block
+
+On each ABCI end block call, the operations to update queues and process
+validator set changes are executed.
+
+### Validator Set Changes
+
+The staking validator set is updated during this process by state transitions
+that run at the end of every block. As a part of this process any updated
+validators are also returned back to CometBFT for inclusion in the CometBFT
+validator set which is responsible for validating CometBFT messages at the
+consensus layer. Operations are as follows:
+
+* the new validator set is taken as the top `params.MaxValidators` number of
+ validators retrieved from the `ValidatorsByPower` index
+* the previous validator set is compared with the new validator set:
+ * missing validators begin unbonding and their `Tokens` are transferred from the
+ `BondedPool` to the `NotBondedPool` `ModuleAccount`
+ * new validators are instantly bonded and their `Tokens` are transferred from the
+ `NotBondedPool` to the `BondedPool` `ModuleAccount`
+
+In all cases, any validators leaving or entering the bonded validator set or
+changing balances and staying within the bonded validator set incur an update
+message reporting their new consensus power which is passed back to CometBFT.
+
+The `LastTotalPower` and `LastValidatorsPower` hold the state of the total power
+and validator power from the end of the last block, and are used to check for
+changes that have occurred in `ValidatorsByPower` and the total new power, which
+is calculated during `EndBlock`.
+
+### Queues
+
Within staking, certain state transitions are not instantaneous but take place
over a duration of time (typically the unbonding period). When these
transitions mature, certain operations must take place in order to complete
the state transition. This is achieved through the use of queues that are
checked and processed at the end of each block.
+
+#### Unbonding Validators
+
When a validator is kicked out of the bonded validator set (either through
being jailed or through no longer having sufficient bonded tokens), it begins
the unbonding process, and all of its delegations begin unbonding as well
(while still being delegated to this validator). At this point the validator
is said to be an "unbonding validator", and it matures to become an "unbonded
validator" after the unbonding period has passed.
+
Each block, the validator queue is checked for mature unbonding validators
(namely, those with a completion time <= current time and a completion height
<= current block height). At this point, any mature validators that do not
have any delegations remaining are deleted from state. For all other mature
unbonding validators that still have remaining delegations, the
`validator.Status` is switched from `types.Unbonding` to `types.Unbonded`.
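The maturity condition combines both clocks; a tiny illustrative check (not SDK code):

```python
def validator_is_mature(completion_time, completion_height, now, block_height):
    """An unbonding validator matures only when BOTH its completion
    time and its completion height have been reached."""
    return completion_time <= now and completion_height <= block_height

# Time has passed, but the completion height has not been reached yet.
print(validator_is_mature(100, 50, now=120, block_height=49))  # False
```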
+
Unbonding operations can be put on hold by external modules via the
`PutUnbondingOnHold(unbondingId)` method. As a result, an unbonding operation
(e.g., an unbonding delegation) that is on hold cannot complete even if it
reaches maturity. For an unbonding operation with `unbondingId` to eventually
complete (after it reaches maturity), every call to
`PutUnbondingOnHold(unbondingId)` must be matched by a call to
`UnbondingCanComplete(unbondingId)`.
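The hold mechanism behaves like a per-operation reference count; the following Python sketch is illustrative only (the method names mirror the text, the bookkeeping is hypothetical):

```python
class UnbondingHoldTracker:
    """Tracks external holds on unbonding operations.

    An unbonding operation completes only when it is mature AND its
    hold count has returned to zero.
    """
    def __init__(self):
        self.holds = {}

    def put_unbonding_on_hold(self, unbonding_id):
        self.holds[unbonding_id] = self.holds.get(unbonding_id, 0) + 1

    def unbonding_can_complete(self, unbonding_id):
        if self.holds.get(unbonding_id, 0) == 0:
            raise ValueError("no matching hold for this unbondingId")
        self.holds[unbonding_id] -= 1

    def can_complete(self, unbonding_id, mature):
        return mature and self.holds.get(unbonding_id, 0) == 0

tracker = UnbondingHoldTracker()
tracker.put_unbonding_on_hold(1)
tracker.put_unbonding_on_hold(1)   # two holds from two external modules
tracker.unbonding_can_complete(1)
print(tracker.can_complete(1, mature=True))   # False: one hold remains
tracker.unbonding_can_complete(1)
print(tracker.can_complete(1, mature=True))   # True
```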
+
+#### Unbonding Delegations
+
+Complete the unbonding of all mature `UnbondingDelegations.Entries` within the
+`UnbondingDelegations` queue with the following procedure:
+
+* transfer the balance coins to the delegator's wallet address
+* remove the mature entry from `UnbondingDelegation.Entries`
+* remove the `UnbondingDelegation` object from the store if there are no
+ remaining entries.
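The procedure above amounts to splitting entries on their completion time; a minimal illustrative sketch (plain numbers stand in for times and coins, not SDK types):

```python
def complete_unbonding(entries, now):
    """Pay out mature entries and drop the object if none remain.

    Each entry is a (completion_time, balance) pair. Returns
    (total_paid, remaining_entries_or_None); None models removing the
    UnbondingDelegation object from the store.
    """
    paid = 0
    remaining = []
    for completion_time, balance in entries:
        if completion_time <= now:
            paid += balance          # transfer to the delegator's wallet
        else:
            remaining.append((completion_time, balance))
    return paid, (remaining or None)

print(complete_unbonding([(5, 100), (20, 50)], now=10))  # (100, [(20, 50)])
```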
+
+#### Redelegations
+
+Complete the unbonding of all mature `Redelegation.Entries` within the
+`Redelegations` queue with the following procedure:
+
+* remove the mature entry from `Redelegation.Entries`
+* remove the `Redelegation` object from the store if there are no
+ remaining entries.
+
+## Hooks
+
Other modules may register operations to execute when a certain event has
occurred within staking. These operations can be registered to execute either
right `Before` or `After` the staking event (as per the hook name). The
following hooks can be registered with staking:
+
+* `AfterValidatorCreated(Context, ValAddress) error`
+ * called when a validator is created
+* `BeforeValidatorModified(Context, ValAddress) error`
+ * called when a validator's state is changed
+* `AfterValidatorRemoved(Context, ConsAddress, ValAddress) error`
+ * called when a validator is deleted
+* `AfterValidatorBonded(Context, ConsAddress, ValAddress) error`
+ * called when a validator is bonded
+* `AfterValidatorBeginUnbonding(Context, ConsAddress, ValAddress) error`
+ * called when a validator begins unbonding
+* `BeforeDelegationCreated(Context, AccAddress, ValAddress) error`
+ * called when a delegation is created
+* `BeforeDelegationSharesModified(Context, AccAddress, ValAddress) error`
+ * called when a delegation's shares are modified
+* `AfterDelegationModified(Context, AccAddress, ValAddress) error`
+ * called when a delegation is created or modified
+* `BeforeDelegationRemoved(Context, AccAddress, ValAddress) error`
+ * called when a delegation is removed
* `AfterUnbondingInitiated(Context, UnbondingID) error`
+ * called when an unbonding operation (validator unbonding, unbonding delegation, redelegation) was initiated
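A consuming module implements only the hooks it needs; a Python stand-in for the Go interface pattern (the `RewardsHooks` type and its behavior are hypothetical):

```python
class RewardsHooks:
    """Example consumer of staking hooks: record delegation changes.

    Only the hooks this module cares about are defined; in Go, the
    remaining hooks would be no-op methods satisfying the interface.
    """
    def __init__(self):
        self.events = []

    def after_delegation_modified(self, delegator, validator):
        self.events.append(("delegation_modified", delegator, validator))

    def before_delegation_removed(self, delegator, validator):
        self.events.append(("delegation_removed", delegator, validator))

hooks = RewardsHooks()
hooks.after_delegation_modified("cosmos1...", "cosmosvaloper1...")
print(hooks.events[0][0])  # delegation_modified
```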
+
+
+## Events
+
+The staking module emits the following events:
+
+### EndBlocker
+
+| Type | Attribute Key | Attribute Value |
+| --------------------- | --------------------- | ------------------------- |
+| complete_unbonding | amount | {totalUnbondingAmount} |
+| complete_unbonding | validator | {validatorAddress} |
+| complete_unbonding | delegator | {delegatorAddress} |
+| complete_redelegation | amount | {totalRedelegationAmount} |
+| complete_redelegation | source_validator | {srcValidatorAddress} |
+| complete_redelegation | destination_validator | {dstValidatorAddress} |
+| complete_redelegation | delegator | {delegatorAddress} |
+
### Msgs
+
+### MsgCreateValidator
+
+| Type | Attribute Key | Attribute Value |
+| ---------------- | ------------- | ------------------ |
+| create_validator | validator | {validatorAddress} |
+| create_validator | amount | {delegationAmount} |
+| message | module | staking |
+| message | action | create_validator |
+| message | sender | {senderAddress} |
+
+### MsgEditValidator
+
+| Type | Attribute Key | Attribute Value |
+| -------------- | ------------------- | ------------------- |
+| edit_validator | commission_rate | {commissionRate} |
+| edit_validator | min_self_delegation | {minSelfDelegation} |
+| message | module | staking |
+| message | action | edit_validator |
+| message | sender | {senderAddress} |
+
+### MsgDelegate
+
+| Type | Attribute Key | Attribute Value |
+| -------- | ------------- | ------------------ |
+| delegate | validator | {validatorAddress} |
+| delegate | amount | {delegationAmount} |
+| message | module | staking |
+| message | action | delegate |
+| message | sender | {senderAddress} |
+
+### MsgUndelegate
+
+| Type | Attribute Key | Attribute Value |
+| ------- | ------------------- | ------------------ |
+| unbond | validator | {validatorAddress} |
+| unbond | amount | {unbondAmount} |
+| unbond | completion_time [0] | {completionTime} |
+| message | module | staking |
+| message | action | begin_unbonding |
+| message | sender | {senderAddress} |
+
+* [0] Time is formatted in the RFC3339 standard
+
+### MsgCancelUnbondingDelegation
+
+| Type | Attribute Key | Attribute Value |
+| ----------------------------- | ------------------ | ------------------------------------|
+| cancel_unbonding_delegation | validator | {validatorAddress} |
+| cancel_unbonding_delegation | delegator | {delegatorAddress} |
+| cancel_unbonding_delegation | amount | {cancelUnbondingDelegationAmount} |
+| cancel_unbonding_delegation | creation_height | {unbondingCreationHeight} |
+| message | module | staking |
+| message | action | cancel_unbond |
+| message | sender | {senderAddress} |
+
+### MsgBeginRedelegate
+
+| Type | Attribute Key | Attribute Value |
+| ---------- | --------------------- | --------------------- |
+| redelegate | source_validator | {srcValidatorAddress} |
+| redelegate | destination_validator | {dstValidatorAddress} |
+| redelegate | amount | {unbondAmount} |
+| redelegate | completion_time [0] | {completionTime} |
+| message | module | staking |
+| message | action | begin_redelegate |
+| message | sender | {senderAddress} |
+
+* [0] Time is formatted in the RFC3339 standard
+
+## Parameters
+
+The staking module contains the following parameters:
+
+| Key | Type | Example |
+|-------------------|------------------|------------------------|
+| UnbondingTime | string (time ns) | "259200000000000" |
+| MaxValidators | uint16 | 100 |
+| KeyMaxEntries | uint16 | 7 |
+| HistoricalEntries | uint16 | 3 |
+| BondDenom | string | "stake" |
+| MinCommissionRate | string | "0.000000000000000000" |
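Note that `UnbondingTime` is expressed in nanoseconds; a quick check that the example value works out to three days:

```python
from datetime import timedelta

unbonding_time_ns = int("259200000000000")  # example value from the table
unbonding_time = timedelta(microseconds=unbonding_time_ns / 1000)
print(unbonding_time)  # 3 days, 0:00:00
```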
+
+## Client
+
+### CLI
+
+A user can query and interact with the `staking` module using the CLI.
+
+#### Query
+
The `query` commands allow users to query `staking` state.
+
+```bash
+simd query staking --help
+```
+
+##### delegation
+
+The `delegation` command allows users to query delegations for an individual delegator on an individual validator.
+
+Usage:
+
+```bash
+simd query staking delegation [delegator-addr] [validator-addr] [flags]
+```
+
+Example:
+
+```bash
+simd query staking delegation cosmos1gghjut3ccd8ay0zduzj64hwre2fxs9ld75ru9p cosmosvaloper1gghjut3ccd8ay0zduzj64hwre2fxs9ldmqhffj
+```
+
+Example Output:
+
+```bash
+balance:
+ amount: "10000000000"
+ denom: stake
+delegation:
+ delegator_address: cosmos1gghjut3ccd8ay0zduzj64hwre2fxs9ld75ru9p
+ shares: "10000000000.000000000000000000"
+ validator_address: cosmosvaloper1gghjut3ccd8ay0zduzj64hwre2fxs9ldmqhffj
+```
+
+##### delegations
+
+The `delegations` command allows users to query delegations for an individual delegator on all validators.
+
+Usage:
+
+```bash
+simd query staking delegations [delegator-addr] [flags]
+```
+
+Example:
+
+```bash
+simd query staking delegations cosmos1gghjut3ccd8ay0zduzj64hwre2fxs9ld75ru9p
+```
+
+Example Output:
+
+```bash
+delegation_responses:
+- balance:
+ amount: "10000000000"
+ denom: stake
+ delegation:
+ delegator_address: cosmos1gghjut3ccd8ay0zduzj64hwre2fxs9ld75ru9p
+ shares: "10000000000.000000000000000000"
+ validator_address: cosmosvaloper1gghjut3ccd8ay0zduzj64hwre2fxs9ldmqhffj
+- balance:
+ amount: "10000000000"
+ denom: stake
+ delegation:
+ delegator_address: cosmos1gghjut3ccd8ay0zduzj64hwre2fxs9ld75ru9p
+ shares: "10000000000.000000000000000000"
+ validator_address: cosmosvaloper1x20lytyf6zkcrv5edpkfkn8sz578qg5sqfyqnp
+pagination:
+ next_key: null
+ total: "0"
+```
+
+##### delegations-to
+
+The `delegations-to` command allows users to query delegations on an individual validator.
+
+Usage:
+
+```bash
+simd query staking delegations-to [validator-addr] [flags]
+```
+
+Example:
+
+```bash
+simd query staking delegations-to cosmosvaloper1gghjut3ccd8ay0zduzj64hwre2fxs9ldmqhffj
+```
+
+Example Output:
+
+```bash
+- balance:
+ amount: "504000000"
+ denom: stake
+ delegation:
+ delegator_address: cosmos1q2qwwynhv8kh3lu5fkeex4awau9x8fwt45f5cp
+ shares: "504000000.000000000000000000"
+ validator_address: cosmosvaloper1gghjut3ccd8ay0zduzj64hwre2fxs9ldmqhffj
+- balance:
+ amount: "78125000000"
+ denom: uixo
+ delegation:
+ delegator_address: cosmos1qvppl3479hw4clahe0kwdlfvf8uvjtcd99m2ca
+ shares: "78125000000.000000000000000000"
+ validator_address: cosmosvaloper1gghjut3ccd8ay0zduzj64hwre2fxs9ldmqhffj
+pagination:
+ next_key: null
+ total: "0"
+```
+
+##### historical-info
+
The `historical-info` command allows users to query historical information at a given height.
+
+Usage:
+
+```bash
+simd query staking historical-info [height] [flags]
+```
+
+Example:
+
+```bash
+simd query staking historical-info 10
+```
+
+Example Output:
+
+```bash
+header:
+ app_hash: Lbx8cXpI868wz8sgp4qPYVrlaKjevR5WP/IjUxwp3oo=
+ chain_id: testnet
+ consensus_hash: BICRvH3cKD93v7+R1zxE2ljD34qcvIZ0Bdi389qtoi8=
+ data_hash: 47DEQpj8HBSa+/TImW+5JCeuQeRkm5NMpJWZG3hSuFU=
+ evidence_hash: 47DEQpj8HBSa+/TImW+5JCeuQeRkm5NMpJWZG3hSuFU=
+ height: "10"
+ last_block_id:
+ hash: RFbkpu6pWfSThXxKKl6EZVDnBSm16+U0l0xVjTX08Fk=
+ part_set_header:
+ hash: vpIvXD4rxD5GM4MXGz0Sad9I7//iVYLzZsEU4BVgWIU=
+ total: 1
+ last_commit_hash: Ne4uXyx4QtNp4Zx89kf9UK7oG9QVbdB6e7ZwZkhy8K0=
+ last_results_hash: 47DEQpj8HBSa+/TImW+5JCeuQeRkm5NMpJWZG3hSuFU=
+ next_validators_hash: nGBgKeWBjoxeKFti00CxHsnULORgKY4LiuQwBuUrhCs=
+ proposer_address: mMEP2c2IRPLr99LedSRtBg9eONM=
+ time: "2021-10-01T06:00:49.785790894Z"
+ validators_hash: nGBgKeWBjoxeKFti00CxHsnULORgKY4LiuQwBuUrhCs=
+ version:
+ app: "0"
+ block: "11"
+valset:
+- commission:
+ commission_rates:
+ max_change_rate: "0.010000000000000000"
+ max_rate: "0.200000000000000000"
+ rate: "0.100000000000000000"
+ update_time: "2021-10-01T05:52:50.380144238Z"
+ consensus_pubkey:
+ '@type': /cosmos.crypto.ed25519.PubKey
+ key: Auxs3865HpB/EfssYOzfqNhEJjzys2Fo6jD5B8tPgC8=
+ delegator_shares: "10000000.000000000000000000"
+ description:
+ details: ""
+ identity: ""
+ moniker: myvalidator
+ security_contact: ""
+ website: ""
+ jailed: false
+ min_self_delegation: "1"
+ operator_address: cosmosvaloper1rne8lgs98p0jqe82sgt0qr4rdn4hgvmgp9ggcc
+ status: BOND_STATUS_BONDED
+ tokens: "10000000"
+ unbonding_height: "0"
+ unbonding_time: "1970-01-01T00:00:00Z"
+```
+
+##### params
+
+The `params` command allows users to query values set as staking parameters.
+
+Usage:
+
+```bash
+simd query staking params [flags]
+```
+
+Example:
+
+```bash
+simd query staking params
+```
+
+Example Output:
+
+```bash
+bond_denom: stake
+historical_entries: 10000
+max_entries: 7
+max_validators: 50
+unbonding_time: 1814400s
+```
+
+##### pool
+
+The `pool` command allows users to query values for amounts stored in the staking pool.
+
+Usage:
+
+```bash
+simd q staking pool [flags]
+```
+
+Example:
+
+```bash
+simd q staking pool
+```
+
+Example Output:
+
+```bash
+bonded_tokens: "10000000"
+not_bonded_tokens: "0"
+```
+
+##### redelegation
+
The `redelegation` command allows users to query a redelegation record for a given delegator, source validator, and destination validator address.
+
+Usage:
+
+```bash
+simd query staking redelegation [delegator-addr] [src-validator-addr] [dst-validator-addr] [flags]
+```
+
+Example:
+
+```bash
+simd query staking redelegation cosmos1gghjut3ccd8ay0zduzj64hwre2fxs9ld75ru9p cosmosvaloper1l2rsakp388kuv9k8qzq6lrm9taddae7fpx59wm cosmosvaloper1gghjut3ccd8ay0zduzj64hwre2fxs9ldmqhffj
+```
+
+Example Output:
+
+```bash
+pagination: null
+redelegation_responses:
+- entries:
+ - balance: "50000000"
+ redelegation_entry:
+ completion_time: "2021-10-24T20:33:21.960084845Z"
+ creation_height: 2.382847e+06
+ initial_balance: "50000000"
+ shares_dst: "50000000.000000000000000000"
+ - balance: "5000000000"
+ redelegation_entry:
+ completion_time: "2021-10-25T21:33:54.446846862Z"
+ creation_height: 2.397271e+06
+ initial_balance: "5000000000"
+ shares_dst: "5000000000.000000000000000000"
+ redelegation:
+ delegator_address: cosmos1gghjut3ccd8ay0zduzj64hwre2fxs9ld75ru9p
+ entries: null
+ validator_dst_address: cosmosvaloper1l2rsakp388kuv9k8qzq6lrm9taddae7fpx59wm
+ validator_src_address: cosmosvaloper1l2rsakp388kuv9k8qzq6lrm9taddae7fpx59wm
+```
+
+##### redelegations
+
+The `redelegations` command allows users to query all redelegation records for an individual delegator.
+
+Usage:
+
+```bash
+simd query staking redelegations [delegator-addr] [flags]
+```
+
+Example:
+
+```bash
simd query staking redelegations cosmos1gghjut3ccd8ay0zduzj64hwre2fxs9ld75ru9p
+```
+
+Example Output:
+
+```bash
+pagination:
+ next_key: null
+ total: "0"
+redelegation_responses:
+- entries:
+ - balance: "50000000"
+ redelegation_entry:
+ completion_time: "2021-10-24T20:33:21.960084845Z"
+ creation_height: 2.382847e+06
+ initial_balance: "50000000"
+ shares_dst: "50000000.000000000000000000"
+ - balance: "5000000000"
+ redelegation_entry:
+ completion_time: "2021-10-25T21:33:54.446846862Z"
+ creation_height: 2.397271e+06
+ initial_balance: "5000000000"
+ shares_dst: "5000000000.000000000000000000"
+ redelegation:
+ delegator_address: cosmos1gghjut3ccd8ay0zduzj64hwre2fxs9ld75ru9p
+ entries: null
+ validator_dst_address: cosmosvaloper1uccl5ugxrm7vqlzwqr04pjd320d2fz0z3hc6vm
+ validator_src_address: cosmosvaloper1zppjyal5emta5cquje8ndkpz0rs046m7zqxrpp
+- entries:
+ - balance: "562770000000"
+ redelegation_entry:
+ completion_time: "2021-10-25T21:42:07.336911677Z"
+ creation_height: 2.39735e+06
+ initial_balance: "562770000000"
+ shares_dst: "562770000000.000000000000000000"
+ redelegation:
+ delegator_address: cosmos1gghjut3ccd8ay0zduzj64hwre2fxs9ld75ru9p
+ entries: null
+ validator_dst_address: cosmosvaloper1uccl5ugxrm7vqlzwqr04pjd320d2fz0z3hc6vm
+ validator_src_address: cosmosvaloper1zppjyal5emta5cquje8ndkpz0rs046m7zqxrpp
+```
+
+##### redelegations-from
+
+The `redelegations-from` command allows users to query delegations that are redelegating _from_ a validator.
+
+Usage:
+
+```bash
+simd query staking redelegations-from [validator-addr] [flags]
+```
+
+Example:
+
+```bash
+simd query staking redelegations-from cosmosvaloper1y4rzzrgl66eyhzt6gse2k7ej3zgwmngeleucjy
+```
+
+Example Output:
+
+```bash
+pagination:
+ next_key: null
+ total: "0"
+redelegation_responses:
+- entries:
+ - balance: "50000000"
+ redelegation_entry:
+ completion_time: "2021-10-24T20:33:21.960084845Z"
+ creation_height: 2.382847e+06
+ initial_balance: "50000000"
+ shares_dst: "50000000.000000000000000000"
+ - balance: "5000000000"
+ redelegation_entry:
+ completion_time: "2021-10-25T21:33:54.446846862Z"
+ creation_height: 2.397271e+06
+ initial_balance: "5000000000"
+ shares_dst: "5000000000.000000000000000000"
+ redelegation:
+ delegator_address: cosmos1pm6e78p4pgn0da365plzl4t56pxy8hwtqp2mph
+ entries: null
+ validator_dst_address: cosmosvaloper1uccl5ugxrm7vqlzwqr04pjd320d2fz0z3hc6vm
+ validator_src_address: cosmosvaloper1y4rzzrgl66eyhzt6gse2k7ej3zgwmngeleucjy
+- entries:
+ - balance: "221000000"
+ redelegation_entry:
+ completion_time: "2021-10-05T21:05:45.669420544Z"
+ creation_height: 2.120693e+06
+ initial_balance: "221000000"
+ shares_dst: "221000000.000000000000000000"
+ redelegation:
+ delegator_address: cosmos1zqv8qxy2zgn4c58fz8jt8jmhs3d0attcussrf6
+ entries: null
+ validator_dst_address: cosmosvaloper10mseqwnwtjaqfrwwp2nyrruwmjp6u5jhah4c3y
+ validator_src_address: cosmosvaloper1y4rzzrgl66eyhzt6gse2k7ej3zgwmngeleucjy
+```
+
+##### unbonding-delegation
+
+The `unbonding-delegation` command allows users to query unbonding delegations for an individual delegator on an individual validator.
+
+Usage:
+
+```bash
+simd query staking unbonding-delegation [delegator-addr] [validator-addr] [flags]
+```
+
+Example:
+
+```bash
+simd query staking unbonding-delegation cosmos1gghjut3ccd8ay0zduzj64hwre2fxs9ld75ru9p cosmosvaloper1gghjut3ccd8ay0zduzj64hwre2fxs9ldmqhffj
+```
+
+Example Output:
+
+```bash
+delegator_address: cosmos1gghjut3ccd8ay0zduzj64hwre2fxs9ld75ru9p
+entries:
+- balance: "52000000"
+ completion_time: "2021-11-02T11:35:55.391594709Z"
+ creation_height: "55078"
+ initial_balance: "52000000"
+validator_address: cosmosvaloper1gghjut3ccd8ay0zduzj64hwre2fxs9ldmqhffj
+```
+
+##### unbonding-delegations
+
The `unbonding-delegations` command allows users to query all unbonding delegation records for a single delegator.
+
+Usage:
+
+```bash
+simd query staking unbonding-delegations [delegator-addr] [flags]
+```
+
+Example:
+
+```bash
+simd query staking unbonding-delegations cosmos1gghjut3ccd8ay0zduzj64hwre2fxs9ld75ru9p
+```
+
+Example Output:
+
+```bash
+pagination:
+ next_key: null
+ total: "0"
+unbonding_responses:
+- delegator_address: cosmos1gghjut3ccd8ay0zduzj64hwre2fxs9ld75ru9p
+ entries:
+ - balance: "52000000"
+ completion_time: "2021-11-02T11:35:55.391594709Z"
+ creation_height: "55078"
+ initial_balance: "52000000"
+ validator_address: cosmosvaloper1t8ehvswxjfn3ejzkjtntcyrqwvmvuknzmvtaaa
+
+```
+
+##### unbonding-delegations-from
+
+The `unbonding-delegations-from` command allows users to query delegations that are unbonding _from_ a validator.
+
+Usage:
+
+```bash
+simd query staking unbonding-delegations-from [validator-addr] [flags]
+```
+
+Example:
+
+```bash
+simd query staking unbonding-delegations-from cosmosvaloper1gghjut3ccd8ay0zduzj64hwre2fxs9ldmqhffj
+```
+
+Example Output:
+
+```bash
+pagination:
+ next_key: null
+ total: "0"
+unbonding_responses:
+- delegator_address: cosmos1qqq9txnw4c77sdvzx0tkedsafl5s3vk7hn53fn
+ entries:
+ - balance: "150000000"
+ completion_time: "2021-11-01T21:41:13.098141574Z"
+ creation_height: "46823"
+ initial_balance: "150000000"
+ validator_address: cosmosvaloper1gghjut3ccd8ay0zduzj64hwre2fxs9ldmqhffj
+- delegator_address: cosmos1peteje73eklqau66mr7h7rmewmt2vt99y24f5z
+ entries:
+ - balance: "24000000"
+ completion_time: "2021-10-31T02:57:18.192280361Z"
+ creation_height: "21516"
+ initial_balance: "24000000"
+ validator_address: cosmosvaloper1gghjut3ccd8ay0zduzj64hwre2fxs9ldmqhffj
+```
+
+##### validator
+
+The `validator` command allows users to query details about an individual validator.
+
+Usage:
+
+```bash
+simd query staking validator [validator-addr] [flags]
+```
+
+Example:
+
+```bash
+simd query staking validator cosmosvaloper1gghjut3ccd8ay0zduzj64hwre2fxs9ldmqhffj
+```
+
+Example Output:
+
+```bash
+commission:
+ commission_rates:
+ max_change_rate: "0.020000000000000000"
+ max_rate: "0.200000000000000000"
+ rate: "0.050000000000000000"
+ update_time: "2021-10-01T19:24:52.663191049Z"
+consensus_pubkey:
+ '@type': /cosmos.crypto.ed25519.PubKey
+ key: sIiexdJdYWn27+7iUHQJDnkp63gq/rzUq1Y+fxoGjXc=
+delegator_shares: "32948270000.000000000000000000"
+description:
+ details: Witval is the validator arm from Vitwit. Vitwit is into software consulting
+ and services business since 2015. We are working closely with Cosmos ecosystem
+ since 2018. We are also building tools for the ecosystem, Aneka is our explorer
+ for the cosmos ecosystem.
+ identity: 51468B615127273A
+ moniker: Witval
+ security_contact: ""
+ website: ""
+jailed: false
+min_self_delegation: "1"
+operator_address: cosmosvaloper1gghjut3ccd8ay0zduzj64hwre2fxs9ldmqhffj
+status: BOND_STATUS_BONDED
+tokens: "32948270000"
+unbonding_height: "0"
+unbonding_time: "1970-01-01T00:00:00Z"
+```
+
+##### validators
+
+The `validators` command allows users to query details about all validators on a network.
+
+Usage:
+
+```bash
+simd query staking validators [flags]
+```
+
+Example:
+
+```bash
+simd query staking validators
+```
+
+Example Output:
+
+```bash
+pagination:
+ next_key: FPTi7TKAjN63QqZh+BaXn6gBmD5/
+ total: "0"
+validators:
+commission:
+ commission_rates:
+ max_change_rate: "0.020000000000000000"
+ max_rate: "0.200000000000000000"
+ rate: "0.050000000000000000"
+ update_time: "2021-10-01T19:24:52.663191049Z"
+consensus_pubkey:
+ '@type': /cosmos.crypto.ed25519.PubKey
+ key: sIiexdJdYWn27+7iUHQJDnkp63gq/rzUq1Y+fxoGjXc=
+delegator_shares: "32948270000.000000000000000000"
+description:
+ details: Witval is the validator arm from Vitwit. Vitwit is into software consulting
+ and services business since 2015. We are working closely with Cosmos ecosystem
+ since 2018. We are also building tools for the ecosystem, Aneka is our explorer
+ for the cosmos ecosystem.
+ identity: 51468B615127273A
+ moniker: Witval
+ security_contact: ""
+ website: ""
+ jailed: false
+ min_self_delegation: "1"
+ operator_address: cosmosvaloper1gghjut3ccd8ay0zduzj64hwre2fxs9ldmqhffj
+ status: BOND_STATUS_BONDED
+ tokens: "32948270000"
+ unbonding_height: "0"
+ unbonding_time: "1970-01-01T00:00:00Z"
+- commission:
+ commission_rates:
+ max_change_rate: "0.100000000000000000"
+ max_rate: "0.200000000000000000"
+ rate: "0.050000000000000000"
+ update_time: "2021-10-04T18:02:21.446645619Z"
+ consensus_pubkey:
+ '@type': /cosmos.crypto.ed25519.PubKey
+ key: GDNpuKDmCg9GnhnsiU4fCWktuGUemjNfvpCZiqoRIYA=
+ delegator_shares: "559343421.000000000000000000"
+ description:
+ details: Noderunners is a professional validator in POS networks. We have a huge
+ node running experience, reliable soft and hardware. Our commissions are always
+ low, our support to delegators is always full. Stake with us and start receiving
+ your Cosmos rewards now!
+ identity: 812E82D12FEA3493
+ moniker: Noderunners
+ security_contact: info@noderunners.biz
+ website: http://noderunners.biz
+ jailed: false
+ min_self_delegation: "1"
+ operator_address: cosmosvaloper1q5ku90atkhktze83j9xjaks2p7uruag5zp6wt7
+ status: BOND_STATUS_BONDED
+ tokens: "559343421"
+ unbonding_height: "0"
+ unbonding_time: "1970-01-01T00:00:00Z"
+```
+
+#### Transactions
+
The `tx` commands allow users to interact with the `staking` module.
+
+```bash
+simd tx staking --help
+```
+
+##### create-validator
+
The `create-validator` command allows users to create a new validator, initialized with a self-delegation.
+
+Usage:
+
+```bash
+simd tx staking create-validator [path/to/validator.json] [flags]
+```
+
+Example:
+
+```bash
+simd tx staking create-validator /path/to/validator.json \
+ --chain-id="name_of_chain_id" \
+ --gas="auto" \
+ --gas-adjustment="1.2" \
+ --gas-prices="0.025stake" \
+ --from=mykey
+```
+
+where `validator.json` contains:
+
+```json
+{
+ "pubkey": {"@type":"/cosmos.crypto.ed25519.PubKey","key":"BnbwFpeONLqvWqJb3qaUbL5aoIcW3fSuAp9nT3z5f20="},
+ "amount": "1000000stake",
+ "moniker": "my-moniker",
+ "website": "https://myweb.site",
+ "security": "security-contact@gmail.com",
+ "details": "description of your validator",
+ "commission-rate": "0.10",
+ "commission-max-rate": "0.20",
+ "commission-max-change-rate": "0.01",
+ "min-self-delegation": "1"
+}
+```
+
The `pubkey` can be obtained using the `simd tendermint show-validator` command.
+
+##### delegate
+
+The command `delegate` allows users to delegate liquid tokens to a validator.
+
+Usage:
+
+```bash
+simd tx staking delegate [validator-addr] [amount] [flags]
+```
+
+Example:
+
+```bash
+simd tx staking delegate cosmosvaloper1l2rsakp388kuv9k8qzq6lrm9taddae7fpx59wm 1000stake --from mykey
+```
+
+##### edit-validator
+
+The command `edit-validator` allows users to edit an existing validator account.
+
+Usage:
+
+```bash
+simd tx staking edit-validator [flags]
+```
+
+Example:
+
+```bash
+simd tx staking edit-validator --moniker "new_moniker_name" --website "new_website_url" --from mykey
+```
+
+##### redelegate
+
+The command `redelegate` allows users to redelegate illiquid tokens from one validator to another.
+
+Usage:
+
+```bash
+simd tx staking redelegate [src-validator-addr] [dst-validator-addr] [amount] [flags]
+```
+
+Example:
+
+```bash
+simd tx staking redelegate cosmosvaloper1gghjut3ccd8ay0zduzj64hwre2fxs9ldmqhffj cosmosvaloper1l2rsakp388kuv9k8qzq6lrm9taddae7fpx59wm 100stake --from mykey
+```
+
+##### unbond
+
+The command `unbond` allows users to unbond shares from a validator.
+
+Usage:
+
+```bash
+simd tx staking unbond [validator-addr] [amount] [flags]
+```
+
+Example:
+
+```bash
+simd tx staking unbond cosmosvaloper1gghjut3ccd8ay0zduzj64hwre2fxs9ldmqhffj 100stake --from mykey
+```
+
+##### cancel unbond
+
The `cancel-unbond` command allows users to cancel an unbonding delegation entry and delegate the tokens back to the original validator.
+
+Usage:
+
+```bash
+simd tx staking cancel-unbond [validator-addr] [amount] [creation-height]
+```
+
+Example:
+
+```bash
+simd tx staking cancel-unbond cosmosvaloper1gghjut3ccd8ay0zduzj64hwre2fxs9ldmqhffj 100stake 123123 --from mykey
+```
+
+
+### gRPC
+
+A user can query the `staking` module using gRPC endpoints.
+
+#### Validators
+
+The `Validators` endpoint queries all validators that match the given status.
+
+```bash
+cosmos.staking.v1beta1.Query/Validators
+```
+
+Example:
+
+```bash
+grpcurl -plaintext localhost:9090 cosmos.staking.v1beta1.Query/Validators
+```
+
+Example Output:
+
+```bash
+{
+ "validators": [
+ {
+ "operatorAddress": "cosmosvaloper1rne8lgs98p0jqe82sgt0qr4rdn4hgvmgp9ggcc",
+ "consensusPubkey": {"@type":"/cosmos.crypto.ed25519.PubKey","key":"Auxs3865HpB/EfssYOzfqNhEJjzys2Fo6jD5B8tPgC8="},
+ "status": "BOND_STATUS_BONDED",
+ "tokens": "10000000",
+ "delegatorShares": "10000000000000000000000000",
+ "description": {
+ "moniker": "myvalidator"
+ },
+ "unbondingTime": "1970-01-01T00:00:00Z",
+ "commission": {
+ "commissionRates": {
+ "rate": "100000000000000000",
+ "maxRate": "200000000000000000",
+ "maxChangeRate": "10000000000000000"
+ },
+ "updateTime": "2021-10-01T05:52:50.380144238Z"
+ },
+ "minSelfDelegation": "1"
+ }
+ ],
+ "pagination": {
+ "total": "1"
+ }
+}
+```
+
+#### Validator
+
The `Validator` endpoint queries validator information for a given validator address.
+
+```bash
+cosmos.staking.v1beta1.Query/Validator
+```
+
+Example:
+
+```bash
+grpcurl -plaintext -d '{"validator_addr":"cosmosvaloper1rne8lgs98p0jqe82sgt0qr4rdn4hgvmgp9ggcc"}' \
+localhost:9090 cosmos.staking.v1beta1.Query/Validator
+```
+
+Example Output:
+
+```bash
+{
+ "validator": {
+ "operatorAddress": "cosmosvaloper1rne8lgs98p0jqe82sgt0qr4rdn4hgvmgp9ggcc",
+ "consensusPubkey": {"@type":"/cosmos.crypto.ed25519.PubKey","key":"Auxs3865HpB/EfssYOzfqNhEJjzys2Fo6jD5B8tPgC8="},
+ "status": "BOND_STATUS_BONDED",
+ "tokens": "10000000",
+ "delegatorShares": "10000000000000000000000000",
+ "description": {
+ "moniker": "myvalidator"
+ },
+ "unbondingTime": "1970-01-01T00:00:00Z",
+ "commission": {
+ "commissionRates": {
+ "rate": "100000000000000000",
+ "maxRate": "200000000000000000",
+ "maxChangeRate": "10000000000000000"
+ },
+ "updateTime": "2021-10-01T05:52:50.380144238Z"
+ },
+ "minSelfDelegation": "1"
+ }
+}
+```
+
+#### ValidatorDelegations
+
The `ValidatorDelegations` endpoint queries delegation information for a given validator.
+
+```bash
+cosmos.staking.v1beta1.Query/ValidatorDelegations
+```
+
+Example:
+
+```bash
+grpcurl -plaintext -d '{"validator_addr":"cosmosvaloper1rne8lgs98p0jqe82sgt0qr4rdn4hgvmgp9ggcc"}' \
+localhost:9090 cosmos.staking.v1beta1.Query/ValidatorDelegations
+```
+
+Example Output:
+
+```bash
+{
+ "delegationResponses": [
+ {
+ "delegation": {
+ "delegatorAddress": "cosmos1rne8lgs98p0jqe82sgt0qr4rdn4hgvmgy3ua5t",
+ "validatorAddress": "cosmosvaloper1rne8lgs98p0jqe82sgt0qr4rdn4hgvmgp9ggcc",
+ "shares": "10000000000000000000000000"
+ },
+ "balance": {
+ "denom": "stake",
+ "amount": "10000000"
+ }
+ }
+ ],
+ "pagination": {
+ "total": "1"
+ }
+}
+```
+
+#### ValidatorUnbondingDelegations
+
The `ValidatorUnbondingDelegations` endpoint queries the unbonding delegations of a given validator.
+
+```bash
+cosmos.staking.v1beta1.Query/ValidatorUnbondingDelegations
+```
+
+Example:
+
+```bash
+grpcurl -plaintext -d '{"validator_addr":"cosmosvaloper1rne8lgs98p0jqe82sgt0qr4rdn4hgvmgp9ggcc"}' \
+localhost:9090 cosmos.staking.v1beta1.Query/ValidatorUnbondingDelegations
+```
+
+Example Output:
+
+```bash
+{
+ "unbonding_responses": [
+ {
+ "delegator_address": "cosmos1z3pzzw84d6xn00pw9dy3yapqypfde7vg6965fy",
+ "validator_address": "cosmosvaloper1rne8lgs98p0jqe82sgt0qr4rdn4hgvmgp9ggcc",
+ "entries": [
+ {
+ "creation_height": "25325",
+ "completion_time": "2021-10-31T09:24:36.797320636Z",
+ "initial_balance": "20000000",
+ "balance": "20000000"
+ }
+ ]
+ },
+ {
+ "delegator_address": "cosmos1y8nyfvmqh50p6ldpzljk3yrglppdv3t8phju77",
+ "validator_address": "cosmosvaloper1rne8lgs98p0jqe82sgt0qr4rdn4hgvmgp9ggcc",
+ "entries": [
+ {
+ "creation_height": "13100",
+ "completion_time": "2021-10-30T12:53:02.272266791Z",
+ "initial_balance": "1000000",
+ "balance": "1000000"
+ }
+ ]
+ },
+ ],
+ "pagination": {
+ "next_key": null,
+ "total": "8"
+ }
+}
+```
+
+#### Delegation
+
The `Delegation` endpoint queries delegation information for a given validator-delegator pair.
+
+```bash
+cosmos.staking.v1beta1.Query/Delegation
+```
+
+Example:
+
+```bash
+grpcurl -plaintext \
-d '{"delegator_addr": "cosmos1y8nyfvmqh50p6ldpzljk3yrglppdv3t8phju77", "validator_addr":"cosmosvaloper1rne8lgs98p0jqe82sgt0qr4rdn4hgvmgp9ggcc"}' \
+localhost:9090 cosmos.staking.v1beta1.Query/Delegation
+```
+
+Example Output:
+
+```bash
+{
+ "delegation_response":
+ {
+ "delegation":
+ {
+ "delegator_address":"cosmos1y8nyfvmqh50p6ldpzljk3yrglppdv3t8phju77",
+ "validator_address":"cosmosvaloper1rne8lgs98p0jqe82sgt0qr4rdn4hgvmgp9ggcc",
+ "shares":"25083119936.000000000000000000"
+ },
+ "balance":
+ {
+ "denom":"stake",
+ "amount":"25083119936"
+ }
+ }
+}
+```
+
+#### UnbondingDelegation
+
The `UnbondingDelegation` endpoint queries unbonding information for a given validator-delegator pair.
+
+```bash
+cosmos.staking.v1beta1.Query/UnbondingDelegation
+```
+
+Example:
+
+```bash
+grpcurl -plaintext \
-d '{"delegator_addr": "cosmos1y8nyfvmqh50p6ldpzljk3yrglppdv3t8phju77", "validator_addr":"cosmosvaloper1rne8lgs98p0jqe82sgt0qr4rdn4hgvmgp9ggcc"}' \
+localhost:9090 cosmos.staking.v1beta1.Query/UnbondingDelegation
+```
+
+Example Output:
+
+```bash
+{
+ "unbond": {
+ "delegator_address": "cosmos1y8nyfvmqh50p6ldpzljk3yrglppdv3t8phju77",
+ "validator_address": "cosmosvaloper1rne8lgs98p0jqe82sgt0qr4rdn4hgvmgp9ggcc",
+ "entries": [
+ {
+ "creation_height": "136984",
+ "completion_time": "2021-11-08T05:38:47.505593891Z",
+ "initial_balance": "400000000",
+ "balance": "400000000"
+ },
+ {
+ "creation_height": "137005",
+ "completion_time": "2021-11-08T05:40:53.526196312Z",
+ "initial_balance": "385000000",
+ "balance": "385000000"
+ }
+ ]
+ }
+}
+```
+
+#### DelegatorDelegations
+
+The `DelegatorDelegations` endpoint queries all delegations of a given delegator address.
+
+```bash
+cosmos.staking.v1beta1.Query/DelegatorDelegations
+```
+
+Example:
+
+```bash
+grpcurl -plaintext \
+-d '{"delegator_addr": "cosmos1y8nyfvmqh50p6ldpzljk3yrglppdv3t8phju77"}' \
+localhost:9090 cosmos.staking.v1beta1.Query/DelegatorDelegations
+```
+
+Example Output:
+
+```bash
+{
+ "delegation_responses": [
+    {
+      "delegation": {
+        "delegator_address": "cosmos1y8nyfvmqh50p6ldpzljk3yrglppdv3t8phju77",
+        "validator_address": "cosmosvaloper1eh5mwu044gd5ntkkc2xgfg8247mgc56fww3vc8",
+        "shares": "25083339023.000000000000000000"
+      },
+      "balance": {
+        "denom": "stake",
+        "amount": "25083339023"
+      }
+    }
+ ],
+ "pagination": {
+ "next_key": null,
+ "total": "1"
+ }
+}
+```
+
+#### DelegatorUnbondingDelegations
+
+The `DelegatorUnbondingDelegations` endpoint queries all unbonding delegations of a given delegator address.
+
+```bash
+cosmos.staking.v1beta1.Query/DelegatorUnbondingDelegations
+```
+
+Example:
+
+```bash
+grpcurl -plaintext \
+-d '{"delegator_addr": "cosmos1y8nyfvmqh50p6ldpzljk3yrglppdv3t8phju77"}' \
+localhost:9090 cosmos.staking.v1beta1.Query/DelegatorUnbondingDelegations
+```
+
+Example Output:
+
+```bash
+{
+ "unbonding_responses": [
+ {
+ "delegator_address": "cosmos1y8nyfvmqh50p6ldpzljk3yrglppdv3t8phju77",
+ "validator_address": "cosmosvaloper1sjllsnramtg3ewxqwwrwjxfgc4n4ef9uxyejze",
+ "entries": [
+ {
+ "creation_height": "136984",
+ "completion_time": "2021-11-08T05:38:47.505593891Z",
+ "initial_balance": "400000000",
+ "balance": "400000000"
+ },
+ {
+ "creation_height": "137005",
+ "completion_time": "2021-11-08T05:40:53.526196312Z",
+ "initial_balance": "385000000",
+ "balance": "385000000"
+ }
+ ]
+ }
+ ],
+ "pagination": {
+ "next_key": null,
+ "total": "1"
+ }
+}
+```
+
+#### Redelegations
+
+The `Redelegations` endpoint queries redelegations of a given address.
+
+```bash
+cosmos.staking.v1beta1.Query/Redelegations
+```
+
+Example:
+
+```bash
+grpcurl -plaintext \
+-d '{"delegator_addr": "cosmos1ld5p7hn43yuh8ht28gm9pfjgj2fctujp2tgwvf", "src_validator_addr" : "cosmosvaloper1j7euyj85fv2jugejrktj540emh9353ltgppc3g", "dst_validator_addr" : "cosmosvaloper1yy3tnegzmkdcm7czzcy3flw5z0zyr9vkkxrfse"}' \
+localhost:9090 cosmos.staking.v1beta1.Query/Redelegations
+```
+
+Example Output:
+
+```bash
+{
+ "redelegation_responses": [
+ {
+ "redelegation": {
+ "delegator_address": "cosmos1ld5p7hn43yuh8ht28gm9pfjgj2fctujp2tgwvf",
+ "validator_src_address": "cosmosvaloper1j7euyj85fv2jugejrktj540emh9353ltgppc3g",
+ "validator_dst_address": "cosmosvaloper1yy3tnegzmkdcm7czzcy3flw5z0zyr9vkkxrfse",
+ "entries": null
+ },
+ "entries": [
+ {
+ "redelegation_entry": {
+ "creation_height": 135932,
+ "completion_time": "2021-11-08T03:52:55.299147901Z",
+ "initial_balance": "2900000",
+ "shares_dst": "2900000.000000000000000000"
+ },
+ "balance": "2900000"
+ }
+ ]
+ }
+ ],
+ "pagination": null
+}
+```
+
+#### DelegatorValidators
+
+The `DelegatorValidators` endpoint queries validator information for all validators of a given delegator.
+
+```bash
+cosmos.staking.v1beta1.Query/DelegatorValidators
+```
+
+Example:
+
+```bash
+grpcurl -plaintext \
+-d '{"delegator_addr": "cosmos1ld5p7hn43yuh8ht28gm9pfjgj2fctujp2tgwvf"}' \
+localhost:9090 cosmos.staking.v1beta1.Query/DelegatorValidators
+```
+
+Example Output:
+
+```bash
+{
+ "validators": [
+ {
+ "operator_address": "cosmosvaloper1eh5mwu044gd5ntkkc2xgfg8247mgc56fww3vc8",
+ "consensus_pubkey": {
+ "@type": "/cosmos.crypto.ed25519.PubKey",
+ "key": "UPwHWxH1zHJWGOa/m6JB3f5YjHMvPQPkVbDqqi+U7Uw="
+ },
+ "jailed": false,
+ "status": "BOND_STATUS_BONDED",
+ "tokens": "347260647559",
+ "delegator_shares": "347260647559.000000000000000000",
+ "description": {
+ "moniker": "BouBouNode",
+ "identity": "",
+ "website": "https://boubounode.com",
+ "security_contact": "",
+ "details": "AI-based Validator. #1 AI Validator on Game of Stakes. Fairly priced. Don't trust (humans), verify. Made with BouBou love."
+ },
+ "unbonding_height": "0",
+ "unbonding_time": "1970-01-01T00:00:00Z",
+ "commission": {
+ "commission_rates": {
+ "rate": "0.061000000000000000",
+ "max_rate": "0.300000000000000000",
+ "max_change_rate": "0.150000000000000000"
+ },
+ "update_time": "2021-10-01T15:00:00Z"
+ },
+ "min_self_delegation": "1"
+ }
+ ],
+ "pagination": {
+ "next_key": null,
+ "total": "1"
+ }
+}
+```
+
+#### DelegatorValidator
+
+The `DelegatorValidator` endpoint queries validator information for a given delegator validator pair.
+
+```bash
+cosmos.staking.v1beta1.Query/DelegatorValidator
+```
+
+Example:
+
+```bash
+grpcurl -plaintext \
+-d '{"delegator_addr": "cosmos1eh5mwu044gd5ntkkc2xgfg8247mgc56f3n8rr7", "validator_addr": "cosmosvaloper1eh5mwu044gd5ntkkc2xgfg8247mgc56fww3vc8"}' \
+localhost:9090 cosmos.staking.v1beta1.Query/DelegatorValidator
+```
+
+Example Output:
+
+```bash
+{
+ "validator": {
+ "operator_address": "cosmosvaloper1eh5mwu044gd5ntkkc2xgfg8247mgc56fww3vc8",
+ "consensus_pubkey": {
+ "@type": "/cosmos.crypto.ed25519.PubKey",
+ "key": "UPwHWxH1zHJWGOa/m6JB3f5YjHMvPQPkVbDqqi+U7Uw="
+ },
+ "jailed": false,
+ "status": "BOND_STATUS_BONDED",
+ "tokens": "347262754841",
+ "delegator_shares": "347262754841.000000000000000000",
+ "description": {
+ "moniker": "BouBouNode",
+ "identity": "",
+ "website": "https://boubounode.com",
+ "security_contact": "",
+ "details": "AI-based Validator. #1 AI Validator on Game of Stakes. Fairly priced. Don't trust (humans), verify. Made with BouBou love."
+ },
+ "unbonding_height": "0",
+ "unbonding_time": "1970-01-01T00:00:00Z",
+ "commission": {
+ "commission_rates": {
+ "rate": "0.061000000000000000",
+ "max_rate": "0.300000000000000000",
+ "max_change_rate": "0.150000000000000000"
+ },
+ "update_time": "2021-10-01T15:00:00Z"
+ },
+ "min_self_delegation": "1"
+ }
+}
+```
+
+#### HistoricalInfo
+
+The `HistoricalInfo` endpoint queries the historical information for a given height.
+
+```bash
+cosmos.staking.v1beta1.Query/HistoricalInfo
+```
+
+Example:
+
+```bash
+grpcurl -plaintext -d '{"height" : 1}' localhost:9090 cosmos.staking.v1beta1.Query/HistoricalInfo
+```
+
+Example Output:
+
+```bash
+{
+ "hist": {
+ "header": {
+ "version": {
+ "block": "11",
+ "app": "0"
+ },
+ "chain_id": "simd-1",
+ "height": "140142",
+ "time": "2021-10-11T10:56:29.720079569Z",
+ "last_block_id": {
+ "hash": "9gri/4LLJUBFqioQ3NzZIP9/7YHR9QqaM6B2aJNQA7o=",
+ "part_set_header": {
+ "total": 1,
+ "hash": "Hk1+C864uQkl9+I6Zn7IurBZBKUevqlVtU7VqaZl1tc="
+ }
+ },
+ "last_commit_hash": "VxrcS27GtvGruS3I9+AlpT7udxIT1F0OrRklrVFSSKc=",
+ "data_hash": "80BjOrqNYUOkTnmgWyz9AQ8n7SoEmPVi4QmAe8RbQBY=",
+ "validators_hash": "95W49n2hw8RWpr1GPTAO5MSPi6w6Wjr3JjjS7AjpBho=",
+ "next_validators_hash": "95W49n2hw8RWpr1GPTAO5MSPi6w6Wjr3JjjS7AjpBho=",
+ "consensus_hash": "BICRvH3cKD93v7+R1zxE2ljD34qcvIZ0Bdi389qtoi8=",
+ "app_hash": "ZZaxnSY3E6Ex5Bvkm+RigYCK82g8SSUL53NymPITeOE=",
+ "last_results_hash": "47DEQpj8HBSa+/TImW+5JCeuQeRkm5NMpJWZG3hSuFU=",
+ "evidence_hash": "47DEQpj8HBSa+/TImW+5JCeuQeRkm5NMpJWZG3hSuFU=",
+ "proposer_address": "aH6dO428B+ItuoqPq70efFHrSMY="
+ },
+ "valset": [
+ {
+ "operator_address": "cosmosvaloper196ax4vc0lwpxndu9dyhvca7jhxp70rmcqcnylw",
+ "consensus_pubkey": {
+ "@type": "/cosmos.crypto.ed25519.PubKey",
+ "key": "/O7BtNW0pafwfvomgR4ZnfldwPXiFfJs9mHg3gwfv5Q="
+ },
+ "jailed": false,
+ "status": "BOND_STATUS_BONDED",
+ "tokens": "1426045203613",
+ "delegator_shares": "1426045203613.000000000000000000",
+ "description": {
+ "moniker": "SG-1",
+ "identity": "48608633F99D1B60",
+ "website": "https://sg-1.online",
+ "security_contact": "",
+ "details": "SG-1 - your favorite validator on Witval. We offer 100% Soft Slash protection."
+ },
+ "unbonding_height": "0",
+ "unbonding_time": "1970-01-01T00:00:00Z",
+ "commission": {
+ "commission_rates": {
+ "rate": "0.037500000000000000",
+ "max_rate": "0.200000000000000000",
+ "max_change_rate": "0.030000000000000000"
+ },
+ "update_time": "2021-10-01T15:00:00Z"
+ },
+ "min_self_delegation": "1"
+ }
+ ]
+ }
+}
+```
+
+#### Pool
+
+The `Pool` endpoint queries the pool information.
+
+```bash
+cosmos.staking.v1beta1.Query/Pool
+```
+
+Example:
+
+```bash
+grpcurl -plaintext localhost:9090 cosmos.staking.v1beta1.Query/Pool
+```
+
+Example Output:
+
+```bash
+{
+ "pool": {
+ "not_bonded_tokens": "369054400189",
+ "bonded_tokens": "15657192425623"
+ }
+}
+```
+
+#### Params
+
+The `Params` endpoint queries the staking parameters.
+
+```bash
+cosmos.staking.v1beta1.Query/Params
+```
+
+Example:
+
+```bash
+grpcurl -plaintext localhost:9090 cosmos.staking.v1beta1.Query/Params
+```
+
+Example Output:
+
+```bash
+{
+ "params": {
+ "unbondingTime": "1814400s",
+ "maxValidators": 100,
+ "maxEntries": 7,
+ "historicalEntries": 10000,
+ "bondDenom": "stake"
+ }
+}
+```
+
+### REST
+
+A user can query the `staking` module using REST endpoints.
+
+#### DelegatorDelegations
+
+The `DelegatorDelegations` REST endpoint queries all delegations of a given delegator address.
+
+```bash
+/cosmos/staking/v1beta1/delegations/{delegatorAddr}
+```
+
+Example:
+
+```bash
+curl -X GET "http://localhost:1317/cosmos/staking/v1beta1/delegations/cosmos1vcs68xf2tnqes5tg0khr0vyevm40ff6zdxatp5" -H "accept: application/json"
+```
+
+Example Output:
+
+```bash
+{
+ "delegation_responses": [
+ {
+ "delegation": {
+ "delegator_address": "cosmos1vcs68xf2tnqes5tg0khr0vyevm40ff6zdxatp5",
+ "validator_address": "cosmosvaloper1quqxfrxkycr0uzt4yk0d57tcq3zk7srm7sm6r8",
+ "shares": "256250000.000000000000000000"
+ },
+ "balance": {
+ "denom": "stake",
+ "amount": "256250000"
+ }
+ },
+ {
+ "delegation": {
+ "delegator_address": "cosmos1vcs68xf2tnqes5tg0khr0vyevm40ff6zdxatp5",
+ "validator_address": "cosmosvaloper194v8uwee2fvs2s8fa5k7j03ktwc87h5ym39jfv",
+ "shares": "255150000.000000000000000000"
+ },
+ "balance": {
+ "denom": "stake",
+ "amount": "255150000"
+ }
+ }
+ ],
+ "pagination": {
+ "next_key": null,
+ "total": "2"
+ }
+}
+```
+
+#### Redelegations
+
+The `Redelegations` REST endpoint queries redelegations of a given address.
+
+```bash
+/cosmos/staking/v1beta1/delegators/{delegatorAddr}/redelegations
+```
+
+Example:
+
+```bash
+curl -X GET \
+"http://localhost:1317/cosmos/staking/v1beta1/delegators/cosmos1thfntksw0d35n2tkr0k8v54fr8wxtxwxl2c56e/redelegations?srcValidatorAddr=cosmosvaloper1lzhlnpahvznwfv4jmay2tgaha5kmz5qx4cuznf&dstValidatorAddr=cosmosvaloper1vq8tw77kp8lvxq9u3c8eeln9zymn68rng8pgt4" \
+-H "accept: application/json"
+```
+
+Example Output:
+
+```bash
+{
+ "redelegation_responses": [
+ {
+ "redelegation": {
+ "delegator_address": "cosmos1thfntksw0d35n2tkr0k8v54fr8wxtxwxl2c56e",
+ "validator_src_address": "cosmosvaloper1lzhlnpahvznwfv4jmay2tgaha5kmz5qx4cuznf",
+ "validator_dst_address": "cosmosvaloper1vq8tw77kp8lvxq9u3c8eeln9zymn68rng8pgt4",
+ "entries": null
+ },
+ "entries": [
+ {
+ "redelegation_entry": {
+ "creation_height": 151523,
+ "completion_time": "2021-11-09T06:03:25.640682116Z",
+ "initial_balance": "200000000",
+ "shares_dst": "200000000.000000000000000000"
+ },
+ "balance": "200000000"
+ }
+ ]
+ }
+ ],
+ "pagination": null
+}
+```
+
+#### DelegatorUnbondingDelegations
+
+The `DelegatorUnbondingDelegations` REST endpoint queries all unbonding delegations of a given delegator address.
+
+```bash
+/cosmos/staking/v1beta1/delegators/{delegatorAddr}/unbonding_delegations
+```
+
+Example:
+
+```bash
+curl -X GET \
+"http://localhost:1317/cosmos/staking/v1beta1/delegators/cosmos1nxv42u3lv642q0fuzu2qmrku27zgut3n3z7lll/unbonding_delegations" \
+-H "accept: application/json"
+```
+
+Example Output:
+
+```bash
+{
+ "unbonding_responses": [
+ {
+ "delegator_address": "cosmos1nxv42u3lv642q0fuzu2qmrku27zgut3n3z7lll",
+ "validator_address": "cosmosvaloper1e7mvqlz50ch6gw4yjfemsc069wfre4qwmw53kq",
+ "entries": [
+ {
+ "creation_height": "2442278",
+ "completion_time": "2021-10-12T10:59:03.797335857Z",
+ "initial_balance": "50000000000",
+ "balance": "50000000000"
+ }
+ ]
+ }
+ ],
+ "pagination": {
+ "next_key": null,
+ "total": "1"
+ }
+}
+```
+
+#### DelegatorValidators
+
+The `DelegatorValidators` REST endpoint queries validator information for all validators of a given delegator address.
+
+```bash
+/cosmos/staking/v1beta1/delegators/{delegatorAddr}/validators
+```
+
+Example:
+
+```bash
+curl -X GET \
+"http://localhost:1317/cosmos/staking/v1beta1/delegators/cosmos1xwazl8ftks4gn00y5x3c47auquc62ssune9ppv/validators" \
+-H "accept: application/json"
+```
+
+Example Output:
+
+```bash
+{
+ "validators": [
+ {
+ "operator_address": "cosmosvaloper1xwazl8ftks4gn00y5x3c47auquc62ssuvynw64",
+ "consensus_pubkey": {
+ "@type": "/cosmos.crypto.ed25519.PubKey",
+ "key": "5v4n3px3PkfNnKflSgepDnsMQR1hiNXnqOC11Y72/PQ="
+ },
+ "jailed": false,
+ "status": "BOND_STATUS_BONDED",
+ "tokens": "21592843799",
+ "delegator_shares": "21592843799.000000000000000000",
+ "description": {
+ "moniker": "jabbey",
+ "identity": "",
+ "website": "https://twitter.com/JoeAbbey",
+ "security_contact": "",
+ "details": "just another dad in the cosmos"
+ },
+ "unbonding_height": "0",
+ "unbonding_time": "1970-01-01T00:00:00Z",
+ "commission": {
+ "commission_rates": {
+ "rate": "0.100000000000000000",
+ "max_rate": "0.200000000000000000",
+ "max_change_rate": "0.100000000000000000"
+ },
+ "update_time": "2021-10-09T19:03:54.984821705Z"
+ },
+ "min_self_delegation": "1"
+ }
+ ],
+ "pagination": {
+ "next_key": null,
+ "total": "1"
+ }
+}
+```
+
+#### DelegatorValidator
+
+The `DelegatorValidator` REST endpoint queries validator information for a given delegator validator pair.
+
+```bash
+/cosmos/staking/v1beta1/delegators/{delegatorAddr}/validators/{validatorAddr}
+```
+
+Example:
+
+```bash
+curl -X GET \
+"http://localhost:1317/cosmos/staking/v1beta1/delegators/cosmos1xwazl8ftks4gn00y5x3c47auquc62ssune9ppv/validators/cosmosvaloper1xwazl8ftks4gn00y5x3c47auquc62ssuvynw64" \
+-H "accept: application/json"
+```
+
+Example Output:
+
+```bash
+{
+ "validator": {
+ "operator_address": "cosmosvaloper1xwazl8ftks4gn00y5x3c47auquc62ssuvynw64",
+ "consensus_pubkey": {
+ "@type": "/cosmos.crypto.ed25519.PubKey",
+ "key": "5v4n3px3PkfNnKflSgepDnsMQR1hiNXnqOC11Y72/PQ="
+ },
+ "jailed": false,
+ "status": "BOND_STATUS_BONDED",
+ "tokens": "21592843799",
+ "delegator_shares": "21592843799.000000000000000000",
+ "description": {
+ "moniker": "jabbey",
+ "identity": "",
+ "website": "https://twitter.com/JoeAbbey",
+ "security_contact": "",
+ "details": "just another dad in the cosmos"
+ },
+ "unbonding_height": "0",
+ "unbonding_time": "1970-01-01T00:00:00Z",
+ "commission": {
+ "commission_rates": {
+ "rate": "0.100000000000000000",
+ "max_rate": "0.200000000000000000",
+ "max_change_rate": "0.100000000000000000"
+ },
+ "update_time": "2021-10-09T19:03:54.984821705Z"
+ },
+ "min_self_delegation": "1"
+ }
+}
+```
+
+#### HistoricalInfo
+
+The `HistoricalInfo` REST endpoint queries the historical information for a given height.
+
+```bash
+/cosmos/staking/v1beta1/historical_info/{height}
+```
+
+Example:
+
+```bash
+curl -X GET "http://localhost:1317/cosmos/staking/v1beta1/historical_info/153332" -H "accept: application/json"
+```
+
+Example Output:
+
+```bash
+{
+ "hist": {
+ "header": {
+ "version": {
+ "block": "11",
+ "app": "0"
+ },
+ "chain_id": "cosmos-1",
+ "height": "153332",
+ "time": "2021-10-12T09:05:35.062230221Z",
+ "last_block_id": {
+ "hash": "NX8HevR5khb7H6NGKva+jVz7cyf0skF1CrcY9A0s+d8=",
+ "part_set_header": {
+ "total": 1,
+ "hash": "zLQ2FiKM5tooL3BInt+VVfgzjlBXfq0Hc8Iux/xrhdg="
+ }
+ },
+ "last_commit_hash": "P6IJrK8vSqU3dGEyRHnAFocoDGja0bn9euLuy09s350=",
+ "data_hash": "eUd+6acHWrNXYju8Js449RJ99lOYOs16KpqQl4SMrEM=",
+ "validators_hash": "mB4pravvMsJKgi+g8aYdSeNlt0kPjnRFyvtAQtaxcfw=",
+ "next_validators_hash": "mB4pravvMsJKgi+g8aYdSeNlt0kPjnRFyvtAQtaxcfw=",
+ "consensus_hash": "BICRvH3cKD93v7+R1zxE2ljD34qcvIZ0Bdi389qtoi8=",
+ "app_hash": "fuELArKRK+CptnZ8tu54h6xEleSWenHNmqC84W866fU=",
+ "last_results_hash": "p/BPexV4LxAzlVcPRvW+lomgXb6Yze8YLIQUo/4Kdgc=",
+ "evidence_hash": "47DEQpj8HBSa+/TImW+5JCeuQeRkm5NMpJWZG3hSuFU=",
+ "proposer_address": "G0MeY8xQx7ooOsni8KE/3R/Ib3Q="
+ },
+ "valset": [
+ {
+ "operator_address": "cosmosvaloper196ax4vc0lwpxndu9dyhvca7jhxp70rmcqcnylw",
+ "consensus_pubkey": {
+ "@type": "/cosmos.crypto.ed25519.PubKey",
+ "key": "/O7BtNW0pafwfvomgR4ZnfldwPXiFfJs9mHg3gwfv5Q="
+ },
+ "jailed": false,
+ "status": "BOND_STATUS_BONDED",
+ "tokens": "1416521659632",
+ "delegator_shares": "1416521659632.000000000000000000",
+ "description": {
+ "moniker": "SG-1",
+ "identity": "48608633F99D1B60",
+ "website": "https://sg-1.online",
+ "security_contact": "",
+ "details": "SG-1 - your favorite validator on cosmos. We offer 100% Soft Slash protection."
+ },
+ "unbonding_height": "0",
+ "unbonding_time": "1970-01-01T00:00:00Z",
+ "commission": {
+ "commission_rates": {
+ "rate": "0.037500000000000000",
+ "max_rate": "0.200000000000000000",
+ "max_change_rate": "0.030000000000000000"
+ },
+ "update_time": "2021-10-01T15:00:00Z"
+ },
+ "min_self_delegation": "1"
+ },
+ {
+ "operator_address": "cosmosvaloper1t8ehvswxjfn3ejzkjtntcyrqwvmvuknzmvtaaa",
+ "consensus_pubkey": {
+ "@type": "/cosmos.crypto.ed25519.PubKey",
+ "key": "uExZyjNLtr2+FFIhNDAMcQ8+yTrqE7ygYTsI7khkA5Y="
+ },
+ "jailed": false,
+ "status": "BOND_STATUS_BONDED",
+ "tokens": "1348298958808",
+ "delegator_shares": "1348298958808.000000000000000000",
+ "description": {
+ "moniker": "Cosmostation",
+ "identity": "AE4C403A6E7AA1AC",
+ "website": "https://www.cosmostation.io",
+ "security_contact": "admin@stamper.network",
+ "details": "Cosmostation validator node. Delegate your tokens and Start Earning Staking Rewards"
+ },
+ "unbonding_height": "0",
+ "unbonding_time": "1970-01-01T00:00:00Z",
+ "commission": {
+ "commission_rates": {
+ "rate": "0.050000000000000000",
+ "max_rate": "1.000000000000000000",
+ "max_change_rate": "0.200000000000000000"
+ },
+ "update_time": "2021-10-01T15:06:38.821314287Z"
+ },
+ "min_self_delegation": "1"
+ }
+ ]
+ }
+}
+```
+
+#### Parameters
+
+The `Parameters` REST endpoint queries the staking parameters.
+
+```bash
+/cosmos/staking/v1beta1/params
+```
+
+Example:
+
+```bash
+curl -X GET "http://localhost:1317/cosmos/staking/v1beta1/params" -H "accept: application/json"
+```
+
+Example Output:
+
+```bash
+{
+ "params": {
+ "unbonding_time": "2419200s",
+ "max_validators": 100,
+ "max_entries": 7,
+ "historical_entries": 10000,
+ "bond_denom": "stake"
+ }
+}
+```
+
+#### Pool
+
+The `Pool` REST endpoint queries the pool information.
+
+```bash
+/cosmos/staking/v1beta1/pool
+```
+
+Example:
+
+```bash
+curl -X GET "http://localhost:1317/cosmos/staking/v1beta1/pool" -H "accept: application/json"
+```
+
+Example Output:
+
+```bash
+{
+ "pool": {
+ "not_bonded_tokens": "432805737458",
+ "bonded_tokens": "15783637712645"
+ }
+}
+```
+
+#### Validators
+
+The `Validators` REST endpoint queries all validators that match the given status.
+
+```bash
+/cosmos/staking/v1beta1/validators
+```
+
+Example:
+
+```bash
+curl -X GET "http://localhost:1317/cosmos/staking/v1beta1/validators" -H "accept: application/json"
+```
+
+Example Output:
+
+```bash
+{
+ "validators": [
+ {
+ "operator_address": "cosmosvaloper1q3jsx9dpfhtyqqgetwpe5tmk8f0ms5qywje8tw",
+ "consensus_pubkey": {
+ "@type": "/cosmos.crypto.ed25519.PubKey",
+ "key": "N7BPyek2aKuNZ0N/8YsrqSDhGZmgVaYUBuddY8pwKaE="
+ },
+ "jailed": false,
+ "status": "BOND_STATUS_BONDED",
+ "tokens": "383301887799",
+ "delegator_shares": "383301887799.000000000000000000",
+ "description": {
+ "moniker": "SmartNodes",
+ "identity": "D372724899D1EDC8",
+ "website": "https://smartnodes.co",
+ "security_contact": "",
+ "details": "Earn Rewards with Crypto Staking & Node Deployment"
+ },
+ "unbonding_height": "0",
+ "unbonding_time": "1970-01-01T00:00:00Z",
+ "commission": {
+ "commission_rates": {
+ "rate": "0.050000000000000000",
+ "max_rate": "0.200000000000000000",
+ "max_change_rate": "0.100000000000000000"
+ },
+ "update_time": "2021-10-01T15:51:31.596618510Z"
+ },
+ "min_self_delegation": "1"
+ },
+ {
+ "operator_address": "cosmosvaloper1q5ku90atkhktze83j9xjaks2p7uruag5zp6wt7",
+ "consensus_pubkey": {
+ "@type": "/cosmos.crypto.ed25519.PubKey",
+ "key": "GDNpuKDmCg9GnhnsiU4fCWktuGUemjNfvpCZiqoRIYA="
+ },
+ "jailed": false,
+ "status": "BOND_STATUS_UNBONDING",
+ "tokens": "1017819654",
+ "delegator_shares": "1017819654.000000000000000000",
+ "description": {
+ "moniker": "Noderunners",
+ "identity": "812E82D12FEA3493",
+ "website": "http://noderunners.biz",
+ "security_contact": "info@noderunners.biz",
+ "details": "Noderunners is a professional validator in POS networks. We have a huge node running experience, reliable soft and hardware. Our commissions are always low, our support to delegators is always full. Stake with us and start receiving your cosmos rewards now!"
+ },
+ "unbonding_height": "147302",
+ "unbonding_time": "2021-11-08T22:58:53.718662452Z",
+ "commission": {
+ "commission_rates": {
+ "rate": "0.050000000000000000",
+ "max_rate": "0.200000000000000000",
+ "max_change_rate": "0.100000000000000000"
+ },
+ "update_time": "2021-10-04T18:02:21.446645619Z"
+ },
+ "min_self_delegation": "1"
+ }
+ ],
+ "pagination": {
+ "next_key": "FONDBFkE4tEEf7yxWWKOD49jC2NK",
+ "total": "2"
+ }
+}
+```
+
+#### Validator
+
+The `Validator` REST endpoint queries validator information for a given validator address.
+
+```bash
+/cosmos/staking/v1beta1/validators/{validatorAddr}
+```
+
+Example:
+
+```bash
+curl -X GET \
+"http://localhost:1317/cosmos/staking/v1beta1/validators/cosmosvaloper16msryt3fqlxtvsy8u5ay7wv2p8mglfg9g70e3q" \
+-H "accept: application/json"
+```
+
+Example Output:
+
+```bash
+{
+ "validator": {
+ "operator_address": "cosmosvaloper16msryt3fqlxtvsy8u5ay7wv2p8mglfg9g70e3q",
+ "consensus_pubkey": {
+ "@type": "/cosmos.crypto.ed25519.PubKey",
+ "key": "sIiexdJdYWn27+7iUHQJDnkp63gq/rzUq1Y+fxoGjXc="
+ },
+ "jailed": false,
+ "status": "BOND_STATUS_BONDED",
+ "tokens": "33027900000",
+ "delegator_shares": "33027900000.000000000000000000",
+ "description": {
+ "moniker": "Witval",
+ "identity": "51468B615127273A",
+ "website": "",
+ "security_contact": "",
+ "details": "Witval is the validator arm from Vitwit. Vitwit is into software consulting and services business since 2015. We are working closely with Cosmos ecosystem since 2018. We are also building tools for the ecosystem, Aneka is our explorer for the cosmos ecosystem."
+ },
+ "unbonding_height": "0",
+ "unbonding_time": "1970-01-01T00:00:00Z",
+ "commission": {
+ "commission_rates": {
+ "rate": "0.050000000000000000",
+ "max_rate": "0.200000000000000000",
+ "max_change_rate": "0.020000000000000000"
+ },
+ "update_time": "2021-10-01T19:24:52.663191049Z"
+ },
+ "min_self_delegation": "1"
+ }
+}
+```
+
+#### ValidatorDelegations
+
+The `ValidatorDelegations` REST endpoint queries delegate information for a given validator.
+
+```bash
+/cosmos/staking/v1beta1/validators/{validatorAddr}/delegations
+```
+
+Example:
+
+```bash
+curl -X GET "http://localhost:1317/cosmos/staking/v1beta1/validators/cosmosvaloper16msryt3fqlxtvsy8u5ay7wv2p8mglfg9g70e3q/delegations" -H "accept: application/json"
+```
+
+Example Output:
+
+```bash
+{
+ "delegation_responses": [
+ {
+ "delegation": {
+ "delegator_address": "cosmos190g5j8aszqhvtg7cprmev8xcxs6csra7xnk3n3",
+ "validator_address": "cosmosvaloper16msryt3fqlxtvsy8u5ay7wv2p8mglfg9g70e3q",
+ "shares": "31000000000.000000000000000000"
+ },
+ "balance": {
+ "denom": "stake",
+ "amount": "31000000000"
+ }
+ },
+ {
+ "delegation": {
+ "delegator_address": "cosmos1ddle9tczl87gsvmeva3c48nenyng4n56qwq4ee",
+ "validator_address": "cosmosvaloper16msryt3fqlxtvsy8u5ay7wv2p8mglfg9g70e3q",
+ "shares": "628470000.000000000000000000"
+ },
+ "balance": {
+ "denom": "stake",
+ "amount": "628470000"
+ }
+ },
+ {
+ "delegation": {
+ "delegator_address": "cosmos10fdvkczl76m040smd33lh9xn9j0cf26kk4s2nw",
+ "validator_address": "cosmosvaloper16msryt3fqlxtvsy8u5ay7wv2p8mglfg9g70e3q",
+ "shares": "838120000.000000000000000000"
+ },
+ "balance": {
+ "denom": "stake",
+ "amount": "838120000"
+ }
+ },
+ {
+ "delegation": {
+ "delegator_address": "cosmos1n8f5fknsv2yt7a8u6nrx30zqy7lu9jfm0t5lq8",
+ "validator_address": "cosmosvaloper16msryt3fqlxtvsy8u5ay7wv2p8mglfg9g70e3q",
+ "shares": "500000000.000000000000000000"
+ },
+ "balance": {
+ "denom": "stake",
+ "amount": "500000000"
+ }
+ },
+ {
+ "delegation": {
+ "delegator_address": "cosmos16msryt3fqlxtvsy8u5ay7wv2p8mglfg9hrek2e",
+ "validator_address": "cosmosvaloper16msryt3fqlxtvsy8u5ay7wv2p8mglfg9g70e3q",
+ "shares": "61310000.000000000000000000"
+ },
+ "balance": {
+ "denom": "stake",
+ "amount": "61310000"
+ }
+ }
+ ],
+ "pagination": {
+ "next_key": null,
+ "total": "5"
+ }
+}
+```
+
+#### Delegation
+
+The `Delegation` REST endpoint queries delegate information for a given validator delegator pair.
+
+```bash
+/cosmos/staking/v1beta1/validators/{validatorAddr}/delegations/{delegatorAddr}
+```
+
+Example:
+
+```bash
+curl -X GET \
+"http://localhost:1317/cosmos/staking/v1beta1/validators/cosmosvaloper16msryt3fqlxtvsy8u5ay7wv2p8mglfg9g70e3q/delegations/cosmos1n8f5fknsv2yt7a8u6nrx30zqy7lu9jfm0t5lq8" \
+-H "accept: application/json"
+```
+
+Example Output:
+
+```bash
+{
+ "delegation_response": {
+ "delegation": {
+ "delegator_address": "cosmos1n8f5fknsv2yt7a8u6nrx30zqy7lu9jfm0t5lq8",
+ "validator_address": "cosmosvaloper16msryt3fqlxtvsy8u5ay7wv2p8mglfg9g70e3q",
+ "shares": "500000000.000000000000000000"
+ },
+ "balance": {
+ "denom": "stake",
+ "amount": "500000000"
+ }
+ }
+}
+```
+
+#### UnbondingDelegation
+
+The `UnbondingDelegation` REST endpoint queries unbonding information for a given validator delegator pair.
+
+```bash
+/cosmos/staking/v1beta1/validators/{validatorAddr}/delegations/{delegatorAddr}/unbonding_delegation
+```
+
+Example:
+
+```bash
+curl -X GET \
+"http://localhost:1317/cosmos/staking/v1beta1/validators/cosmosvaloper13v4spsah85ps4vtrw07vzea37gq5la5gktlkeu/delegations/cosmos1ze2ye5u5k3qdlexvt2e0nn0508p04094ya0qpm/unbonding_delegation" \
+-H "accept: application/json"
+```
+
+Example Output:
+
+```bash
+{
+ "unbond": {
+ "delegator_address": "cosmos1ze2ye5u5k3qdlexvt2e0nn0508p04094ya0qpm",
+ "validator_address": "cosmosvaloper13v4spsah85ps4vtrw07vzea37gq5la5gktlkeu",
+ "entries": [
+ {
+ "creation_height": "153687",
+ "completion_time": "2021-11-09T09:41:18.352401903Z",
+ "initial_balance": "525111",
+ "balance": "525111"
+ }
+ ]
+ }
+}
+```
+
+#### ValidatorUnbondingDelegations
+
+The `ValidatorUnbondingDelegations` REST endpoint queries unbonding delegations of a validator.
+
+```bash
+/cosmos/staking/v1beta1/validators/{validatorAddr}/unbonding_delegations
+```
+
+Example:
+
+```bash
+curl -X GET \
+"http://localhost:1317/cosmos/staking/v1beta1/validators/cosmosvaloper13v4spsah85ps4vtrw07vzea37gq5la5gktlkeu/unbonding_delegations" \
+-H "accept: application/json"
+```
+
+Example Output:
+
+```bash
+{
+ "unbonding_responses": [
+ {
+ "delegator_address": "cosmos1q9snn84jfrd9ge8t46kdcggpe58dua82vnj7uy",
+ "validator_address": "cosmosvaloper13v4spsah85ps4vtrw07vzea37gq5la5gktlkeu",
+ "entries": [
+ {
+ "creation_height": "90998",
+ "completion_time": "2021-11-05T00:14:37.005841058Z",
+ "initial_balance": "24000000",
+ "balance": "24000000"
+ }
+ ]
+ },
+ {
+ "delegator_address": "cosmos1qf36e6wmq9h4twhdvs6pyq9qcaeu7ye0s3dqq2",
+ "validator_address": "cosmosvaloper13v4spsah85ps4vtrw07vzea37gq5la5gktlkeu",
+ "entries": [
+ {
+ "creation_height": "47478",
+ "completion_time": "2021-11-01T22:47:26.714116854Z",
+ "initial_balance": "8000000",
+ "balance": "8000000"
+ }
+ ]
+ }
+ ],
+ "pagination": {
+ "next_key": null,
+ "total": "2"
+ }
+}
+```
diff --git a/copy-of-sdk-docs/build/modules/upgrade/README.md b/copy-of-sdk-docs/build/modules/upgrade/README.md
new file mode 100644
index 00000000..0ff5ad01
--- /dev/null
+++ b/copy-of-sdk-docs/build/modules/upgrade/README.md
@@ -0,0 +1,609 @@
+---
+sidebar_position: 1
+---
+
+# `x/upgrade`
+
+## Abstract
+
+`x/upgrade` is an implementation of a Cosmos SDK module that facilitates smoothly
+upgrading a live Cosmos chain to a new (breaking) software version. It accomplishes this by
+providing a `PreBlocker` hook that prevents the blockchain state machine from
+proceeding once a pre-defined upgrade block height has been reached.
+
+The module does not prescribe anything regarding how governance decides to do an
+upgrade; it only provides the mechanism for coordinating the upgrade safely. Without software
+support for upgrades, upgrading a live chain is risky because all of the validators
+need to pause their state machines at exactly the same point in the process. If
+this is not done correctly, there can be state inconsistencies, which are hard to
+recover from.
+
+* [Concepts](#concepts)
+* [State](#state)
+* [Events](#events)
+* [Client](#client)
+ * [CLI](#cli)
+ * [REST](#rest)
+ * [gRPC](#grpc)
+* [Resources](#resources)
+
+## Concepts
+
+### Plan
+
+The `x/upgrade` module defines a `Plan` type in which a live upgrade is scheduled
+to occur. A `Plan` can be scheduled at a specific block height.
+A `Plan` is created once a (frozen) release candidate along with an appropriate upgrade
+`Handler` (see below) is agreed upon, where the `Name` of a `Plan` corresponds to a
+specific `Handler`. Typically, a `Plan` is created through a governance proposal
+process; if it is voted upon and passes, the upgrade is scheduled. The `Info` of a `Plan`
+may contain various metadata about the upgrade, typically application-specific
+upgrade info to be included on-chain, such as a git commit that validators could
+automatically upgrade to.
+
+```go
+type Plan struct {
+ Name string
+ Height int64
+ Info string
+}
+```
+
+#### Sidecar Process
+
+If an operator running the application binary also runs a sidecar process to assist
+in the automatic download and upgrade of a binary, the `Info` allows this process to
+be seamless. This tool is [Cosmovisor](https://github.com/cosmos/cosmos-sdk/tree/main/tools/cosmovisor#readme).
+
+### Handler
+
+The `x/upgrade` module facilitates upgrading from major version X to major version Y. To
+accomplish this, node operators must first upgrade their current binary to a new
+binary that has a corresponding `Handler` for the new version Y. It is assumed that
+this version has been fully tested and approved by the community at large. This
+`Handler` defines what state migrations need to occur before the new binary Y
+can successfully run the chain. Naturally, this `Handler` is application specific
+and not defined on a per-module basis. Registering a `Handler` is done via
+`Keeper#SetUpgradeHandler` in the application.
+
+```go
+type UpgradeHandler func(Context, Plan, VersionMap) (VersionMap, error)
+```
+
+During each `PreBlocker` execution, the `x/upgrade` module checks whether a
+`Plan` is scheduled at the current height. If so, the corresponding
+`Handler` is executed. If a `Plan` is expected to execute but no `Handler` is registered,
+or if the binary was upgraded too early, the node will gracefully panic and exit.
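+
+As a sketch, a minimal handler can be written against the `UpgradeHandler` signature above. The `Context`, `Plan`, and `VersionMap` types below are simplified stand-ins for the SDK's `sdk.Context`, `upgradetypes.Plan`, and `module.VersionMap`, and the `"bank"` version bump is purely illustrative; a real handler would typically call `ModuleManager.RunMigrations` instead:
+
+```go
+package main
+
+import "fmt"
+
+// Simplified stand-ins for the SDK types (illustrative only).
+type Context struct{}
+type Plan struct {
+	Name   string
+	Height int64
+	Info   string
+}
+type VersionMap map[string]uint64
+
+type UpgradeHandler func(Context, Plan, VersionMap) (VersionMap, error)
+
+// v2Handler runs the state migrations for a hypothetical "v2" plan and
+// returns the updated consensus version map.
+var v2Handler UpgradeHandler = func(ctx Context, plan Plan, fromVM VersionMap) (VersionMap, error) {
+	fromVM["bank"] = fromVM["bank"] + 1 // hypothetical migration of one module
+	return fromVM, nil
+}
+
+func main() {
+	vm, err := v2Handler(Context{}, Plan{Name: "v2", Height: 100}, VersionMap{"bank": 2})
+	if err != nil {
+		panic(err)
+	}
+	fmt.Println(vm["bank"]) // the bumped consensus version
+}
+```
+
+The returned `VersionMap` is what the module manager persists, so a handler must return an entry for every module whose consensus version changed.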
+
+### StoreLoader
+
+The `x/upgrade` module also facilitates store migrations as part of the upgrade. The
+`StoreLoader` sets the migrations that need to occur before the new binary can
+successfully run the chain. This `StoreLoader` is also application specific and
+not defined on a per-module basis. Registering this `StoreLoader` is done via
+`app#SetStoreLoader` in the application.
+
+```go
+func UpgradeStoreLoader (upgradeHeight int64, storeUpgrades *store.StoreUpgrades) baseapp.StoreLoader
+```
+
+If there is a planned upgrade and the upgrade height is reached, the old binary writes the `Plan` to disk before panicking.
+
+This information is critical to ensure that the `StoreUpgrades` happen smoothly at the correct height for the
+expected upgrade. It prevents the new binary from executing `StoreUpgrades` multiple
+times on every restart. Also, if there are multiple upgrades planned at the same height, the `Name`
+ensures that the `StoreUpgrades` take place only in the planned upgrade handler.
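+
+The plan information written to disk is a small JSON document containing the `Plan` name and height; as a self-contained sketch (the plan name, height, and exact file shape here are assumptions based on the `Plan` type above, not the SDK's actual serialization):
+
+```go
+package main
+
+import (
+	"encoding/json"
+	"fmt"
+)
+
+// upgradeInfo mirrors the Plan fields the old binary persists to disk
+// before halting (sketch; simplified).
+type upgradeInfo struct {
+	Name   string `json:"name"`
+	Height int64  `json:"height"`
+}
+
+func main() {
+	raw := []byte(`{"name":"v2-upgrade","height":140000}`) // hypothetical plan
+	var info upgradeInfo
+	if err := json.Unmarshal(raw, &info); err != nil {
+		panic(err)
+	}
+	// The new binary compares Name and Height before applying StoreUpgrades,
+	// so store migrations run exactly once, at the planned height.
+	fmt.Println(info.Name, info.Height)
+}
+```
+
+Matching on both name and height is what lets a restarted binary skip `StoreUpgrades` it has already applied.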
+
+### Proposal
+
+Typically, a `Plan` is proposed and submitted through governance via a proposal
+containing a `MsgSoftwareUpgrade` message.
+This proposal adheres to the standard governance process. If the proposal passes,
+the `Plan`, which targets a specific `Handler`, is persisted and scheduled. The
+upgrade can be delayed or hastened by updating the `Plan.Height` in a new proposal.
+
+```protobuf reference
+https://github.com/cosmos/cosmos-sdk/blob/v0.47.0-rc1/proto/cosmos/upgrade/v1beta1/tx.proto#L29-L41
+```
+
+#### Cancelling Upgrade Proposals
+
+Upgrade proposals can be cancelled. There exists a gov-enabled `MsgCancelUpgrade`
+message type, which can be embedded in a proposal, voted on and, if passed, will
+remove the scheduled upgrade `Plan`.
+Of course this requires that the upgrade was known to be a bad idea well before the
+upgrade itself, to allow time for a vote.
+
+```protobuf reference
+https://github.com/cosmos/cosmos-sdk/blob/v0.47.0-rc1/proto/cosmos/upgrade/v1beta1/tx.proto#L48-L57
+```
+
+To allow for this possibility, the upgrade height should be set to at least
+`2 * (VotingPeriod + DepositPeriod) + (SafetyDelta)` from the start of the
+upgrade proposal. The `SafetyDelta` is the time between the success of an
+upgrade proposal and the realization that it was a bad idea (due to external social consensus).
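+
+As a worked example, with hypothetical governance parameters this minimum lead time converts into a block-height offset roughly as follows (the periods and block time below are assumptions; read the real values from your chain's `x/gov` params):
+
+```go
+package main
+
+import (
+	"fmt"
+	"time"
+)
+
+func main() {
+	// Hypothetical parameters, for illustration only.
+	votingPeriod := 14 * 24 * time.Hour
+	depositPeriod := 14 * 24 * time.Hour
+	safetyDelta := 7 * 24 * time.Hour
+
+	lead := 2*(votingPeriod+depositPeriod) + safetyDelta
+	blockTime := 6 * time.Second // assumed average block time
+
+	fmt.Printf("upgrade height >= current height + %d blocks\n", int64(lead/blockTime))
+}
+```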
+
+A `MsgCancelUpgrade` proposal can also be made while the original
+`MsgSoftwareUpgrade` proposal is still being voted upon, as long as the `VotingPeriod`
+ends after the `MsgSoftwareUpgrade` proposal.
+
+## State
+
+The internal state of the `x/upgrade` module is relatively minimal and simple. The
+state contains the currently active upgrade `Plan` (if one exists) under key
+`0x0`, and whether a `Plan` has been marked as "done" under key `0x1`. The state
+also contains the consensus versions of all app modules in the application. The versions
+are stored as big-endian `uint64` values, and can be accessed with prefix `0x2` appended
+by the corresponding module name of type `string`. The state also maintains a
+`Protocol Version` which can be accessed under key `0x3`.
+
+* Plan: `0x0 -> Plan`
+* Done: `0x1 | byte(plan name) -> BigEndian(Block Height)`
+* ConsensusVersion: `0x2 | byte(module name) -> BigEndian(Module Consensus Version)`
+* ProtocolVersion: `0x3 -> BigEndian(Protocol Version)`
+
+The `x/upgrade` module contains no genesis state.
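+
+For illustration, the `ConsensusVersion` key/value layout above can be reproduced with the standard library (a sketch of the byte layout only, using `bank` and version `2` as example values):
+
+```go
+package main
+
+import (
+	"encoding/binary"
+	"fmt"
+)
+
+func main() {
+	// ConsensusVersion: 0x2 | byte(module name) -> BigEndian(version)
+	key := append([]byte{0x2}, []byte("bank")...)
+
+	val := make([]byte, 8)
+	binary.BigEndian.PutUint64(val, 2) // e.g. bank's consensus version is 2
+
+	fmt.Printf("%x -> %x\n", key, val)
+}
+```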
+
+## Events
+
+The `x/upgrade` module does not emit any events by itself. Any and all proposal-related
+events are emitted through the `x/gov` module.
+
+## Client
+
+### CLI
+
+A user can query and interact with the `upgrade` module using the CLI.
+
+#### Query
+
+The `query` commands allow users to query `upgrade` state.
+
+```bash
+simd query upgrade --help
+```
+
+##### applied
+
+The `applied` command allows users to query the block header for the height at which a completed upgrade was applied.
+
+```bash
+simd query upgrade applied [upgrade-name] [flags]
+```
+
+If `upgrade-name` was previously executed on the chain, this returns the header for the block at which it was applied.
+This helps a client determine which binary was valid over a given range of blocks, and provides more context for understanding past migrations.
+
+Example:
+
+```bash
+simd query upgrade applied "test-upgrade"
+```
+
+Example Output:
+
+```bash
+{
+  "block_id": {
+    "hash": "A769136351786B9034A5F196DC53F7E50FCEB53B48FA0786E1BFC45A0BB646B5",
+    "parts": {
+      "total": 1,
+      "hash": "B13CBD23011C7480E6F11BE4594EE316548648E6A666B3575409F8F16EC6939E"
+    }
+  },
+  "block_size": "7213",
+  "header": {
+    "version": {
+      "block": "11"
+    },
+    "chain_id": "testnet-2",
+    "height": "455200",
+    "time": "2021-04-10T04:37:57.085493838Z",
+    "last_block_id": {
+      "hash": "0E8AD9309C2DC411DF98217AF59E044A0E1CCEAE7C0338417A70338DF50F4783",
+      "parts": {
+        "total": 1,
+        "hash": "8FE572A48CD10BC2CBB02653CA04CA247A0F6830FF19DC972F64D339A355E77D"
+      }
+    },
+    "last_commit_hash": "DE890239416A19E6164C2076B837CC1D7F7822FC214F305616725F11D2533140",
+    "data_hash": "E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855",
+    "validators_hash": "A31047ADE54AE9072EE2A12FF260A8990BA4C39F903EAF5636B50D58DBA72582",
+    "next_validators_hash": "A31047ADE54AE9072EE2A12FF260A8990BA4C39F903EAF5636B50D58DBA72582",
+    "consensus_hash": "048091BC7DDC283F77BFBF91D73C44DA58C3DF8A9CBC867405D8B7F3DAADA22F",
+    "app_hash": "28ECC486AFC332BA6CC976706DBDE87E7D32441375E3F10FD084CD4BAF0DA021",
+    "last_results_hash": "E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855",
+    "evidence_hash": "E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855",
+    "proposer_address": "2ABC4854B1A1C5AA8403C4EA853A81ACA901CC76"
+  },
+  "num_txs": "0"
+}
+```
+
+##### module versions
+
+The `module_versions` command gets a list of module names and their respective consensus versions.
+
+Following the command with a specific module name will return only
+that module's information.
+
+```bash
+simd query upgrade module_versions [optional module_name] [flags]
+```
+
+Example:
+
+```bash
+simd query upgrade module_versions
+```
+
+Example Output:
+
+```bash
+module_versions:
+- name: auth
+ version: "2"
+- name: authz
+ version: "1"
+- name: bank
+ version: "2"
+- name: distribution
+ version: "2"
+- name: evidence
+ version: "1"
+- name: feegrant
+ version: "1"
+- name: genutil
+ version: "1"
+- name: gov
+ version: "2"
+- name: ibc
+ version: "2"
+- name: mint
+ version: "1"
+- name: params
+ version: "1"
+- name: slashing
+ version: "2"
+- name: staking
+ version: "2"
+- name: transfer
+ version: "1"
+- name: upgrade
+ version: "1"
+- name: vesting
+ version: "1"
+```
+
+Example:
+
+```bash
+simd query upgrade module_versions ibc
+```
+
+Example Output:
+
+```bash
+module_versions:
+- name: ibc
+ version: "2"
+```
+
+##### plan
+
+The `plan` command gets the currently scheduled upgrade plan, if one exists.
+
+```bash
+simd query upgrade plan [flags]
+```
+
+Example:
+
+```bash
+simd query upgrade plan
+```
+
+Example Output:
+
+```bash
+height: "130"
+info: ""
+name: test-upgrade
+time: "0001-01-01T00:00:00Z"
+upgraded_client_state: null
+```
+
+#### Transactions
+
+The upgrade module supports the following transactions:
+
+* `software-upgrade` - submits an upgrade proposal:
+
+```bash
+simd tx upgrade software-upgrade v2 --title="Test Proposal" --summary="testing" --deposit="100000000stake" --upgrade-height 1000000 \
+--upgrade-info '{ "binaries": { "linux/amd64":"https://example.com/simd.zip?checksum=sha256:aec070645fe53ee3b3763059376134f058cc337247c978add178b6ccdfb0019f" } }' --from cosmos1..
+```
+
+* `cancel-software-upgrade` - cancels a previously submitted upgrade proposal:
+
+```bash
+simd tx upgrade cancel-software-upgrade --title="Test Proposal" --summary="testing" --deposit="100000000stake" --from cosmos1..
+```
+
+### REST
+
+A user can query the `upgrade` module using REST endpoints.
+
+#### Applied Plan
+
+`AppliedPlan` queries a previously applied upgrade plan by its name.
+
+```bash
+/cosmos/upgrade/v1beta1/applied_plan/{name}
+```
+
+Example:
+
+```bash
+curl -X GET "http://localhost:1317/cosmos/upgrade/v1beta1/applied_plan/v2.0-upgrade" -H "accept: application/json"
+```
+
+Example Output:
+
+```bash
+{
+ "height": "30"
+}
+```
+
+#### Current Plan
+
+`CurrentPlan` queries the current upgrade plan.
+
+```bash
+/cosmos/upgrade/v1beta1/current_plan
+```
+
+Example:
+
+```bash
+curl -X GET "http://localhost:1317/cosmos/upgrade/v1beta1/current_plan" -H "accept: application/json"
+```
+
+Example Output:
+
+```bash
+{
+ "plan": "v2.1-upgrade"
+}
+```
+
+#### Module versions
+
+`ModuleVersions` queries the list of module versions from state.
+
+```bash
+/cosmos/upgrade/v1beta1/module_versions
+```
+
+Example:
+
+```bash
+curl -X GET "http://localhost:1317/cosmos/upgrade/v1beta1/module_versions" -H "accept: application/json"
+```
+
+Example Output:
+
+```bash
+{
+ "module_versions": [
+ {
+ "name": "auth",
+ "version": "2"
+ },
+ {
+ "name": "authz",
+ "version": "1"
+ },
+ {
+ "name": "bank",
+ "version": "2"
+ },
+ {
+ "name": "distribution",
+ "version": "2"
+ },
+ {
+ "name": "evidence",
+ "version": "1"
+ },
+ {
+ "name": "feegrant",
+ "version": "1"
+ },
+ {
+ "name": "genutil",
+ "version": "1"
+ },
+ {
+ "name": "gov",
+ "version": "2"
+ },
+ {
+ "name": "ibc",
+ "version": "2"
+ },
+ {
+ "name": "mint",
+ "version": "1"
+ },
+ {
+ "name": "params",
+ "version": "1"
+ },
+ {
+ "name": "slashing",
+ "version": "2"
+ },
+ {
+ "name": "staking",
+ "version": "2"
+ },
+ {
+ "name": "transfer",
+ "version": "1"
+ },
+ {
+ "name": "upgrade",
+ "version": "1"
+ },
+ {
+ "name": "vesting",
+ "version": "1"
+ }
+ ]
+}
+```
+
+### gRPC
+
+A user can query the `upgrade` module using gRPC endpoints.
+
+#### Applied Plan
+
+`AppliedPlan` queries a previously applied upgrade plan by its name.
+
+```bash
+cosmos.upgrade.v1beta1.Query/AppliedPlan
+```
+
+Example:
+
+```bash
+grpcurl -plaintext \
+ -d '{"name":"v2.0-upgrade"}' \
+ localhost:9090 \
+ cosmos.upgrade.v1beta1.Query/AppliedPlan
+```
+
+Example Output:
+
+```bash
+{
+ "height": "30"
+}
+```
+
+#### Current Plan
+
+`CurrentPlan` queries the current upgrade plan.
+
+```bash
+cosmos.upgrade.v1beta1.Query/CurrentPlan
+```
+
+Example:
+
+```bash
+grpcurl -plaintext localhost:9090 cosmos.upgrade.v1beta1.Query/CurrentPlan
+```
+
+Example Output:
+
+```bash
+{
+ "plan": "v2.1-upgrade"
+}
+```
+
+#### Module versions
+
+`ModuleVersions` queries the list of module versions from state.
+
+```bash
+cosmos.upgrade.v1beta1.Query/ModuleVersions
+```
+
+Example:
+
+```bash
+grpcurl -plaintext localhost:9090 cosmos.upgrade.v1beta1.Query/ModuleVersions
+```
+
+Example Output:
+
+```bash
+{
+ "module_versions": [
+ {
+ "name": "auth",
+ "version": "2"
+ },
+ {
+ "name": "authz",
+ "version": "1"
+ },
+ {
+ "name": "bank",
+ "version": "2"
+ },
+ {
+ "name": "distribution",
+ "version": "2"
+ },
+ {
+ "name": "evidence",
+ "version": "1"
+ },
+ {
+ "name": "feegrant",
+ "version": "1"
+ },
+ {
+ "name": "genutil",
+ "version": "1"
+ },
+ {
+ "name": "gov",
+ "version": "2"
+ },
+ {
+ "name": "ibc",
+ "version": "2"
+ },
+ {
+ "name": "mint",
+ "version": "1"
+ },
+ {
+ "name": "params",
+ "version": "1"
+ },
+ {
+ "name": "slashing",
+ "version": "2"
+ },
+ {
+ "name": "staking",
+ "version": "2"
+ },
+ {
+ "name": "transfer",
+ "version": "1"
+ },
+ {
+ "name": "upgrade",
+ "version": "1"
+ },
+ {
+ "name": "vesting",
+ "version": "1"
+ }
+ ]
+}
+```
+
+## Resources
+
+A list of (external) resources to learn more about the `x/upgrade` module.
+
+* [Cosmos Dev Series: Cosmos Blockchain Upgrade](https://medium.com/web3-surfers/cosmos-dev-series-cosmos-sdk-based-blockchain-upgrade-b5e99181554c) - The blog post that explains how software upgrades work in detail.
diff --git a/copy-of-sdk-docs/build/packages/01-depinject.md b/copy-of-sdk-docs/build/packages/01-depinject.md
new file mode 100644
index 00000000..4fa96325
--- /dev/null
+++ b/copy-of-sdk-docs/build/packages/01-depinject.md
@@ -0,0 +1,205 @@
+---
+sidebar_position: 1
+---
+
+# Depinject
+
+> **DISCLAIMER**: This is a **beta** package. The SDK team is actively working on this feature and we are looking for feedback from the community. Please try it out and let us know what you think.
+
+## Overview
+
+`depinject` is a dependency injection (DI) framework for the Cosmos SDK, designed to streamline the process of building and configuring blockchain applications. It works in conjunction with the `core/appconfig` module to replace the majority of boilerplate code in `app.go` with a configuration file in Go, YAML, or JSON format.
+
+`depinject` is particularly useful for developing blockchain applications:
+
+* With multiple interdependent components, modules, or services, helping manage their dependencies effectively.
+* That require decoupling of these components, making it easier to test, modify, or replace individual parts without affecting the entire system.
+* That want to simplify the setup and initialisation of modules and their dependencies by reducing boilerplate code and automating dependency management.
+
+By using `depinject`, developers can achieve:
+
+* Cleaner and more organised code.
+* Improved modularity and maintainability.
+* Enhanced development velocity and overall code quality.
+
+* [Go Doc](https://pkg.go.dev/cosmossdk.io/depinject)
+
+## Usage
+
+The `depinject` framework, based on dependency injection concepts, streamlines the management of dependencies within your blockchain application using its Configuration API. This API offers a set of functions and methods to create easy to use configurations, making it simple to define, modify, and access dependencies and their relationships.
+
+A core component of the [Configuration API](https://pkg.go.dev/github.com/cosmos/cosmos-sdk/depinject#Config) is the `Provide` function, which allows you to register provider functions that supply dependencies. Inspired by constructor injection, these provider functions form the basis of the dependency tree, enabling the management and resolution of dependencies in a structured and maintainable manner. Additionally, `depinject` supports interface types as inputs to provider functions, offering flexibility and decoupling between components, similar to interface injection concepts.
+
+By leveraging `depinject` and its Configuration API, you can efficiently handle dependencies in your blockchain application, ensuring a clean, modular, and well-organised codebase.
+
+Example:
+
+```go
+package main
+
+import (
+ "fmt"
+
+ "cosmossdk.io/depinject"
+)
+
+type AnotherInt int
+
+func GetInt() int { return 1 }
+func GetAnotherInt() AnotherInt { return 2 }
+
+func main() {
+ var (
+ x int
+ y AnotherInt
+ )
+
+ fmt.Printf("Before (%v, %v)\n", x, y)
+ depinject.Inject(
+ depinject.Provide(
+ GetInt,
+ GetAnotherInt,
+ ),
+ &x,
+ &y,
+ )
+ fmt.Printf("After (%v, %v)\n", x, y)
+}
+```
+
+In this example, `depinject.Provide` registers two provider functions that return `int` and `AnotherInt` values. The `depinject.Inject` function is then used to inject these values into the variables `x` and `y`.
+
+Provider functions serve as the basis for the dependency tree. They are analysed to identify their inputs as dependencies and their outputs as dependents. These dependents can either be used by another provider function or be stored outside the DI container (e.g., `&x` and `&y` in the example above). Provider functions must be exported.
+
+### Interface type resolution
+
+`depinject` supports the use of interface types as inputs to provider functions, which helps decouple dependencies between modules. This approach is particularly useful for managing complex systems with multiple modules, such as the Cosmos SDK, where dependencies need to be flexible and maintainable.
+
+For example, `x/bank` expects an [AccountKeeper](https://pkg.go.dev/github.com/cosmos/cosmos-sdk/x/bank/types#AccountKeeper) interface as [input to ProvideModule](https://github.com/cosmos/cosmos-sdk/tree/release/v0.50.x/x/bank/module.go#L208-L260). `SimApp` uses the implementation in `x/auth`, but the modular design allows for easy changes to the implementation if needed.
+
+Consider the following example:
+
+```go
+package duck
+
+type Duck interface {
+ quack()
+}
+
+type AlsoDuck interface {
+ quack()
+}
+
+type Mallard struct{}
+type Canvasback struct{}
+
+func (duck Mallard) quack() {}
+func (duck Canvasback) quack() {}
+
+type Pond struct {
+ Duck AlsoDuck
+}
+```
+
+And the following provider functions:
+
+```go
+func GetMallard() Mallard {
+	return Mallard{}
+}
+
+func GetPond(duck Duck) Pond {
+ return Pond{Duck: duck}
+}
+
+func GetCanvasback() Canvasback {
+ return Canvasback{}
+}
+```
+
+In this example, there's a `Pond` struct that has a `Duck` field of type `AlsoDuck`. The `depinject` framework can automatically resolve the appropriate implementation when there's only one available, as shown below:
+
+```go
+var pond Pond
+
+depinject.Inject(
+ depinject.Provide(
+ GetMallard,
+ GetPond,
+ ),
+ &pond)
+```
+
+This code snippet results in the `Duck` field of `Pond` being implicitly bound to the `Mallard` implementation because it's the only implementation of the `Duck` interface in the container.
+
+However, if there are multiple implementations of the `Duck` interface, as in the following example, you'll encounter an error:
+
+```go
+var pond Pond
+
+depinject.Inject(
+ depinject.Provide(
+ GetMallard,
+ GetCanvasback,
+ GetPond,
+ ),
+ &pond)
+```
+
+A specific binding preference for `Duck` is required.
+
+#### `BindInterface` API
+
+In the above situation, registering a binding for the `Duck` interface may look like:
+
+```go
+depinject.Inject(
+ depinject.Configs(
+ depinject.BindInterface(
+ "duck/duck.Duck",
+ "duck/duck.Mallard",
+ ),
+ depinject.Provide(
+ GetMallard,
+ GetCanvasback,
+ GetPond,
+ ),
+ ),
+ &pond)
+```
+
+Now `depinject` has enough information to provide `Mallard` as an input to `GetPond`.
+
+### Full example in real app
+
+:::warning
+When using `depinject.Inject`, the injected types must be pointers.
+:::
+
+```go reference
+https://github.com/cosmos/cosmos-sdk/blob/v0.53.0/simapp/app_di.go#L165-L188
+```
+
+## Debugging
+
+Issues with resolving dependencies in the container can be debugged with logs and [Graphviz](https://graphviz.org) renderings of the container tree.
+By default, whenever there is an error, logs are printed to stderr and a rendering of the dependency graph in Graphviz DOT format is saved to `debug_container.dot`.
+
+Here is an example Graphviz rendering of a successful build of a dependency graph:
+
+
+Rectangles represent functions, ovals represent types, rounded rectangles represent modules and the single hexagon
+represents the function which called `Build`. Black-colored shapes mark functions and types that were called/resolved
+without an error. Gray-colored nodes mark functions and types that could have been called/resolved in the container but
+were left unused.
+
+Here is an example Graphviz rendering of a dependency graph build which failed:
+
+
+Graphviz DOT files can be converted into SVGs for viewing in a web browser using the `dot` command-line tool, e.g.:
+
+```txt
+dot -Tsvg debug_container.dot > debug_container.svg
+```
+
+Many other tools including some IDEs support working with DOT files.
diff --git a/copy-of-sdk-docs/build/packages/02-collections.md b/copy-of-sdk-docs/build/packages/02-collections.md
new file mode 100644
index 00000000..d8f9c17e
--- /dev/null
+++ b/copy-of-sdk-docs/build/packages/02-collections.md
@@ -0,0 +1,1210 @@
+# Collections
+
+Collections is a library meant to simplify the experience with respect to module state handling.
+
+Cosmos SDK modules handle their state using the `KVStore` interface. The problem with working with
+`KVStore` is that it forces you to think of state as raw byte KV pairings, when in reality the majority of
+state comes from complex, concrete Go objects (strings, ints, structs, etc.).
+
+Collections allows you to work with state as if it were made of normal Go objects and removes the need
+for you to think of your state as raw bytes in your code.
+
+It also allows you to migrate your existing state without causing any state breakage that forces you into
+tedious and complex chain state migrations.
+
+## Installation
+
+To install collections in your cosmos-sdk chain project, run the following command:
+
+```shell
+go get cosmossdk.io/collections
+```
+
+## Core types
+
+Collections offers five different APIs to work with state, which will be explored in the next sections:
+
+* `Map`: to work with typed arbitrary KV pairings.
+* `KeySet`: to work with just typed keys.
+* `Item`: to work with just one typed value.
+* `Sequence`: a monotonically increasing number.
+* `IndexedMap`: combines `Map` and `KeySet` to provide a `Map` with indexing capabilities.
+
+## Preliminary components
+
+Before exploring the different collection types and their capabilities, it is necessary to introduce
+the three components that every collection shares. When instantiating a collection type via, for example,
+`collections.NewMap`/`collections.NewItem`/..., you will find yourself having to pass some common arguments.
+
+For example, in code:
+
+```go
+package collections
+
+import (
+ "cosmossdk.io/collections"
+ storetypes "cosmossdk.io/store/types"
+ sdk "github.com/cosmos/cosmos-sdk/types"
+)
+
+var AllowListPrefix = collections.NewPrefix(0)
+
+type Keeper struct {
+ Schema collections.Schema
+ AllowList collections.KeySet[string]
+}
+
+func NewKeeper(storeKey *storetypes.KVStoreKey) Keeper {
+ sb := collections.NewSchemaBuilder(sdk.OpenKVStore(storeKey))
+
+ return Keeper{
+ AllowList: collections.NewKeySet(sb, AllowListPrefix, "allow_list", collections.StringKey),
+ }
+}
+
+```
+
+Let's analyse the shared arguments, what they do, and why we need them.
+
+### SchemaBuilder
+
+The first argument passed is the `SchemaBuilder`.
+
+`SchemaBuilder` is a structure that keeps track of all the state of a module. It is not required by collections
+to deal with state, but it offers a dynamic and reflective way for clients to explore a module's state.
+
+We instantiate a `SchemaBuilder` by passing it a function that, given the module's store key, returns the module's specific store.
+
+We then need to pass the schema builder to every collection type we instantiate in our keeper, in our case the `AllowList`.
+
+### Prefix
+
+The second argument passed to our ``KeySet`` is a `collections.Prefix`, a prefix represents a partition of the module's `KVStore`
+where all the state of a specific collection will be saved.
+
+Since a module can have multiple collections, the following is expected:
+
+* module params will become a `collections.Item`
+* the `AllowList` is a `collections.KeySet`
+
+We don't want a collection to write over the state of the other collection so we pass it a prefix, which defines a storage
+partition owned by the collection.
+
+If you have already built modules, the prefix translates to the items you were creating in your `types/keys.go` file, for example: https://github.com/cosmos/cosmos-sdk/blob/v0.52.0-rc.1/x/feegrant/key.go#L16-L22
+
+your old:
+
+```go
+var (
+ // FeeAllowanceKeyPrefix is the set of the kvstore for fee allowance data
+ // - 0x00: allowance
+ FeeAllowanceKeyPrefix = []byte{0x00}
+
+ // FeeAllowanceQueueKeyPrefix is the set of the kvstore for fee allowance keys data
+ // - 0x01:
+ FeeAllowanceQueueKeyPrefix = []byte{0x01}
+)
+```
+
+becomes:
+
+```go
+var (
+ // FeeAllowanceKeyPrefix is the set of the kvstore for fee allowance data
+ // - 0x00: allowance
+ FeeAllowanceKeyPrefix = collections.NewPrefix(0)
+
+ // FeeAllowanceQueueKeyPrefix is the set of the kvstore for fee allowance keys data
+ // - 0x01:
+ FeeAllowanceQueueKeyPrefix = collections.NewPrefix(1)
+)
+```
+
+#### Rules
+
+`collections.NewPrefix` accepts either `uint8`, `string` or `[]byte`. It's good practice to use an always-increasing `uint8` for disk space efficiency.
+
+A collection **MUST NOT** share the same prefix as another collection in the same module, and a collection prefix **MUST NEVER** be a prefix of another. Examples:
+
+```go
+prefix1 := collections.NewPrefix("prefix")
+prefix2 := collections.NewPrefix("prefix") // THIS IS BAD!
+```
+
+```go
+prefix1 := collections.NewPrefix("a")
+prefix2 := collections.NewPrefix("aa") // prefix2 starts with the same as prefix1: BAD!!!
+```
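+
+The second rule exists because raw prefix iteration cannot distinguish overlapping partitions; a quick standard-library check makes the collision visible:
+
+```go
+package main
+
+import (
+	"bytes"
+	"fmt"
+)
+
+func main() {
+	prefix1 := []byte("a")
+	prefix2 := []byte("aa")
+
+	// A key belonging to the collection with prefix "aa"...
+	key := append(append([]byte{}, prefix2...), "some-key"...)
+
+	// ...also matches prefix "a", so iterating the first collection
+	// would wrongly yield entries of the second one.
+	fmt.Println(bytes.HasPrefix(key, prefix1))
+}
+```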
+
+### Human-Readable Name
+
+The third parameter we pass to a collection is a string, which is a human-readable name.
+It is needed to make the role of a collection understandable by clients who have no clue about
+what a module is storing in state.
+
+#### Rules
+
+Each collection in a module **MUST** have a unique humanised name.
+
+## Key and Value Codecs
+
+A collection is generic over the type you can use as keys or values.
+This makes collections dumb, but it also means that hypothetically we can store anything
+that can be a Go type into a collection. We are not bound to any type of encoding (be it proto, JSON or whatever).
+
+So a collection needs to be given a way to understand how to convert your keys and values to bytes.
+This is achieved through `KeyCodec` and `ValueCodec`, which are arguments you pass to your
+collections when you instantiate them using the `collections.NewMap`/`collections.NewItem`/...
+instantiation functions.
+
+NOTE: Generally speaking, you will never be required to implement your own `KeyCodec`/`ValueCodec`, as
+the SDK and collections libraries already come with default, safe and fast implementations.
+You might need to implement them only if you're migrating to collections and there are state layout incompatibilities.
+
+Let's explore an example:
+
+```go
+package collections
+
+import (
+ "cosmossdk.io/collections"
+ storetypes "cosmossdk.io/store/types"
+ sdk "github.com/cosmos/cosmos-sdk/types"
+)
+
+var IDsPrefix = collections.NewPrefix(0)
+
+type Keeper struct {
+ Schema collections.Schema
+ IDs collections.Map[string, uint64]
+}
+
+func NewKeeper(storeKey *storetypes.KVStoreKey) Keeper {
+ sb := collections.NewSchemaBuilder(sdk.OpenKVStore(storeKey))
+
+ return Keeper{
+ IDs: collections.NewMap(sb, IDsPrefix, "ids", collections.StringKey, collections.Uint64Value),
+ }
+}
+```
+
+We're now instantiating a map where the key is string and the value is `uint64`.
+We already know the first three arguments of the ``NewMap`` function.
+
+The fourth parameter is our `KeyCodec`, we know that the ``Map`` has `string` as key so we pass it a `KeyCodec` that handles strings as keys.
+
+The fifth parameter is our `ValueCodec`, we know that the `Map` has a `uint64` as value so we pass it a `ValueCodec` that handles uint64.
+
+Collections already comes with all the required implementations for golang primitive types.
+
+Let's look at another example that falls closer to what we build using the Cosmos SDK: say we want
+to create a `collections.Map` that maps account addresses to their base account, i.e. map an `sdk.AccAddress` to an `auth.BaseAccount` (which is a proto message):
+
+```go
+package collections
+
+import (
+ "cosmossdk.io/collections"
+ storetypes "cosmossdk.io/store/types"
+ "github.com/cosmos/cosmos-sdk/codec"
+ sdk "github.com/cosmos/cosmos-sdk/types"
+ authtypes "github.com/cosmos/cosmos-sdk/x/auth/types"
+)
+
+var AccountsPrefix = collections.NewPrefix(0)
+
+type Keeper struct {
+ Schema collections.Schema
+ Accounts collections.Map[sdk.AccAddress, authtypes.BaseAccount]
+}
+
+func NewKeeper(storeKey *storetypes.KVStoreKey, cdc codec.BinaryCodec) Keeper {
+ sb := collections.NewSchemaBuilder(sdk.OpenKVStore(storeKey))
+ return Keeper{
+ Accounts: collections.NewMap(sb, AccountsPrefix, "accounts",
+ sdk.AccAddressKey, codec.CollValue[authtypes.BaseAccount](cdc)),
+ }
+}
+```
+
+As we can see here since our `collections.Map` maps `sdk.AccAddress` to `authtypes.BaseAccount`,
+we use the `sdk.AccAddressKey` which is the `KeyCodec` implementation for `AccAddress` and we use `codec.CollValue` to
+encode our proto type `BaseAccount`.
+
+Generally speaking, you will always find the respective key and value codecs for types in the `go.mod` path you're using
+to import that type. If you want to encode proto values, refer to the `codec.CollValue` function, which allows you
+to encode any type implementing the `proto.Message` interface.
+
+## Map
+
+We analyse the first and most important collection type, the ``collections.Map``.
+This is the type that everything else builds on top of.
+
+### Use case
+
+A `collections.Map` is used to map arbitrary keys with arbitrary values.
+
+### Example
+
+It's easier to explain the capabilities of a `collections.Map` through an example:
+
+```go
+package collections
+
+import (
+ "cosmossdk.io/collections"
+ storetypes "cosmossdk.io/store/types"
+ "fmt"
+ "github.com/cosmos/cosmos-sdk/codec"
+ sdk "github.com/cosmos/cosmos-sdk/types"
+ authtypes "github.com/cosmos/cosmos-sdk/x/auth/types"
+)
+
+var AccountsPrefix = collections.NewPrefix(0)
+
+type Keeper struct {
+ Schema collections.Schema
+ Accounts collections.Map[sdk.AccAddress, authtypes.BaseAccount]
+}
+
+func NewKeeper(storeKey *storetypes.KVStoreKey, cdc codec.BinaryCodec) Keeper {
+ sb := collections.NewSchemaBuilder(sdk.OpenKVStore(storeKey))
+ return Keeper{
+ Accounts: collections.NewMap(sb, AccountsPrefix, "accounts",
+ sdk.AccAddressKey, codec.CollValue[authtypes.BaseAccount](cdc)),
+ }
+}
+
+func (k Keeper) CreateAccount(ctx sdk.Context, addr sdk.AccAddress, account authtypes.BaseAccount) error {
+ has, err := k.Accounts.Has(ctx, addr)
+ if err != nil {
+ return err
+ }
+ if has {
+ return fmt.Errorf("account already exists: %s", addr)
+ }
+
+ err = k.Accounts.Set(ctx, addr, account)
+ if err != nil {
+ return err
+ }
+ return nil
+}
+
+func (k Keeper) GetAccount(ctx sdk.Context, addr sdk.AccAddress) (authtypes.BaseAccount, error) {
+ acc, err := k.Accounts.Get(ctx, addr)
+ if err != nil {
+ return authtypes.BaseAccount{}, err
+ }
+
+ return acc, nil
+}
+
+func (k Keeper) RemoveAccount(ctx sdk.Context, addr sdk.AccAddress) error {
+ err := k.Accounts.Remove(ctx, addr)
+ if err != nil {
+ return err
+ }
+ return nil
+}
+```
+
+#### Set method
+
+Set maps the provided `AccAddress` (the key) to the `auth.BaseAccount` (the value).
+
+Under the hood the `collections.Map` will convert the key and value to bytes using the [key and value codec](README.md#key-and-value-codecs).
+It will prepend to our bytes key the [prefix](README.md#prefix) and store it in the KVStore of the module.
+
+#### Has method
+
+The `Has` method reports whether the provided key exists in the store.
+
+#### Get method
+
+The get method accepts the `AccAddress` and returns the associated `auth.BaseAccount` if it exists, otherwise it errors.
+
+#### Remove method
+
+The `Remove` method accepts the `AccAddress` and removes it from the store. It won't report errors
+if the key does not exist; to check for existence before removal, use the `Has` method.
+
+#### Iteration
+
+Iteration has a separate section.
+
+## KeySet
+
+The second type of collection is `collections.KeySet`. As the name suggests, it maintains
+only a set of keys without values.
+
+### Implementation curiosity
+
+A `collections.KeySet` is just a `collections.Map` with a key but no value.
+Internally the value is always the same and is represented as an empty byte slice, `[]byte{}`.
+
+### Example
+
+As always we explore the collection type through an example:
+
+```go
+package collections
+
+import (
+ "cosmossdk.io/collections"
+ storetypes "cosmossdk.io/store/types"
+ "fmt"
+ sdk "github.com/cosmos/cosmos-sdk/types"
+)
+
+var ValidatorsSetPrefix = collections.NewPrefix(0)
+
+type Keeper struct {
+ Schema collections.Schema
+ ValidatorsSet collections.KeySet[sdk.ValAddress]
+}
+
+func NewKeeper(storeKey *storetypes.KVStoreKey) Keeper {
+ sb := collections.NewSchemaBuilder(sdk.OpenKVStore(storeKey))
+ return Keeper{
+ ValidatorsSet: collections.NewKeySet(sb, ValidatorsSetPrefix, "validators_set", sdk.ValAddressKey),
+ }
+}
+
+func (k Keeper) AddValidator(ctx sdk.Context, validator sdk.ValAddress) error {
+ has, err := k.ValidatorsSet.Has(ctx, validator)
+ if err != nil {
+ return err
+ }
+ if has {
+ return fmt.Errorf("validator already in set: %s", validator)
+ }
+
+ err = k.ValidatorsSet.Set(ctx, validator)
+ if err != nil {
+ return err
+ }
+
+ return nil
+}
+
+func (k Keeper) RemoveValidator(ctx sdk.Context, validator sdk.ValAddress) error {
+ err := k.ValidatorsSet.Remove(ctx, validator)
+ if err != nil {
+ return err
+ }
+ return nil
+}
+```
+
+The first difference we notice is that `KeySet` requires us to specify only one type parameter: the key (`sdk.ValAddress` in this case).
+The second difference we notice is that `KeySet` in its `NewKeySet` function does not require
+us to specify a `ValueCodec` but only a `KeyCodec`. This is because a `KeySet` only saves keys and not values.
+
+Let's explore the methods.
+
+#### Has method
+
+Has allows us to check whether a key is present in the `collections.KeySet` or not. It functions in the same way as `collections.Map.Has`.
+
+#### Set method
+
+Set inserts the provided key in the `KeySet`.
+
+#### Remove method
+
+`Remove` removes the provided key from the `KeySet`. It does not error if the key does not exist;
+if an existence check before removal is required, it needs to be coupled with the `Has` method.
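+
+Coupling `Has` and `Remove`, a checked removal might look like this (a sketch based on the `Keeper` from the example above; `RemoveValidatorChecked` is a hypothetical helper):
+
+```go
+// RemoveValidatorChecked removes a validator only if it is present,
+// returning an error otherwise.
+func (k Keeper) RemoveValidatorChecked(ctx sdk.Context, validator sdk.ValAddress) error {
+ has, err := k.ValidatorsSet.Has(ctx, validator)
+ if err != nil {
+  return err
+ }
+ if !has {
+  return fmt.Errorf("validator not in set: %s", validator)
+ }
+ return k.ValidatorsSet.Remove(ctx, validator)
+}
+```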
+
+## Item
+
+The third type of collection is `collections.Item`.
+It stores one single item. It's useful, for example, for parameters, since there is always exactly one instance
+of parameters in state.
+
+### Implementation curiosity
+
+A `collections.Item` is just a `collections.Map` with no key but just a value.
+The key is the prefix of the collection!
+
+### Example
+
+```go
+package collections
+
+import (
+ "cosmossdk.io/collections"
+ storetypes "cosmossdk.io/store/types"
+ stakingtypes "cosmossdk.io/x/staking/types"
+
+ "github.com/cosmos/cosmos-sdk/codec"
+ sdk "github.com/cosmos/cosmos-sdk/types"
+)
+
+var ParamsPrefix = collections.NewPrefix(0)
+
+type Keeper struct {
+ Schema collections.Schema
+ Params collections.Item[stakingtypes.Params]
+}
+
+func NewKeeper(storeKey *storetypes.KVStoreKey, cdc codec.BinaryCodec) Keeper {
+ sb := collections.NewSchemaBuilder(sdk.OpenKVStore(storeKey))
+ return Keeper{
+ Params: collections.NewItem(sb, ParamsPrefix, "params", codec.CollValue[stakingtypes.Params](cdc)),
+ }
+}
+
+func (k Keeper) UpdateParams(ctx sdk.Context, params stakingtypes.Params) error {
+ err := k.Params.Set(ctx, params)
+ if err != nil {
+ return err
+ }
+ return nil
+}
+
+func (k Keeper) GetParams(ctx sdk.Context) (stakingtypes.Params, error) {
+ return k.Params.Get(ctx)
+}
+```
+
+The first key difference we notice is that we specify only one type parameter: the value we're storing.
+The second key difference is that we don't specify a `KeyCodec`: since we store only one item, the key is
+constant and already known.
+
+## Iteration
+
+One of the key features of the `KVStore` is iterating over keys.
+
+Collections that deal with keys (so `Map`, `KeySet` and `IndexedMap`) allow you to iterate
+over keys in a safe and typed way. They all share the same API, the only difference being
+that `KeySet` returns a different type of `Iterator`, because `KeySet` only deals with keys.
+
+:::note
+
+Every collection shares the same `Iterator` semantics.
+
+:::
+
+Let's have a look at the `Map.Iterate` method:
+
+```go
+func (m Map[K, V]) Iterate(ctx context.Context, ranger Ranger[K]) (Iterator[K, V], error)
+```
+
+It accepts a `collections.Ranger[K]`, which is an API that instructs the map on how to iterate over keys.
+As always, we don't need to implement anything here: `collections` already provides generic `Ranger` implementers
+that expose all you need to work with ranges.
+
+### Example
+
+We have a `collections.Map` that maps accounts using `uint64` IDs.
+
+```go
+package collections
+
+import (
+ "cosmossdk.io/collections"
+ storetypes "cosmossdk.io/store/types"
+ "github.com/cosmos/cosmos-sdk/codec"
+ sdk "github.com/cosmos/cosmos-sdk/types"
+ authtypes "github.com/cosmos/cosmos-sdk/x/auth/types"
+)
+
+var AccountsPrefix = collections.NewPrefix(0)
+
+type Keeper struct {
+ Schema collections.Schema
+ Accounts collections.Map[uint64, authtypes.BaseAccount]
+}
+
+func NewKeeper(storeKey *storetypes.KVStoreKey, cdc codec.BinaryCodec) Keeper {
+ sb := collections.NewSchemaBuilder(sdk.OpenKVStore(storeKey))
+ return Keeper{
+ Accounts: collections.NewMap(sb, AccountsPrefix, "accounts", collections.Uint64Key, codec.CollValue[authtypes.BaseAccount](cdc)),
+ }
+}
+
+func (k Keeper) GetAllAccounts(ctx sdk.Context) ([]authtypes.BaseAccount, error) {
+ // passing a nil Ranger means: iterate over every possible key
+ iter, err := k.Accounts.Iterate(ctx, nil)
+ if err != nil {
+ return nil, err
+ }
+ accounts, err := iter.Values()
+ if err != nil {
+ return nil, err
+ }
+
+ return accounts, nil
+}
+
+func (k Keeper) IterateAccountsBetween(ctx sdk.Context, start, end uint64) ([]authtypes.BaseAccount, error) {
+ // The collections.Range API offers a lot of capabilities
+ // like defining where the iteration starts or ends.
+ rng := new(collections.Range[uint64]).
+ StartInclusive(start).
+ EndExclusive(end).
+ Descending()
+
+ iter, err := k.Accounts.Iterate(ctx, rng)
+ if err != nil {
+ return nil, err
+ }
+ accounts, err := iter.Values()
+ if err != nil {
+ return nil, err
+ }
+
+ return accounts, nil
+}
+
+func (k Keeper) IterateAccounts(ctx sdk.Context, do func(id uint64, acc authtypes.BaseAccount) (stop bool)) error {
+ iter, err := k.Accounts.Iterate(ctx, nil)
+ if err != nil {
+ return err
+ }
+ defer iter.Close()
+
+ for ; iter.Valid(); iter.Next() {
+ kv, err := iter.KeyValue()
+ if err != nil {
+ return err
+ }
+
+ if do(kv.Key, kv.Value) {
+ break
+ }
+ }
+ return nil
+}
+```
+
+Let's analyse each method in the example and how it makes use of the `Iterate` and the returned `Iterator` API.
+
+#### GetAllAccounts
+
+In `GetAllAccounts` we pass a nil `Ranger` to `Iterate`. This means that the returned `Iterator` will include
+all the existing keys within the collection.
+
+Then we use the `Values` method from the returned `Iterator` API to collect all the values into a slice.
+
+`Iterator` offers other methods, such as `Keys()` to collect only the keys (not the values) and `KeyValues()` to collect
+all the keys and values.
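+
+For instance, collecting only the account IDs could look like this (a sketch reusing the `Keeper` above; `GetAllAccountIDs` is a hypothetical helper):
+
+```go
+// GetAllAccountIDs collects only the keys, without decoding any value.
+func (k Keeper) GetAllAccountIDs(ctx sdk.Context) ([]uint64, error) {
+ iter, err := k.Accounts.Iterate(ctx, nil)
+ if err != nil {
+  return nil, err
+ }
+ // Keys fully consumes and closes the iterator.
+ return iter.Keys()
+}
+```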
+
+
+#### IterateAccountsBetween
+
+Here we make use of the `collections.Range` helper to specialise our range.
+We make it start at one point with `StartInclusive` and end at another with `EndExclusive`, then
+we instruct it to report results in reverse order through `Descending`.
+
+Then we pass the range instruction to `Iterate` and get back an `Iterator` containing only the results
+we specified in the range.
+
+We then use the `Values` method of the `Iterator` again to collect all the results.
+
+`collections.Range` also offers a `Prefix` API, which is not applicable to all key types:
+for example, a `uint64` key cannot be prefixed because it is of constant size, but a `string` key
+can be.
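+
+For instance, with a hypothetical map keyed by `string`, a prefixed range could be built like this (a sketch; `addressesByName` is an assumed `collections.Map[string, sdk.AccAddress]`):
+
+```go
+// Iterate only over keys that start with "cosmos1".
+rng := new(collections.Range[string]).Prefix("cosmos1")
+iter, err := addressesByName.Iterate(ctx, rng)
+```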
+
+#### IterateAccounts
+
+Here we showcase how to lazily collect values from an `Iterator`.
+
+:::note
+
+`Keys`/`Values`/`KeyValues` fully consume and close the `Iterator`; here we need to explicitly do a `defer iter.Close()` call.
+
+:::
+
+`Iterator` also exposes `Value` and `Key` methods to collect only the current value or key, if collecting both is not needed.
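+
+For example, inside the loop we could decode only the keys (a sketch of the loop body from `IterateAccounts` above; the print statement is for illustration only):
+
+```go
+for ; iter.Valid(); iter.Next() {
+ // Key decodes only the key bytes; the value is never unmarshalled.
+ id, err := iter.Key()
+ if err != nil {
+  return err
+ }
+ fmt.Println("account id:", id)
+}
+```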
+
+:::note
+
+For this `callback` pattern, collections expose a `Walk` API.
+
+:::
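+
+Using `Walk`, the `IterateAccounts` method above could be rewritten more compactly (a sketch; note that the `Walk` callback returns `(stop bool, err error)`):
+
+```go
+// IterateAccountsWalk is equivalent to IterateAccounts, but lets
+// Walk handle iterator creation, closing and error propagation.
+func (k Keeper) IterateAccountsWalk(ctx sdk.Context, do func(id uint64, acc authtypes.BaseAccount) (stop bool)) error {
+ return k.Accounts.Walk(ctx, nil, func(id uint64, acc authtypes.BaseAccount) (stop bool, err error) {
+  return do(id, acc), nil
+ })
+}
+```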
+
+## Composite keys
+
+So far we've worked only with simple keys, like `uint64` or an account address.
+There are more complex cases in which we need to deal with composite keys.
+
+A key is composite when it is composed of multiple keys. For example, bank balances are stored under the composite key
+`(AccAddress, string)`, where the first part is the address holding the coins and the second part is the denom.
+
+For example, let's say address `BOB` holds `10atom,15osmo`; this is how it is stored in state:
+
+```
+(bob, atom) => 10
+(bob, osmo) => 15
+```
+
+This allows us to efficiently get a specific denom balance of an address, by simply `getting` `(address, denom)`, or to get all the balances
+of an address by prefixing over `(address)`.
+
+Let's see now how we can work with composite keys using collections.
+
+### Example
+
+In our example we will showcase how we can use collections when dealing with balances, similarly to bank.
+A balance is a mapping `(address, denom) => math.Int`; the composite key in our case is `(address, denom)`.
+
+### Instantiation of a composite key collection
+
+```go
+package collections
+
+import (
+ "cosmossdk.io/collections"
+ "cosmossdk.io/math"
+ storetypes "cosmossdk.io/store/types"
+ sdk "github.com/cosmos/cosmos-sdk/types"
+)
+
+
+var BalancesPrefix = collections.NewPrefix(1)
+
+type Keeper struct {
+ Schema collections.Schema
+ Balances collections.Map[collections.Pair[sdk.AccAddress, string], math.Int]
+}
+
+func NewKeeper(storeKey *storetypes.KVStoreKey) Keeper {
+ sb := collections.NewSchemaBuilder(sdk.OpenKVStore(storeKey))
+ return Keeper{
+ Balances: collections.NewMap(
+ sb, BalancesPrefix, "balances",
+ collections.PairKeyCodec(sdk.AccAddressKey, collections.StringKey),
+ sdk.IntValue,
+ ),
+ }
+}
+```
+
+### The Map Key definition
+
+First of all we can see that in order to define a composite key of two elements we use the `collections.Pair` type:
+
+```go
+collections.Map[collections.Pair[sdk.AccAddress, string], math.Int]
+```
+
+`collections.Pair` defines a key composed of two other keys: in our case, the first part is `sdk.AccAddress` and the second
+part is `string`.
+
+#### The Key Codec instantiation
+
+The arguments to instantiate the collection are the usual ones; the only thing that changes is how we instantiate
+the `KeyCodec`. Since this key is composed of two keys, we use `collections.PairKeyCodec`, which generates
+a `KeyCodec` composed of two key codecs: the first one encodes the first part of the key, the second one
+encodes the second part of the key.
+
+
+### Working with composite key collections
+
+Let's expand on the example we used before:
+
+```go
+var BalancesPrefix = collections.NewPrefix(1)
+
+type Keeper struct {
+ Schema collections.Schema
+ Balances collections.Map[collections.Pair[sdk.AccAddress, string], math.Int]
+}
+
+func NewKeeper(storeKey *storetypes.KVStoreKey) Keeper {
+ sb := collections.NewSchemaBuilder(sdk.OpenKVStore(storeKey))
+ return Keeper{
+ Balances: collections.NewMap(
+ sb, BalancesPrefix, "balances",
+ collections.PairKeyCodec(sdk.AccAddressKey, collections.StringKey),
+ sdk.IntValue,
+ ),
+ }
+}
+
+func (k Keeper) SetBalance(ctx sdk.Context, address sdk.AccAddress, denom string, amount math.Int) error {
+ key := collections.Join(address, denom)
+ return k.Balances.Set(ctx, key, amount)
+}
+
+func (k Keeper) GetBalance(ctx sdk.Context, address sdk.AccAddress, denom string) (math.Int, error) {
+ return k.Balances.Get(ctx, collections.Join(address, denom))
+}
+
+func (k Keeper) GetAllAddressBalances(ctx sdk.Context, address sdk.AccAddress) (sdk.Coins, error) {
+ balances := sdk.NewCoins()
+
+ rng := collections.NewPrefixedPairRange[sdk.AccAddress, string](address)
+
+ iter, err := k.Balances.Iterate(ctx, rng)
+ if err != nil {
+ return nil, err
+ }
+
+ kvs, err := iter.KeyValues()
+ if err != nil {
+ return nil, err
+ }
+
+ for _, kv := range kvs {
+ balances = balances.Add(sdk.NewCoin(kv.Key.K2(), kv.Value))
+ }
+ return balances, nil
+}
+
+func (k Keeper) GetAllAddressBalancesBetween(ctx sdk.Context, address sdk.AccAddress, startDenom, endDenom string) (sdk.Coins, error) {
+ rng := collections.NewPrefixedPairRange[sdk.AccAddress, string](address).
+ StartInclusive(startDenom).
+ EndInclusive(endDenom)
+
+ iter, err := k.Balances.Iterate(ctx, rng)
+ if err != nil {
+ return nil, err
+ }
+ ...
+}
+```
+
+#### SetBalance
+
+As we can see, here we're setting the balance of an address for a specific denom.
+We use the `collections.Join` function to generate the composite key.
+`collections.Join` returns a `collections.Pair` (which is the key of our `collections.Map`).
+
+`collections.Pair` contains the two keys we have joined. It also exposes two methods: `K1` to fetch the first part of the
+key and `K2` to fetch the second part.
+
+As always, we use the `collections.Map.Set` method to map the composite key to our value (`math.Int` in this case).
+
+#### GetBalance
+
+To get a value in a composite key collection, we simply use `collections.Join` to compose the key.
+
+#### GetAllAddressBalances
+
+We use `collections.PrefixedPairRange` to iterate over all the keys starting with the provided address.
+Concretely, the iteration will report all the balances belonging to the provided address.
+
+First, we instantiate a `PrefixedPairRange`, which is a `Ranger` implementer aimed at helping
+with `Pair` key iterations.
+
+```go
+ rng := collections.NewPrefixedPairRange[sdk.AccAddress, string](address)
+```
+
+As we can see, here we're passing the type parameters of the `collections.Pair` because Go's type inference
+with respect to generics is not as permissive as in other languages, so we need to explicitly state the types of the pair key.
+
+#### GetAllAddressBalancesBetween
+
+This showcases how we can further specialise our range to limit the results, by specifying
+a range over the second part of the key (in our case the denoms, which are strings).
+
+## IndexedMap
+
+`collections.IndexedMap` is a collection that uses a `collections.Map` under the hood, together with a struct containing the indexes that we need to define.
+
+### Example
+
+Let's say we have an `auth.BaseAccount` struct which looks like the following:
+
+```go
+type BaseAccount struct {
+ AccountNumber uint64 `protobuf:"varint,3,opt,name=account_number,json=accountNumber,proto3" json:"account_number,omitempty"`
+ Sequence uint64 `protobuf:"varint,4,opt,name=sequence,proto3" json:"sequence,omitempty"`
+}
+```
+
+First of all, when we save our accounts in state we map them using a primary key `sdk.AccAddress`.
+If it were to be a `collections.Map` it would be `collections.Map[sdk.AccAddress, authtypes.BaseAccount]`.
+
+Then we also want to be able to get an account not only by its `sdk.AccAddress`, but also by its `AccountNumber`.
+
+So we can say we want to create an `Index` that maps our `BaseAccount` to its `AccountNumber`.
+
+We also know that this `Index` is unique. Unique means that there can only be one `BaseAccount` that maps to a specific
+`AccountNumber`.
+
+First of all, we start by defining the object that contains our index:
+
+```go
+var AccountsNumberIndexPrefix = collections.NewPrefix(1)
+
+type AccountsIndexes struct {
+ Number *indexes.Unique[uint64, sdk.AccAddress, authtypes.BaseAccount]
+}
+
+func NewAccountIndexes(sb *collections.SchemaBuilder) AccountsIndexes {
+ return AccountsIndexes{
+ Number: indexes.NewUnique(
+ sb, AccountsNumberIndexPrefix, "accounts_by_number",
+ collections.Uint64Key, sdk.AccAddressKey,
+ func(_ sdk.AccAddress, v authtypes.BaseAccount) (uint64, error) {
+ return v.AccountNumber, nil
+ },
+ ),
+ }
+}
+```
+
+We create an `AccountsIndexes` struct which contains one field, `Number`, representing our `AccountNumber` index.
+`AccountNumber` is a field of `authtypes.BaseAccount` and it's a `uint64`.
+
+Then we can see that in our `AccountsIndexes` struct the `Number` field is defined as:
+
+```go
+*indexes.Unique[uint64, sdk.AccAddress, authtypes.BaseAccount]
+```
+
+Where the first type parameter, `uint64`, is the type of the indexed field.
+The second type parameter is the primary key, `sdk.AccAddress`.
+And the third type parameter is the actual object we're storing, `authtypes.BaseAccount`.
+
+Then we create a `NewAccountIndexes` function that instantiates and returns the `AccountsIndexes` struct.
+
+The function takes a `SchemaBuilder`. Then we instantiate our `indexes.Unique`; let's analyse the arguments we pass to
+`indexes.NewUnique`.
+
+#### NOTE: indexes list
+
+The `AccountsIndexes` struct contains the indexes. The `NewIndexedMap` function will infer the indexes from that struct
+using reflection; this happens only at init and is not computationally expensive. In case you want to explicitly declare
+the indexes, implement the `Indexes` interface on the `AccountsIndexes` struct:
+
+```go
+func (a AccountsIndexes) IndexesList() []collections.Index[sdk.AccAddress, authtypes.BaseAccount] {
+ return []collections.Index[sdk.AccAddress, authtypes.BaseAccount]{a.Number}
+}
+```
+
+#### Instantiating an `indexes.Unique`
+
+The first three arguments we already know: the `SchemaBuilder`, the `Prefix` which is our index prefix (the partition
+where the key relationships for the `Number` index will be maintained), and the human-readable name for the `Number` index.
+
+The fourth argument is `collections.Uint64Key`, a key codec that deals with `uint64` keys. We pass it because
+the key we're indexing by is a `uint64` (the account number). As the fifth argument we pass the primary key codec,
+which in our case is `sdk.AccAddressKey` (remember: we're mapping `sdk.AccAddress` => `BaseAccount`).
+
+As the last argument, we pass a function that, given a `BaseAccount`, returns its `AccountNumber`.
+
+After this we can proceed with instantiating our `IndexedMap`.
+
+```go
+var AccountsPrefix = collections.NewPrefix(0)
+
+type Keeper struct {
+ Schema collections.Schema
+ Accounts *collections.IndexedMap[sdk.AccAddress, authtypes.BaseAccount, AccountsIndexes]
+}
+
+func NewKeeper(storeKey *storetypes.KVStoreKey, cdc codec.BinaryCodec) Keeper {
+ sb := collections.NewSchemaBuilder(sdk.OpenKVStore(storeKey))
+ return Keeper{
+ Accounts: collections.NewIndexedMap(
+ sb, AccountsPrefix, "accounts",
+ sdk.AccAddressKey, codec.CollValue[authtypes.BaseAccount](cdc),
+ NewAccountIndexes(sb),
+ ),
+ }
+}
+```
+
+As we can see, what we do here is, so far, the same as for a `collections.Map`:
+we pass the `SchemaBuilder`, the `Prefix` where we plan to store the mapping between `sdk.AccAddress` and `authtypes.BaseAccount`,
+the human-readable name, and the respective `sdk.AccAddress` key codec and `authtypes.BaseAccount` value codec.
+
+Then we pass in our `AccountsIndexes`, instantiated through `NewAccountIndexes`.
+
+Full example:
+
+```go
+package docs
+
+import (
+ "cosmossdk.io/collections"
+ "cosmossdk.io/collections/indexes"
+ storetypes "cosmossdk.io/store/types"
+ "github.com/cosmos/cosmos-sdk/codec"
+ sdk "github.com/cosmos/cosmos-sdk/types"
+ authtypes "github.com/cosmos/cosmos-sdk/x/auth/types"
+)
+
+var AccountsNumberIndexPrefix = collections.NewPrefix(1)
+
+type AccountsIndexes struct {
+ Number *indexes.Unique[uint64, sdk.AccAddress, authtypes.BaseAccount]
+}
+
+func (a AccountsIndexes) IndexesList() []collections.Index[sdk.AccAddress, authtypes.BaseAccount] {
+ return []collections.Index[sdk.AccAddress, authtypes.BaseAccount]{a.Number}
+}
+
+func NewAccountIndexes(sb *collections.SchemaBuilder) AccountsIndexes {
+ return AccountsIndexes{
+ Number: indexes.NewUnique(
+ sb, AccountsNumberIndexPrefix, "accounts_by_number",
+ collections.Uint64Key, sdk.AccAddressKey,
+ func(_ sdk.AccAddress, v authtypes.BaseAccount) (uint64, error) {
+ return v.AccountNumber, nil
+ },
+ ),
+ }
+}
+
+var AccountsPrefix = collections.NewPrefix(0)
+
+type Keeper struct {
+ Schema collections.Schema
+ Accounts *collections.IndexedMap[sdk.AccAddress, authtypes.BaseAccount, AccountsIndexes]
+}
+
+func NewKeeper(storeKey *storetypes.KVStoreKey, cdc codec.BinaryCodec) Keeper {
+ sb := collections.NewSchemaBuilder(sdk.OpenKVStore(storeKey))
+ return Keeper{
+ Accounts: collections.NewIndexedMap(
+ sb, AccountsPrefix, "accounts",
+ sdk.AccAddressKey, codec.CollValue[authtypes.BaseAccount](cdc),
+ NewAccountIndexes(sb),
+ ),
+ }
+}
+```
+
+### Working with IndexedMaps
+
+Whilst instantiating a `collections.IndexedMap` is tedious, working with one is extremely smooth.
+
+Let's take the full example, and expand it with some use-cases.
+
+```go
+package docs
+
+import (
+ "cosmossdk.io/collections"
+ "cosmossdk.io/collections/indexes"
+ storetypes "cosmossdk.io/store/types"
+ "github.com/cosmos/cosmos-sdk/codec"
+ sdk "github.com/cosmos/cosmos-sdk/types"
+ authtypes "github.com/cosmos/cosmos-sdk/x/auth/types"
+)
+
+var AccountsNumberIndexPrefix = collections.NewPrefix(1)
+
+type AccountsIndexes struct {
+ Number *indexes.Unique[uint64, sdk.AccAddress, authtypes.BaseAccount]
+}
+
+func (a AccountsIndexes) IndexesList() []collections.Index[sdk.AccAddress, authtypes.BaseAccount] {
+ return []collections.Index[sdk.AccAddress, authtypes.BaseAccount]{a.Number}
+}
+
+func NewAccountIndexes(sb *collections.SchemaBuilder) AccountsIndexes {
+ return AccountsIndexes{
+ Number: indexes.NewUnique(
+ sb, AccountsNumberIndexPrefix, "accounts_by_number",
+ collections.Uint64Key, sdk.AccAddressKey,
+ func(_ sdk.AccAddress, v authtypes.BaseAccount) (uint64, error) {
+ return v.AccountNumber, nil
+ },
+ ),
+ }
+}
+
+var AccountsPrefix = collections.NewPrefix(0)
+
+type Keeper struct {
+ Schema collections.Schema
+ Accounts *collections.IndexedMap[sdk.AccAddress, authtypes.BaseAccount, AccountsIndexes]
+}
+
+func NewKeeper(storeKey *storetypes.KVStoreKey, cdc codec.BinaryCodec) Keeper {
+ sb := collections.NewSchemaBuilder(sdk.OpenKVStore(storeKey))
+ return Keeper{
+ Accounts: collections.NewIndexedMap(
+ sb, AccountsPrefix, "accounts",
+ sdk.AccAddressKey, codec.CollValue[authtypes.BaseAccount](cdc),
+ NewAccountIndexes(sb),
+ ),
+ }
+}
+
+func (k Keeper) CreateAccount(ctx sdk.Context, addr sdk.AccAddress) error {
+ nextAccountNumber := k.getNextAccountNumber()
+
+ newAcc := authtypes.BaseAccount{
+ AccountNumber: nextAccountNumber,
+ Sequence: 0,
+ }
+
+ return k.Accounts.Set(ctx, addr, newAcc)
+}
+
+func (k Keeper) RemoveAccount(ctx sdk.Context, addr sdk.AccAddress) error {
+ return k.Accounts.Remove(ctx, addr)
+}
+
+func (k Keeper) GetAccountByNumber(ctx sdk.Context, accNumber uint64) (sdk.AccAddress, authtypes.BaseAccount, error) {
+ accAddress, err := k.Accounts.Indexes.Number.MatchExact(ctx, accNumber)
+ if err != nil {
+ return nil, authtypes.BaseAccount{}, err
+ }
+
+ acc, err := k.Accounts.Get(ctx, accAddress)
+ return accAddress, acc, err
+}
+
+func (k Keeper) GetAccountsByNumber(ctx sdk.Context, startAccNum, endAccNum uint64) ([]authtypes.BaseAccount, error) {
+ rng := new(collections.Range[uint64]).
+ StartInclusive(startAccNum).
+ EndInclusive(endAccNum)
+
+ iter, err := k.Accounts.Indexes.Number.Iterate(ctx, rng)
+ if err != nil {
+ return nil, err
+ }
+
+ return indexes.CollectValues(ctx, k.Accounts, iter)
+}
+
+
+func (k Keeper) getNextAccountNumber() uint64 {
+ // placeholder: a real module would maintain this counter in state,
+ // for example with a collections.Sequence
+ return 0
+}
+```
+
+## Collections with interfaces as values
+
+Although the Cosmos SDK is shifting away from the use of the interface registry, there are still some places where it is used.
+In order to support old code, we have to support collections with interface values.
+
+The generic `codec.CollValue` is not able to handle interface values, so we need to use the special type `codec.CollInterfaceValue`.
+`codec.CollInterfaceValue` takes a `codec.BinaryCodec` as an argument and uses it to marshal and unmarshal values as interfaces.
+`codec.CollInterfaceValue` lives in the `codec` package, whose import path is `github.com/cosmos/cosmos-sdk/codec`.
+
+### Instantiating Collections with interface values
+
+In order to instantiate a collection with interface values, we need to use `codec.CollInterfaceValue` instead of `codec.CollValue`.
+
+```go
+package example
+
+import (
+ "cosmossdk.io/collections"
+ storetypes "cosmossdk.io/store/types"
+ "github.com/cosmos/cosmos-sdk/codec"
+ sdk "github.com/cosmos/cosmos-sdk/types"
+ authtypes "github.com/cosmos/cosmos-sdk/x/auth/types"
+)
+
+var AccountsPrefix = collections.NewPrefix(0)
+
+type Keeper struct {
+ Schema collections.Schema
+ Accounts collections.Map[sdk.AccAddress, sdk.AccountI]
+}
+
+func NewKeeper(cdc codec.BinaryCodec, storeKey *storetypes.KVStoreKey) Keeper {
+ sb := collections.NewSchemaBuilder(sdk.OpenKVStore(storeKey))
+ return Keeper{
+ Accounts: collections.NewMap(
+ sb, AccountsPrefix, "accounts",
+ sdk.AccAddressKey, codec.CollInterfaceValue[sdk.AccountI](cdc),
+ ),
+ }
+}
+
+func (k Keeper) SaveBaseAccount(ctx sdk.Context, account authtypes.BaseAccount) error {
+ return k.Accounts.Set(ctx, account.GetAddress(), account)
+}
+
+func (k Keeper) SaveModuleAccount(ctx sdk.Context, account authtypes.ModuleAccount) error {
+ return k.Accounts.Set(ctx, account.GetAddress(), account)
+}
+
+func (k Keeper) GetAccount(ctx sdk.Context, addr sdk.AccAddress) (sdk.AccountI, error) {
+ return k.Accounts.Get(ctx, addr)
+}
+```
+
+## Triple key
+
+`collections.Triple` is a special type of key composed of three keys. It works in the same way as `collections.Pair`.
+
+Let's see an example.
+
+```go
+package example
+
+import (
+ "context"
+
+ "cosmossdk.io/collections"
+ storetypes "cosmossdk.io/store/types"
+ sdk "github.com/cosmos/cosmos-sdk/types"
+)
+
+type AccAddress = string
+type ValAddress = string
+
+type Keeper struct {
+ // let's simulate we have redelegations which are stored as a triple key composed of
+ // the delegator, the source validator and the destination validator.
+ Redelegations collections.KeySet[collections.Triple[AccAddress, ValAddress, ValAddress]]
+}
+
+func NewKeeper(storeKey *storetypes.KVStoreKey) Keeper {
+ sb := collections.NewSchemaBuilder(sdk.OpenKVStore(storeKey))
+ return Keeper{
+  Redelegations: collections.NewKeySet(
+   sb, collections.NewPrefix(0), "redelegations",
+   collections.TripleKeyCodec(collections.StringKey, collections.StringKey, collections.StringKey),
+  ),
+ }
+}
+
+// RedelegationsByDelegator iterates over all the redelegations of a given delegator and calls onResult providing
+// each redelegation from source validator towards the destination validator.
+func (k Keeper) RedelegationsByDelegator(ctx context.Context, delegator AccAddress, onResult func(src, dst ValAddress) (stop bool, err error)) error {
+ rng := collections.NewPrefixedTripleRange[AccAddress, ValAddress, ValAddress](delegator)
+ return k.Redelegations.Walk(ctx, rng, func(key collections.Triple[AccAddress, ValAddress, ValAddress]) (stop bool, err error) {
+ return onResult(key.K2(), key.K3())
+ })
+}
+
+// RedelegationsByDelegatorAndValidator iterates over all the redelegations of a given delegator and its source validator and calls onResult for each
+// destination validator.
+func (k Keeper) RedelegationsByDelegatorAndValidator(ctx context.Context, delegator AccAddress, validator ValAddress, onResult func(dst ValAddress) (stop bool, err error)) error {
+ rng := collections.NewSuperPrefixedTripleRange[AccAddress, ValAddress, ValAddress](delegator, validator)
+ return k.Redelegations.Walk(ctx, rng, func(key collections.Triple[AccAddress, ValAddress, ValAddress]) (stop bool, err error) {
+ return onResult(key.K3())
+ })
+}
+```
+
+## Advanced Usages
+
+### Alternative Value Codec
+
+The `codec.AltValueCodec` allows a collection to decode values using a different codec than the one used to encode them.
+Basically, it enables decoding two different byte representations of the same concrete value.
+It can be used to lazily migrate values from one byte representation to another, when the new representation
+is not able to decode the old one.
+
+A concrete example can be found in `x/bank` where the balance was initially stored as `Coin` and then migrated to `Int`.
+
+```go
+var BankBalanceValueCodec = codec.NewAltValueCodec(sdk.IntValue, func(b []byte) (sdk.Int, error) {
+ coin := sdk.Coin{}
+ err := coin.Unmarshal(b)
+ if err != nil {
+ return sdk.Int{}, err
+ }
+ return coin.Amount, nil
+})
+```
+
+The above example shows how to create an `AltValueCodec` that can decode both `sdk.Int` and `sdk.Coin` values. The provided
+decoder function is used as a fallback in case the default decoder fails. When the value is encoded back into state,
+the default encoder is used. This allows values to be lazily migrated to the new byte representation.
diff --git a/copy-of-sdk-docs/build/packages/README.md b/copy-of-sdk-docs/build/packages/README.md
new file mode 100644
index 00000000..e6dbeeb2
--- /dev/null
+++ b/copy-of-sdk-docs/build/packages/README.md
@@ -0,0 +1,38 @@
+---
+sidebar_position: 0
+---
+
+# Packages
+
+The Cosmos SDK is a collection of Go modules. This section provides documentation on various packages that can be used when developing a Cosmos SDK chain.
+It lists all standalone Go modules that are part of the Cosmos SDK.
+
+:::tip
+For more information on SDK modules, see the [SDK Modules](https://docs.cosmos.network/main/modules) section.
+For more information on SDK tooling, see the [Tooling](https://docs.cosmos.network/main/tooling) section.
+:::
+
+## Core
+
+* [Core](https://pkg.go.dev/cosmossdk.io/core) - Core library defining SDK interfaces ([ADR-063](https://docs.cosmos.network/main/architecture/adr-063-core-module-api))
+* [API](https://pkg.go.dev/cosmossdk.io/api) - API library containing generated SDK Pulsar API
+* [Store](https://pkg.go.dev/cosmossdk.io/store) - Implementation of the Cosmos SDK store
+
+## State Management
+
+* [Collections](./02-collections.md) - State management library
+
+## Automation
+
+* [Depinject](./01-depinject.md) - Dependency injection framework
+* [Client/v2](https://pkg.go.dev/cosmossdk.io/client/v2) - Library powering [AutoCLI](https://docs.cosmos.network/main/core/autocli)
+
+## Utilities
+
+* [Log](https://pkg.go.dev/cosmossdk.io/log) - Logging library
+* [Errors](https://pkg.go.dev/cosmossdk.io/errors) - Error handling library
+* [Math](https://pkg.go.dev/cosmossdk.io/math) - Math library for SDK arithmetic operations
+
+## Example
+
+* [SimApp](https://pkg.go.dev/cosmossdk.io/simapp) - SimApp is **the** sample Cosmos SDK chain. This package should not be imported in your application.
diff --git a/copy-of-sdk-docs/build/packages/_category_.json b/copy-of-sdk-docs/build/packages/_category_.json
new file mode 100644
index 00000000..5ed885eb
--- /dev/null
+++ b/copy-of-sdk-docs/build/packages/_category_.json
@@ -0,0 +1,5 @@
+{
+ "label": "Packages",
+ "position": 4,
+ "link": null
+}
\ No newline at end of file
diff --git a/copy-of-sdk-docs/build/rfc/PROCESS.md b/copy-of-sdk-docs/build/rfc/PROCESS.md
new file mode 100644
index 00000000..a34af226
--- /dev/null
+++ b/copy-of-sdk-docs/build/rfc/PROCESS.md
@@ -0,0 +1,62 @@
+# RFC Creation Process
+
+1. Copy the `rfc-template.md` file. Use the following filename pattern: `rfc-next_number-title.md`
+2. Create a draft Pull Request if you want to get early feedback.
+3. Make sure the context and the solution are clear and well documented.
+4. Add an entry to the list in the [README](./README.md) file.
+5. Create a Pull Request to propose a new RFC.
+
+## What is an RFC?
+
+An RFC is a sort of async whiteboarding session. It is meant to replace the need for a distributed team to come together to make a decision. Currently, the Cosmos SDK team and contributors are distributed around the world. The team conducts working groups to have a synchronous discussion and an RFC can be used to capture the discussion for a wider audience to better understand the changes that are coming to the software.
+
+The main distinction the Cosmos SDK draws between an RFC and an ADR is that an RFC is used to come to consensus and circulate information about a potential change or feature. An ADR is used when there is already consensus on a feature or change, so there is no need to build consensus around it. An ADR articulates the changes and involves a lower amount of communication.
+
+## RFC life cycle
+
+RFC creation is an **iterative** process. An RFC is meant as a distributed collaboration session; it may gather many comments and is usually the by-product of having no working group or synchronous communication.
+
+1. Proposals could start with a new GitHub Issue, be the result of existing Issues, or grow out of a discussion.
+
+2. An RFC doesn't have to arrive on `main` with an _accepted_ status in a single PR. If the motivation is clear and the solution is sound, we SHOULD be able to merge it and keep a _proposed_ status. It's preferable to have an iterative approach rather than long-lived, unmerged Pull Requests.
+
+3. If a _proposed_ RFC is merged, then it should clearly document outstanding issues either in the RFC document notes or in a GitHub Issue.
+
+4. The PR SHOULD always be merged. In the case of a faulty RFC, we still prefer to merge it with a _rejected_ status. The only time the RFC SHOULD NOT be merged is if the author abandons it.
+
+5. Merged RFCs SHOULD NOT be pruned.
+
+6. If there is consensus and enough feedback then the RFC can be accepted.
+
+> Note: An RFC is written when there is no working group or team session on the problem. RFCs are meant as a distributed whiteboarding session. If there is a working group on the proposal, there is no need for an RFC, as synchronous whiteboarding is already going on.
+
+### RFC status
+
+Status has two components:
+
+```text
+{CONSENSUS STATUS}
+```
+
+#### Consensus Status
+
+```text
+DRAFT -> PROPOSED -> LAST CALL yyyy-mm-dd -> ACCEPTED | REJECTED -> SUPERSEDED by ADR-xxx
+                  \        |
+                   \       |
+                    v      v
+                     ABANDONED
+```
+
+* `DRAFT`: [optional] an RFC that is a work in progress and not yet ready for general review. It is used to present early work and gather early feedback in a Draft Pull Request.
+* `PROPOSED`: an RFC covering a full solution architecture that is still under review; project stakeholders have not reached an agreement yet.
+* `LAST CALL yyyy-mm-dd`: [optional] a clear notice that we are close to accepting updates. Changing the status to `LAST CALL` means that social consensus (of Cosmos SDK maintainers) has been reached, and we still want to give the community time to react or analyze.
+* `ACCEPTED`: an RFC that represents a currently implemented or to-be-implemented architecture design.
+* `REJECTED`: an RFC can go from PROPOSED or ACCEPTED to REJECTED if the project stakeholders reach consensus to reject it.
+* `SUPERSEDED by ADR-xxx`: an RFC that has been superseded by a new ADR.
+* `ABANDONED`: the RFC is no longer pursued by the original authors.
+
+## Language used in RFC
+
+* The background/goal should be written in the present tense.
+* Avoid writing in the first person.
diff --git a/copy-of-sdk-docs/build/rfc/README.md b/copy-of-sdk-docs/build/rfc/README.md
new file mode 100644
index 00000000..8b8ead24
--- /dev/null
+++ b/copy-of-sdk-docs/build/rfc/README.md
@@ -0,0 +1,38 @@
+---
+sidebar_position: 1
+---
+
+# Requests for Comments
+
+A Request for Comments (RFC) is a record of discussion on an open-ended topic
+related to the design and implementation of the Cosmos SDK, for which no
+immediate decision is required.
+
+The purpose of an RFC is to serve as a historical record of a high-level
+discussion that might otherwise only be recorded in an ad-hoc way (for example,
+via gists or Google docs) that are difficult to discover for someone after the
+fact. An RFC _may_ give rise to more specific architectural _decisions_ for
+the Cosmos SDK, but those decisions must be recorded separately in
+[Architecture Decision Records (ADR)](../architecture).
+
+As a rule of thumb, if you can articulate a specific question that needs to be
+answered, write an ADR. If you need to explore the topic and get input from
+others to know what questions need to be answered, an RFC may be appropriate.
+
+## RFC Content
+
+An RFC should provide:
+
+* A **changelog**, documenting when and how the RFC has changed.
+* An **abstract**, briefly summarizing the topic so the reader can quickly tell
+ whether it is relevant to their interest.
+* Any **background** a reader will need to understand and participate in the
+ substance of the discussion (links to other documents are fine here).
+* The **discussion**, the primary content of the document.
+
+The [rfc-template.md](./rfc-template.md) file includes placeholders for these
+sections.
+
+## Table of Contents
+
+* [RFC-001: Tx Validation](./rfc-001-tx-validation.md)
diff --git a/copy-of-sdk-docs/build/rfc/_category_.json b/copy-of-sdk-docs/build/rfc/_category_.json
new file mode 100644
index 00000000..a5712bda
--- /dev/null
+++ b/copy-of-sdk-docs/build/rfc/_category_.json
@@ -0,0 +1,5 @@
+{
+ "label": "RFC",
+ "position": 7,
+ "link": null
+}
\ No newline at end of file
diff --git a/copy-of-sdk-docs/build/rfc/rfc-001-tx-validation.md b/copy-of-sdk-docs/build/rfc/rfc-001-tx-validation.md
new file mode 100644
index 00000000..923e1c72
--- /dev/null
+++ b/copy-of-sdk-docs/build/rfc/rfc-001-tx-validation.md
@@ -0,0 +1,25 @@
+# RFC 001: Transaction Validation
+
+## Changelog
+
+* 2023-03-12: Proposed
+
+## Background
+
+Transaction validation is crucial to a functioning state machine. Within the Cosmos SDK there are two validation flows: one outside the message server and one within it. The flow outside the message server is the `ValidateBasic` function. It is called in the antehandler on both `CheckTx` and `DeliverTx`. There is overhead, and sometimes duplication, of validation between these two flows. This extra validation provides an additional check before a transaction enters the mempool.
+
+With the deprecation of [`GetSigners`](https://github.com/cosmos/cosmos-sdk/issues/11275) we have the optionality to remove [sdk.Msg](https://github.com/cosmos/cosmos-sdk/blob/16a5404f8e00ddcf8857c8a55dca2f7c109c29bc/types/tx_msg.go#L16) and the `ValidateBasic` function.
+
+With the separation of CometBFT and the Cosmos SDK, there is a lack of control over which transactions get broadcast and included in a block. This extra validation in the antehandler is meant to help in this case. In most cases the transaction is, or should be, simulated against a node for validation. With this flow, transactions will be treated the same.
+
+## Proposal
+
+The acceptance of this RFC would move the validation performed in `ValidateBasic` into the modules' message servers, and update tutorials and docs to remove mention of `ValidateBasic` in favour of handling all validation for a message where it is executed.
+
+We can and will still support the `ValidateBasic` function for users and provide an extension interface for the function once `sdk.Msg` is deprecated.
+
+> Note: This is how messages are handled in VMs like Ethereum and CosmWasm.
+
+### Consequences
+
+The consequence of updating the transaction flow is that transactions that may previously have failed in the `ValidateBasic` flow will now be included in a block and have fees charged.
diff --git a/copy-of-sdk-docs/build/rfc/rfc-template.md b/copy-of-sdk-docs/build/rfc/rfc-template.md
new file mode 100644
index 00000000..417a795d
--- /dev/null
+++ b/copy-of-sdk-docs/build/rfc/rfc-template.md
@@ -0,0 +1,83 @@
+# RFC {RFC-NUMBER}: {TITLE}
+
+## Changelog
+
+* {date}: {changelog}
+
+## Background
+
+> The next section is the "Background" section. This section should be at least two paragraphs and can take up to a whole
+> page in some cases. The guiding goal of the background section is: as a newcomer to this project (new employee, team
+> transfer), can I read the background section and follow any links to get the full context of why this change is
+> necessary?
+>
+> If you can't show a random engineer the background section and have them acquire nearly full context on the necessity
+> for the RFC, then the background section is not full enough. To help achieve this, link to prior RFCs, discussions, and
+> more here as necessary to provide context so you don't have to simply repeat yourself.
+
+
+## Proposal
+
+> The next required section is "Proposal" or "Goal". Given the background above, this section proposes a solution.
+> This should be an overview of the "how" for the solution, but for details further sections will be used.
+
+
+## Abandoned Ideas (Optional)
+
+> As RFCs evolve, it is common that there are ideas that are abandoned. Rather than simply deleting them from the
+> document, you should try to organize them into sections that make it clear they're abandoned while explaining why they
+> were abandoned.
+>
+> When sharing your RFC with others or having someone look back on your RFC in the future, it is common to walk the same
+> path and fall into the same pitfalls that we've since matured from. Abandoned ideas are a way to recognize that path
+> and explain the pitfalls and why they were abandoned.
+
+## Decision
+
+> This section describes alternative designs to the chosen design. This section
+> is important: if an ADR does not list any alternatives, it should be
+> considered not fully thought through.
+
+## Consequences (optional)
+
+> This section describes the resulting context, after applying the decision. All
+> consequences should be listed here, not just the "positive" ones. A particular
+> decision may have positive, negative, and neutral consequences, but all of them
+> affect the team and project in the future.
+
+### Backwards Compatibility
+
+> All ADRs that introduce backwards incompatibilities must include a section
+> describing these incompatibilities and their severity. The ADR must explain
+> how the author proposes to deal with these incompatibilities. ADR submissions
+> without a sufficient backwards compatibility treatise may be rejected outright.
+
+### Positive
+
+> {positive consequences}
+
+### Negative
+
+> {negative consequences}
+
+### Neutral
+
+> {neutral consequences}
+
+
+
+### References
+
+> Links to external materials needed to follow the discussion may be added here.
+>
+> In addition, if the discussion in a request for comments leads to any design
+> decisions, it may be helpful to add links to the ADR documents here after the
+> discussion has settled.
+
+## Discussion
+
+> This section contains the core of the discussion.
+>
+> There is no fixed format for this section, but ideally changes to this
+> section should be updated before merging to reflect any discussion that took
+> place on the PR that made those changes.
diff --git a/copy-of-sdk-docs/build/rfc/rfc/PROCESS.md b/copy-of-sdk-docs/build/rfc/rfc/PROCESS.md
new file mode 100644
index 00000000..20f08a6e
--- /dev/null
+++ b/copy-of-sdk-docs/build/rfc/rfc/PROCESS.md
@@ -0,0 +1,62 @@
+# RFC Creation Process
+
+1. Copy the `rfc-template.md` file. Use the following filename pattern: `rfc-next_number-title.md`
+2. Create a draft Pull Request if you want to get early feedback.
+3. Make sure the context and the solution are clear and well documented.
+4. Add an entry to the list in the [README](./README.md) file.
+5. Create a Pull Request to propose a new RFC.
+
+## What is an RFC?
+
+An RFC is a sort of async whiteboarding session. It is meant to replace the need for a distributed team to come together to make a decision. Currently, the Cosmos SDK team and contributors are distributed around the world. The team conducts working groups to have a synchronous discussion and an RFC can be used to capture the discussion for a wider audience to better understand the changes that are coming to the software.
+
+The main difference the Cosmos SDK draws between an RFC and an ADR is that an RFC is used to reach consensus and circulate information about a potential change or feature, while an ADR is used when there is already consensus on a feature or change and the change itself needs to be articulated. An ADR articulates the changes and involves less communication.
+
+## RFC life cycle
+
+RFC creation is an **iterative** process. An RFC is meant as a distributed collaboration session; it may gather many comments and is usually the by-product of having no working group or synchronous communication on the topic.
+
+1. Proposals could start with a new GitHub Issue, be a result of existing Issues or a discussion.
+
+2. An RFC doesn't have to arrive on `main` with an _accepted_ status in a single PR. If the motivation is clear and the solution is sound, we SHOULD be able to merge it and keep a _proposed_ status. It's preferable to have an iterative approach rather than long-lived, unmerged Pull Requests.
+
+3. If a _proposed_ RFC is merged, then it should clearly document outstanding issues either in the RFC document notes or in a GitHub Issue.
+
+4. The PR SHOULD always be merged. In the case of a faulty RFC, we still prefer to merge it with a _rejected_ status. The only time the RFC SHOULD NOT be merged is if the author abandons it.
+
+5. Merged RFCs SHOULD NOT be pruned.
+
+6. If there is consensus and enough feedback then the RFC can be accepted.
+
+> Note: An RFC is written when there is no working group or team session on the problem. RFCs are meant as a distributed whiteboarding session. If there is a working group on the proposal, there is no need for an RFC, as synchronous whiteboarding is already taking place.
+
+### RFC status
+
+Status has one component:
+
+```text
+{CONSENSUS STATUS}
+```
+
+#### Consensus Status
+
+```text
+DRAFT -> PROPOSED -> LAST CALL yyyy-mm-dd -> ACCEPTED | REJECTED -> SUPERSEDED by ADR-xxx
+                  \        |
+                   \       |
+                    v      v
+                     ABANDONED
+```
+
+* `DRAFT`: [optional] an RFC that is a work in progress and not yet ready for general review. It is used to present early work and gather early feedback in a Draft Pull Request.
+* `PROPOSED`: an RFC covering a full solution architecture that is still under review; project stakeholders have not reached an agreement yet.
+* `LAST CALL yyyy-mm-dd`: [optional] a clear notice that we are close to accepting updates. Changing the status to `LAST CALL` means that social consensus (of Cosmos SDK maintainers) has been reached, and we still want to give the community time to react or analyze.
+* `ACCEPTED`: an RFC that represents a currently implemented or to-be-implemented architecture design.
+* `REJECTED`: an RFC can go from PROPOSED or ACCEPTED to REJECTED if the project stakeholders reach consensus to reject it.
+* `SUPERSEDED by ADR-xxx`: an RFC that has been superseded by a new ADR.
+* `ABANDONED`: the RFC is no longer pursued by the original authors.
+
+## Language used in RFC
+
+* The background/goal should be written in the present tense.
+* Avoid writing in the first person.
diff --git a/copy-of-sdk-docs/build/rfc/rfc/README.md b/copy-of-sdk-docs/build/rfc/rfc/README.md
new file mode 100644
index 00000000..8b8ead24
--- /dev/null
+++ b/copy-of-sdk-docs/build/rfc/rfc/README.md
@@ -0,0 +1,38 @@
+---
+sidebar_position: 1
+---
+
+# Requests for Comments
+
+A Request for Comments (RFC) is a record of discussion on an open-ended topic
+related to the design and implementation of the Cosmos SDK, for which no
+immediate decision is required.
+
+The purpose of an RFC is to serve as a historical record of a high-level
+discussion that might otherwise only be recorded in an ad-hoc way (for example,
+via gists or Google docs) that are difficult to discover for someone after the
+fact. An RFC _may_ give rise to more specific architectural _decisions_ for
+the Cosmos SDK, but those decisions must be recorded separately in
+[Architecture Decision Records (ADR)](../architecture).
+
+As a rule of thumb, if you can articulate a specific question that needs to be
+answered, write an ADR. If you need to explore the topic and get input from
+others to know what questions need to be answered, an RFC may be appropriate.
+
+## RFC Content
+
+An RFC should provide:
+
+* A **changelog**, documenting when and how the RFC has changed.
+* An **abstract**, briefly summarizing the topic so the reader can quickly tell
+ whether it is relevant to their interest.
+* Any **background** a reader will need to understand and participate in the
+ substance of the discussion (links to other documents are fine here).
+* The **discussion**, the primary content of the document.
+
+The [rfc-template.md](./rfc-template.md) file includes placeholders for these
+sections.
+
+## Table of Contents
+
+* [RFC-001: Tx Validation](./rfc-001-tx-validation.md)
diff --git a/copy-of-sdk-docs/build/rfc/rfc/_category_.json b/copy-of-sdk-docs/build/rfc/rfc/_category_.json
new file mode 100644
index 00000000..a5712bda
--- /dev/null
+++ b/copy-of-sdk-docs/build/rfc/rfc/_category_.json
@@ -0,0 +1,5 @@
+{
+ "label": "RFC",
+ "position": 7,
+ "link": null
+}
\ No newline at end of file
diff --git a/copy-of-sdk-docs/build/rfc/rfc/rfc-001-tx-validation.md b/copy-of-sdk-docs/build/rfc/rfc/rfc-001-tx-validation.md
new file mode 100644
index 00000000..80dc8e1f
--- /dev/null
+++ b/copy-of-sdk-docs/build/rfc/rfc/rfc-001-tx-validation.md
@@ -0,0 +1,25 @@
+# RFC 001: Transaction Validation
+
+## Changelog
+
+* 2023-03-12: Proposed
+
+## Background
+
+Transaction validation is crucial to a functioning state machine. Within the Cosmos SDK there are two validation flows: one outside the message server and one within it. The flow outside the message server is the `ValidateBasic` function. It is called in the antehandler on both `CheckTx` and `DeliverTx`. There is overhead, and sometimes duplication, of validation between these two flows. This extra validation provides an additional check before a transaction enters the mempool.
+
+With the deprecation of [`GetSigners`](https://github.com/cosmos/cosmos-sdk/issues/11275) we have the optionality to remove [sdk.Msg](https://github.com/cosmos/cosmos-sdk/blob/16a5404f8e00ddcf8857c8a55dca2f7c109c29bc/types/tx_msg.go#L16) and the `ValidateBasic` function.
+
+With the separation of CometBFT and the Cosmos SDK, there is a lack of control over which transactions get broadcast and included in a block. This extra validation in the antehandler is meant to help in this case. In most cases the transaction is, or should be, simulated against a node for validation. With this flow, transactions will be treated the same.
+
+## Proposal
+
+The acceptance of this RFC would move the validation performed in `ValidateBasic` into the modules' message servers, and update tutorials and docs to remove mention of `ValidateBasic` in favour of handling all validation for a message where it is executed.
+
+We can and will still support the `ValidateBasic` function for users and provide an extension interface of the function once `sdk.Msg` is deprecated.
+
+> Note: This is how messages are handled in VMs like Ethereum and CosmWasm.
+
+### Consequences
+
+The consequence of updating the transaction flow is that transactions that may previously have failed in the `ValidateBasic` flow will now be included in a block and have the fees charged.
diff --git a/copy-of-sdk-docs/build/rfc/rfc/rfc-template.md b/copy-of-sdk-docs/build/rfc/rfc/rfc-template.md
new file mode 100644
index 00000000..f4e79fbb
--- /dev/null
+++ b/copy-of-sdk-docs/build/rfc/rfc/rfc-template.md
@@ -0,0 +1,83 @@
+# RFC {RFC-NUMBER}: {TITLE}
+
+## Changelog
+
+* {date}: {changelog}
+
+## Background
+
+> The next section is the "Background" section. This section should be at least two paragraphs and can take up to a whole
+> page in some cases. The guiding goal of the background section is: as a newcomer to this project (new employee, team
+> transfer), can I read the background section and follow any links to get the full context of why this change is
+> necessary?
+>
+> If you can't show a random engineer the background section and have them acquire nearly full context on the necessity
+> for the RFC, then the background section is not full enough. To help achieve this, link to prior RFCs, discussions, and
+> more here as necessary to provide context so you don't have to simply repeat yourself.
+
+
+## Proposal
+
+> The next required section is "Proposal" or "Goal". Given the background above, this section proposes a solution.
+> This should be an overview of the "how" for the solution, but for details further sections will be used.
+
+
+## Abandoned Ideas (Optional)
+
+> As RFCs evolve, it is common that there are ideas that are abandoned. Rather than simply deleting them from the
+> document, you should try to organize them into sections that make it clear they're abandoned while explaining why they
+> were abandoned.
+>
+> When sharing your RFC with others or having someone look back on your RFC in the future, it is common to walk the same
+> path and fall into the same pitfalls that we've since matured from. Abandoned ideas are a way to recognize that path
+> and explain the pitfalls and why they were abandoned.
+
+## Decision
+
+> This section describes alternative designs to the chosen design. This section
+> is important: if an ADR does not list any alternatives, it should be
+> considered not fully thought through.
+
+## Consequences (optional)
+
+> This section describes the resulting context, after applying the decision. All
+> consequences should be listed here, not just the "positive" ones. A particular
+> decision may have positive, negative, and neutral consequences, but all of them
+> affect the team and project in the future.
+
+### Backwards Compatibility
+
+> All ADRs that introduce backwards incompatibilities must include a section
+> describing these incompatibilities and their severity. The ADR must explain
+> how the author proposes to deal with these incompatibilities. ADR submissions
+> without a sufficient backwards compatibility treatise may be rejected outright.
+
+### Positive
+
+> {positive consequences}
+
+### Negative
+
+> {negative consequences}
+
+### Neutral
+
+> {neutral consequences}
+
+
+
+### References
+
+> Links to external materials needed to follow the discussion may be added here.
+>
+> In addition, if the discussion in a request for comments leads to any design
+> decisions, it may be helpful to add links to the ADR documents here after the
+> discussion has settled.
+
+## Discussion
+
+> This section contains the core of the discussion.
+>
+> There is no fixed format for this section, but ideally changes to this
+> section should be updated before merging to reflect any discussion that took
+> place on the PR that made those changes.
diff --git a/copy-of-sdk-docs/build/spec/README.md b/copy-of-sdk-docs/build/spec/README.md
new file mode 100644
index 00000000..cca186ad
--- /dev/null
+++ b/copy-of-sdk-docs/build/spec/README.md
@@ -0,0 +1,25 @@
+---
+sidebar_position: 1
+---
+
+# Specifications
+
+This directory contains specifications for the modules of the Cosmos SDK as well as Interchain Standards (ICS) and other specifications.
+
+Cosmos SDK applications hold this state in a Merkle store. Updates to
+the store may be made during transactions and at the beginning and end of every
+block.
+
+## Cosmos SDK specifications
+
+* [Store](./store) - The core Merkle store that holds the state.
+* [Bech32](./addresses/bech32.md) - Address format for Cosmos SDK applications.
+
+## Modules specifications
+
+Go to the [module directory](https://docs.cosmos.network/main/modules)
+
+## CometBFT
+
+For details on the underlying blockchain and p2p protocols, see
+the [CometBFT specification](https://github.com/cometbft/cometbft/tree/main/spec).
diff --git a/copy-of-sdk-docs/build/spec/SPEC_MODULE.md b/copy-of-sdk-docs/build/spec/SPEC_MODULE.md
new file mode 100644
index 00000000..bb9ee251
--- /dev/null
+++ b/copy-of-sdk-docs/build/spec/SPEC_MODULE.md
@@ -0,0 +1,60 @@
+# Specification of Modules
+
+This file intends to outline the common structure for specifications within
+this directory.
+
+## Tense
+
+For consistency, specs should be written in passive present tense.
+
+## Pseudo-Code
+
+Generally, pseudo-code should be minimized throughout the spec. Often, simple
+bulleted lists describing a function's operations are sufficient and should
+be considered preferable. In certain instances, due to the complex nature of
+the functionality being described, pseudo-code may be the most suitable form of
+specification. In these cases, use of pseudo-code is permissible but should be
+presented concisely, ideally restricted to only the complex
+element as part of a larger description.
+
+## Common Layout
+
+The following generalized `README` structure should be used to break down
+specifications for modules. The following list is nonbinding and all sections are optional.
+
+* `# {Module Name}` - overview of the module
+* `## Concepts` - describe specialized concepts and definitions used throughout the spec
+* `## State` - specify and describe structures expected to be marshaled into the store, and their keys
+* `## State Transitions` - standard state transition operations triggered by hooks, messages, etc.
+* `## Messages` - specify message structure(s) and expected state machine behavior(s)
+* `## Begin Block` - specify any begin-block operations
+* `## End Block` - specify any end-block operations
+* `## Hooks` - describe available hooks to be called by/from this module
+* `## Events` - list and describe event tags used
+* `## Client` - list and describe CLI commands and gRPC and REST endpoints
+* `## Params` - list all module parameters, their types (in JSON) and examples
+* `## Future Improvements` - describe future improvements of this module
+* `## Tests` - acceptance tests
+* `## Appendix` - supplementary details referenced elsewhere within the spec
+
+### Notation for key-value mapping
+
+Within `## State` the following notation `->` should be used to describe key to
+value mapping:
+
+```text
+key -> value
+```
+
+To represent byte concatenation, the `|` symbol may be used. In addition, the encoding
+type may be specified, for example:
+
+```text
+0x00 | addressBytes | address2Bytes -> amino(value_object)
+```
+
+Additionally, index mappings may be specified by mapping to the `nil` value, for example:
+
+```text
+0x01 | address2Bytes | addressBytes -> nil
+```
diff --git a/copy-of-sdk-docs/build/spec/SPEC_STANDARD.md b/copy-of-sdk-docs/build/spec/SPEC_STANDARD.md
new file mode 100644
index 00000000..c08fbf04
--- /dev/null
+++ b/copy-of-sdk-docs/build/spec/SPEC_STANDARD.md
@@ -0,0 +1,121 @@
+# What is an SDK standard?
+
+An SDK standard is a design document describing a particular protocol, standard, or feature expected to be used by the Cosmos SDK. An SDK standard should list the desired properties of the standard, explain the design rationale, and provide a concise but comprehensive technical specification. The primary author is responsible for pushing the proposal through the standardization process, soliciting input and support from the community, and communicating with relevant stakeholders to ensure (social) consensus.
+
+## Sections
+
+An SDK standard consists of:
+
+* a synopsis,
+* overview and basic concepts,
+* technical specification,
+* history log, and
+* copyright notice.
+
+All top-level sections are required. References should be included inline as links, or tabulated at the bottom of the section if necessary. Included subsections should be listed in the order specified below.
+
+### Table Of Contents
+
+Provide a table of contents at the top of the file to help readers.
+
+### Synopsis
+
+The document should include a brief (~200 word) synopsis providing a high-level description of and rationale for the specification.
+
+### Overview and basic concepts
+
+This section should include a motivation subsection and a definition subsection if required:
+
+* *Motivation* - A rationale for the existence of the proposed feature, or the proposed changes to an existing feature.
+* *Definitions* - A list of new terms or concepts used in the document or required to understand it.
+
+### System model and properties
+
+This section should include an assumption subsection if any, the mandatory properties subsection, and a dependency subsection. Note that the first two subsections are tightly coupled: how to enforce a property will depend directly on the assumptions made. This subsection is important to capture the interactions of the specified feature with the "rest-of-the-world," i.e., with other features of the ecosystem.
+
+* *Assumptions* - A list of any assumptions made by the feature designer. It should capture which features are used by the feature under specification, and what do we expect from them.
+* *Properties* - A list of the desired properties or characteristics of the feature specified, and expected effects or failures when the properties are violated. In case it is relevant, it can also include a list of properties that the feature does not guarantee.
+* *Dependencies* - A list of the features that use the feature under specification and how.
+
+### Technical specification
+
+This is the main section of the document, and should contain protocol documentation, design rationale, required references, and technical details where appropriate.
+The section may have any or all of the following subsections, as appropriate to the particular specification. The API subsection is especially encouraged when appropriate.
+
+* *API* - A detailed description of the feature's API.
+* *Technical Details* - All technical details including syntax, diagrams, semantics, protocols, data structures, algorithms, and pseudocode as appropriate. The technical specification should be detailed enough such that separate correct implementations of the specification without knowledge of each other are compatible.
+* *Backwards Compatibility* - A discussion of compatibility (or lack thereof) with previous feature or protocol versions.
+* *Known Issues* - A list of known issues. This subsection is especially important for specifications of features already in use.
+* *Example Implementation* - A concrete example implementation or description of an expected implementation to serve as the primary reference for implementers.
+
+### History
+
+A specification should include a history section, listing any inspiring documents and a plaintext log of significant changes.
+
+See an example history section [below](#history-1).
+
+### Copyright
+
+A specification should include a copyright section waiving rights via [Apache 2.0](https://www.apache.org/licenses/LICENSE-2.0).
+
+## Formatting
+
+### General
+
+Specifications must be written in GitHub-flavored Markdown.
+
+For a GitHub-flavored Markdown cheat sheet, see [here](https://github.com/adam-p/markdown-here/wiki/Markdown-Cheatsheet). For a local Markdown renderer, see [here](https://github.com/joeyespo/grip).
+
+### Language
+
+Specifications should be written in Simple English, avoiding obscure terminology and unnecessary jargon. For excellent examples of Simple English, please see the [Simple English Wikipedia](https://simple.wikipedia.org/wiki/Main_Page).
+
+The keywords "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in specifications are to be interpreted as described in [RFC 2119](https://tools.ietf.org/html/rfc2119).
+
+### Pseudocode
+
+Pseudocode in specifications should be language-agnostic and formatted in a simple imperative standard, with line numbers, variables, simple conditional blocks, for loops, and
+English fragments where necessary to explain further functionality such as scheduling timeouts. LaTeX images should be avoided because they are challenging to review in diff form.
+
+Pseudocode for structs can be written in a simple language like TypeScript or golang, as interfaces.
+
+Example Golang pseudocode struct:
+
+```go
+type CacheKVStore interface {
+ cache: map[Key]Value
+ parent: KVStore
+ deleted: Key
+}
+```
+
+Pseudocode for algorithms should be written in simple Golang, as functions.
+
+Example pseudocode algorithm:
+
+```go
+func get(
+ store CacheKVStore,
+ key Key) Value {
+
+  value = store.cache.get(key)
+  if (value != null) {
+ return value
+ } else {
+ value = store.parent.get(key)
+ store.cache.set(key, value)
+ return value
+ }
+}
+```
+
+## History
+
+This specification was significantly inspired by and derived from IBC's [ICS](https://github.com/cosmos/ibc/blob/main/spec/ics-001-ics-standard/README.md), which
+was in turn derived from Ethereum's [EIP 1](https://github.com/ethereum/EIPs/blob/master/EIPS/eip-1.md).
+
+Nov 24, 2022 - Initial draft finished and submitted as a PR
+
+## Copyright
+
+All content herein is licensed under [Apache 2.0](https://www.apache.org/licenses/LICENSE-2.0).
diff --git a/copy-of-sdk-docs/build/spec/_category_.json b/copy-of-sdk-docs/build/spec/_category_.json
new file mode 100644
index 00000000..5c2ccf7d
--- /dev/null
+++ b/copy-of-sdk-docs/build/spec/_category_.json
@@ -0,0 +1,5 @@
+{
+ "label": "Specifications",
+ "position": 8,
+ "link": null
+}
\ No newline at end of file
diff --git a/copy-of-sdk-docs/build/spec/_ics/README.md b/copy-of-sdk-docs/build/spec/_ics/README.md
new file mode 100644
index 00000000..803e0c89
--- /dev/null
+++ b/copy-of-sdk-docs/build/spec/_ics/README.md
@@ -0,0 +1,3 @@
+# Cosmos ICS
+
+* [ICS030 - Signed Messages](./ics-030-signed-messages.md)
diff --git a/copy-of-sdk-docs/build/spec/_ics/ics-030-signed-messages.md b/copy-of-sdk-docs/build/spec/_ics/ics-030-signed-messages.md
new file mode 100644
index 00000000..a7c56715
--- /dev/null
+++ b/copy-of-sdk-docs/build/spec/_ics/ics-030-signed-messages.md
@@ -0,0 +1,192 @@
+# ICS 030: Cosmos Signed Messages
+
+>TODO: Replace with valid ICS number and possibly move to new location.
+
+* [Changelog](#changelog)
+* [Abstract](#abstract)
+* [Preliminary](#preliminary)
+* [Specification](#specification)
+* [Future Adaptations](#future-adaptations)
+* [API](#api)
+* [References](#references)
+
+## Status
+
+Proposed.
+
+## Changelog
+
+## Abstract
+
+Having the ability to sign messages off-chain has proven to be a fundamental
+aspect of nearly any blockchain. Signing messages off-chain has many added
+benefits, such as saving on computational costs and reducing on-chain
+transaction volume and overhead. Within the context of Cosmos, some of the
+major applications of signing such data include, but are not limited to,
+providing a cryptographically secure and verifiable means of proving validator
+identity, possibly associating it with some other framework or organization,
+and signing Cosmos messages with a Ledger or similar HSM device.
+
+A standardized protocol for hashing, signing, and verifying messages that can be
+implemented by the Cosmos SDK and other third-party organizations is needed. Such a
+standardized protocol subscribes to the following:
+
+* Contains a specification of human-readable and machine-verifiable typed structured data
+* Contains a framework for deterministic and injective encoding of structured data
+* Utilizes cryptographically secure hashing and signing algorithms
+* Contains a framework for supporting extensions and domain separation
+* Is invulnerable to chosen-ciphertext attacks
+* Protects against signing transactions a user did not intend to sign
+
+This specification is only concerned with the rationale and the standardized
+implementation of Cosmos signed messages. It does **not** concern itself with
+replay attacks, as that is left to the higher-level application implementation.
+If you view signed messages as a means of authorizing some action or data, then
+such an application would have to either treat them as idempotent or have
+mechanisms in place to reject known signed messages.
+
+## Preliminary
+
+The Cosmos message signing protocol will be parameterized with a cryptographically
+secure hashing algorithm `SHA-256` and a signing algorithm `S` that contains
+the operations `sign` and `verify`, which provide a digital signature over a set
+of bytes and verification of a signature, respectively.
+
+Note, our goal here is not to provide context and reasoning about why these
+particular algorithms were chosen, apart from the fact that they are the de
+facto algorithms used in CometBFT and the Cosmos SDK and that they satisfy our
+requirements for such cryptographic algorithms, such as resistance to collision
+and second pre-image attacks, as well as being [deterministic](https://en.wikipedia.org/wiki/Hash_function#Determinism) and [uniform](https://en.wikipedia.org/wiki/Hash_function#Uniformity).
+
+## Specification
+
+CometBFT has a well-established protocol for signing messages using a canonical
+JSON representation as defined [here](https://github.com/cometbft/cometbft/blob/master/types/canonical.go).
+
+An example of such a canonical JSON structure is CometBFT's vote structure:
+
+```go
+type CanonicalJSONVote struct {
+ ChainID string `json:"@chain_id"`
+ Type string `json:"@type"`
+ BlockID CanonicalJSONBlockID `json:"block_id"`
+ Height int64 `json:"height"`
+ Round int `json:"round"`
+ Timestamp string `json:"timestamp"`
+ VoteType byte `json:"type"`
+}
+```
+
+With such canonical JSON structures, the specification requires that they include
+meta fields: `@chain_id` and `@type`. These meta fields are reserved and must be
+included. They are both of type `string`. In addition, fields must be ordered
+in lexicographically ascending order.
+
+For the purposes of signing Cosmos messages, the `@chain_id` field must correspond
+to the Cosmos chain identifier. The user-agent should **refuse** signing if the
+`@chain_id` field does not match the currently active chain! The `@type` field
+must equal the constant `"message"`. The `@type` field corresponds to the type of
+structure the user will be signing in an application. For now, a user is only
+allowed to sign bytes of valid ASCII text ([see here](https://github.com/cometbft/cometbft/blob/v0.37.0/libs/strings/string.go#L35-L64)).
+However, this will change and evolve to support additional application-specific
+structures that are human-readable and machine-verifiable ([see Future Adaptations](#future-adaptations)).
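+
+As an illustration of the ASCII constraint referenced above, the check implied by the pattern `^[\x20-\x7E]+$` can be sketched in a few lines of Go. This is an illustrative sketch of the rule, not the CometBFT implementation itself:
+
+```go
+package main
+
+import "fmt"
+
+// isPrintableASCII reports whether s is non-empty and contains only
+// printable ASCII characters (0x20-0x7E), mirroring ^[\x20-\x7E]+$.
+func isPrintableASCII(s string) bool {
+	if len(s) == 0 {
+		return false
+	}
+	for i := 0; i < len(s); i++ {
+		if s[i] < 0x20 || s[i] > 0x7E {
+			return false
+		}
+	}
+	return true
+}
+
+func main() {
+	fmt.Println(isPrintableASCII("Hello, you can identify me as XYZ on keybase.")) // true
+	fmt.Println(isPrintableASCII("contains a tab\there")) // false: tab is a control character
+}
+```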
+
+Thus, we can have a canonical JSON structure for signing Cosmos messages using
+the [JSON schema](http://json-schema.org/) specification as such:
+
+```json
+{
+ "$schema": "http://json-schema.org/draft-04/schema#",
+ "$id": "cosmos/signing/typeData/schema",
+ "title": "The Cosmos signed message typed data schema.",
+ "type": "object",
+ "properties": {
+ "@chain_id": {
+ "type": "string",
+ "description": "The corresponding Cosmos chain identifier.",
+ "minLength": 1
+ },
+ "@type": {
+ "type": "string",
+ "description": "The message type. It must be 'message'.",
+ "enum": [
+ "message"
+ ]
+ },
+ "text": {
+ "type": "string",
+ "description": "The valid ASCII text to sign.",
+ "pattern": "^[\\x20-\\x7E]+$",
+ "minLength": 1
+ }
+ },
+ "required": [
+ "@chain_id",
+ "@type",
+ "text"
+ ]
+}
+```
+
+e.g.
+
+```json
+{
+ "@chain_id": "1",
+ "@type": "message",
+ "text": "Hello, you can identify me as XYZ on keybase."
+}
+```
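+
+The canonical structure above can be produced mechanically. The sketch below is illustrative rather than the SDK's actual API (`canonicalSignBytes` is a hypothetical helper): it relies on the fact that Go's `json.Marshal` emits map keys in lexicographically ascending order, which satisfies the ordering rule, and then hashes the result with `SHA-256` as described in the Preliminary section:
+
+```go
+package main
+
+import (
+	"crypto/sha256"
+	"encoding/json"
+	"fmt"
+)
+
+// canonicalSignBytes builds the canonical JSON structure for a Cosmos
+// signed message and returns its bytes. json.Marshal sorts map keys
+// lexicographically, satisfying the field-ordering requirement.
+func canonicalSignBytes(chainID, text string) ([]byte, error) {
+	msg := map[string]string{
+		"@chain_id": chainID,
+		"@type":     "message",
+		"text":      text,
+	}
+	return json.Marshal(msg)
+}
+
+func main() {
+	bz, err := canonicalSignBytes("1", "Hello, you can identify me as XYZ on keybase.")
+	if err != nil {
+		panic(err)
+	}
+	digest := sha256.Sum256(bz) // the input to the signing algorithm S
+	fmt.Printf("%s\n%x\n", bz, digest)
+}
+```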
+
+## Future Adaptations
+
+As applications can vary greatly in domain, it will be vital to support both
+domain separation and human-readable and machine-verifiable structures.
+
+Domain separation will allow application developers to prevent collisions of
+otherwise identical structures. It should be designed to be unique per
+application use and should be used directly in the signature encoding itself.
+
+Human-readable and machine-verifiable structures will allow end users to sign
+more complex structures, apart from just string messages, and still be able to
+know exactly what they are signing (as opposed to signing arbitrary bytes).
+
+Thus, in the future, the Cosmos signing message specification is expected
+to expand upon its canonical JSON structure to include such functionality.
+
+## API
+
+Application developers and designers should formalize a standard set of APIs that
+adhere to the following specification:
+
+-----
+
+### **cosmosSignBytes**
+
+Params:
+
+* `data`: the Cosmos signed message canonical JSON structure
+* `address`: the Bech32 Cosmos account address to sign data with
+
+Returns:
+
+* `signature`: the Cosmos signature derived using signing algorithm `S`
+
+-----
+
+### Examples
+
+Using `secp256k1` as the DSA `S`:
+
+```javascript
+data = {
+ "@chain_id": "1",
+ "@type": "message",
+ "text": "I hereby claim I am ABC on Keybase!"
+}
+
+cosmosSignBytes(data, "cosmos1pvsch6cddahhrn5e8ekw0us50dpnugwnlfngt3")
+> "0x7fc4a495473045022100dec81a9820df0102381cdbf7e8b0f1e2cb64c58e0ecda1324543742e0388e41a02200df37905a6505c1b56a404e23b7473d2c0bc5bcda96771d2dda59df6ed2b98f8"
+```
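+
+The `sign`/`verify` pair of the parameterized algorithm `S` can be sketched in Go. Here `ed25519` from the standard library stands in for `secp256k1` (which is not in Go's stdlib), so this is a model of the flow, not the signature scheme used in the example above:
+
+```go
+package main
+
+import (
+	"crypto/ed25519"
+	"crypto/sha256"
+	"fmt"
+)
+
+// signMessage hashes the canonical JSON bytes with SHA-256 and signs the
+// digest with the signing algorithm S (ed25519 here as a stand-in).
+func signMessage(priv ed25519.PrivateKey, canonicalJSON []byte) []byte {
+	digest := sha256.Sum256(canonicalJSON)
+	return ed25519.Sign(priv, digest[:])
+}
+
+// verifyMessage recomputes the digest and checks the signature against it.
+func verifyMessage(pub ed25519.PublicKey, canonicalJSON, sig []byte) bool {
+	digest := sha256.Sum256(canonicalJSON)
+	return ed25519.Verify(pub, digest[:], sig)
+}
+
+func main() {
+	pub, priv, err := ed25519.GenerateKey(nil) // nil reader defaults to crypto/rand
+	if err != nil {
+		panic(err)
+	}
+	data := []byte(`{"@chain_id":"1","@type":"message","text":"I hereby claim I am ABC on Keybase!"}`)
+	sig := signMessage(priv, data)
+	fmt.Println("signature verifies:", verifyMessage(pub, data, sig))
+}
+```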
+
+## References
diff --git a/copy-of-sdk-docs/build/spec/addresses/README.md b/copy-of-sdk-docs/build/spec/addresses/README.md
new file mode 100644
index 00000000..61db3aa9
--- /dev/null
+++ b/copy-of-sdk-docs/build/spec/addresses/README.md
@@ -0,0 +1,3 @@
+# Addresses spec
+
+* [Bech32](./bech32.md)
diff --git a/copy-of-sdk-docs/build/spec/addresses/bech32.md b/copy-of-sdk-docs/build/spec/addresses/bech32.md
new file mode 100644
index 00000000..dcf8349b
--- /dev/null
+++ b/copy-of-sdk-docs/build/spec/addresses/bech32.md
@@ -0,0 +1,21 @@
+# Bech32 on Cosmos
+
+The Cosmos network prefers to use the Bech32 address format wherever users must handle binary data. Bech32 encoding provides robust integrity checks on data, and the human-readable part (HRP) provides contextual hints that can assist UI developers in providing informative error messages.
+
+In the Cosmos network, keys and addresses may refer to a number of different roles in the network, such as accounts, validators, etc.
+
+## HRP table
+
+| HRP | Definition |
+| ---------------- | ------------------------------------- |
+| cosmos | Cosmos Account Address |
+| cosmosvalcons | Cosmos Validator Consensus Address |
+| cosmosvaloper | Cosmos Validator Operator Address |
+
+## Encoding
+
+While all user-facing interfaces to Cosmos software should expose Bech32 interfaces, many internal interfaces encode binary values in hex or base64 encoded form.
+
+To convert between other binary representations of addresses and keys, it is important to first apply the Amino encoding process before Bech32 encoding.
+
+A complete implementation of the Amino serialization format is unnecessary in most cases. Simply prepending bytes from this [table](https://github.com/cometbft/cometbft/blob/main/spec/blockchain/encoding.md) to the byte string payload before Bech32 encoding will be sufficient for compatible representation.
diff --git a/copy-of-sdk-docs/build/spec/fee_distribution/f1_fee_distr.pdf b/copy-of-sdk-docs/build/spec/fee_distribution/f1_fee_distr.pdf
new file mode 100644
index 00000000..b9995386
Binary files /dev/null and b/copy-of-sdk-docs/build/spec/fee_distribution/f1_fee_distr.pdf differ
diff --git a/copy-of-sdk-docs/build/spec/fee_distribution/f1_fee_distr.tex b/copy-of-sdk-docs/build/spec/fee_distribution/f1_fee_distr.tex
new file mode 100644
index 00000000..f704e52a
--- /dev/null
+++ b/copy-of-sdk-docs/build/spec/fee_distribution/f1_fee_distr.tex
@@ -0,0 +1,245 @@
+\documentclass[]{article}
+\usepackage{hyperref}
+
+%opening
+\title{F1 Fee Distribution Draft-02}
+\author{Dev Ojha}
+
+\begin{document}
+
+\maketitle
+
+\begin{abstract}
+ In a proof of stake blockchain, validators need to split the rewards gained from transaction fees each block. Furthermore, these fees must be fairly distributed to each of a validator's constituent delegators. They accrue this reward throughout the entire time they are delegated, and they have a special operation to withdraw accrued rewards.
+
+ The F1 fee distribution scheme works for any algorithm to split funds between validators each block, with minimal iteration, and the only approximations being due to finite decimal precision. Per block there is a single iteration over the validator set, to enable reward algorithms that differ by validator. No iteration is required to delegate, or withdraw. The state usage is one state update per validator per block, and one state entry per active delegation. It can optionally handle arbitrary inflation schemes, and auto-bonding of rewards.
+\end{abstract}
+
+\section{F1 Fee Distribution}
+
+\subsection{Context}
+In a proof of stake blockchain, each validator has an associated stake.
+Transaction fees get rewarded to validators based on the incentive scheme of the underlying proof of stake model.
+The fee distribution problem occurs in proof of stake blockchains supporting delegation, as there is a need to distribute a validator's fee rewards to its delegators.
+The trivial solution of just giving the rewards to each delegator every block is too expensive to perform on-chain.
+So instead fee distribution algorithms have delegators perform a withdraw action, which when performed yields the same total amount of fees as if they had received them at every block.
+
+This paper details F1, an approximation-free, slash-tolerant fee distribution algorithm that supports validator commission rates, inflation rates, and fee proportions, all of which can efficiently change per validator, every block.
+The algorithm requires iterating over the bonded validators every block, and withdraws require no iteration.
+This is cheap, due to staking logic already requiring iteration over all validators, which causes the expensive state-reads to be cached.
+
+The key point of how F1 works is that it tracks how much rewards a delegator with 1 stake for a given validator would be entitled to if it had bonded at block 0 until the latest block.
+When a delegator bonds at block $b$, the amount of rewards a delegator with 1 stake would have if bonded at block 0 until block $b$ is also persisted to state.
+When the delegator withdraws, they receive the difference of these two values.
+Since rewards are distributed according to stake-weighting, this amount of rewards can be scaled by the amount of stake a delegator had.
+Section 1.2 describes this in more detail, with an argument for it being approximation free.
+Section 2 details how to adapt this algorithm to handle commission rates, slashing, and inflation.
+
+\subsection{Base algorithm}
+In this section, we show that the F1 base algorithm gives each delegator rewards identical to that which they'd receive in the naive and correct fee distribution algorithm that iterated over all delegators every block.
+
+Even distribution of a validator's rewards amongst its delegators, weighted by stake, means the following:
+Suppose a delegator delegates $x$ stake to a validator $v$ at block $h$.
+Let the amount of stake the validator has at block $i$ be $s_i$ and the amount of fees they receive at this height be $f_i$.
+Then if a delegator contributing $x$ stake decides to withdraw at block $n$, the rewards they receive are
+$$\sum_{i = h}^{n} \frac{x}{s_i}f_i = x \sum_{i = h}^{n} \frac{f_i}{s_i}$$
+
+Note that $s_i$ does not change every block,
+it only changes if the validator gets slashed,
+or if any delegator alters the amount they have delegated.
+We'll relegate handling of slashes to \autoref{ssec:slashing},
+and only consider the case with no slashing here.
+We can change the iteration from being over every block, to instead being over the set of blocks between two changes in validator $v$'s total stake.
+Let each such set of blocks be called a period.
+A new period begins every time that validator's total stake changes.
+Let the total amount of stake for the validator in period $p$ be $n_p$.
+Let $T_p$ be the total fees that validator $v$ accrued in period $p$.
+Let $h$ be the start of period $p_{init}$, and height $n$ be the end of $p_{final}$.
+It follows that
+$$x \sum_{i = h}^{n} \frac{f_i}{s_i} = x \sum_{p = p_{init}}^{p_{final}} \frac{T_p}{n_p}$$
+
+Let $p_0$ represent the period which begins when the validator first bonds.
+The central idea to the F1 model is that at the end of the $k$th period,
+the following is stored at a state location indexable by $k$: $\sum_{i=0}^{k}\frac{T_i}{n_i}$.
+Let the index of the current period be $f$.
+When a delegator wants to delegate or withdraw their reward, they first create a new entry in state to end the current period.
+Then this entry is created using the previous entry as follows:
+$$Entry_f = \sum_{i=0}^{f}\frac{T_i}{n_i} = \sum_{i=0}^{f-1}\frac{T_i}{n_i} + \frac{T_f}{n_f} = Entry_{f-1} + \frac{T_f}{n_f}$$
+Where $T_f$ is the fees the validator has accrued in period $f$, and $n_f$ is the validator's total amount of stake in period $f$.
+
+The withdrawer's delegation object has the index $k$ for the period which they ended by bonding. (They start receiving rewards for period $k + 1$)
+The reward they should receive when withdrawing is:
+
+$$x \sum_{i = k + 1}^{f} \frac{T_i}{n_i} = x\left(\left(\sum_{i=0}^{f}\frac{T_i}{n_i}\right) - \left(\sum_{i=0}^{k}\frac{T_i}{n_i}\right)\right) = x\left(Entry_f - Entry_k\right)$$
+
+It is clear from the equations that this payout mechanism maintains correctness, and requires no iteration. It needs just the two state reads for these entries.
+
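+As a concrete sanity check, with numbers chosen purely for illustration:
+suppose the validator has $n_1 = 100$ total stake in period 1 with fees $T_1 = 10$,
+a delegator bonds $x = 50$ ending period 1 (so $k = 1$), and period 2 accrues
+$T_2 = 30$ over $n_2 = 150$ total stake.
+Then $Entry_1 = \frac{10}{100} = 0.1$ and $Entry_2 = 0.1 + \frac{30}{150} = 0.3$,
+so a withdrawal ending period 2 pays $x\left(Entry_2 - Entry_1\right) = 50 \cdot 0.2 = 10$,
+which is exactly the delegator's stake-weighted share $\frac{50}{150} \cdot 30$ of period 2's fees.
+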
+$T_f$ is a separate variable in state for the amount of fees this validator has accrued since the last update to its power.
+This variable is incremented at every block by however much fees this validator received that block.
+On the update to the validator's power, this variable is used to create the entry in state at $f$, and is then reset to 0.
+
+This fee distribution proposal is agnostic to how each block's fees are divided up between validators.
+This creates many nice properties; for example, it is possible to reward only the validators who signed that block.
+
+\section{Additional add-ons}
+\subsection{Commission Rates}
+Commission rates are the idea that a validator can take a fixed $x\%$ cut of all of their received fees, before redistributing evenly to the constituent delegators.
+This can easily be done as follows:
+
+In block $h$ a validator receives $f_h$ fees.
+Instead of incrementing that validator's ``total accrued fees this period'' variable by $f_h$, it is incremented by $(1 - commission\_rate) * f_h$.
+Then $commission\_rate * f_h$ is deposited directly to the validator's account.
+This allows for efficient updates to a validator's commission rate every block if desired.
+More generally, each validator could have a function which takes their fees as input, and outputs a set of outputs to pay these fees to (i.e.\ x\% going to themselves, y\% to delegators, z\% burnt).
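+
+For example, with $f_h = 100$ and a commission rate of $10\%$, the validator's
+account is credited $0.1 \cdot 100 = 10$ directly, while the period's accrued-fees
+variable increases by $(1 - 0.1) \cdot 100 = 90$, to be distributed pro rata
+amongst delegators. (Numbers chosen purely for illustration.)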
+
+\subsection{Slashing}
+\label{ssec:slashing}
+Slashing is distinct from withdrawals, since it lowers the stake of all of the validator's delegators by a fixed percentage.
+Since no one is charged gas for slashes, a slash cannot iterate over all delegators.
+Thus we can no longer just multiply by $x$ over the difference in stake.
+This section describes a simple solution that should suffice for most chains' needs. An asymptotically optimal solution is provided in section 2.4.
+TODO: Consider removing this section in favor of just using the current section 2.4?
+
+The solution here is to instead store each period created by a slash in the validator's state.
+Then when withdrawing, you must iterate over all slashes between when you started and ended.
+Suppose you delegated at period $0$, a y\% slash occurred at period $2$, and your withdrawal creates period $4$.
+Then you receive funds from periods $0$ to $2$ as normal.
+The equations for funds you receive for periods $2$ to $4$ now use $(1 - y)x$ for your stake instead of just $x$ stake.
+When there are multiple slashes, you just account for the accumulated slash factor.
+
+In practice this will not really be an efficiency hit, as the number of slashes is expected to be 0 or 1 for most validators.
+Validators that get slashed more will naturally lose their delegators.
+A malicious validator that gets itself slashed many times would increase the gas to withdraw linearly, but the economic loss of funds due to the slashes is expected to far out-weigh the extra overhead the honest withdrawer must pay for due to the gas.
+(TODO: frame that above sentence in terms of griefing factors, as that's more correct)
+
+\subsection{Inflation}
+Inflation is the idea that we want every staked coin to create more staking tokens as time progresses.
+The purpose being to drive down the relative worth of unstaked tokens.
+Each block, every staked token should produce $x$ staking tokens as inflation, where $x$ is calculated from a function $inflation$ which takes state and the block information as input.
+Let $x_i$ represent the evaluation of $inflation$ in the $i$th block.
+The goal of this section is to auto-bond inflation in the fee distribution model without iteration.
+This is done by preserving the invariant that every state entry contains the rewards one would have if they had bonded one stake at genesis until that corresponding block.
+
+In state a variable should be kept for the number of tokens one would have now due to inflation,
+given that they bonded one token at genesis.
+This is $\prod_{0}^{now} (1 + x_i)$.
+Each period now stores this total inflation product along with what it already stores per-period.
+
+Let $R_i$ be the fee rewards in block $i$, and $n_i$ be the total amount bonded to that validator in that block.
+The correct amount of rewards which 1 token at genesis should have now is:
+$$Reward(now) = \sum_{i = 0}^{now}\left(\prod_{j = 0}^{i} 1 + x_j \right) * \frac{R_i}{n_i}$$
+The term in the sum is the amount of stake one stake becomes due to inflation, multiplied by the amount of fees per stake.
+
+Now we cast this into the period frame of view.
+Recall that we build the rewards by creating a state entry for the rewards of the previous period, and keeping track of the rewards within this period.
+Thus we first define the correct amount of rewards for each successive period, proving correctness of this via induction.
+We then show that the state entry that gets efficiently built up block by block is equal to this value for the latest period.
+
+Let $start, end$ denote the start/end of a period.
+
+Suppose that $\forall f > 0$, $Reward(end(f))$ is correctly constructed as
+$$Reward(end(f)) = Reward(end(f-1)) + \sum_{i = start(f)}^{end(f)}\left(\prod_{j = 0}^{i} 1 + x_j \right) \frac{R_i}{n_i}$$
+and that for $f = 0$, $Reward(end(0)) = 0$.
+(With period 1 being defined as the period that has the first bond into it)
+It must be shown that assuming the supposition $\forall f \leq f_0$, $$Reward(end(f_0 + 1)) = Reward(end(f_0)) + \sum_{i = start(f_0 + 1)}^{end(f_0 + 1)}\left(\prod_{j = 0}^{i} 1 + x_j \right) \frac{R_i}{n_i}$$
+Using the definition of $Reward$, it follows that:
+$$\sum_{i = 0}^{end(f_0 + 1)}\left(\prod_{j = 0}^{i} 1 + x_j \right) * \frac{R_i}{n_i} = \sum_{i = 0}^{end(f_0)}\left(\prod_{j = 0}^{i} 1 + x_j \right) * \frac{R_i}{n_i} + \sum_{i = start(f_0 + 1)}^{end(f_0 + 1)}\left(\prod_{j = 0}^{i} 1 + x_j \right) \frac{R_i}{n_i}$$
+
+Since the first summation on the right hand side is $Reward(end(f_0))$, the supposition is proven true.
+Consequently, the reward for just period $f$ adjusted for the amount of inflation 1 token at genesis would produce, is:
+$$\sum_{i = start(f)}^{end(f)}\left(\prod_{j = 0}^{i} 1 + x_j \right) \frac{R_i}{n_i}$$
+
+TODO: make this proof + pre-amble less verbose, and just wrap up into a lemma.
+Maybe just leave this proof or the last part to the reader, since it easily follows from summation bounds.
+
+Now note that
+$$\sum_{i = start(f)}^{end(f)}\left(\prod_{j = 0}^{i} 1 + x_j \right) \frac{R_i}{n_i} = \left(\prod_{j = 0}^{end(f - 1)} 1 + x_j \right)\sum_{i = start(f)}^{end(f)}\left(\prod_{j = start(f)}^{i} 1 + x_j \right) \frac{R_i}{n_i}$$
+By definition of period, and inflation being applied every block, \\
+$n_i = n_{start(f)}\left(\prod_{j = start(f)}^{i} 1 + x_j \right)$. This cancels out the product in the summation, therefore
+$$\sum_{i = start(f)}^{end(f)}\left(\prod_{j = 0}^{i} 1 + x_j \right) \frac{R_i}{n_i} = \left(\prod_{j = 0}^{end(f - 1)} 1 + x_j \right)\frac{\sum_{i = start(f)}^{end(f)}R_i}{n_{start(f)}}$$
+
+Thus every block, each validator just has to add the total amount of fees (the $R_i$ term) that goes to delegators to some per-period term.
+When creating a new period, $n_{start(f)}$ can be cached in state, and the product is already stored in the previous periods state entry.
+You then get the next period's $n_{start(f)}$ from the consensus' power entry for this validator.
+This is thus extremely efficient per block.
+
+When withdrawing, you take the difference as before,
+which yields the amount of rewards you would have obtained with $(\prod_0^{begin\ bonding\ period}1 + x)$ stake from the block you began bonding at until now.
+$(\prod_0^{begin\ bonding\ period}1 + x)$ is known, since it is included in the state entry for when you bonded.
+You then divide the entitled fees by $(\prod_0^{begin\ bonding\ period}1 + x)$ to normalize it to being the amount of rewards you're entitled to from 1 stake at that block to now.
+Then as before, you multiply by the amount of stake you had initially bonded.
+\\TODO: (Does the difference equating to that make sense, or should it be shown explicitly)
+\\TODO: Does this need to explain how the originally bonded tokens are refunded, or is that clear?
+
+The inflation function could vary per block,
+and per validator if ever a need rose.
+If the inflation rate is the same for everyone then there can be a single global store for the entries corresponding to the product of inflations.
+Inflation creation can trivially be epoched as long as inflation isn't required within the epoch, through changes to the $inflation$ function.
+
+\subsection{Withdrawing with no iteration over slashes}
+Notice that a slash is the same as a negative inflation rate for a validator in one block.
+For example a $20\%$ slash is equivalent to a $-20\%$ inflation for a validator in a block.
+Given correctness of auto-bonding inflation with different inflation rates per-validator,
+it follows that slashes can be handled correctly by simply setting the validator's inflation factor in that block to the negative of the slash factor.
+This significantly simplifies the withdrawal procedure.
+
+\subsection{Auto bonding fees}
+TODO: Fill this out.
+Core idea: you use the same mechanism as previously, but you just don't take that optimization with $n_{i}$ and the $n_{start}$ relation.
+Fairly simple to do.
+
+\subsection{Delegation updates}
+Updating your delegation amount is equivalent to withdrawing earned rewards and a fully independent new delegation occurring in the same block.
+The same applies for redelegation.
+From the view of fee distribution, partial redelegation is the same as a delegation update + a new delegation.
+
+\subsection{Jailing / being kicked out of the validator set}
+This basically requires no change.
+In each block you only iterate over the currently bonded validators.
+So you simply don't update the ``total accrued fees this period'' variable for jailed / non-bonded validators.
+Withdrawing requires \textit{no} special casing here!
+
+\section{State Requirements}
+State entries can be pruned quite effectively.
+Suppose for the sake of exposition that there is at most one delegation / withdrawal to a particular validator in any given block.
+Then each delegation is responsible for one addition to state.
+Only the next period, and this delegator's withdrawal could depend on this entry. Thus once this delegator withdraws, this state entry can be pruned.
+For the entry created by the delegator's withdrawal, that is only required by the creation of the next period.
+Thus once the next period is created, that withdrawal's period can be deleted.
+
+This can be easily adapted to the case where there are multiple delegations / withdrawals per block, by maintaining a reference count in each period starting state entry.
+
+The slash entries for a validator can only be pruned when all of that validator's delegators have their bonding period starting after the slash.
+This seems inefficient to keep track of, and thus is not worth doing.
+Each slash should instead remain in state until the validator unbonds and all delegators have their fees withdrawn.
+
+\section{Implementers Considerations}
+TODO: Convert this section into a proper conclusion
+
+This is an extremely simple scheme with many nice benefits.
+\begin{itemize}
+ \item The overhead per block is a simple iteration over the bonded validator set, which occurs anyway. (Thus it can be implemented ``for-free'' with an optimized code-base)
+ \item Withdrawing earned fees only requires iterating over slashes since when you bonded. (Which is a negligible iteration)
+ \item There are no approximations in any of the calculations. (modulo minor errata resulting from fixed precision decimals used in divisions)
+ \item Supports arbitrary inflation models. (Thus could even vary upon block signers)
+ \item Supports arbitrary fee distribution amongst the validator set. (Thus can account for things like only online validators get fees, which has important incentivization impacts)
+ \item The above two can change on a live chain with no issues.
+ \item Validator commission rates can be changed every block
+ \item The simplicity of this scheme lends itself well to implementation
+\end{itemize}
+
+Thus this scheme has efficiency improvements, simplicity improvements, and expressiveness improvements over the currently proposed schemes. With a correct fee distribution amongst the validator set, this solves the existing problem where one could withhold their signature for risk-free gain.
+
+\section{TO DOs}
+
+\begin{itemize}
+ \item A global fee pool can be described.
+ \item Mention storage optimization for how to prune slashing entries in the uniform inflation and iteration over slashing case
+ \item Add equation numbers
+ \item perhaps re-organize so that the no iteration
+ \item Section on decimal precision considerations (would unums help?), and mitigating errors in calculation with floats and decimals. -- This probably belongs in a corollary markdown file in the implementation
+ \item Consider indicating that the withdraw action need not be a tx type and could instead happen 'transparently' when more coins are needed, if a chain desired this for UX / p2p efficiency.
+\end{itemize}
+
+
+\end{document}
diff --git a/copy-of-sdk-docs/build/spec/store/README.md b/copy-of-sdk-docs/build/spec/store/README.md
new file mode 100644
index 00000000..3bf8b0e3
--- /dev/null
+++ b/copy-of-sdk-docs/build/spec/store/README.md
@@ -0,0 +1,235 @@
+# Store
+
+The store package defines the interfaces, types and abstractions for Cosmos SDK
+modules to read and write to Merkleized state within a Cosmos SDK application.
+The store package provides many primitives for developers to use in order to
+work with both state storage and state commitment. Below we describe the various
+abstractions.
+
+## Types
+
+### `Store`
+
+The bulk of the store interfaces are defined [here](https://github.com/cosmos/cosmos-sdk/blob/main/store/types/store.go),
+where the base primitive interface, upon which other interfaces are built, is
+the `Store` type. The `Store` interface defines the ability to tell the type of
+the implementing store and the ability to cache wrap via the `CacheWrapper` interface.
+
+### `CacheWrapper` & `CacheWrap`
+
+One of the most important capabilities a store can provide is cache wrapping.
+Cache wrapping is essentially the underlying store wrapping itself within
+another store type that performs caching for both reads and writes, with the
+ability to flush writes via `Write()`.
+
+### `KVStore` & `CacheKVStore`
+
+One of the most important interfaces that both developers and modules interface
+with, which also provides the basis of most state storage and commitment operations,
+is the `KVStore`. The `KVStore` interface provides basic CRUD abilities and
+prefix-based iteration, including reverse iteration.
+
+Typically, each module has its own dedicated `KVStore` instance, which it can
+access via the `sdk.Context` and the use of a pointer-based named key --
+`KVStoreKey`. The `KVStoreKey` provides pseudo-OCAP. How exactly a `KVStoreKey`
+maps to a `KVStore` is illustrated below through the `CommitMultiStore`.
+
+Note, a `KVStore` cannot directly commit state. Instead, a `KVStore` can be wrapped
+by a `CacheKVStore` which extends a `KVStore` and provides the ability for the
+caller to execute `Write()` which commits state to the underlying state storage.
+Note, this doesn't actually flush writes to disk as writes are held in memory
+until `Commit()` is called on the `CommitMultiStore`.
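+
+The read-through/write-buffer behavior described above can be modeled with a toy example. The following is a simplified, self-contained sketch (hypothetical `KVStore`/`CacheKV` types, not the SDK's actual `cachekv` package): reads fall through to the parent and are answered from the buffer once written, while writes stay buffered until `Write()` flushes them:
+
+```go
+package main
+
+import "fmt"
+
+// KVStore is a minimal key-value interface (illustrative, not the SDK's).
+type KVStore interface {
+	Get(key string) (string, bool)
+	Set(key, value string)
+}
+
+// MemStore is a map-backed KVStore standing in for a committed store.
+type MemStore struct{ data map[string]string }
+
+func NewMemStore() *MemStore { return &MemStore{data: map[string]string{}} }
+
+func (s *MemStore) Get(key string) (string, bool) { v, ok := s.data[key]; return v, ok }
+func (s *MemStore) Set(key, value string)         { s.data[key] = value }
+
+// CacheKV buffers writes over a parent store until Write() flushes them.
+type CacheKV struct {
+	parent KVStore
+	cache  map[string]string
+}
+
+func NewCacheKV(parent KVStore) *CacheKV {
+	return &CacheKV{parent: parent, cache: map[string]string{}}
+}
+
+// Get checks the write buffer first, then falls through to the parent.
+func (c *CacheKV) Get(key string) (string, bool) {
+	if v, ok := c.cache[key]; ok {
+		return v, true
+	}
+	return c.parent.Get(key)
+}
+
+// Set records the write only in the buffer; the parent is untouched.
+func (c *CacheKV) Set(key, value string) { c.cache[key] = value }
+
+// Write flushes buffered writes to the parent, then clears the buffer.
+func (c *CacheKV) Write() {
+	for k, v := range c.cache {
+		c.parent.Set(k, v)
+	}
+	c.cache = map[string]string{}
+}
+
+func main() {
+	parent := NewMemStore()
+	cache := NewCacheKV(parent)
+
+	cache.Set("balance/alice", "100")
+	_, inParent := parent.Get("balance/alice")
+	fmt.Println("parent sees write before Write():", inParent) // false
+
+	cache.Write()
+	v, _ := parent.Get("balance/alice")
+	fmt.Println("parent value after Write():", v) // 100
+}
+```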
+
+### `CommitMultiStore`
+
+The `CommitMultiStore` interface exposes the top-level interface that is used
+to manage state commitment and storage by an SDK application and abstracts the
+concept of multiple `KVStore`s which are used by multiple modules. Specifically,
+it supports the following high-level primitives:
+
+* Allows for a caller to retrieve a `KVStore` by providing a `KVStoreKey`.
+* Exposes pruning mechanisms to remove state pinned against a specific height/version
+ in the past.
+* Allows for loading state storage at a particular height/version in the past to
+ provide current head and historical queries.
+* Provides the ability to rollback state to a previous height/version.
+* Provides the ability to load state storage at a particular height/version
+ while also performing store upgrades, which are used during live hard-fork
+ application state migrations.
+* Provides the ability to commit all currently accumulated state to disk and
+ perform Merkle commitment.
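+
+These primitives map roughly onto the following methods (an abridged sketch, not
+the exact SDK interface definition):
+
+```go
+type CommitMultiStore interface {
+    // Retrieve a KVStore by its key.
+    GetKVStore(key StoreKey) KVStore
+
+    // Load state storage at a given height/version, optionally performing
+    // store upgrades.
+    LoadVersion(ver int64) error
+    LoadLatestVersionAndUpgrade(upgrades *StoreUpgrades) error
+
+    // Roll back state to a previous height/version.
+    RollbackToVersion(version int64) error
+
+    // Configure pruning of state pinned against past heights/versions.
+    SetPruning(opts PruningOptions)
+
+    // Commit all accumulated state and return the resulting Merkle root.
+    Commit() CommitID
+}
+```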
+
+## Implementation Details
+
+While the `store` package provides many interfaces, there is typically one core
+implementation, defined in the Cosmos SDK, for each main interface that modules
+and developers interact with.
+
+### `iavl.Store`
+
+The `iavl.Store` provides the core implementation for state storage and commitment
+by implementing the following interfaces:
+
+* `KVStore`
+* `CommitStore`
+* `CommitKVStore`
+* `Queryable`
+* `StoreWithInitialVersion`
+
+It supports all CRUD operations along with current and historical state queries,
+prefix iteration, and state commitment with Merkle proof operations. The
+`iavl.Store` also provides the ability to remove historical state from the state
+commitment layer.
+
+An overview of the IAVL implementation can be found [here](https://github.com/cosmos/iavl/blob/master/docs/overview.md).
+It is important to note that the IAVL store provides both state commitment and
+logical storage operations in a single structure, which comes with drawbacks:
+the operations mentioned above incur various performance costs, some of which
+are drastic.
+
+When dealing with state management in modules and clients, the Cosmos SDK provides
+various layers of abstraction or "store wrapping", where the `iavl.Store` is the
+bottommost layer. When requesting a store to perform reads or writes in a module,
+the typical abstraction layer in order is defined as follows:
+
+```text
+iavl.Store <- cachekv.Store <- gaskv.Store <- cachemulti.Store <- rootmulti.Store
+```
+
+### Concurrent use of IAVL store
+
+The tree under `iavl.Store` is not safe for concurrent use. It is the
+responsibility of the caller to ensure that concurrent access to the store is
+not performed.
+
+The main issue with concurrent use is when data is written at the same time as
+it's being iterated over. Doing so will cause an irrecoverable fatal error because
+of concurrent reads and writes to an internal map.
+
+Although it's not recommended, you can iterate through values while writing to
+the store by disabling "FastNode" **without guarantees that the values being written will
+be returned during the iteration** (if you need this, you might want to reconsider
+the design of your application). This is done by setting `iavl-disable-fastnode`
+to `true` in the config TOML file.
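+
+For reference, the setting lives in the node's application configuration (a
+sketch of the relevant `app.toml` entry; the exact location may vary between SDK
+versions):
+
+```toml
+# app.toml
+# Disable the IAVL fast node system so that iteration does not depend on the
+# fast-node cache, allowing iteration while writing (with the caveats above).
+iavl-disable-fastnode = true
+```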
+
+### `cachekv.Store`
+
+The `cachekv.Store` wraps an underlying `KVStore`, typically an `iavl.Store`,
+and contains an in-memory cache for storing pending writes to the underlying
+`KVStore`. `Set` and `Delete` calls are executed on the in-memory cache, whereas
+`Has` calls are proxied to the underlying `KVStore`.
+
+One of the most important calls to a `cachekv.Store` is `Write()`, which ensures
+that key-value pairs are written to the underlying `KVStore` in a deterministic
+and ordered manner by sorting the keys first. The store keeps track of "dirty"
+keys and uses these to determine what keys to sort. In addition, it also keeps
+track of deleted keys and ensures these are also removed from the underlying
+`KVStore`.
+
+The `cachekv.Store` also provides the ability to perform iteration and reverse
+iteration. Iteration is performed through the `cacheMergeIterator` type and uses
+both the dirty cache and underlying `KVStore` to iterate over key-value pairs.
+
+Note, all calls to CRUD and iteration operations on a `cachekv.Store` are thread-safe.
+
+### `gaskv.Store`
+
+The `gaskv.Store` provides a simple implementation of a `KVStore`. Specifically,
+it wraps an existing `KVStore`, such as a cache-wrapped `iavl.Store`, incurs
+configurable gas costs for CRUD operations via `ConsumeGas()` calls on the
+`GasMeter` that exists in the `sdk.Context`, and then proxies the CRUD call to
+the underlying store. Note, the `GasMeter` is reset on each block.
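+
+The proxying behavior can be sketched as follows (illustrative pseudocode, not
+the exact SDK implementation; the cost constant names are assumptions):
+
+```go
+func (gs *gaskvStore) Get(key []byte) []byte {
+    // Charge a flat cost for the read itself...
+    gs.gasMeter.ConsumeGas(readCostFlat, "read flat")
+
+    value := gs.parent.Get(key)
+
+    // ...plus a per-byte cost proportional to the value that was read.
+    gs.gasMeter.ConsumeGas(readCostPerByte*uint64(len(value)), "read per byte")
+    return value
+}
+```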
+
+### `cachemulti.Store` & `rootmulti.Store`
+
+The `rootmulti.Store` acts as an abstraction around a series of stores. Namely,
+it implements the `CommitMultiStore` and `Queryable` interfaces. Through the
+`rootmulti.Store`, an SDK module can request access to a `KVStore` to perform
+state CRUD operations and queries by holding access to a unique `KVStoreKey`.
+
+The `rootmulti.Store` ensures these queries and state operations are performed
+through cache-wrapped instances of `cachekv.Store`, which is described above. The
+`rootmulti.Store` implementation is also responsible for committing all accumulated
+state from each `KVStore` to disk and returning an application state Merkle root.
+
+Queries can be performed to return state data along with associated state
+commitment proofs for both previous heights/versions and the current state root.
+Queries are routed based on store name, i.e. a module, along with other parameters
+which are defined in `abci.QueryRequest`.
+
+The `rootmulti.Store` also provides primitives for pruning data at a given
+height/version from state storage. When a height is committed, the `rootmulti.Store`
+will determine if other previous heights should be considered for removal based
+on the operator's pruning settings defined by `PruningOptions`, which defines
+how many recent versions to keep on disk and the interval at which to remove
+"staged" pruned heights from disk. During each interval, the staged heights are
+removed from each `KVStore`. Note, it is up to the underlying `KVStore`
+implementation to determine how pruning is actually performed. The `PruningOptions`
+are defined as follows:
+
+```go
+type PruningOptions struct {
+ // KeepRecent defines how many recent heights to keep on disk.
+ KeepRecent uint64
+
+ // Interval defines when the pruned heights are removed from disk.
+ Interval uint64
+
+ // Strategy defines the kind of pruning strategy. See below for more information on each.
+ Strategy PruningStrategy
+}
+```
+
+The Cosmos SDK defines a preset number of pruning "strategies": `default`,
+`everything`, `nothing`, and `custom`.
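+
+As an illustration, the presets roughly correspond to the following options
+(the values shown are indicative; consult the SDK's pruning package for the
+exact presets):
+
+```go
+// default: keep a large window of recent heights, prune in batches.
+pruneDefault := PruningOptions{KeepRecent: 362880, Interval: 10}
+
+// everything: keep only a minimal window of recent heights, prune aggressively.
+pruneEverything := PruningOptions{KeepRecent: 2, Interval: 10}
+
+// nothing: an archive node, nothing is ever pruned.
+pruneNothing := PruningOptions{KeepRecent: 0, Interval: 0}
+```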
+
+It is important to note that the `rootmulti.Store` considers each `KVStore` as a
+separate logical store. In other words, they do not share a Merkle tree or
+comparable data structure. This means that when state is committed via
+`rootmulti.Store`, each store is committed in sequence and thus is not atomic.
+
+In terms of store construction and wiring, each Cosmos SDK application contains
+a `BaseApp` instance which internally has a reference to a `CommitMultiStore`
+that is implemented by a `rootmulti.Store`. The application then registers one or
+more `KVStoreKey`s, each of which pertains to a unique module and thus a `KVStore`.
+Through the use of an `sdk.Context` and a `KVStoreKey`, each module can get direct
+access to its respective `KVStore` instance.
+
+Example:
+
+```go
+func NewApp(...) Application {
+ // ...
+
+ bApp := baseapp.NewBaseApp(appName, logger, db, txConfig.TxDecoder(), baseAppOptions...)
+ bApp.SetCommitMultiStoreTracer(traceStore)
+ bApp.SetVersion(version.Version)
+ bApp.SetInterfaceRegistry(interfaceRegistry)
+
+ // ...
+
+ keys := sdk.NewKVStoreKeys(...)
+ transientKeys := sdk.NewTransientStoreKeys(...)
+ memKeys := sdk.NewMemoryStoreKeys(...)
+
+ // ...
+
+ // initialize stores
+ app.MountKVStores(keys)
+ app.MountTransientStores(transientKeys)
+ app.MountMemoryStores(memKeys)
+
+ // ...
+}
+```
+
+The `rootmulti.Store` itself can be cache-wrapped, which returns an instance of a
+`cachemulti.Store`. For each block, `BaseApp` ensures that the proper abstractions
+are created on the `CommitMultiStore`, i.e. that the `rootmulti.Store`
+is cache-wrapped and that the resulting `cachemulti.Store` is set on the
+`sdk.Context`, which is then used for block and transaction execution. As a result,
+all state mutations due to block and transaction execution are actually held
+ephemerally until `Commit()` is called by the ABCI client. This concept is further
+expanded upon when the AnteHandler is executed per transaction to ensure state
+is not committed for transactions that failed CheckTx.
diff --git a/copy-of-sdk-docs/build/spec/store/interblock-cache.md b/copy-of-sdk-docs/build/spec/store/interblock-cache.md
new file mode 100644
index 00000000..cfa2edb5
--- /dev/null
+++ b/copy-of-sdk-docs/build/spec/store/interblock-cache.md
@@ -0,0 +1,289 @@
+# Inter-block Cache
+
+* [Inter-block Cache](#inter-block-cache)
+ * [Synopsis](#synopsis)
+ * [Overview and basic concepts](#overview-and-basic-concepts)
+ * [Motivation](#motivation)
+ * [Definitions](#definitions)
+ * [System model and properties](#system-model-and-properties)
+ * [Assumptions](#assumptions)
+ * [Properties](#properties)
+ * [Thread safety](#thread-safety)
+ * [Crash recovery](#crash-recovery)
+ * [Iteration](#iteration)
+ * [Technical specification](#technical-specification)
+ * [General design](#general-design)
+ * [API](#api)
+ * [CommitKVCacheManager](#commitkvcachemanager)
+ * [CommitKVStoreCache](#commitkvstorecache)
+ * [Implementation details](#implementation-details)
+ * [History](#history)
+ * [Copyright](#copyright)
+
+## Synopsis
+
+The inter-block cache is an in-memory cache storing (in most cases) immutable state that modules need to read between blocks. When enabled, all sub-stores of a multi store, e.g., `rootmulti`, are wrapped.
+
+## Overview and basic concepts
+
+### Motivation
+
+The goal of the inter-block cache is to allow SDK modules to have fast access to data that is typically queried during the execution of every block. This is data that does not change often, e.g. module parameters. The inter-block cache wraps each `CommitKVStore` of a multi store such as `rootmulti` with a fixed-size, write-through cache. Caches are not cleared after a block is committed, as opposed to other caching layers such as `cachekv`.
+
+### Definitions
+
+* `Store key` uniquely identifies a store.
+* `KVCache` is a `CommitKVStore` wrapped with a cache.
+* `Cache manager` is a key component of the inter-block cache responsible for maintaining a map from `store keys` to `KVCaches`.
+
+## System model and properties
+
+### Assumptions
+
+This specification assumes that there exists a cache implementation accessible to the inter-block cache feature.
+
+> The implementation uses an adaptive replacement cache (ARC), an enhancement over the standard least-recently-used (LRU) cache in that it tracks both frequency and recency of use.
+
+The inter-block cache requires the cache implementation to provide methods to create a cache, add a key/value pair, remove a key/value pair, and retrieve the value associated with a key. In this specification, we assume that a `Cache` feature offers this functionality through the following methods:
+
+* `NewCache(size int)` creates a new cache with `size` capacity and returns it.
+* `Get(key string)` attempts to retrieve a key/value pair from `Cache`. It returns `(value []byte, success bool)`. If `Cache` contains the key, then `value` contains the associated value and `success=true`. Otherwise, `success=false` and `value` should be ignored.
+* `Add(key string, value []byte)` inserts a key/value pair into the `Cache`.
+* `Remove(key string)` removes the key/value pair identified by `key` from `Cache`.
+
+The specification also assumes that `CommitKVStore` offers the following API:
+
+* `Get(key string)` attempts to retrieve a key/value pair from `CommitKVStore`.
+* `Set(key string, value []byte)` inserts a key/value pair into the `CommitKVStore`.
+* `Delete(key string)` removes the key/value pair identified by `key` from `CommitKVStore`.
+
+> Ideally, both `Cache` and `CommitKVStore` should be specified in a different document and referenced here.
+
+### Properties
+
+#### Thread safety
+
+Accessing the `cache manager` or a `KVCache` is not thread-safe: no method is guarded with a lock.
+Note that this is true even if the cache implementation is thread-safe.
+
+> For instance, assume that two `Set` operations are executed concurrently on the same key, each writing a different value. After both are executed, the cache and the underlying store may be inconsistent, each storing a different value under the same key.
+
+#### Crash recovery
+
+The inter-block cache transparently delegates `Commit()` to its aggregate `CommitKVStore`. If the
+aggregate `CommitKVStore` supports atomic writes and uses them to guarantee that the store is always in a consistent state on disk, the inter-block cache can be transparently moved to a consistent state when a failure occurs.
+
+> Note that this is the case for `IAVLStore`, the preferred `CommitKVStore`. On commit, it calls `SaveVersion()` on the underlying `MutableTree`. `SaveVersion` writes to disk are atomic via batching. This means that only consistent versions of the store (the tree) are written to the disk. Thus, in case of a failure during a `SaveVersion` call, on recovery from disk, the version of the store will be consistent.
+
+#### Iteration
+
+Iteration over each wrapped store is supported via the embedded `CommitKVStore` interface.
+
+## Technical specification
+
+### General design
+
+The inter-block cache feature is composed of two components: `CommitKVCacheManager` and `CommitKVCache`.
+
+`CommitKVCacheManager` implements the cache manager. It maintains a mapping from a store key to a `CommitKVStore`.
+
+```go
+type CommitKVStoreCacheManager struct{
+ cacheSize uint
+ caches map[string]CommitKVStore
+}
+```
+
+`CommitKVStoreCache` implements a `KVStore`: a write-through cache that wraps a `CommitKVStore`. This means that deletes and writes always happen to both the cache and the underlying `CommitKVStore`. Reads, on the other hand, first hit the internal cache. On a cache miss, the read is delegated to the underlying `CommitKVStore` and the result is cached.
+
+```go
+type CommitKVStoreCache struct{
+ store CommitKVStore
+ cache Cache
+}
+```
+
+To enable the inter-block cache on `rootmulti`, one needs to instantiate a `CommitKVCacheManager` and set it by calling `SetInterBlockCache()` before calling one of `LoadLatestVersion()`, `LoadLatestVersionAndUpgrade(...)`, `LoadVersionAndUpgrade(...)`, or `LoadVersion(version)`.
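+
+In an application, the wiring might look like the following sketch (using the
+`SetInterBlockCache` option on `BaseApp`, which forwards the manager to the
+underlying `rootmulti` store):
+
+```go
+// Create the cache manager and set it before any state is loaded.
+interBlockCache := store.NewCommitKVStoreCacheManager()
+bApp.SetInterBlockCache(interBlockCache)
+
+// Only load state after the inter-block cache has been set.
+if err := bApp.LoadLatestVersion(); err != nil {
+    panic(err)
+}
+```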
+
+### API
+
+#### CommitKVCacheManager
+
+The method `NewCommitKVStoreCacheManager` creates a new cache manager and returns it.
+
+| Name | Type | Description |
+| ------------- | ---------|------- |
+| size | integer | Determines the capacity of each KVCache maintained by the manager |
+
+```go
+func NewCommitKVStoreCacheManager(size uint) CommitKVStoreCacheManager {
+ manager = CommitKVStoreCacheManager{size, make(map[string]CommitKVStore)}
+ return manager
+}
+```
+
+`GetStoreCache` returns a cache from the `CommitKVStoreCacheManager` for a given store key. If no cache exists for the store key, then one is created and set.
+
+| Name | Type | Description |
+| ------------- | ---------|------- |
+| manager | `CommitKVStoreCacheManager` | The cache manager |
+| storeKey | string | The store key of the store being retrieved |
+| store | `CommitKVStore` | The store to be cached if the manager does not already have one in its map of caches |
+
+```go
+func GetStoreCache(
+ manager CommitKVStoreCacheManager,
+ storeKey string,
+ store CommitKVStore) CommitKVStore {
+
+ if manager.caches.has(storeKey) {
+ return manager.caches.get(storeKey)
+ } else {
+ cache = NewCommitKVStoreCache(store, manager.cacheSize)
+ manager.caches.set(storeKey, cache)
+ return cache
+ }
+}
+```
+
+`Unwrap` returns the underlying CommitKVStore for a given store key.
+
+| Name | Type | Description |
+| ------------- | ---------|------- |
+| manager | `CommitKVStoreCacheManager` | The cache manager |
+| storeKey | string | The store key of the store being unwrapped |
+
+```go
+func Unwrap(
+ manager CommitKVStoreCacheManager,
+ storeKey string) CommitKVStore {
+
+ if manager.caches.has(storeKey) {
+ cache = manager.caches.get(storeKey)
+ return cache.store
+ } else {
+ return nil
+ }
+}
+```
+
+`Reset` resets the manager's map of caches.
+
+| Name | Type | Description |
+| ------------- | ---------|------- |
+| manager | `CommitKVStoreCacheManager` | The cache manager |
+
+```go
+func Reset(manager CommitKVStoreCacheManager) {
+ for storeKey in manager.caches.keys() {
+ manager.caches.delete(storeKey)
+ }
+}
+```
+
+#### CommitKVStoreCache
+
+`NewCommitKVStoreCache` creates a new `CommitKVStoreCache` and returns it.
+
+| Name | Type | Description |
+| ------------- | ---------|------- |
+| store | CommitKVStore | The store to be cached |
+| size | integer | Determines the capacity of the cache being created |
+
+```go
+func NewCommitKVStoreCache(
+ store CommitKVStore,
+ size uint) CommitKVStoreCache {
+ KVCache = CommitKVStoreCache{store, NewCache(size)}
+ return KVCache
+}
+```
+
+`Get` retrieves a value by key. It first looks in the cache. If the key is not in the cache, the query is delegated to the underlying `CommitKVStore`. In the latter case, the key/value pair is cached. The method returns the value.
+
+| Name | Type | Description |
+| ------------- | ---------|------- |
+| KVCache | `CommitKVStoreCache` | The `CommitKVStoreCache` from which the key/value pair is retrieved |
+| key | string | Key of the key/value pair being retrieved |
+
+```go
+func Get(
+ KVCache CommitKVStoreCache,
+ key string) []byte {
+ valueCache, success := KVCache.cache.Get(key)
+ if success {
+ // cache hit
+ return valueCache
+ } else {
+ // cache miss
+ valueStore = KVCache.store.Get(key)
+ KVCache.cache.Add(key, valueStore)
+ return valueStore
+ }
+}
+```
+
+`Set` inserts a key/value pair into both the write-through cache and the underlying `CommitKVStore`.
+
+| Name | Type | Description |
+| ------------- | ---------|------- |
+| KVCache | `CommitKVStoreCache` | The `CommitKVStoreCache` to which the key/value pair is inserted |
+| key | string | Key of the key/value pair being inserted |
+| value | []byte | Value of the key/value pair being inserted |
+
+```go
+func Set(
+ KVCache CommitKVStoreCache,
+ key string,
+ value []byte) {
+
+ KVCache.cache.Add(key, value)
+ KVCache.store.Set(key, value)
+}
+```
+
+`Delete` removes a key/value pair from both the write-through cache and the underlying `CommitKVStore`.
+
+| Name | Type | Description |
+| ------------- | ---------|------- |
+| KVCache | `CommitKVStoreCache` | The `CommitKVStoreCache` from which the key/value pair is deleted |
+| key | string | Key of the key/value pair being deleted |
+
+```go
+func Delete(
+ KVCache CommitKVStoreCache,
+ key string) {
+
+ KVCache.cache.Remove(key)
+ KVCache.store.Delete(key)
+}
+```
+
+`CacheWrap` wraps a `CommitKVStoreCache` with another caching layer (`CacheKV`).
+
+> It is unclear whether there is a use case for `CacheWrap`.
+
+| Name | Type | Description |
+| ------------- | ---------|------- |
+| KVCache | `CommitKVStoreCache` | The `CommitKVStoreCache` being wrapped |
+
+```go
+func CacheWrap(
+ KVCache CommitKVStoreCache) {
+
+ return CacheKV.NewStore(KVCache)
+}
+```
+
+### Implementation details
+
+The inter-block cache implementation uses a fixed-size adaptive replacement cache (ARC). [The ARC implementation](https://github.com/hashicorp/golang-lru/blob/main/arc/arc.go) is thread-safe. ARC is an enhancement over the standard LRU cache in that it tracks both frequency and recency of use. This avoids a burst in access to new entries from evicting frequently used older entries. It adds some additional tracking overhead compared to a standard LRU cache: computationally it is roughly `2x` the cost, and the extra memory overhead is linear with the size of the cache. The default cache size is `1000`.
+
+## History
+
+Dec 20, 2022 - Initial draft finished and submitted as a PR
+
+## Copyright
+
+All content herein is licensed under [Apache 2.0](https://www.apache.org/licenses/LICENSE-2.0).
diff --git a/copy-of-sdk-docs/build/tooling/00-protobuf.md b/copy-of-sdk-docs/build/tooling/00-protobuf.md
new file mode 100644
index 00000000..128970c0
--- /dev/null
+++ b/copy-of-sdk-docs/build/tooling/00-protobuf.md
@@ -0,0 +1,113 @@
+---
+sidebar_position: 1
+---
+
+# Protocol Buffers
+
+The Cosmos SDK uses protocol buffers extensively. This document provides a guide on how they are used in the cosmos-sdk.
+
+To generate the proto files, the Cosmos SDK uses a Docker image, which is also available for anyone to use. The latest version is `ghcr.io/cosmos/proto-builder:0.17.0`.
+
+Below is an example of the Cosmos SDK's commands for generating, linting, and formatting protobuf files, which can be reused in any application's Makefile.
+
+```go reference
+https://github.com/cosmos/cosmos-sdk/blob/v0.50.0-alpha.0/Makefile#L411-L432
+```
+
+The script used to generate the protobuf files can be found in the `scripts/` directory.
+
+```shell reference
+https://github.com/cosmos/cosmos-sdk/blob/v0.50.0-alpha.0/scripts/protocgen.sh
+```
+
+## Buf
+
+[Buf](https://buf.build) is a protobuf tool that abstracts away the complicated `protoc` toolchain and, among other things, ensures you are using protobuf in accordance with the majority of the ecosystem. Within the cosmos-sdk repository there are a few files that have a buf prefix. Let's start with the top level and then dive into the various directories.
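+
+For day-to-day work the command surface is small; assuming `buf` is installed
+locally, a typical workflow from the `proto/` directory looks like:
+
+```shell
+cd proto
+buf lint                                   # lint .proto files against the rules in buf.yaml
+buf format -w                              # rewrite .proto files in place with canonical formatting
+buf generate --template buf.gen.gogo.yaml  # generate code using the gogo template
+```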
+
+### Workspace
+
+At the root level directory a workspace is defined using [buf workspaces](https://docs.buf.build/configuration/v1/buf-work-yaml). This helps if there are one or more protobuf-containing directories in your project.
+
+Cosmos SDK example:
+
+```go reference
+https://github.com/cosmos/cosmos-sdk/blob/main/buf.work.yaml#L6-L9
+```
+
+### Proto Directory
+
+Next is the `proto/` directory, where all of our protobuf files live. It defines a number of different buf files, each serving a different purpose.
+
+```bash
+├── README.md
+├── buf.gen.gogo.yaml
+├── buf.gen.pulsar.yaml
+├── buf.gen.swagger.yaml
+├── buf.lock
+├── buf.md
+├── buf.yaml
+├── cosmos
+└── tendermint
+```
+
+The above diagram shows all the files and directories within the Cosmos SDK `proto/` directory.
+
+#### `buf.gen.gogo.yaml`
+
+`buf.gen.gogo.yaml` defines how the protobuf files should be generated for use within the module. This file uses [gogoproto](https://github.com/gogo/protobuf), a separate generator from the google go-proto generator that makes working with various objects more ergonomic and has more performant encode and decode steps.
+
+```go reference
+https://github.com/cosmos/cosmos-sdk/blob/main/proto/buf.gen.gogo.yaml#L1-L9
+```
+
+:::tip
+Example of how to define `gen` files can be found [here](https://docs.buf.build/generate/overview)
+:::
+
+#### `buf.gen.pulsar.yaml`
+
+`buf.gen.pulsar.yaml` defines how protobuf files should be generated using the [new golang apiv2 of protobuf](https://go.dev/blog/protobuf-apiv2). This generator is used instead of the google go-proto generator because it has some extra helpers for Cosmos SDK applications and will have more performant encode and decode than the google go-proto generator. You can follow the development of this generator [here](https://github.com/cosmos/cosmos-proto).
+
+```go reference
+https://github.com/cosmos/cosmos-sdk/blob/main/proto/buf.gen.pulsar.yaml#L1-L18
+```
+
+:::tip
+Example of how to define `gen` files can be found [here](https://docs.buf.build/generate/overview)
+:::
+
+#### `buf.gen.swagger.yaml`
+
+`buf.gen.swagger.yaml` generates the swagger documentation for the query and messages of the chain. This will only define the REST API endpoints that were defined in the query and msg servers. You can find examples of this [here](https://github.com/cosmos/cosmos-sdk/blob/main/proto/cosmos/bank/v1beta1/query.proto#L19)
+
+```go reference
+https://github.com/cosmos/cosmos-sdk/blob/main/proto/buf.gen.swagger.yaml#L1-L6
+```
+
+:::tip
+Example of how to define `gen` files can be found [here](https://docs.buf.build/generate/overview)
+:::
+
+#### `buf.lock`
+
+This is an autogenerated file based on the dependencies required by the `.gen` files. There is no need to copy the current one. If you depend on cosmos-sdk proto definitions, a new entry for the Cosmos SDK will need to be provided. The dependency you will need to use is `buf.build/cosmos/cosmos-sdk`.
+
+```go reference
+https://github.com/cosmos/cosmos-sdk/blob/main/proto/buf.lock#L1-L16
+```
+
+#### `buf.yaml`
+
+`buf.yaml` defines the [name of your package](https://github.com/cosmos/cosmos-sdk/blob/main/proto/buf.yaml#L3), which [breakage checker](https://docs.buf.build/breaking/overview) to use and how to [lint your protobuf files](https://buf.build/docs/tutorials/getting-started-with-buf-cli#lint-your-api).
+
+```go reference
+https://github.com/cosmos/cosmos-sdk/blob/main/proto/buf.yaml#L1-L24
+```
+
+We use a variety of linters for the Cosmos SDK protobuf files. The repo also checks this in CI.
+
+A reference to the GitHub Actions workflow can be found [here](https://github.com/cosmos/cosmos-sdk/blob/main/.github/workflows/proto.yml#L1-L32)
+
+```go reference
+https://github.com/cosmos/cosmos-sdk/blob/main/.github/workflows/proto.yml#L1-L32
+```
diff --git a/copy-of-sdk-docs/build/tooling/01-cosmovisor.md b/copy-of-sdk-docs/build/tooling/01-cosmovisor.md
new file mode 100644
index 00000000..7c70611f
--- /dev/null
+++ b/copy-of-sdk-docs/build/tooling/01-cosmovisor.md
@@ -0,0 +1,411 @@
+---
+sidebar_position: 1
+---
+
+# Cosmovisor
+
+`cosmovisor` is a process manager for Cosmos SDK application binaries that automates the binary switch at chain upgrades.
+It polls the `upgrade-info.json` file that is created by the x/upgrade module at the upgrade height, and then can automatically download the new binary, stop the current binary, switch from the old binary to the new one, and finally restart the node with the new binary.
+
+* [Design](#design)
+* [Contributing](#contributing)
+* [Setup](#setup)
+ * [Installation](#installation)
+ * [Command Line Arguments And Environment Variables](#command-line-arguments-and-environment-variables)
+ * [Folder Layout](#folder-layout)
+* [Usage](#usage)
+ * [Initialization](#initialization)
+ * [Detecting Upgrades](#detecting-upgrades)
+ * [Adding Upgrade Binary](#adding-upgrade-binary)
+ * [Auto-Download](#auto-download)
+ * [Preparing for an Upgrade](#preparing-for-an-upgrade)
+* [Example: SimApp Upgrade](#example-simapp-upgrade)
+ * [Chain Setup](#chain-setup)
+ * [Prepare Cosmovisor and Start the Chain](#prepare-cosmovisor-and-start-the-chain)
+ * [Update App](#update-app)
+
+## Design
+
+Cosmovisor is designed to be used as a wrapper for a `Cosmos SDK` app:
+
+* it will pass arguments to the associated app (configured by `DAEMON_NAME` env variable).
+ Running `cosmovisor run arg1 arg2 ....` will run `app arg1 arg2 ...`;
+* it will manage an app by restarting and upgrading if needed;
+* it is configured using environment variables, not positional arguments.
+
+*Note: If new versions of the application are not set up to run in-place store migrations, migrations will need to be run manually before restarting `cosmovisor` with the new binary. For this reason, we recommend applications adopt in-place store migrations.*
+
+:::tip
+Only the latest version of cosmovisor is actively developed/maintained.
+:::
+
+:::warning
+Versions prior to v1.0.0 have a vulnerability that could lead to a DoS. Please upgrade to the latest version.
+:::
+
+## Contributing
+
+Cosmovisor is part of the Cosmos SDK monorepo, but it's a separate module with its own release schedule.
+
+Release branches have the following format `release/cosmovisor/vA.B.x`, where A and B are a number (e.g. `release/cosmovisor/v1.3.x`). Releases are tagged using the following format: `cosmovisor/vA.B.C`.
+
+## Setup
+
+### Installation
+
+You can download Cosmovisor from the [GitHub releases](https://github.com/cosmos/cosmos-sdk/releases/tag/cosmovisor%2Fv1.5.0).
+
+To install the latest version of `cosmovisor`, run the following command:
+
+```shell
+go install cosmossdk.io/tools/cosmovisor/cmd/cosmovisor@latest
+```
+
+To install a specific version, you can specify the version:
+
+```shell
+go install cosmossdk.io/tools/cosmovisor/cmd/cosmovisor@v1.5.0
+```
+
+Run `cosmovisor version` to check the cosmovisor version.
+
+Alternatively, to build from source, simply run `make cosmovisor`. The binary will be located in `tools/cosmovisor`.
+
+:::warning
+Installing cosmovisor using `go install` will display the correct `cosmovisor` version.
+Building from source (`make cosmovisor`) or installing `cosmovisor` by other means won't display the correct version.
+:::
+
+### Command Line Arguments And Environment Variables
+
+The first argument passed to `cosmovisor` is the action for `cosmovisor` to take. Options are:
+
+* `help`, `--help`, or `-h` - Output `cosmovisor` help information and check your `cosmovisor` configuration.
+* `run` - Run the configured binary using the rest of the provided arguments.
+* `version` - Output the `cosmovisor` version and also run the binary with the `version` argument.
+* `config` - Display the current `cosmovisor` configuration, i.e. the values of the environment variables that `cosmovisor` is using.
+* `add-upgrade` - Add an upgrade manually to `cosmovisor`. This command allows you to easily add the binary corresponding to an upgrade in cosmovisor.
+
+All arguments passed to `cosmovisor run` will be passed to the application binary (as a subprocess). `cosmovisor` will return `/dev/stdout` and `/dev/stderr` of the subprocess as its own. For this reason, `cosmovisor run` cannot accept any command-line arguments other than those available to the application binary.
+
+`cosmovisor` reads its configuration from environment variables, or from its configuration file (use `--cosmovisor-config <path>`):
+
+* `DAEMON_HOME` is the location where the `cosmovisor/` directory is kept that contains the genesis binary, the upgrade binaries, and any additional auxiliary files associated with each binary (e.g. `$HOME/.gaiad`, `$HOME/.regend`, `$HOME/.simd`, etc.).
+* `DAEMON_NAME` is the name of the binary itself (e.g. `gaiad`, `regend`, `simd`, etc.).
+* `DAEMON_ALLOW_DOWNLOAD_BINARIES` (*optional*), if set to `true`, will enable auto-downloading of new binaries (for security reasons, this is intended for full nodes rather than validators). By default, `cosmovisor` will not auto-download new binaries.
+* `DAEMON_DOWNLOAD_MUST_HAVE_CHECKSUM` (*optional*, default = `false`), if `true` cosmovisor will require that a checksum is provided in the upgrade plan for the binary to be downloaded. If `false`, cosmovisor will not require a checksum to be provided, but still check the checksum if one is provided.
+* `DAEMON_RESTART_AFTER_UPGRADE` (*optional*, default = `true`), if `true`, restarts the subprocess with the same command-line arguments and flags (but with the new binary) after a successful upgrade. Otherwise (`false`), `cosmovisor` stops running after an upgrade and requires the system administrator to manually restart it. Note restart is only after the upgrade and does not auto-restart the subprocess after an error occurs.
+* `DAEMON_RESTART_DELAY` (*optional*, default none), allows a node operator to define a delay between the node halt (for upgrade) and the backup. The value must be a duration (e.g. `1s`).
+* `DAEMON_SHUTDOWN_GRACE` (*optional*, default none), if set, send interrupt to binary and wait the specified time to allow for cleanup/cache flush to disk before sending the kill signal. The value must be a duration (e.g. `1s`).
+* `DAEMON_POLL_INTERVAL` (*optional*, default 300 milliseconds), is the interval length for polling the upgrade plan file. The value must be a duration (e.g. `1s`).
+* `DAEMON_DATA_BACKUP_DIR` option to set a custom backup directory. If not set, `DAEMON_HOME` is used.
+* `UNSAFE_SKIP_BACKUP` (defaults to `false`), if set to `true`, upgrades directly without performing a backup. Otherwise (`false`, default), the data is backed up before trying the upgrade. The default value of `false` is recommended, as a backup is needed to roll back in case of upgrade failure. We recommend using the default backup option `UNSAFE_SKIP_BACKUP=false`.
+* `DAEMON_PREUPGRADE_MAX_RETRIES` (defaults to `0`). The maximum number of times to call [`pre-upgrade`](https://docs.cosmos.network/main/build/building-apps/app-upgrade#pre-upgrade-handling) in the application after exit status of `31`. After the maximum number of retries, Cosmovisor fails the upgrade.
+* `COSMOVISOR_DISABLE_LOGS` (defaults to `false`). If set to `true`, this completely disables Cosmovisor logs (but not those of the underlying process). This may be useful, for example, when a Cosmovisor subcommand returns valid JSON that you are parsing, as the logs added by Cosmovisor would make that output invalid JSON.
+* `COSMOVISOR_COLOR_LOGS` (defaults to `true`). If set to `true`, this colorises Cosmovisor logs (but not those of the underlying process).
+* `COSMOVISOR_TIMEFORMAT_LOGS` (defaults to `kitchen`). If set to a value (`layout|ansic|unixdate|rubydate|rfc822|rfc822z|rfc850|rfc1123|rfc1123z|rfc3339|rfc3339nano|kitchen`), this adds a timestamp prefix to Cosmovisor logs (but not to those of the underlying process).
+* `COSMOVISOR_CUSTOM_PREUPGRADE` (defaults to ``). If set, runs `$DAEMON_HOME/cosmovisor/$COSMOVISOR_CUSTOM_PREUPGRADE` prior to the upgrade with the arguments `[ upgrade.Name, upgrade.Height ]`. This executes a custom script, separate from and prior to the chain daemon's `pre-upgrade` command.
+* `COSMOVISOR_DISABLE_RECASE` (defaults to `false`). If set to `true`, the upgrade directory will be expected to match the upgrade plan name exactly, without any case changes.
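+
+A typical configuration can be sketched as a set of exports in the node's service environment. This is an illustrative sketch: the `simd` daemon name and `~/.simapp` home are assumptions, not requirements.
+
+```shell
+# Illustrative cosmovisor environment (simd / ~/.simapp are assumed names)
+export DAEMON_NAME=simd
+export DAEMON_HOME="$HOME/.simapp"
+export DAEMON_RESTART_AFTER_UPGRADE=true     # restart the app after a successful upgrade
+export DAEMON_ALLOW_DOWNLOAD_BINARIES=false  # recommended for validators
+export DAEMON_POLL_INTERVAL=300ms            # default polling interval
+export UNSAFE_SKIP_BACKUP=false              # keep backups for rollback
+```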
+
+### Folder Layout
+
+`$DAEMON_HOME/cosmovisor` is expected to belong completely to `cosmovisor` and the subprocesses that are controlled by it. The folder content is organized as follows:
+
+```text
+.
+├── current -> genesis or upgrades/<name>
+├── genesis
+│   └── bin
+│       └── $DAEMON_NAME
+├── upgrades
+│   └── <name>
+│       ├── bin
+│       │   └── $DAEMON_NAME
+│       └── upgrade-info.json
+└── preupgrade.sh (optional)
+```
+
+The `cosmovisor/` directory includes a subdirectory for each version of the application (i.e. `genesis` or `upgrades/<name>`). Within each subdirectory is the application binary (i.e. `bin/$DAEMON_NAME`) and any additional auxiliary files associated with each binary. `current` is a symbolic link to the currently active directory (i.e. `genesis` or `upgrades/<name>`). The `name` variable in `upgrades/<name>` is the lowercased URI-encoded name of the upgrade as specified in the upgrade module plan. Note that upgrade name paths are normalized to lowercase: for instance, `MyUpgrade` is normalized to `myupgrade`, and its path is `upgrades/myupgrade`.
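+
+The lowercasing described above can be reproduced with standard tools, e.g.:
+
+```shell
+# Upgrade plan names are normalized to lowercase for the directory path
+name="MyUpgrade"
+dir="upgrades/$(echo "$name" | tr '[:upper:]' '[:lower:]')"
+echo "$dir"   # upgrades/myupgrade
+```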
+
+Please note that `$DAEMON_HOME/cosmovisor` only stores the *application binaries*. The `cosmovisor` binary itself can be stored in any typical location (e.g. `/usr/local/bin`). The application will continue to store its data in the default data directory (e.g. `$HOME/.simapp`) or the data directory specified with the `--home` flag. Since `$DAEMON_HOME` depends on the data directory and must be set to the same directory as the data directory, you will end up with a configuration like the following:
+
+```text
+.simapp
+├── config
+├── data
+└── cosmovisor
+```
+
+## Usage
+
+The system administrator is responsible for:
+
+* installing the `cosmovisor` binary
+* configuring the host's init system (e.g. `systemd`, `launchd`, etc.)
+* appropriately setting the environmental variables
+* creating the `<DAEMON_HOME>/cosmovisor` directory
+* creating the `<DAEMON_HOME>/cosmovisor/genesis/bin` folder
+* creating the `<DAEMON_HOME>/cosmovisor/upgrades/<name>/bin` folders
+* placing the different versions of the `<DAEMON_NAME>` executable in the appropriate `bin` folders.
+
+`cosmovisor` will set the `current` link to point to `genesis` at first start (i.e. when no `current` link exists) and then handle switching binaries at the correct points in time so that the system administrator can prepare days in advance and relax at upgrade time.
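+
+For a manual setup, the steps above amount to creating the tree and copying binaries into place. The following sketch uses a temporary directory and stand-in scripts instead of real chain binaries:
+
+```shell
+# Illustrative manual layout; a real setup uses the chain home and built binaries
+export DAEMON_NAME=simd
+export DAEMON_HOME="$(mktemp -d)"
+
+mkdir -p "$DAEMON_HOME/cosmovisor/genesis/bin"
+mkdir -p "$DAEMON_HOME/cosmovisor/upgrades/myupgrade/bin"
+
+# stand-ins for the genesis and upgrade executables
+printf '#!/bin/sh\necho genesis\n' > "$DAEMON_HOME/cosmovisor/genesis/bin/$DAEMON_NAME"
+printf '#!/bin/sh\necho upgrade\n' > "$DAEMON_HOME/cosmovisor/upgrades/myupgrade/bin/$DAEMON_NAME"
+chmod +x "$DAEMON_HOME/cosmovisor/genesis/bin/$DAEMON_NAME" \
+         "$DAEMON_HOME/cosmovisor/upgrades/myupgrade/bin/$DAEMON_NAME"
+```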
+
+In order to support downloadable binaries, a tarball for each upgrade binary will need to be packaged up and made available through a canonical URL. Additionally, a tarball that includes the genesis binary and all available upgrade binaries can be packaged up and made available so that all the necessary binaries required to sync a fullnode from start can be easily downloaded.
+
+The `DAEMON`-specific code and operations (e.g. CometBFT config, the application db, syncing blocks, etc.) all work as expected. The application binaries' directives, such as command-line flags and environment variables, also work as expected.
+
+### Initialization
+
+The `cosmovisor init <path to executable>` command creates the folder structure required for using cosmovisor.
+
+It does the following:
+
+* creates the `<DAEMON_HOME>/cosmovisor` folder if it doesn't yet exist
+* creates the `<DAEMON_HOME>/cosmovisor/genesis/bin` folder if it doesn't yet exist
+* copies the provided executable file to `<DAEMON_HOME>/cosmovisor/genesis/bin/<DAEMON_NAME>`
+* creates the `current` link, pointing to the `genesis` folder
+
+It uses the `DAEMON_HOME` and `DAEMON_NAME` environment variables for folder location and executable name.
+
+The `cosmovisor init` command is specifically for initializing cosmovisor, and should not be confused with a chain's `init` command (e.g. `cosmovisor run init`).
+
+### Detecting Upgrades
+
+`cosmovisor` polls the `$DAEMON_HOME/data/upgrade-info.json` file for new upgrade instructions. The file is created by the x/upgrade module in `BeginBlocker` when an upgrade is detected and the blockchain reaches the upgrade height.
+The following heuristic is applied to detect the upgrade:
+
+* When starting, `cosmovisor` doesn't know much about the currently running upgrade, except the binary, which is `current/bin/$DAEMON_NAME`. It tries to read the `current/upgrade-info.json` file to get information about the current upgrade name.
+* If neither `cosmovisor/current/upgrade-info.json` nor `data/upgrade-info.json` exist, then `cosmovisor` will wait for `data/upgrade-info.json` file to trigger an upgrade.
+* If `cosmovisor/current/upgrade-info.json` doesn't exist but `data/upgrade-info.json` exists, then `cosmovisor` assumes that whatever is in `data/upgrade-info.json` is a valid upgrade request. In this case `cosmovisor` tries immediately to make an upgrade according to the `name` attribute in `data/upgrade-info.json`.
+* Otherwise, `cosmovisor` waits for changes in `upgrade-info.json`. As soon as a new upgrade name is recorded in the file, `cosmovisor` will trigger an upgrade mechanism.
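+
+In shell terms, the heuristic can be sketched as follows (a simplification of cosmovisor's actual Go logic, not its implementation):
+
+```shell
+# Simplified sketch of the upgrade-detection heuristic
+DAEMON_HOME="${DAEMON_HOME:-$HOME/.simapp}"
+current="$DAEMON_HOME/cosmovisor/current/upgrade-info.json"
+pending="$DAEMON_HOME/data/upgrade-info.json"
+
+if [ ! -f "$current" ] && [ ! -f "$pending" ]; then
+  action="wait"      # wait for data/upgrade-info.json to appear
+elif [ ! -f "$current" ] && [ -f "$pending" ]; then
+  action="upgrade"   # trust the pending plan and upgrade immediately
+else
+  action="watch"     # watch for a new upgrade name in the file
+fi
+echo "$action"
+```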
+
+When the upgrade mechanism is triggered, `cosmovisor` will:
+
+1. if `DAEMON_ALLOW_DOWNLOAD_BINARIES` is enabled, start by auto-downloading a new binary into `cosmovisor/upgrades/<name>/bin` (where `<name>` is the `upgrade-info.json:name` attribute);
+2. update the `current` symbolic link to point to the new directory and save `data/upgrade-info.json` to `cosmovisor/current/upgrade-info.json`.
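+
+Step 2 is essentially a symlink swap plus a copy of the plan file. A rough stand-alone equivalent, sandboxed with illustrative paths:
+
+```shell
+# Rough equivalent of cosmovisor's switch to the new upgrade directory
+DAEMON_HOME="$(mktemp -d)"
+name="myupgrade"  # the upgrade-info.json:name attribute
+mkdir -p "$DAEMON_HOME/cosmovisor/upgrades/$name/bin" "$DAEMON_HOME/data"
+echo '{"name":"myupgrade","height":100}' > "$DAEMON_HOME/data/upgrade-info.json"
+
+# point `current` at the new directory and record the plan there
+ln -sfn "$DAEMON_HOME/cosmovisor/upgrades/$name" "$DAEMON_HOME/cosmovisor/current"
+cp "$DAEMON_HOME/data/upgrade-info.json" "$DAEMON_HOME/cosmovisor/current/upgrade-info.json"
+```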
+
+### Adding Upgrade Binary
+
+`cosmovisor` has an `add-upgrade` command that makes it easy to link a binary to an upgrade. It creates a new folder in `cosmovisor/upgrades/<name>` and copies the provided executable file to `cosmovisor/upgrades/<name>/bin/`.
+
+Using the `--upgrade-height` flag allows you to specify at which height the binary should be switched, without going through a governance proposal.
+This enables support for emergency coordinated upgrades where the binary must be switched at a specific height, but there is no time to go through a governance proposal.
+
+:::warning
+`--upgrade-height` creates an `upgrade-info.json` file. This means that if a chain upgrade via governance proposal is executed before the height specified with `--upgrade-height`, the governance proposal will overwrite the `upgrade-info.json` plan created by `add-upgrade --upgrade-height <height>`.
+Take this into consideration when using `--upgrade-height`.
+:::
+
+### Auto-Download
+
+Generally, `cosmovisor` requires that the system administrator place all relevant binaries on disk before the upgrade happens. However, for people who don't need such control and want an automated setup (maybe they are syncing a non-validating fullnode and want to do little maintenance), there is another option.
+
+**NOTE: we don't recommend using auto-download** because it doesn't verify in advance that a binary is available. If there is any issue downloading a binary, `cosmovisor` will stop and won't restart the application (which could lead to a chain halt).
+
+If `DAEMON_ALLOW_DOWNLOAD_BINARIES` is set to `true`, and no local binary can be found when an upgrade is triggered, `cosmovisor` will attempt to download and install the binary itself based on the instructions in the `info` attribute in the `data/upgrade-info.json` file. The file is constructed by the x/upgrade module and contains data from the upgrade `Plan` object. The `Plan` has an `info` field that is expected to have one of the following two valid formats to specify a download:
+
+1. Store an os/architecture -> binary URI map in the upgrade plan info field as JSON under the `"binaries"` key. For example:
+
+ ```json
+ {
+ "binaries": {
+ "linux/amd64":"https://example.com/gaia.zip?checksum=sha256:aec070645fe53ee3b3763059376134f058cc337247c978add178b6ccdfb0019f"
+ }
+ }
+ ```
+
+ You can include multiple binaries at once to ensure more than one environment will receive the correct binaries:
+
+ ```json
+ {
+ "binaries": {
+ "linux/amd64":"https://example.com/gaia.zip?checksum=sha256:aec070645fe53ee3b3763059376134f058cc337247c978add178b6ccdfb0019f",
+ "linux/arm64":"https://example.com/gaia.zip?checksum=sha256:aec070645fe53ee3b3763059376134f058cc337247c978add178b6ccdfb0019f",
+ "darwin/amd64":"https://example.com/gaia.zip?checksum=sha256:aec070645fe53ee3b3763059376134f058cc337247c978add178b6ccdfb0019f"
+ }
+ }
+ ```
+
+ When submitting this as a proposal, ensure there are no spaces. An example command using `gaiad` could look like:
+
+ ```shell
+ > gaiad tx upgrade software-upgrade Vega \
+ --title Vega \
+ --deposit 100uatom \
+ --upgrade-height 7368420 \
+ --upgrade-info '{"binaries":{"linux/amd64":"https://github.com/cosmos/gaia/releases/download/v6.0.0-rc1/gaiad-v6.0.0-rc1-linux-amd64","linux/arm64":"https://github.com/cosmos/gaia/releases/download/v6.0.0-rc1/gaiad-v6.0.0-rc1-linux-arm64","darwin/amd64":"https://github.com/cosmos/gaia/releases/download/v6.0.0-rc1/gaiad-v6.0.0-rc1-darwin-amd64"}}' \
+ --summary "upgrade to Vega" \
+ --gas 400000 \
+ --from user \
+ --chain-id test \
+ --home test/val2 \
+ --node tcp://localhost:36657 \
+ --yes
+ ```
+
+2. Store a link to a file that contains all information in the above format (e.g. if you want to specify lots of binaries, changelog info, etc. without filling up the blockchain). For example:
+
+ ```text
+ https://example.com/testnet-1001-info.json?checksum=sha256:deaaa99fda9407c4dbe1d04bd49bab0cc3c1dd76fa392cd55a9425be074af01e
+ ```
+
+When `cosmovisor` is triggered to download the new binary, `cosmovisor` will parse the `"binaries"` field, download the new binary with [go-getter](https://github.com/hashicorp/go-getter), and unpack the new binary in the `upgrades/<name>` folder so that it can be run as if it had been installed manually.
+
+Note that for this mechanism to provide strong security guarantees, all URLs should include a SHA-256/512 checksum. This ensures that no false binary is run, even if someone hacks the server or hijacks the DNS. `go-getter` will always ensure the downloaded file matches the checksum if one is provided. `go-getter` will also handle unpacking archives into directories (in this case the download link should point to a `zip` file containing all data in the `bin` directory).
+
+To properly create a SHA-256 checksum on Linux, you can use the `sha256sum` utility. For example:
+
+```shell
+sha256sum ./testdata/repo/zip_directory/autod.zip
+```
+
+The result will look something like the following: `29139e1381b8177aec909fab9a75d11381cab5adf7d3af0c05ff1c9c117743a7`.
+
+You can also use `sha512sum` if you would prefer to use longer hashes, or `md5sum` if you would prefer to use broken hashes. Whichever you choose, make sure to set the hash algorithm properly in the checksum argument to the URL.
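+
+Putting this together, a checksummed download URL can be assembled from the hash output. The file name and host below are illustrative stand-ins:
+
+```shell
+# Build a go-getter style URL with an embedded sha256 checksum
+echo "example binary contents" > autod.zip   # stand-in artifact
+checksum="$(sha256sum autod.zip | awk '{print $1}')"
+url="https://example.com/autod.zip?checksum=sha256:${checksum}"
+echo "$url"
+```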
+
+### Preparing for an Upgrade
+
+To prepare for an upgrade, use the `prepare-upgrade` command:
+
+```shell
+cosmovisor prepare-upgrade
+```
+
+This command performs the following actions:
+
+1. Retrieves upgrade information directly from the blockchain about the next scheduled upgrade.
+2. Downloads the new binary specified in the upgrade plan.
+3. Verifies the binary's checksum (if required by configuration).
+4. Places the new binary in the appropriate directory for Cosmovisor to use during the upgrade.
+
+The `prepare-upgrade` command provides detailed logging throughout the process, including:
+
+* The name and height of the upcoming upgrade
+* The URL from which the new binary is being downloaded
+* Confirmation of successful download and verification
+* The path where the new binary has been placed
+
+Example output:
+
+```bash
+INFO Preparing for upgrade name=v1.0.0 height=1000000
+INFO Downloading upgrade binary url=https://example.com/binary/v1.0.0?checksum=sha256:339911508de5e20b573ce902c500ee670589073485216bee8b045e853f24bce8
+INFO Upgrade preparation complete name=v1.0.0 height=1000000
+```
+
+*Note: The current way of downloading manually and placing the binary at the right place would still work.*
+
+## Example: SimApp Upgrade
+
+The following instructions provide a demonstration of `cosmovisor` using the simulation application (`simapp`) shipped with the Cosmos SDK's source code. These commands are to be run from within the `cosmos-sdk` repository.
+
+### Chain Setup
+
+Let's create a new chain using the `v0.47.4` version of simapp (the Cosmos SDK demo app):
+
+```shell
+git checkout v0.47.4
+make build
+```
+
+Clean `~/.simapp` (never do this in a production environment):
+
+```shell
+./build/simd tendermint unsafe-reset-all
+```
+
+Set up app config:
+
+```shell
+./build/simd config chain-id test
+./build/simd config keyring-backend test
+./build/simd config broadcast-mode sync
+```
+
+Initialize the node and overwrite any previous genesis file (never do this in a production environment):
+
+```shell
+./build/simd init test --chain-id test --overwrite
+```
+
+For the sake of this demonstration, amend `voting_period` in `genesis.json` to a reduced time of 20 seconds (`20s`):
+
+```shell
+cat <<< $(jq '.app_state.gov.params.voting_period = "20s"' $HOME/.simapp/config/genesis.json) > $HOME/.simapp/config/genesis.json
+```
+
+Create a validator, and set up the genesis transaction:
+
+```shell
+./build/simd keys add validator
+./build/simd genesis add-genesis-account validator 1000000000stake --keyring-backend test
+./build/simd genesis gentx validator 1000000stake --chain-id test
+./build/simd genesis collect-gentxs
+```
+
+#### Prepare Cosmovisor and Start the Chain
+
+Set the required environment variables:
+
+```shell
+export DAEMON_NAME=simd
+export DAEMON_HOME=$HOME/.simapp
+```
+
+Set the optional environment variable to trigger an automatic app restart:
+
+```shell
+export DAEMON_RESTART_AFTER_UPGRADE=true
+```
+
+Initialize cosmovisor with the current binary:
+
+```shell
+cosmovisor init ./build/simd
+```
+
+Now you can run cosmovisor with simapp v0.47.4:
+
+```shell
+cosmovisor run start
+```
+
+### Update App
+
+Update app to the latest version (e.g. v0.50.0).
+
+:::note
+
+Migration plans are defined using the `x/upgrade` module and described in [In-Place Store Migrations](https://github.com/cosmos/cosmos-sdk/blob/main/docs/learn/advanced/15-upgrade.md). Migrations can perform any deterministic state change.
+
+The migration plan to upgrade the simapp from v0.47 to v0.50 is defined in `simapp/upgrade.go`.
+
+:::
+
+Build the new version `simd` binary:
+
+```shell
+make build
+```
+
+Add the new `simd` binary and the upgrade name:
+
+:::warning
+
+The migration name must match the one defined in the migration plan.
+
+:::
+
+```shell
+cosmovisor add-upgrade v047-to-v050 ./build/simd
+```
+
+Open a new terminal window and submit an upgrade proposal along with a deposit and a vote (these commands must be run within 20 seconds of each other):
+
+```shell
+./build/simd tx upgrade software-upgrade v047-to-v050 --title upgrade --summary upgrade --upgrade-height 200 --upgrade-info "{}" --no-validate --from validator --yes
+./build/simd tx gov deposit 1 10000000stake --from validator --yes
+./build/simd tx gov vote 1 yes --from validator --yes
+```
+
+The upgrade will occur automatically at height 200. Note: you may need to change the upgrade height in the snippet above if your test run takes more time.
diff --git a/copy-of-sdk-docs/build/tooling/02-confix.md b/copy-of-sdk-docs/build/tooling/02-confix.md
new file mode 100644
index 00000000..00851ede
--- /dev/null
+++ b/copy-of-sdk-docs/build/tooling/02-confix.md
@@ -0,0 +1,156 @@
+---
+sidebar_position: 1
+---
+
+# Confix
+
+`Confix` is a configuration management tool that allows you to manage your configuration via CLI.
+
+It is based on the [CometBFT RFC 019](https://github.com/cometbft/cometbft/blob/5013bc3f4a6d64dcc2bf02ccc002ebc9881c62e4/docs/rfc/rfc-019-config-version.md).
+
+## Installation
+
+### Add Config Command
+
+To add the confix tool, add the `ConfigCommand` to your application's root command file (e.g. `<appd>/cmd/root.go`).
+
+Import the `confixcmd` package:
+
+```go
+import "cosmossdk.io/tools/confix/cmd"
+```
+
+Find the following line:
+
+```go
+initRootCmd(rootCmd, moduleManager)
+```
+
+After that line, add the following:
+
+```go
+rootCmd.AddCommand(
+ confixcmd.ConfigCommand(),
+)
+```
+
+The `ConfigCommand` function builds the `config` root command and is defined in the `confixcmd` package (`cosmossdk.io/tools/confix/cmd`).
+An implementation example can be found in `simapp`.
+
+The command will be available as `simd config`.
+
+:::tip
+Using confix directly in the application may provide fewer features than using it standalone.
+This is because confix is versioned with the SDK, while `latest` is the standalone version.
+:::
+
+### Using Confix Standalone
+
+To use Confix standalone, without having to add it in your application, install it with the following command:
+
+```bash
+go install cosmossdk.io/tools/confix/cmd/confix@latest
+```
+
+Alternatively, to build from source, simply run `make confix`. The binary will be located in `tools/confix`.
+
+## Usage
+
+Use standalone:
+
+```shell
+confix --help
+```
+
+Use in simd:
+
+```shell
+simd config fix --help
+```
+
+### Get
+
+Get a configuration value, e.g.:
+
+```shell
+simd config get app pruning # gets the value pruning from app.toml
+simd config get client chain-id # gets the value chain-id from client.toml
+```
+
+```shell
+confix get ~/.simapp/config/app.toml pruning # gets the value pruning from app.toml
+confix get ~/.simapp/config/client.toml chain-id # gets the value chain-id from client.toml
+```
+
+### Set
+
+Set a configuration value, e.g.:
+
+```shell
+simd config set app pruning "enabled" # sets the value pruning in app.toml
+simd config set client chain-id "foo-1" # sets the value chain-id in client.toml
+```
+
+```shell
+confix set ~/.simapp/config/app.toml pruning "enabled" # sets the value pruning in app.toml
+confix set ~/.simapp/config/client.toml chain-id "foo-1" # sets the value chain-id in client.toml
+```
+
+### Migrate
+
+Migrate a configuration file to a new version. The config type defaults to `app.toml`; to migrate `client.toml` instead, indicate it by adding the optional `--client` flag, e.g.:
+
+```shell
+simd config migrate v0.50 # migrates defaultHome/config/app.toml to the latest v0.50 config
+simd config migrate v0.50 --client # migrates defaultHome/config/client.toml to the latest v0.50 config
+```
+
+```shell
+confix migrate v0.50 ~/.simapp/config/app.toml # migrate ~/.simapp/config/app.toml to the latest v0.50 config
+confix migrate v0.50 ~/.simapp/config/client.toml --client # migrate ~/.simapp/config/client.toml to the latest v0.50 config
+```
+
+### Diff
+
+Get the diff between a given configuration file and the default configuration file, e.g.:
+
+```shell
+simd config diff v0.47 # gets the diff between defaultHome/config/app.toml and the latest v0.47 config
+simd config diff v0.47 --client # gets the diff between defaultHome/config/client.toml and the latest v0.47 config
+```
+
+```shell
+confix diff v0.47 ~/.simapp/config/app.toml # gets the diff between ~/.simapp/config/app.toml and the latest v0.47 config
+confix diff v0.47 ~/.simapp/config/client.toml --client # gets the diff between ~/.simapp/config/client.toml and the latest v0.47 config
+```
+
+### View
+
+View a configuration file, e.g.:
+
+```shell
+simd config view client # views the current app client config
+```
+
+```shell
+confix view ~/.simapp/config/client.toml # views the current app client config
+```
+
+### Maintainer
+
+Whenever the SDK's default configuration is modified, add the new default config under `data/vXX-app.toml`.
+This allows users to use the tool standalone.
+
+### Compatibility
+
+The recommended standalone version is `latest`, which uses the latest development version of Confix.
+
+| SDK Version | Confix Version |
+| ----------- | -------------- |
+| v0.50 | v0.1.x |
+| v0.52 | v0.2.x |
+| v2 | v0.2.x |
+
+## Credits
+
+This project is based on the [CometBFT RFC 019](https://github.com/cometbft/cometbft/blob/5013bc3f4a6d64dcc2bf02ccc002ebc9881c62e4/docs/rfc/rfc-019-config-version.md) and their own, never-released implementation of [confix](https://github.com/cometbft/cometbft/blob/v0.36.x/scripts/confix/confix.go).
diff --git a/copy-of-sdk-docs/build/tooling/03-hubl.md b/copy-of-sdk-docs/build/tooling/03-hubl.md
new file mode 100644
index 00000000..97d02921
--- /dev/null
+++ b/copy-of-sdk-docs/build/tooling/03-hubl.md
@@ -0,0 +1,73 @@
+---
+sidebar_position: 1
+---
+
+# Hubl
+
+`Hubl` is a tool that allows you to query any Cosmos SDK based blockchain.
+It takes advantage of the new [AutoCLI](https://pkg.go.dev/github.com/cosmos/cosmos-sdk/client/v2@v2.0.0-20220916140313-c5245716b516/cli) feature of the Cosmos SDK.
+
+## Installation
+
+Hubl can be installed using `go install`:
+
+```shell
+go install cosmossdk.io/tools/hubl/cmd/hubl@latest
+```
+
+Or build from source:
+
+```shell
+git clone --depth=1 https://github.com/cosmos/cosmos-sdk
+make hubl
+```
+
+The binary will be located in `tools/hubl`.
+
+## Usage
+
+```shell
+hubl --help
+```
+
+### Add chain
+
+To configure a new chain, run the `init` command with the name of the chain as it's listed in the chain registry.
+
+If the chain is not listed in the chain registry, you can use any unique name.
+
+```shell
+hubl init [chain-name]
+hubl init regen
+```
+
+The chain configuration is stored in `~/.hubl/config.toml`.
+
+:::tip
+
+When using an insecure gRPC endpoint, change the `insecure` field to `true` in the config file.
+
+```toml
+[chains]
+[chains.regen]
+[[chains.regen.trusted-grpc-endpoints]]
+endpoint = 'localhost:9090'
+insecure = true
+```
+
+Or use the `--insecure` flag:
+
+```shell
+hubl init regen --insecure
+```
+
+:::
+
+### Query
+
+To query a chain, you can use the `query` command.
+Then specify which module you want to query and the query itself.
+
+```shell
+hubl regen query auth module-accounts
+```
diff --git a/copy-of-sdk-docs/build/tooling/README.md b/copy-of-sdk-docs/build/tooling/README.md
new file mode 100644
index 00000000..230918c2
--- /dev/null
+++ b/copy-of-sdk-docs/build/tooling/README.md
@@ -0,0 +1,17 @@
+---
+sidebar_position: 0
+---
+
+# Tools
+
+This section provides documentation on various tooling maintained by the SDK team.
+This includes tools for development, operating a node, and ease of use of a Cosmos SDK chain.
+
+## CLI Tools
+
+* [Cosmovisor](../../../tools/cosmovisor/README.md)
+* [Confix](../../../tools/confix/README.md)
+
+## Other Tools
+
+* [Protocol Buffers](./00-protobuf.md)
diff --git a/copy-of-sdk-docs/build/tooling/_category_.json b/copy-of-sdk-docs/build/tooling/_category_.json
new file mode 100644
index 00000000..eb57cb8a
--- /dev/null
+++ b/copy-of-sdk-docs/build/tooling/_category_.json
@@ -0,0 +1,5 @@
+{
+ "label": "Tooling",
+ "position": 5,
+ "link": null
+}
\ No newline at end of file
diff --git a/copy-of-sdk-docs/learn/advanced/00-baseapp.md b/copy-of-sdk-docs/learn/advanced/00-baseapp.md
new file mode 100644
index 00000000..b24a570d
--- /dev/null
+++ b/copy-of-sdk-docs/learn/advanced/00-baseapp.md
@@ -0,0 +1,547 @@
+---
+sidebar_position: 1
+---
+
+# BaseApp
+
+:::note Synopsis
+This document describes `BaseApp`, the abstraction that implements the core functionalities of a Cosmos SDK application.
+:::
+
+:::note Pre-requisite Readings
+
+* [Anatomy of a Cosmos SDK application](../beginner/00-app-anatomy.md)
+* [Lifecycle of a Cosmos SDK transaction](../beginner/01-tx-lifecycle.md)
+
+:::
+
+## Introduction
+
+`BaseApp` is a base type that implements the core of a Cosmos SDK application, namely:
+
+* The [Application Blockchain Interface](#main-abci-messages), for the state-machine to communicate with the underlying consensus engine (e.g. CometBFT).
+* [Service Routers](#service-routers), to route messages and queries to the appropriate module.
+* Different [states](#state-updates), as the state-machine can have different volatile states updated based on the ABCI message received.
+
+The goal of `BaseApp` is to provide the fundamental layer of a Cosmos SDK application
+that developers can easily extend to build their own custom application. Usually,
+developers will create a custom type for their application, like so:
+
+```go
+type App struct {
+ // reference to a BaseApp
+ *baseapp.BaseApp
+
+ // list of application store keys
+
+ // list of application keepers
+
+ // module manager
+}
+```
+
+Extending the application with `BaseApp` gives the former access to all of `BaseApp`'s methods.
+This allows developers to compose their custom application with the modules they want, while not
+having to concern themselves with the hard work of implementing the ABCI, the service routers and state
+management logic.
+
+## Type Definition
+
+The `BaseApp` type holds many important parameters for any Cosmos SDK based application.
+
+```go reference
+https://github.com/cosmos/cosmos-sdk/blob/v0.53.0/baseapp/baseapp.go#L64-L201
+```
+
+Let us go through the most important components.
+
+> **Note**: Not all parameters are described, only the most important ones. Refer to the
+> type definition for the full list.
+
+First, the important parameters that are initialized during the bootstrapping of the application:
+
+* [`CommitMultiStore`](./04-store.md#commitmultistore): This is the main store of the application,
+ which holds the canonical state that is committed at the [end of each block](#commit). This store
+ is **not** cached, meaning it is not used to update the application's volatile (un-committed) states.
+ The `CommitMultiStore` is a multi-store, meaning a store of stores. Each module of the application
+ uses one or multiple `KVStores` in the multi-store to persist their subset of the state.
+* Database: The `db` is used by the `CommitMultiStore` to handle data persistence.
+* [`Msg` Service Router](#msg-service-router): The `msgServiceRouter` facilitates the routing of `sdk.Msg` requests to the appropriate
+ module `Msg` service for processing. Here a `sdk.Msg` refers to the transaction component that needs to be
+ processed by a service in order to update the application state, and not to an ABCI message, which implements
+ the interface between the application and the underlying consensus engine.
+* [gRPC Query Router](#grpc-query-router): The `grpcQueryRouter` facilitates the routing of gRPC queries to the
+ appropriate module for it to be processed. These queries are not ABCI messages themselves, but they
+ are relayed to the relevant module's gRPC `Query` service.
+* [`TxDecoder`](https://pkg.go.dev/github.com/cosmos/cosmos-sdk/types#TxDecoder): It is used to decode
+ raw transaction bytes relayed by the underlying CometBFT engine.
+* [`AnteHandler`](#antehandler): This handler is used to handle signature verification, fee payment,
+ and other pre-message execution checks when a transaction is received. It's executed during
+ [`CheckTx/RecheckTx`](#checktx) and [`FinalizeBlock`](#finalizeblock).
+* [`InitChainer`](../beginner/00-app-anatomy.md#initchainer), [`PreBlocker`](../beginner/00-app-anatomy.md#preblocker), [`BeginBlocker` and `EndBlocker`](../beginner/00-app-anatomy.md#beginblocker-and-endblocker): These are
+ the functions executed when the application receives the `InitChain` and `FinalizeBlock`
+ ABCI messages from the underlying CometBFT engine.
+
+Then, parameters used to define [volatile states](#state-updates) (i.e. cached states):
+
+* `checkState`: This state is updated during [`CheckTx`](#checktx), and reset on [`Commit`](#commit).
+* `finalizeBlockState`: This state is updated during [`FinalizeBlock`](#finalizeblock), and set to `nil` on
+ [`Commit`](#commit) and gets re-initialized on `FinalizeBlock`.
+* `processProposalState`: This state is updated during [`ProcessProposal`](#process-proposal).
+* `prepareProposalState`: This state is updated during [`PrepareProposal`](#prepare-proposal).
+
+Finally, a few more important parameters:
+
+* `voteInfos`: This parameter carries the list of validators whose precommit is missing, either
+ because they did not vote or because the proposer did not include their vote. This information is
+ carried by the [Context](./02-context.md) and can be used by the application for various things like
+ punishing absent validators.
+* `minGasPrices`: This parameter defines the minimum gas prices accepted by the node. This is a
+ **local** parameter, meaning each full-node can set a different `minGasPrices`. It is used in the
+ `AnteHandler` during [`CheckTx`](#checktx), mainly as a spam protection mechanism. The transaction
+ enters the [mempool](https://github.com/cometbft/cometbft/blob/v0.37.x/spec/abci/abci++_basic_concepts.md#mempool-methods)
+ only if the gas prices of the transaction are greater than one of the minimum gas prices in
+ `minGasPrices` (e.g. if `minGasPrices == 1uatom,1photon`, the `gas-price` of the transaction must be
+ greater than `1uatom` OR `1photon`).
+* `appVersion`: Version of the application. It is set in the
+ [application's constructor function](../beginner/00-app-anatomy.md#constructor-function).
+
+## Constructor
+
+```go
+func NewBaseApp(
+ name string, logger log.Logger, db dbm.DB, txDecoder sdk.TxDecoder, options ...func(*BaseApp),
+) *BaseApp {
+
+ // ...
+}
+```
+
+The `BaseApp` constructor function is pretty straightforward. The only thing worth noting is the
+possibility to provide additional [`options`](https://github.com/cosmos/cosmos-sdk/blob/v0.53.0/baseapp/options.go)
+to the `BaseApp`, which will execute them in order. The `options` are generally `setter` functions
+for important parameters, like `SetPruning()` to set pruning options or `SetMinGasPrices()` to set
+the node's `min-gas-prices`.
+
+Naturally, developers can add additional `options` based on their application's needs.
+
+## State Updates
+
+The `BaseApp` maintains four primary volatile states and a root or main state. The main state
+is the canonical state of the application, and the volatile states (`checkState`, `prepareProposalState`, `processProposalState` and `finalizeBlockState`)
+are used to handle state transitions in between updates of the main state made during [`Commit`](#commit).
+
+Internally, there is only a single `CommitMultiStore` which we refer to as the main or root state.
+From this root state, we derive four volatile states by using a mechanism called _store branching_ (performed by `CacheWrap` function).
+The types can be illustrated as follows:
+
+
+
+### InitChain State Updates
+
+During `InitChain`, the four volatile states, `checkState`, `prepareProposalState`, `processProposalState`
+and `finalizeBlockState` are set by branching the root `CommitMultiStore`. Any subsequent reads and writes happen
+on branched versions of the `CommitMultiStore`.
+To avoid unnecessary roundtrips to the main state, all reads to the branched store are cached.
+
+
+
+### CheckTx State Updates
+
+During `CheckTx`, the `checkState`, which is based off of the last committed state from the root
+store, is used for any reads and writes. Here we only execute the `AnteHandler` and verify a service router
+exists for every message in the transaction. Note, when we execute the `AnteHandler`, we branch
+the already branched `checkState`.
+This has the side effect that if the `AnteHandler` fails, the state transitions won't be reflected in the `checkState`
+-- i.e. `checkState` is only updated on success.
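+
+This write-on-success behavior can be sketched as follows (all names are illustrative stand-ins; the real `AnteHandler` and `checkState` are considerably more involved):
+
+```go
+package main
+
+import (
+	"errors"
+	"fmt"
+)
+
+// state stands in for checkState's key-value data.
+type state map[string]int
+
+// branch returns a copy; writes to the copy are merged back only on success.
+func (s state) branch() state {
+	c := state{}
+	for k, v := range s {
+		c[k] = v
+	}
+	return c
+}
+
+// anteHandler is a hypothetical ante step: deduct a fee, failing on insufficient funds.
+func anteHandler(s state, fee int) error {
+	if s["fee_payer"] < fee {
+		return errors.New("insufficient funds")
+	}
+	s["fee_payer"] -= fee
+	return nil
+}
+
+// checkTx branches the state before running the AnteHandler and merges the
+// branch back only if the handler succeeds, mirroring BaseApp's behavior.
+func checkTx(checkState state, fee int) error {
+	b := checkState.branch()
+	if err := anteHandler(b, fee); err != nil {
+		return err // branch is discarded; checkState is untouched
+	}
+	for k, v := range b {
+		checkState[k] = v
+	}
+	return nil
+}
+
+func main() {
+	checkState := state{"fee_payer": 10}
+	fmt.Println(checkTx(checkState, 5), checkState["fee_payer"])  // succeeds, state updated
+	fmt.Println(checkTx(checkState, 50), checkState["fee_payer"]) // fails, state unchanged
+}
+```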
+
+
+
+### PrepareProposal State Updates
+
+During `PrepareProposal`, the `prepareProposalState` is set by branching the root `CommitMultiStore`.
+The `prepareProposalState` is used for any reads and writes that occur during the `PrepareProposal` phase.
+The function uses the `Select()` method of the mempool to iterate over the transactions. `runTx` is then called,
+which encodes and validates each transaction and from there the `AnteHandler` is executed.
+If successful, valid transactions are returned inclusive of the events, tags, and data generated
+during the execution of the proposal.
+The described behavior is that of the default handler; applications have the flexibility to define their own
+[custom mempool handlers](https://docs.cosmos.network/main/build/building-apps/app-mempool).
+
+
+
+### ProcessProposal State Updates
+
+During `ProcessProposal`, the `processProposalState` is set based off of the last committed state
+from the root store and is used to process a signed proposal received from a validator.
+In this state, `runTx` is called and the `AnteHandler` is executed. The context used in this state is built with information
+from the header and the main state, including the minimum gas prices, which are also set.
+Again, we want to highlight that the described behavior is that of the default handler; applications have the flexibility to define their own
+[custom mempool handlers](https://docs.cosmos.network/main/build/building-apps/app-mempool).
+
+
+
+### FinalizeBlock State Updates
+
+During `FinalizeBlock`, the `finalizeBlockState` is set for use during transaction execution and endblock. The
+`finalizeBlockState` is based off of the last committed state from the root store and is branched.
+Note, the `finalizeBlockState` is set to `nil` on [`Commit`](#commit).
+
+The state flow for transaction execution is nearly identical to `CheckTx` except state transitions occur on
+the `finalizeBlockState` and messages in a transaction are executed. Similarly to `CheckTx`, state transitions
+occur on a doubly branched state -- `finalizeBlockState`. Successful message execution results in
+writes being committed to `finalizeBlockState`. Note, if message execution fails, state transitions from
+the `AnteHandler` are persisted.
+
+### Commit State Updates
+
+During `Commit` all the state transitions that occurred in the `finalizeBlockState` are finally written to
+the root `CommitMultiStore` which in turn is committed to disk and results in a new application
+root hash. These state transitions are now considered final. Finally, the `checkState` is set to the
+newly committed state and `finalizeBlockState` is set to `nil` to be reset on `FinalizeBlock`.
+
+
+
+## ParamStore
+
+During `InitChain`, the `RequestInitChain` provides `ConsensusParams` which contains parameters
+related to block execution such as maximum gas and size in addition to evidence parameters. If these
+parameters are non-nil, they are set in the BaseApp's `ParamStore`. Behind the scenes, the `ParamStore`
+is managed by the `x/consensus` module. This allows the parameters to be tweaked via
+on-chain governance.
+
+## Service Routers
+
+When messages and queries are received by the application, they must be routed to the appropriate module in order to be processed. Routing is done via `BaseApp`, which holds a `msgServiceRouter` for messages, and a `grpcQueryRouter` for queries.
+
+### `Msg` Service Router
+
+[`sdk.Msg`s](../../build/building-modules/02-messages-and-queries.md#messages) need to be routed after they are extracted from transactions, which are sent from the underlying CometBFT engine via the [`CheckTx`](#checktx) and [`FinalizeBlock`](#finalizeblock) ABCI messages. To do so, `BaseApp` holds a `msgServiceRouter` which maps fully-qualified service methods (`string`, defined in each module's Protobuf `Msg` service) to the appropriate module's `MsgServer` implementation.
+
+The [default `msgServiceRouter` included in `BaseApp`](https://github.com/cosmos/cosmos-sdk/blob/v0.53.0/baseapp/msg_service_router.go) is stateless. However, some applications may want to make use of more stateful routing mechanisms such as allowing governance to disable certain routes or point them to new modules for upgrade purposes. For this reason, the `sdk.Context` is also passed into each [route handler inside `msgServiceRouter`](https://github.com/cosmos/cosmos-sdk/blob/v0.53.0/baseapp/msg_service_router.go#L35-L36). For a stateless router that doesn't want to make use of this, you can just ignore the `ctx`.
+
+The application's `msgServiceRouter` is initialized with all the routes using the application's [module manager](../../build/building-modules/01-module-manager.md#manager) (via the `RegisterServices` method), which itself is initialized with all the application's modules in the application's [constructor](../beginner/00-app-anatomy.md#constructor-function).
+
+### gRPC Query Router
+
+Similar to `sdk.Msg`s, [`queries`](../../build/building-modules/02-messages-and-queries.md#queries) need to be routed to the appropriate module's [`Query` service](../../build/building-modules/04-query-services.md). To do so, `BaseApp` holds a `grpcQueryRouter`, which maps modules' fully-qualified service methods (`string`, defined in their Protobuf `Query` gRPC) to their `QueryServer` implementation. The `grpcQueryRouter` is called during the initial stages of query processing, which can be either by directly sending a gRPC query to the gRPC endpoint, or via the [`Query` ABCI message](#query) on the CometBFT RPC endpoint.
+
+Just like the `msgServiceRouter`, the `grpcQueryRouter` is initialized with all the query routes using the application's [module manager](../../build/building-modules/01-module-manager.md) (via the `RegisterServices` method), which itself is initialized with all the application's modules in the application's [constructor](../beginner/00-app-anatomy.md#app-constructor).
+
+## Main ABCI 2.0 Messages
+
+The [Application-Blockchain Interface](https://github.com/cometbft/cometbft/blob/v0.37.x/spec/abci/abci++_basic_concepts.md) (ABCI) is a generic interface that connects a state-machine with a consensus engine to form a functional full-node. It can be wrapped in any language, and needs to be implemented by each application-specific blockchain built on top of an ABCI-compatible consensus engine like CometBFT.
+
+The consensus engine handles two main tasks:
+
+* The networking logic, which mainly consists in gossiping block parts, transactions and consensus votes.
+* The consensus logic, which results in the deterministic ordering of transactions in the form of blocks.
+
+It is **not** the role of the consensus engine to define the state or the validity of transactions. Generally, transactions are handled by the consensus engine in the form of `[]bytes`, and relayed to the application via the ABCI to be decoded and processed. At key moments in the networking and consensus processes (e.g. beginning of a block, commit of a block, reception of an unconfirmed transaction, ...), the consensus engine emits ABCI messages for the state-machine to act on.
+
+Developers building on top of the Cosmos SDK need not implement the ABCI themselves, as `BaseApp` comes with a built-in implementation of the interface. Let us go through the main ABCI messages that `BaseApp` implements:
+
+* [`Prepare Proposal`](#prepare-proposal)
+* [`Process Proposal`](#process-proposal)
+* [`CheckTx`](#checktx)
+* [`FinalizeBlock`](#finalizeblock)
+* [`ExtendVote`](#extendvote)
+* [`VerifyVoteExtension`](#verifyvoteextension)
+
+
+### Prepare Proposal
+
+The `PrepareProposal` function is part of the methods introduced in Application Blockchain Interface (ABCI++) in CometBFT and gives the application fine-grained control over block construction. In the Cosmos SDK, it allows the application to control which transactions are included in a proposed block, and helps ensure that only valid transactions are committed to the blockchain.
+
+Here is how the `PrepareProposal` function can be implemented:
+
+1. Extract the `sdk.Msg`s from the transaction.
+2. Perform _stateful_ checks by calling `Validate()` on each of the `sdk.Msg`s. This is done after _stateless_ checks, as _stateful_ checks are more computationally expensive. If `Validate()` fails, `PrepareProposal` returns before running further checks, which saves resources.
+3. Perform any additional checks that are specific to the application, such as checking account balances, or ensuring that certain conditions are met before a transaction is proposed.
+4. Return the updated transactions to be processed by the consensus engine.
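+
+As a rough sketch, the shape of a default-style handler is a greedy selection over mempool transactions that stops at the request's `max_tx_bytes` limit. This is illustrative only (the real default handler calls `runTx` on each transaction via the mempool's `Select()` iterator):
+
+```go
+package main
+
+import "fmt"
+
+// tx stands in for a mempool transaction and its encoded size.
+type tx struct {
+	id    string
+	bytes int
+	valid bool
+}
+
+// prepareProposal sketches the default handler's shape: iterate mempool
+// transactions in priority order, drop invalid ones, and stop once the
+// proposal would exceed maxTxBytes.
+func prepareProposal(mempool []tx, maxTxBytes int) []string {
+	var proposal []string
+	total := 0
+	for _, t := range mempool {
+		if !t.valid { // stands in for a failed runTx/AnteHandler check
+			continue
+		}
+		if total+t.bytes > maxTxBytes {
+			break
+		}
+		total += t.bytes
+		proposal = append(proposal, t.id)
+	}
+	return proposal
+}
+
+func main() {
+	mempool := []tx{
+		{"tx1", 400, true},
+		{"tx2", 300, false}, // dropped: fails validation
+		{"tx3", 500, true},
+		{"tx4", 400, true}, // dropped: would exceed the limit
+	}
+	fmt.Println(prepareProposal(mempool, 1000))
+}
+```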
+
+Note that, unlike `CheckTx()`, `PrepareProposal` processes `sdk.Msg`s, so it can directly update the state. However, unlike `FinalizeBlock()`, it does not commit the state updates. Exercise caution when using `PrepareProposal`, as incorrect coding could affect the overall liveness of the network.
+
+It's important to note that `PrepareProposal` complements the `ProcessProposal` method which is executed after this method. The combination of these two methods means that it is possible to guarantee that no invalid transactions are ever committed. Furthermore, such a setup can give rise to other interesting use cases such as Oracles, threshold decryption and more.
+
+`PrepareProposal` returns a response to the underlying consensus engine of type [`abci.PrepareProposalResponse`](https://github.com/cometbft/cometbft/blob/v0.37.x/spec/abci/abci++_methods.md#prepareproposal). The response contains:
+
+* `Txs ([][]byte)`: The list of transactions to include in the proposal, potentially reordered, dropped, or replaced relative to the transactions in the request.
+
+
+### Process Proposal
+
+The `ProcessProposal` function is called by `BaseApp` as part of the ABCI message flow, and is executed during the proposal phase of the consensus process. The purpose of this function is to give more control to the application for block validation, allowing it to check all transactions in a proposed block before the validator sends the prevote for the block. It allows a validator to perform application-dependent work in a proposed block, enabling features such as immediate block execution, and allows the application to reject invalid blocks.
+
+The `ProcessProposal` function performs several key tasks, including:
+
+1. Validating the proposed block by checking all transactions in it.
+2. Checking the proposed block against the current state of the application, to ensure that it is valid and that it can be executed.
+3. Updating the application's state based on the proposal, if it is valid and passes all checks.
+4. Returning a response to CometBFT indicating the result of the proposal processing.
+
+`ProcessProposal` is an important part of the application's overall governance system. It is used to manage the network's parameters and other key aspects of its operation. It also ensures that the coherence property is adhered to, i.e. all honest validators must accept a proposal by an honest proposer.
+
+It's important to note that `ProcessProposal` complements the `PrepareProposal` method which enables the application to have more fine-grained transaction control by allowing it to reorder, drop, delay, modify, and even add transactions as they see necessary. The combination of these two methods means that it is possible to guarantee that no invalid transactions are ever committed. Furthermore, such a setup can give rise to other interesting use cases such as Oracles, threshold decryption and more.
+
+CometBFT calls it when it receives a proposal and the CometBFT algorithm has not locked on a value. The Application cannot modify the proposal at this point but can reject it if it is invalid. If that is the case, CometBFT will prevote `nil` on the proposal, which has strong liveness implications for CometBFT. As a general rule, the Application SHOULD accept a prepared proposal passed via `ProcessProposal`, even if a part of the proposal is invalid (e.g., an invalid transaction); the Application can ignore the invalid part of the prepared proposal at block execution time.
+
+However, developers must exercise greater caution when using these methods. Incorrectly coding these methods could affect liveness as CometBFT is unable to receive 2/3 valid precommits to finalize a block.
+
+`ProcessProposal` returns a response to the underlying consensus engine of type [`abci.ProcessProposalResponse`](https://github.com/cometbft/cometbft/blob/v0.37.x/spec/abci/abci++_methods.md#processproposal). The response contains:
+
+* `Status (ProposalStatus)`: `ACCEPT` if the proposal is valid, `REJECT` otherwise.
+
+
+### CheckTx
+
+`CheckTx` is sent by the underlying consensus engine when a new unconfirmed (i.e. not yet included in a valid block)
+transaction is received by a full-node. The role of `CheckTx` is to guard the full-node's mempool
+(where unconfirmed transactions are stored until they are included in a block) from spam transactions.
+Unconfirmed transactions are relayed to peers only if they pass `CheckTx`.
+
+`CheckTx()` can perform both _stateful_ and _stateless_ checks, but developers should strive to
+make the checks **lightweight** because gas fees are not charged for the resources (CPU, data load...) used during the `CheckTx`.
+
+In the Cosmos SDK, after [decoding transactions](./05-encoding.md), `CheckTx()` is implemented
+to do the following checks:
+
+1. Extract the `sdk.Msg`s from the transaction.
+2. **Optionally** perform _stateless_ checks by calling `ValidateBasic()` on each of the `sdk.Msg`s. This is done
+ first, as _stateless_ checks are less computationally expensive than _stateful_ checks. If
+ `ValidateBasic()` fails, `CheckTx` returns before running _stateful_ checks, which saves resources.
+ This check is still performed for messages that have not yet migrated to the new message validation mechanism defined in [RFC 001](https://docs.cosmos.network/main/rfc/rfc-001-tx-validation) and still have a `ValidateBasic()` method.
+3. Perform non-module related _stateful_ checks on the [account](../beginner/03-accounts.md). This step is mainly about checking
+ that the `sdk.Msg` signatures are valid, that enough fees are provided and that the sending account
+ has enough funds to pay for said fees. Note that no precise [`gas`](../beginner/04-gas-fees.md) counting occurs here,
+ as `sdk.Msg`s are not processed. Usually, the [`AnteHandler`](../beginner/04-gas-fees.md#antehandler) will check that the `gas` provided
+ with the transaction is greater than a minimum reference gas amount based on the raw transaction size,
+ in order to avoid spam with transactions that provide 0 gas.
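+
+The fee check described in step 3 can be sketched with integer amounts. This is a simplification: the SDK uses decimal gas prices and `sdk.Coins`, and the check happens inside the `AnteHandler`:
+
+```go
+package main
+
+import "fmt"
+
+// checkFee sketches the AnteHandler's local fee check during CheckTx:
+// the fee offered must cover gasLimit * minGasPrice for at least one of
+// the node's configured minimum gas prices (amounts in the smallest unit).
+func checkFee(feeOffered map[string]int64, gasLimit int64, minGasPrices map[string]int64) bool {
+	for denom, minPrice := range minGasPrices {
+		required := gasLimit * minPrice
+		if feeOffered[denom] >= required {
+			return true
+		}
+	}
+	return false
+}
+
+func main() {
+	// Node-local configuration; other nodes may require different prices.
+	minGasPrices := map[string]int64{"uatom": 1, "photon": 2}
+	fmt.Println(checkFee(map[string]int64{"uatom": 200000}, 200000, minGasPrices))
+	fmt.Println(checkFee(map[string]int64{"uatom": 1000}, 200000, minGasPrices))
+}
+```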
+
+`CheckTx` does **not** process `sdk.Msg`s - they only need to be processed when the canonical state needs to be updated, which happens during `FinalizeBlock`.
+
+Steps 2. and 3. are performed by the [`AnteHandler`](../beginner/04-gas-fees.md#antehandler) in the [`RunTx()`](#runtx-antehandler-and-runmsgs)
+function, which `CheckTx()` calls with the `runTxModeCheck` mode. During each step of `CheckTx()`, a
+special [volatile state](#state-updates) called `checkState` is updated. This state is used to keep
+track of the temporary changes triggered by the `CheckTx()` calls of each transaction without modifying
+the [main canonical state](#main-state). For example, when a transaction goes through `CheckTx()`, the
+transaction's fees are deducted from the sender's account in `checkState`. If a second transaction is
+received from the same account before the first is processed, and the account has consumed all its
+funds in `checkState` during the first transaction, the second transaction will fail `CheckTx()` and
+be rejected. In any case, the sender's account will not actually pay the fees until the transaction
+is actually included in a block, because `checkState` never gets committed to the main state. The
+`checkState` is reset to the latest state of the main state each time a block gets [committed](#commit).
+
+`CheckTx` returns a response to the underlying consensus engine of type [`abci.CheckTxResponse`](https://github.com/cometbft/cometbft/blob/v0.37.x/spec/abci/abci++_methods.md#checktx).
+The response contains:
+
+* `Code (uint32)`: Response Code. `0` if successful.
+* `Data ([]byte)`: Result bytes, if any.
+* `Log (string):` The output of the application's logger. May be non-deterministic.
+* `Info (string):` Additional information. May be non-deterministic.
+* `GasWanted (int64)`: Amount of gas requested for transaction. It is provided by users when they generate the transaction.
+* `GasUsed (int64)`: Amount of gas consumed by transaction. During `CheckTx`, this value is computed by multiplying the standard cost of a transaction byte by the size of the raw transaction. Next is an example:
+
+```go reference
+https://github.com/cosmos/cosmos-sdk/blob/v0.53.0/x/auth/ante/basic.go#L104
+```
+
+* `Events ([]cmn.KVPair)`: Key-Value tags for filtering and indexing transactions (eg. by account). See [`event`s](./08-events.md) for more.
+* `Codespace (string)`: Namespace for the Code.
+
+#### RecheckTx
+
+After `Commit`, `CheckTx` is run again on all transactions that remain in the node's local mempool,
+excluding the transactions that were included in the block. To prevent the mempool from rechecking all transactions
+every time a block is committed, the configuration option `mempool.recheck=false` can be set. As of
+Tendermint v0.32.1, an additional `Type` parameter is made available to the `CheckTx` function that
+indicates whether an incoming transaction is new (`CheckTxType_New`) or a recheck (`CheckTxType_Recheck`).
+This allows certain checks, like signature verification, to be skipped during `CheckTxType_Recheck`.
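+
+The recheck optimization can be sketched as follows (the constant names mirror CometBFT's `CheckTxType` values; the step lists are illustrative):
+
+```go
+package main
+
+import "fmt"
+
+// CheckTxType mirrors the shape of CometBFT's CheckTxType_New / CheckTxType_Recheck.
+type CheckTxType int
+
+const (
+	CheckTxTypeNew CheckTxType = iota
+	CheckTxTypeRecheck
+)
+
+// checkTx sketches skipping expensive signature verification on recheck:
+// the signature was already verified when the transaction first entered the
+// mempool, so a recheck after a commit can safely skip it.
+func checkTx(typ CheckTxType) []string {
+	steps := []string{}
+	if typ == CheckTxTypeNew {
+		steps = append(steps, "verify signature")
+	}
+	steps = append(steps, "check fees", "check sequence")
+	return steps
+}
+
+func main() {
+	fmt.Println(checkTx(CheckTxTypeNew))
+	fmt.Println(checkTx(CheckTxTypeRecheck))
+}
+```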
+
+## RunTx, AnteHandler, RunMsgs, PostHandler
+
+### RunTx
+
+`RunTx` is called from `CheckTx`/`FinalizeBlock` to handle the transaction, with `execModeCheck` or `execModeFinalize` as parameter to differentiate between the two modes of execution. Note that when `RunTx` receives a transaction, it has already been decoded.
+
+The first thing `RunTx` does upon being called is to retrieve the `context`'s `CacheMultiStore` by calling the `getContextForTx()` function with the appropriate mode (either `execModeCheck` or `execModeFinalize`). This `CacheMultiStore` is a branch of the main store, with cache functionality (for query requests), instantiated during `FinalizeBlock` for transaction execution and during the `Commit` of the previous block for `CheckTx`. After that, two `defer func()` are called for [`gas`](../beginner/04-gas-fees.md) management. They are executed when `runTx` returns, making sure `gas` is actually consumed, and will throw errors, if any.
+
+After that, `RunTx()` calls `ValidateBasic()`, when available and for backward compatibility, on each `sdk.Msg` in the `Tx`, which runs preliminary _stateless_ validity checks. If any `sdk.Msg` fails to pass `ValidateBasic()`, `RunTx()` returns with an error.
+
+Then, the [`anteHandler`](#antehandler) of the application is run (if it exists). In preparation of this step, both the `checkState`/`finalizeBlockState`'s `context` and `context`'s `CacheMultiStore` are branched using the `cacheTxContext()` function.
+
+```go reference
+https://github.com/cosmos/cosmos-sdk/blob/v0.53.0/baseapp/baseapp.go#L706-L722
+```
+
+This allows `RunTx` not to commit the changes made to the state during the execution of `anteHandler` if it ends up failing. It also prevents the module implementing the `anteHandler` from writing to state, which is an important part of the [object-capabilities](./10-ocap.md) of the Cosmos SDK.
+
+Finally, the [`RunMsgs()`](#runmsgs) function is called to process the `sdk.Msg`s in the `Tx`. In preparation of this step, just like with the `anteHandler`, both the `checkState`/`finalizeBlockState`'s `context` and `context`'s `CacheMultiStore` are branched using the `cacheTxContext()` function.
+
+### AnteHandler
+
+The `AnteHandler` is a special handler that implements the `AnteHandler` interface and is used to authenticate the transaction before the transaction's internal messages are processed.
+
+```go reference
+https://github.com/cosmos/cosmos-sdk/blob/v0.53.0/types/handler.go#L3-L5
+```
+
+The `AnteHandler` is theoretically optional, but still a very important component of public blockchain networks. It serves 3 primary purposes:
+
+* Be a primary line of defense against spam and second line of defense (the first one being the mempool) against transaction replay with fees deduction and [`sequence`](./01-transactions.md#transaction-generation) checking.
+* Perform preliminary _stateful_ validity checks like ensuring signatures are valid or that the sender has enough funds to pay for fees.
+* Play a role in the incentivization of stakeholders via the collection of transaction fees.
+
+`BaseApp` holds an `anteHandler` as parameter that is initialized in the [application's constructor](../beginner/00-app-anatomy.md#application-constructor). The most widely used `anteHandler` is the [`auth` module](https://github.com/cosmos/cosmos-sdk/blob/v0.53.0/x/auth/ante/ante.go).
+
+Click [here](../beginner/04-gas-fees.md#antehandler) for more on the `anteHandler`.
+
+### RunMsgs
+
+`RunMsgs` is called from `RunTx` with `execModeCheck` as parameter to check the existence of a route for each message in the transaction, and with `execModeFinalize` to actually process the `sdk.Msg`s.
+
+First, it retrieves the `sdk.Msg`'s fully-qualified type name, by checking the `type_url` of the Protobuf `Any` representing the `sdk.Msg`. Then, using the application's [`msgServiceRouter`](#msg-service-router), it checks for the existence of a `Msg` service method related to that `type_url`. At this point, if `mode == execModeCheck`, `RunMsgs` returns. Otherwise, if `mode == execModeFinalize`, the [`Msg` service](../../build/building-modules/03-msg-services.md) RPC is executed, before `RunMsgs` returns.
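+
+The routing logic can be sketched with a plain map from type URL to handler. This is an illustrative stand-in for the `msgServiceRouter`, not SDK code:
+
+```go
+package main
+
+import (
+	"errors"
+	"fmt"
+)
+
+// router stands in for msgServiceRouter: it maps a message's fully
+// qualified type URL to a handler.
+type router map[string]func() string
+
+// runMsg mirrors RunMsgs' shape: in check mode it only verifies a route
+// exists; in finalize mode it actually executes the handler.
+func runMsg(r router, typeURL string, finalize bool) (string, error) {
+	h, ok := r[typeURL]
+	if !ok {
+		return "", errors.New("unroutable message: " + typeURL)
+	}
+	if !finalize {
+		return "route ok", nil
+	}
+	return h(), nil
+}
+
+func main() {
+	r := router{
+		"/cosmos.bank.v1beta1.MsgSend": func() string { return "executed MsgSend" },
+	}
+	fmt.Println(runMsg(r, "/cosmos.bank.v1beta1.MsgSend", false))
+	fmt.Println(runMsg(r, "/cosmos.bank.v1beta1.MsgSend", true))
+	_, err := runMsg(r, "/cosmos.gov.v1.MsgVote", true)
+	fmt.Println(err)
+}
+```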
+
+### PostHandler
+
+`PostHandler` is similar to `AnteHandler`, but, as the name suggests, it executes custom post-transaction processing logic after [`RunMsgs`](#runmsgs) is called. `PostHandler` receives the `Result` of `RunMsgs` in order to enable this customizable behavior.
+
+Like `AnteHandler`s, `PostHandler`s are theoretically optional.
+
+Other use cases like unused gas refund can also be enabled by `PostHandler`s.
+
+```go reference
+https://github.com/cosmos/cosmos-sdk/blob/v0.53.0/x/auth/posthandler/post.go#L1-L15
+```
+
+Note, when `PostHandler`s fail, the state from `runMsgs` is also reverted, effectively making the transaction fail.
+
+## Other ABCI Messages
+
+### InitChain
+
+The [`InitChain` ABCI message](https://github.com/cometbft/cometbft/blob/v0.37.x/spec/abci/abci++_basic_concepts.md#method-overview) is sent from the underlying CometBFT engine when the chain is first started. It is mainly used to **initialize** parameters and state like:
+
+* [Consensus Parameters](https://github.com/cometbft/cometbft/blob/v0.37.x/spec/abci/abci++_app_requirements.md#consensus-parameters) via `setConsensusParams`.
+* [`checkState` and `finalizeBlockState`](#state-updates) via `setState`.
+* The [block gas meter](../beginner/04-gas-fees.md#block-gas-meter), with infinite gas to process genesis transactions.
+
+Finally, the `InitChain(req abci.InitChainRequest)` method of `BaseApp` calls the [`initChainer()`](../beginner/00-app-anatomy.md#initchainer) of the application in order to initialize the main state of the application from the `genesis file` and, if defined, call the [`InitGenesis`](../../build/building-modules/08-genesis.md#initgenesis) function of each of the application's modules.
+
+
+### FinalizeBlock
+
+The [`FinalizeBlock` ABCI message](https://github.com/cometbft/cometbft/blob/v0.38.x/spec/abci/abci++_basic_concepts.md#method-overview) is sent from the underlying CometBFT engine when a block proposal created by the correct proposer is received. The previous `BeginBlock`, `DeliverTx` and `EndBlock` calls are now private methods on the `BaseApp` struct.
+
+
+```go reference
+https://github.com/cosmos/cosmos-sdk/blob/v0.53.0/baseapp/abci.go#L869
+```
+
+#### PreBlock
+
+* Run the application's [`preBlocker()`](../beginner/00-app-anatomy.md#preblocker), which mainly runs the [`PreBlocker()`](../../build/building-modules/17-preblock.md#preblock) method of each of the modules.
+
+#### BeginBlock
+
+* Initialize [`finalizeBlockState`](#state-updates) with the latest header using the `req abci.FinalizeBlockRequest` passed as parameter via the `setState` function.
+
+ ```go reference
+ https://github.com/cosmos/cosmos-sdk/blob/v0.53.0/baseapp/baseapp.go#L746-L770
+ ```
+
+ This function also resets the [main gas meter](../beginner/04-gas-fees.md#main-gas-meter).
+
+* Initialize the [block gas meter](../beginner/04-gas-fees.md#block-gas-meter) with the `maxGas` limit. The `gas` consumed within the block cannot go above `maxGas`. This parameter is defined in the application's consensus parameters.
+* Run the application's [`beginBlocker()`](../beginner/00-app-anatomy.md#beginblocker-and-endblocker), which mainly runs the [`BeginBlocker()`](../../build/building-modules/06-beginblock-endblock.md#beginblock) method of each of the modules.
+* Set the [`VoteInfos`](https://github.com/cometbft/cometbft/blob/v0.37.x/spec/abci/abci++_methods.md#voteinfo) of the application, i.e. the list of validators whose _precommit_ for the previous block was included by the proposer of the current block. This information is carried into the [`Context`](./02-context.md) so that it can be used during transaction execution and EndBlock.
+
+#### Transaction Execution
+
+When the underlying consensus engine receives a block proposal, each transaction in the block needs to be processed by the application. To that end, the underlying consensus engine sends the transactions to the application in the `FinalizeBlock` message, in sequential order.
+
+Before the first transaction of a given block is processed, a [volatile state](#state-updates) called `finalizeBlockState` is initialized during `FinalizeBlock`. This state is updated each time a transaction is processed via `FinalizeBlock`, and committed to the [main state](#main-state) when the block is [committed](#commit), after which it is set to `nil`.
+
+```go reference
+https://github.com/cosmos/cosmos-sdk/blob/v0.53.0/baseapp/baseapp.go#L772-L807
+```
+
+Transaction execution within `FinalizeBlock` performs the **exact same steps as `CheckTx`**, with two differences:
+
+1. The `AnteHandler` does **not** check that the transaction's `gas-prices` is sufficient. That is because the `min-gas-prices` value `gas-prices` is checked against is local to the node, and therefore what is enough for one full-node might not be for another. This means that the proposer can potentially include transactions for free, although they are not incentivized to do so, as they earn a bonus on the total fee of the block they propose.
+2. For each `sdk.Msg` in the transaction, route to the appropriate module's Protobuf [`Msg` service](../../build/building-modules/03-msg-services.md). Additional _stateful_ checks are performed, and the branched multistore held in `finalizeBlockState`'s `context` is updated by the module's `keeper`. If the `Msg` service returns successfully, the branched multistore held in `context` is written to the `finalizeBlockState` `CacheMultiStore`.
+
+During the message execution outlined in (2), each read/write to the store increases the value of `GasConsumed`. You can find the default cost of each operation:
+
+```go reference
+https://github.com/cosmos/cosmos-sdk/blob/v0.53.0/store/types/gas.go#L230-L241
+```
+
+At any point, if `GasConsumed > GasWanted`, the function returns with `Code != 0` and the execution fails.
+
+Each transaction returns a response to the underlying consensus engine of type [`abci.ExecTxResult`](https://github.com/cometbft/cometbft/blob/v0.38.0-rc1/spec/abci/abci%2B%2B_methods.md#exectxresult). The response contains:
+
+* `Code (uint32)`: Response Code. `0` if successful.
+* `Data ([]byte)`: Result bytes, if any.
+* `Log (string):` The output of the application's logger. May be non-deterministic.
+* `Info (string):` Additional information. May be non-deterministic.
+* `GasWanted (int64)`: Amount of gas requested for transaction. It is provided by users when they generate the transaction.
+* `GasUsed (int64)`: Amount of gas consumed by transaction. During transaction execution, this value is computed by multiplying the standard cost of a transaction byte by the size of the raw transaction, and by adding gas each time a read/write to the store occurs.
+* `Events ([]cmn.KVPair)`: Key-Value tags for filtering and indexing transactions (eg. by account). See [`event`s](./08-events.md) for more.
+* `Codespace (string)`: Namespace for the Code.
+
+#### EndBlock
+
+`EndBlock` is run after transaction execution completes. It allows developers to have logic executed at the end of each block. In the Cosmos SDK, the bulk of the `EndBlock()` method is to run the application's `EndBlocker()`, which mainly runs the `EndBlocker()` method of each of the application's modules.
+
+```go reference
+https://github.com/cosmos/cosmos-sdk/blob/v0.53.0/baseapp/baseapp.go#L811-L833
+```
+
+### Commit
+
+The [`Commit` ABCI message](https://github.com/cometbft/cometbft/blob/v0.37.x/spec/abci/abci++_basic_concepts.md#method-overview) is sent from the underlying CometBFT engine after the full-node has received _precommits_ from 2/3+ of validators (weighted by voting power). On the `BaseApp` end, the `Commit(res abci.CommitResponse)` function is implemented to commit all the valid state transitions that occurred during `FinalizeBlock` and to reset state for the next block.
+
+To commit state-transitions, the `Commit` function calls the `Write()` function on `finalizeBlockState.ms`, where `finalizeBlockState.ms` is a branched multistore of the main store `app.cms`. Then, the `Commit` function sets `checkState` to the latest header (obtained from `finalizeBlockState.ctx.BlockHeader`) and `finalizeBlockState` to `nil`.
+
+Finally, `Commit` returns the hash of the commitment of `app.cms` back to the underlying consensus engine. This hash is used as a reference in the header of the next block.
+
+### Info
+
+The [`Info` ABCI message](https://github.com/cometbft/cometbft/blob/v0.37.x/spec/abci/abci++_basic_concepts.md#info-methods) is a simple query from the underlying consensus engine, notably used to sync the latter with the application during a handshake that happens on startup. When called, the `Info(res abci.InfoResponse)` function from `BaseApp` will return the application's name, version and the hash of the last commit of `app.cms`.
+
+### Query
+
+The [`Query` ABCI message](https://github.com/cometbft/cometbft/blob/v0.37.x/spec/abci/abci++_basic_concepts.md#info-methods) is used to serve queries received from the underlying consensus engine, including queries received via RPC like CometBFT RPC. It used to be the main entrypoint to build interfaces with the application, but with the introduction of [gRPC queries](../../build/building-modules/04-query-services.md) in Cosmos SDK v0.40, its usage is more limited. The application must respect a few rules when implementing the `Query` method, which are outlined [here](https://github.com/cometbft/cometbft/blob/v0.37.x/spec/abci/abci++_app_requirements.md#query).
+
+Each CometBFT `query` comes with a `path`, a `string` that denotes what to query. If the `path` matches a gRPC fully-qualified service method, then `BaseApp` defers the query to the `grpcQueryRouter` and lets it handle it, as explained [above](#grpc-query-router). Otherwise, the `path` represents a query that is not (yet) handled by the gRPC router. `BaseApp` splits the `path` string with the `/` delimiter. By convention, the first element of the split string (`split[0]`) contains the category of the `query` (`app`, `p2p`, `store` or `custom`). The `BaseApp` implementation of the `Query(req abci.QueryRequest)` method is a simple dispatcher serving these main categories of queries:
+
+* Application-related queries like querying the application's version, which are served via the `handleQueryApp` method.
+* Direct queries to the multistore, which are served by the `handleQueryStore` method. These direct queries are different from custom queries, which go through `app.queryRouter`, and are mainly used by third-party service providers like block explorers.
+* P2P queries, which are served via the `handleQueryP2P` method. These queries return either `app.addrPeerFilter` or `app.ipPeerFilter` that contain the list of peers filtered by address or IP respectively. These lists are first initialized via `options` in `BaseApp`'s [constructor](#constructor).
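+
+A minimal sketch of this dispatch logic (the `grpcRoutes` lookup stands in for the real `grpcQueryRouter`; apart from the three handler method names, the identifiers here are illustrative):
+
+```go
+package main
+
+import (
+	"fmt"
+	"strings"
+)
+
+// dispatchQuery mimics BaseApp's Query routing: gRPC-style paths go to
+// the gRPC query router, everything else is split on "/" and dispatched
+// by its first element.
+func dispatchQuery(path string, grpcRoutes map[string]bool) string {
+	if grpcRoutes[path] {
+		return "grpc"
+	}
+	split := strings.Split(path, "/")
+	switch split[0] {
+	case "app":
+		return "handleQueryApp"
+	case "store":
+		return "handleQueryStore"
+	case "p2p":
+		return "handleQueryP2P"
+	default:
+		return "unknown"
+	}
+}
+
+func main() {
+	routes := map[string]bool{"/cosmos.bank.v1beta1.Query/Balance": true}
+	fmt.Println(dispatchQuery("/cosmos.bank.v1beta1.Query/Balance", routes)) // grpc
+	fmt.Println(dispatchQuery("store/bank/key", routes))                     // handleQueryStore
+	fmt.Println(dispatchQuery("app/version", routes))                        // handleQueryApp
+}
+```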
+
+### ExtendVote
+
+`ExtendVote` allows an application to extend a pre-commit vote with arbitrary data. This process does NOT have to be deterministic and the data returned can be unique to the validator process.
+
+In the Cosmos SDK this is implemented as a no-op:
+
+```go reference
+https://github.com/cosmos/cosmos-sdk/blob/v0.53.0/baseapp/abci_utils.go#L444-L450
+```
+
+### VerifyVoteExtension
+
+`VerifyVoteExtension` allows an application to verify that the data returned by `ExtendVote` is valid. This process MUST be deterministic. Moreover, the value of `ResponseVerifyVoteExtension.status` MUST exclusively depend on the parameters passed in the call to `RequestVerifyVoteExtension` and on the last committed application state.
+
+In the Cosmos SDK this is implemented as a no-op:
+
+```go reference
+https://github.com/cosmos/cosmos-sdk/blob/v0.53.0/baseapp/abci_utils.go#L452-L458
+```
diff --git a/copy-of-sdk-docs/learn/advanced/01-transactions.md b/copy-of-sdk-docs/learn/advanced/01-transactions.md
new file mode 100644
index 00000000..72575563
--- /dev/null
+++ b/copy-of-sdk-docs/learn/advanced/01-transactions.md
@@ -0,0 +1,229 @@
+---
+sidebar_position: 1
+---
+
+# Transactions
+
+:::note Synopsis
+`Transactions` are objects created by end-users to trigger state changes in the application.
+:::
+
+:::note Pre-requisite Readings
+
+* [Anatomy of a Cosmos SDK Application](../beginner/00-app-anatomy.md)
+
+:::
+
+## Transactions
+
+Transactions are comprised of metadata held in [contexts](./02-context.md) and [`sdk.Msg`s](../../build/building-modules/02-messages-and-queries.md) that trigger state changes within a module through the module's Protobuf [`Msg` service](../../build/building-modules/03-msg-services.md).
+
+When users want to interact with an application and make state changes (e.g. sending coins), they create transactions. Each of a transaction's `sdk.Msg`s must be signed using the private key associated with the appropriate account(s), before the transaction is broadcasted to the network. A transaction must then be included in a block, validated, and approved by the network through the consensus process. To read more about the lifecycle of a transaction, click [here](../beginner/01-tx-lifecycle.md).
+
+## Type Definition
+
+Transaction objects are Cosmos SDK types that implement the `Tx` interface:
+
+```go reference
+https://github.com/cosmos/cosmos-sdk/blob/v0.53.0/types/tx_msg.go#L53-L58
+```
+
+It contains the following methods:
+
+* **GetMsgs:** unwraps the transaction and returns a list of contained `sdk.Msg`s - one transaction may have one or multiple messages, which are defined by module developers.
+
+As a developer, you should rarely manipulate `Tx` directly, as `Tx` is an intermediate type used for transaction generation. Instead, developers should prefer the `TxBuilder` interface, which you can learn more about [below](#transaction-generation).
+
+### Signing Transactions
+
+Every message in a transaction must be signed by the addresses specified by its `GetSigners`. The Cosmos SDK currently allows signing transactions in two different ways.
+
+#### `SIGN_MODE_DIRECT` (preferred)
+
+The most used implementation of the `Tx` interface is the Protobuf `Tx` message, which is used in `SIGN_MODE_DIRECT`:
+
+```protobuf reference
+https://github.com/cosmos/cosmos-sdk/blob/v0.53.0/proto/cosmos/tx/v1beta1/tx.proto#L15-L28
+```
+
+Because Protobuf serialization is not deterministic, the Cosmos SDK uses an additional `TxRaw` type to denote the pinned bytes over which a transaction is signed. Any user can generate a valid `body` and `auth_info` for a transaction, and serialize these two messages using Protobuf. `TxRaw` then pins the user's exact binary representation of `body` and `auth_info`, called respectively `body_bytes` and `auth_info_bytes`. The document that is signed by all signers of the transaction is `SignDoc` (deterministically serialized using [ADR-027](../../build/architecture/adr-027-deterministic-protobuf-serialization.md)):
+
+```protobuf reference
+https://github.com/cosmos/cosmos-sdk/blob/v0.53.0/proto/cosmos/tx/v1beta1/tx.proto#L50-L67
+```
+
+Once signed by all signers, the `body_bytes`, `auth_info_bytes` and `signatures` are gathered into `TxRaw`, whose serialized bytes are broadcasted over the network.
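+
+The byte-pinning idea can be illustrated with simplified stand-ins for `SignDoc` and `TxRaw`. The real messages are Protobuf-defined and the real signature is produced by the signer's key over the ADR-027 serialization; the `sign` function here is just a toy hash:
+
+```go
+package main
+
+import (
+	"bytes"
+	"crypto/sha256"
+	"fmt"
+)
+
+// Simplified stand-ins for the proto messages cosmos.tx.v1beta1.SignDoc
+// and TxRaw.
+type signDoc struct {
+	BodyBytes     []byte
+	AuthInfoBytes []byte
+	ChainID       string
+	AccountNumber uint64
+}
+
+type txRaw struct {
+	BodyBytes     []byte
+	AuthInfoBytes []byte
+	Signatures    [][]byte
+}
+
+// sign is a toy signature: a hash over the SignDoc fields, standing in
+// for a real signature over the deterministic serialization.
+func sign(doc signDoc) []byte {
+	h := sha256.New()
+	h.Write(doc.BodyBytes)
+	h.Write(doc.AuthInfoBytes)
+	h.Write([]byte(doc.ChainID))
+	h.Write([]byte{byte(doc.AccountNumber)})
+	return h.Sum(nil)
+}
+
+func main() {
+	bodyBytes := []byte("serialized TxBody")
+	authInfoBytes := []byte("serialized AuthInfo")
+
+	sig := sign(signDoc{bodyBytes, authInfoBytes, "my-chain", 7})
+
+	// TxRaw pins the exact bytes that were signed: verifying against
+	// the same pinned bytes reproduces the signed digest.
+	raw := txRaw{bodyBytes, authInfoBytes, [][]byte{sig}}
+	check := sign(signDoc{raw.BodyBytes, raw.AuthInfoBytes, "my-chain", 7})
+	fmt.Println(bytes.Equal(raw.Signatures[0], check)) // true
+}
+```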
+
+#### `SIGN_MODE_LEGACY_AMINO_JSON`
+
+The legacy implementation of the `Tx` interface is the `StdTx` struct from `x/auth`:
+
+```go reference
+https://github.com/cosmos/cosmos-sdk/blob/v0.53.0/x/auth/migrations/legacytx/stdtx.go#L82-L89
+```
+
+The document signed by all signers is `StdSignDoc`:
+
+```go reference
+https://github.com/cosmos/cosmos-sdk/blob/v0.53.0/x/auth/migrations/legacytx/stdsign.go#L30-L43
+```
+
+which is encoded into bytes using Amino JSON. Once all signatures are gathered into `StdTx`, `StdTx` is serialized using Amino JSON, and these bytes are broadcasted over the network.
+
+#### Other Sign Modes
+
+The Cosmos SDK also provides a couple of other sign modes for particular use cases.
+
+#### `SIGN_MODE_DIRECT_AUX`
+
+`SIGN_MODE_DIRECT_AUX` is a sign mode released in Cosmos SDK v0.46 that targets transactions with multiple signers. Whereas `SIGN_MODE_DIRECT` expects each signer to sign over both `TxBody` and `AuthInfo` (which includes all other signers' signer infos, i.e. their account sequence, public key and mode info), `SIGN_MODE_DIRECT_AUX` allows N-1 signers to sign over only `TxBody` and _their own_ signer info. Moreover, each auxiliary signer (i.e. a signer using `SIGN_MODE_DIRECT_AUX`) doesn't need to sign over the fees:
+
+```protobuf reference
+https://github.com/cosmos/cosmos-sdk/blob/v0.53.0/proto/cosmos/tx/v1beta1/tx.proto#L68-L93
+```
+
+The use case is a multi-signer transaction, where one of the signers is appointed to gather all signatures, broadcast the transaction and pay the fees, while the others only care about the transaction body. This generally allows for a better multi-signing UX. If Alice, Bob and Charlie are part of a 3-signer transaction, then Alice and Bob can both use `SIGN_MODE_DIRECT_AUX` to sign over the `TxBody` and their own signer info (with no additional step to gather the other signers' infos, as `SIGN_MODE_DIRECT` requires), and without specifying a fee in their SignDoc. Charlie can then gather both signatures from Alice and Bob, and create the final transaction by appending a fee. Note that the fee payer of the transaction (in our case Charlie) must sign over the fees, and so must use `SIGN_MODE_DIRECT` or `SIGN_MODE_LEGACY_AMINO_JSON`.
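+
+A simplified sketch of the two sign documents side by side (the structs loosely mirror `SignDoc` and `SignDocDirectAux` from `tx.proto`; field types are simplified and the flow below is illustrative):
+
+```go
+package main
+
+import "fmt"
+
+// SIGN_MODE_DIRECT: the signer covers AuthInfo, which includes the fee
+// and every signer's info.
+type signDoc struct {
+	BodyBytes     []byte
+	AuthInfoBytes []byte // contains the fee + all signer infos
+	ChainID       string
+	AccountNumber uint64
+}
+
+// SIGN_MODE_DIRECT_AUX: the auxiliary signer covers only the body and
+// its own signer info - note the absence of any fee/AuthInfo field.
+type signDocDirectAux struct {
+	BodyBytes     []byte
+	PublicKey     []byte // this signer's key only
+	ChainID       string
+	AccountNumber uint64
+	Sequence      uint64
+}
+
+func main() {
+	body := []byte("serialized TxBody")
+
+	// Alice and Bob (auxiliary signers) can sign immediately: their
+	// docs depend neither on the fee nor on each other's signer info.
+	alice := signDocDirectAux{body, []byte("alice-pk"), "my-chain", 1, 4}
+	bob := signDocDirectAux{body, []byte("bob-pk"), "my-chain", 2, 9}
+
+	// Charlie, the fee payer, signs last with SIGN_MODE_DIRECT over an
+	// AuthInfo that includes the fee and all three signer infos.
+	charlie := signDoc{body, []byte("authinfo with fee + signer infos"), "my-chain", 3}
+
+	fmt.Println(string(alice.PublicKey), string(bob.PublicKey), charlie.AccountNumber)
+}
+```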
+
+
+#### `SIGN_MODE_TEXTUAL`
+
+`SIGN_MODE_TEXTUAL` is a new sign mode for delivering a better signing experience on hardware wallets, included in the v0.50 release. In this mode, the signer signs over a human-readable string representation of the transaction (encoded in CBOR), making all displayed data easier to read. The data is formatted as screens, and each screen is meant to be displayed in its entirety even on small devices like the Ledger Nano.
+
+There are also _expert_ screens, which are only displayed if the user has chosen that option on their hardware device. These screens contain things like the account number, account sequence and the sign data hash.
+
+Data is formatted using a set of `ValueRenderer`s; the SDK provides defaults for all known messages and value types. Chain developers can also implement their own `ValueRenderer` for a type or message if they'd like to display the information differently.
+
+If you wish to learn more, please refer to [ADR-050](../../build/architecture/adr-050-sign-mode-textual.md).
+
+#### Custom Sign modes
+
+You can add your own custom sign mode to the Cosmos SDK. While we cannot accept the implementation of the sign mode into the repository, we can accept a pull request adding the custom sign mode to the `SignMode` enum located [here](https://github.com/cosmos/cosmos-sdk/blob/v0.53.0/proto/cosmos/tx/signing/v1beta1/signing.proto#L17).
+
+## Transaction Process
+
+The process of an end-user sending a transaction is:
+
+* decide on the messages to put into the transaction,
+* generate the transaction using the Cosmos SDK's `TxBuilder`,
+* broadcast the transaction using one of the available interfaces.
+
+The next paragraphs will describe each of these components, in this order.
+
+### Messages
+
+:::tip
+Module `sdk.Msg`s are not to be confused with [ABCI Messages](https://docs.cometbft.com/v0.37/spec/abci/) which define interactions between the CometBFT and application layers.
+:::
+
+**Messages** (or `sdk.Msg`s) are module-specific objects that trigger state transitions within the scope of the module they belong to. Module developers define the messages for their module by adding methods to the Protobuf [`Msg` service](../../build/building-modules/03-msg-services.md), and also implement the corresponding `MsgServer`.
+
+Each `sdk.Msg` is related to exactly one Protobuf [`Msg` service](../../build/building-modules/03-msg-services.md) RPC, defined inside each module's `tx.proto` file. An SDK app router automatically maps every `sdk.Msg` to a corresponding RPC. Protobuf generates a `MsgServer` interface for each module's `Msg` service, and the module developer needs to implement this interface.
+This design puts more responsibility on module developers, allowing application developers to reuse common functionalities without having to implement state transition logic repetitively.
+
+To learn more about Protobuf `Msg` services and how to implement `MsgServer`, click [here](../../build/building-modules/03-msg-services.md).
+
+While messages contain the information for state transition logic, a transaction's other metadata and relevant information are stored in the `TxBuilder` and `Context`.
+
+### Transaction Generation
+
+The `TxBuilder` interface contains data closely related with the generation of transactions, which an end-user can set to generate the desired transaction:
+
+```go reference
+https://github.com/cosmos/cosmos-sdk/blob/v0.53.0/client/tx_config.go#L39-L57
+```
+
+* `Msg`s, the array of [messages](#messages) included in the transaction.
+* `GasLimit`, the maximum amount of gas the user allows the transaction to consume.
+* `Memo`, a note or comment to send with the transaction.
+* `FeeAmount`, the maximum amount the user is willing to pay in fees.
+* `TimeoutHeight`, block height until which the transaction is valid.
+* `Unordered`, an option indicating that this transaction may be executed in any order (requires `Sequence` to be unset).
+* `TimeoutTimestamp`, the timeout timestamp (unordered nonce) of the transaction (must be set when `Unordered` is used).
+* `Signatures`, the array of signatures from all signers of the transaction.
+
+As there are currently two sign modes for signing transactions, there are also two implementations of `TxBuilder`:
+
+* [wrapper](https://github.com/cosmos/cosmos-sdk/blob/v0.53.0/x/auth/tx/builder.go#L27-L44) for creating transactions for `SIGN_MODE_DIRECT`,
+* [StdTxBuilder](https://github.com/cosmos/cosmos-sdk/blob/v0.53.0/x/auth/migrations/legacytx/stdtx_builder.go#L14-L17) for `SIGN_MODE_LEGACY_AMINO_JSON`.
+
+However, the two implementations of `TxBuilder` should be hidden away from end-users, as they should prefer using the overarching `TxConfig` interface:
+
+```go reference
+https://github.com/cosmos/cosmos-sdk/blob/v0.53.0/client/tx_config.go#L27-L37
+```
+
+`TxConfig` is an app-wide configuration for managing transactions. Most importantly, it holds the information about whether to sign each transaction with `SIGN_MODE_DIRECT` or `SIGN_MODE_LEGACY_AMINO_JSON`. By calling `txBuilder := txConfig.NewTxBuilder()`, a new `TxBuilder` will be created with the appropriate sign mode.
+
+Once `TxBuilder` is correctly populated with the setters exposed above, `TxConfig` will also take care of correctly encoding the bytes (again, either using `SIGN_MODE_DIRECT` or `SIGN_MODE_LEGACY_AMINO_JSON`). Here's a pseudo-code snippet of how to generate and encode a transaction, using the `TxEncoder()` method:
+
+```go
+txBuilder := txConfig.NewTxBuilder()
+txBuilder.SetMsgs(...) // and other setters on txBuilder
+
+bz, err := txConfig.TxEncoder()(txBuilder.GetTx())
+// bz are bytes to be broadcasted over the network
+```
+
+### Broadcasting the Transaction
+
+Once the transaction bytes are generated, there are currently three ways of broadcasting it.
+
+#### CLI
+
+Application developers create entry points to the application by creating a [command-line interface](./07-cli.md) and a [gRPC and/or REST interface](./06-grpc_rest.md), typically found in the application's `./cmd` folder. These interfaces allow users to interact with the application through the command line.
+
+For the [command-line interface](../../build/building-modules/09-module-interfaces.md#cli), module developers create subcommands to add as children to the application top-level transaction command `TxCmd`. CLI commands actually bundle all the steps of transaction processing into one simple command: creating messages, generating transactions and broadcasting. For concrete examples, see the [Interacting with a Node](../../user/run-node/02-interact-node.md) section. An example transaction made using CLI looks like:
+
+```bash
+simd tx send $MY_VALIDATOR_ADDRESS $RECIPIENT 1000stake
+```
+
+#### gRPC
+
+[gRPC](https://grpc.io) is the main component for the Cosmos SDK's RPC layer. Its principal usage is in the context of modules' [`Query` services](../../build/building-modules/04-query-services.md). However, the Cosmos SDK also exposes a few other module-agnostic gRPC services, one of them being the `Tx` service:
+
+```go reference
+https://github.com/cosmos/cosmos-sdk/blob/v0.53.0/proto/cosmos/tx/v1beta1/service.proto
+```
+
+The `Tx` service exposes a handful of utility functions, such as simulating a transaction or querying a transaction, and also one method to broadcast transactions.
+
+Examples of broadcasting and simulating a transaction are shown [here](../../user/run-node/03-txs.md#programmatically-with-go).
+
+#### REST
+
+Each gRPC method has its corresponding REST endpoint, generated using [gRPC-gateway](https://github.com/grpc-ecosystem/grpc-gateway). Therefore, instead of using gRPC, you can also use HTTP to broadcast the same transaction, on the `POST /cosmos/tx/v1beta1/txs` endpoint.
+
+An example can be seen [here](../../user/run-node/03-txs.md#using-rest).
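+
+A minimal sketch of building that request body in Go, assuming a node serving REST on the default `localhost:1317` (the JSON field names mirror the `Tx` service's `BroadcastTxRequest`):
+
+```go
+package main
+
+import (
+	"encoding/base64"
+	"encoding/json"
+	"fmt"
+)
+
+// broadcastTxRequest mirrors the JSON body accepted by the
+// POST /cosmos/tx/v1beta1/txs endpoint: raw transaction bytes are
+// base64-encoded, and mode selects the broadcast mode (e.g. sync/async).
+type broadcastTxRequest struct {
+	TxBytes string `json:"tx_bytes"`
+	Mode    string `json:"mode"`
+}
+
+func buildBroadcastBody(txBytes []byte) ([]byte, error) {
+	return json.Marshal(broadcastTxRequest{
+		TxBytes: base64.StdEncoding.EncodeToString(txBytes),
+		Mode:    "BROADCAST_MODE_SYNC",
+	})
+}
+
+func main() {
+	body, err := buildBroadcastBody([]byte("signed TxRaw bytes"))
+	if err != nil {
+		panic(err)
+	}
+	fmt.Println(string(body))
+	// The body would then be POSTed to the node, e.g.:
+	// http.Post("http://localhost:1317/cosmos/tx/v1beta1/txs",
+	//     "application/json", bytes.NewReader(body))
+}
+```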
+
+#### CometBFT RPC
+
+The three methods presented above are actually higher abstractions over the CometBFT RPC `/broadcast_tx_{async,sync,commit}` endpoints, documented [here](https://docs.cometbft.com/v0.37/core/rpc). This means that you can use the CometBFT RPC endpoints directly to broadcast the transaction, if you wish.
+
+### Unordered Transactions
+
+:::tip
+
+Looking to enable unordered transactions on your chain?
+Check out the [v0.53.0 Upgrade Guide](https://docs.cosmos.network/v0.53/build/migrations/upgrade-guide#enable-unordered-transactions-optional)
+
+:::
+
+:::warning
+
+Unordered transactions MUST leave sequence values unset. When a transaction is both unordered and contains a non-zero sequence value,
+the transaction will be rejected. Services that operate on prior assumptions about transaction sequence values should be updated to handle unordered transactions.
+Services should be aware that when the transaction is unordered, the transaction sequence will always be zero.
+
+:::
+
+Beginning with Cosmos SDK v0.53.0, chains may enable unordered transaction support.
+Unordered transactions work by using a timestamp as the transaction's nonce value. The sequence value must NOT be set in the signature(s) of the transaction.
+The timestamp must be greater than the current block time and not exceed the chain's configured max unordered timeout timestamp duration.
+Senders must use a unique timestamp for each distinct transaction; the difference may be as small as a nanosecond.
+
+These unique timestamps serve as a one-shot nonce, and their lifespan in state is short-lived.
+Upon transaction inclusion, an entry consisting of the timeout timestamp and account address is recorded to state.
+Once the block time passes the timeout timestamp, the entry is removed. This ensures that unordered nonces do not indefinitely fill up the chain's storage.
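+
+A toy model of this nonce bookkeeping (illustrative only - the real implementation lives in the SDK's unordered-transaction ante handling and state management):
+
+```go
+package main
+
+import (
+	"errors"
+	"fmt"
+	"time"
+)
+
+type nonceKey struct {
+	addr    string
+	timeout time.Time
+}
+
+// unorderedNonces models the state described above: each accepted
+// unordered transaction records (timeout timestamp, address), and
+// entries are pruned once block time passes the timeout.
+type unorderedNonces struct {
+	seen       map[nonceKey]struct{}
+	maxTimeout time.Duration // chain-configured max duration
+}
+
+func (u *unorderedNonces) add(addr string, timeout, blockTime time.Time) error {
+	if !timeout.After(blockTime) {
+		return errors.New("timeout must be in the future")
+	}
+	if timeout.Sub(blockTime) > u.maxTimeout {
+		return errors.New("timeout exceeds max unordered duration")
+	}
+	k := nonceKey{addr, timeout}
+	if _, ok := u.seen[k]; ok {
+		return errors.New("duplicate unordered nonce")
+	}
+	u.seen[k] = struct{}{}
+	return nil
+}
+
+// prune removes expired entries, keeping state short-lived.
+func (u *unorderedNonces) prune(blockTime time.Time) {
+	for k := range u.seen {
+		if !blockTime.Before(k.timeout) {
+			delete(u.seen, k)
+		}
+	}
+}
+
+func main() {
+	now := time.Unix(1000, 0)
+	u := &unorderedNonces{seen: map[nonceKey]struct{}{}, maxTimeout: 10 * time.Minute}
+
+	t := now.Add(time.Minute)
+	fmt.Println(u.add("alice", t, now))                      // accepted
+	fmt.Println(u.add("alice", t, now))                      // rejected: duplicate
+	fmt.Println(u.add("alice", t.Add(time.Nanosecond), now)) // accepted: 1ns apart is enough
+
+	u.prune(now.Add(2 * time.Minute)) // block time past the timeouts
+	fmt.Println(len(u.seen))          // 0
+}
+```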
diff --git a/copy-of-sdk-docs/learn/advanced/02-context.md b/copy-of-sdk-docs/learn/advanced/02-context.md
new file mode 100644
index 00000000..578bb1f1
--- /dev/null
+++ b/copy-of-sdk-docs/learn/advanced/02-context.md
@@ -0,0 +1,103 @@
+---
+sidebar_position: 1
+---
+
+# Context
+
+:::note Synopsis
+The `context` is a data structure intended to be passed from function to function that carries information about the current state of the application. It provides access to a branched storage (a safe branch of the entire state) as well as useful objects and information like `gasMeter`, `block height`, `consensus parameters` and more.
+:::
+
+:::note Pre-requisite Readings
+
+* [Anatomy of a Cosmos SDK Application](../beginner/00-app-anatomy.md)
+* [Lifecycle of a Transaction](../beginner/01-tx-lifecycle.md)
+
+:::
+
+## Context Definition
+
+The Cosmos SDK `Context` is a custom data structure that contains Go's stdlib [`context`](https://pkg.go.dev/context) as its base, and has many additional types within its definition that are specific to the Cosmos SDK. The `Context` is integral to transaction processing in that it allows modules to easily access their respective [store](./04-store.md#base-layer-kvstores) in the [`multistore`](./04-store.md#multistore) and retrieve transactional context such as the block header and gas meter.
+
+```go reference
+https://github.com/cosmos/cosmos-sdk/blob/v0.53.0-rc.2/types/context.go#L40-L67
+```
+
+* **Base Context:** The base type is a Go [Context](https://pkg.go.dev/context), which is explained further in the [Go Context Package](#go-context-package) section below.
+* **Multistore:** Every application's `BaseApp` contains a [`CommitMultiStore`](./04-store.md#multistore) which is provided when a `Context` is created. Calling the `KVStore()` and `TransientStore()` methods allows modules to fetch their respective [`KVStore`](./04-store.md#base-layer-kvstores) using their unique `StoreKey`.
+* **Header:** The [header](https://docs.cometbft.com/v0.37/spec/core/data_structures#header) is a Blockchain type. It carries important information about the state of the blockchain, such as block height and proposer of the current block.
+* **Header Hash:** The current block header hash, obtained during `abci.FinalizeBlock`.
+* **Chain ID:** The unique identification number of the blockchain a block pertains to.
+* **Transaction Bytes:** The `[]byte` representation of a transaction being processed using the context. Every transaction is processed by various parts of the Cosmos SDK and consensus engine (e.g. CometBFT) throughout its [lifecycle](../beginner/01-tx-lifecycle.md), some of which do not have any understanding of transaction types. Thus, transactions are marshaled into the generic `[]byte` type using some kind of [encoding format](./05-encoding.md) such as [Amino](./05-encoding.md).
+* **Logger:** A `logger` from the CometBFT libraries. Learn more about logs [here](https://docs.cometbft.com/v0.37/core/configuration). Modules use it to create their own module-specific loggers.
+* **VoteInfo:** A list of the ABCI type [`VoteInfo`](https://docs.cometbft.com/main/spec/abci/abci++_methods.html#voteinfo), which includes the name of a validator and a boolean indicating whether they have signed the block.
+* **Gas Meters:** Specifically, a [`gasMeter`](../beginner/04-gas-fees.md#main-gas-meter) for the transaction currently being processed using the context and a [`blockGasMeter`](../beginner/04-gas-fees.md#block-gas-meter) for the entire block it belongs to. Users specify how much in fees they wish to pay for the execution of their transaction; these gas meters keep track of how much [gas](../beginner/04-gas-fees.md) has been used in the transaction or block so far. If the gas meter runs out, execution halts.
+* **CheckTx Mode:** A boolean value indicating whether a transaction should be processed in `CheckTx` or `DeliverTx` mode.
+* **Min Gas Price:** The minimum [gas](../beginner/04-gas-fees.md) price a node is willing to take in order to include a transaction in its block. This price is a local value configured by each node individually, and should therefore **not be used in any functions used in sequences leading to state-transitions**.
+* **Consensus Params:** The ABCI type [Consensus Parameters](https://docs.cometbft.com/v0.37/spec/abci/abci++_app_requirements#consensus-parameters), which specify certain limits for the blockchain, such as maximum gas for a block.
+* **Event Manager:** The event manager allows any caller with access to a `Context` to emit [`Events`](./08-events.md). Modules may define module-specific
+  `Events` by defining various `Types` and `Attributes` or use the common definitions found in `types/`. Clients can subscribe or query for these `Events`. These `Events` are collected throughout `FinalizeBlock` and are returned to CometBFT for indexing.
+* **Priority:** The transaction priority, only relevant in `CheckTx`.
+* **KV `GasConfig`:** Enables applications to set a custom `GasConfig` for the `KVStore`.
+* **Transient KV `GasConfig`:** Enables applications to set a custom `GasConfig` for the transient `KVStore`.
+* **StreamingManager:** The streamingManager field provides access to the streaming manager, which allows modules to subscribe to state changes emitted by the blockchain. The streaming manager is used by the state listening API, which is described in [ADR 038](https://docs.cosmos.network/main/architecture/adr-038-state-listening).
+* **CometInfo:** A lightweight field that contains information about the current block, such as the block height, time, and hash. This information can be used for validating evidence, providing historical data, and enhancing the user experience. For further details see [here](https://github.com/cosmos/cosmos-sdk/blob/main/core/comet/service.go#L14).
+* **HeaderInfo:** The `headerInfo` field contains information about the current block header, such as the chain ID, gas limit, and timestamp. For further details see [here](https://github.com/cosmos/cosmos-sdk/blob/main/core/header/service.go#L14).
+
+## Go Context Package
+
+A basic `Context` is defined in the [Golang Context Package](https://pkg.go.dev/context). A `Context`
+is an immutable data structure that carries request-scoped data across APIs and processes. Contexts
+are also designed to enable concurrency and to be used in goroutines.
+
+Contexts are intended to be **immutable**; they should never be edited. Instead, the convention is
+to create a child context from its parent using a `With` function. For example:
+
+```go
+childCtx = parentCtx.WithBlockHeader(header)
+```
+
+The [Golang Context Package](https://pkg.go.dev/context) documentation instructs developers to
+explicitly pass a context `ctx` as the first argument of a process.
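+
+A minimal example of this convention using only the standard library (the value key and `getBalance` function are purely illustrative):
+
+```go
+package main
+
+import (
+	"context"
+	"fmt"
+)
+
+type contextKey string
+
+// getBalance follows the convention: ctx is the first parameter, and
+// request-scoped values travel with it rather than through globals.
+func getBalance(ctx context.Context, addr string) string {
+	if h, ok := ctx.Value(contextKey("height")).(int64); ok {
+		return fmt.Sprintf("balance of %s at height %d", addr, h)
+	}
+	return "balance of " + addr
+}
+
+func main() {
+	parent := context.Background()
+	// Child contexts are derived from the parent, never mutated in place.
+	child := context.WithValue(parent, contextKey("height"), int64(42))
+	fmt.Println(getBalance(child, "alice")) // balance of alice at height 42
+}
+```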
+
+## Store branching
+
+The `Context` contains a `MultiStore`, which allows for branching and caching functionality using `CacheMultiStore`
+(queries in `CacheMultiStore` are cached to avoid future round trips).
+Each `KVStore` is branched in a safe and isolated ephemeral storage. Processes are free to write changes to
+the `CacheMultiStore`. If a state-transition sequence is performed without issue, the store branch can
+be committed to the underlying store at the end of the sequence, or the changes can be disregarded if
+something goes wrong. The pattern of usage for a Context is as follows:
+
+1. A process receives a Context `ctx` from its parent process, which provides information needed to
+ perform the process.
+2. The `ctx.ms` is a **branched store**, i.e. a branch of the [multistore](./04-store.md#multistore) is made so that the process can make changes to the state as it executes, without changing the original `ctx.ms`. This is useful to protect the underlying multistore in case the changes need to be reverted at some point in the execution.
+3. The process may read and write from `ctx` as it is executing. It may call a subprocess and pass
+ `ctx` to it as needed.
+4. When a subprocess returns, the caller checks whether it succeeded or failed. On failure, nothing
+   needs to be done - the branched `ctx` is simply discarded. On success, the changes made to
+   the `CacheMultiStore` can be committed to the original `ctx.ms` via `Write()`.
+
+For example, here is a snippet from the [`runTx`](./00-baseapp.md#runtx-antehandler-runmsgs-posthandler) function in [`baseapp`](./00-baseapp.md):
+
+```go
+runMsgCtx, msCache := app.cacheTxContext(ctx, txBytes)
+result = app.runMsgs(runMsgCtx, msgs, mode)
+result.GasWanted = gasWanted
+if mode != runTxModeDeliver {
+ return result
+}
+if result.IsOK() {
+ msCache.Write()
+}
+```
+
+Here is the process:
+
+1. Prior to calling `runMsgs` on the message(s) in the transaction, it uses `app.cacheTxContext()`
+ to branch and cache the context and multistore.
+2. `runMsgCtx` - the context with branched store, is used in `runMsgs` to return a result.
+3. If the process is running in [`checkTxMode`](./00-baseapp.md#checktx), there is no need to write the
+ changes - the result is returned immediately.
+4. If the process is running in [`deliverTxMode`](./00-baseapp.md#delivertx) and the result indicates
+ a successful run over all the messages, the branched multistore is written back to the original.
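+
+The pattern can be sketched with a toy cache-wrapped store (illustrative stand-ins for `CacheMultiStore` and `runMsgs`, not the SDK's types):
+
+```go
+package main
+
+import "fmt"
+
+// kvStore is a toy stand-in for a module KVStore.
+type kvStore map[string]string
+
+// cacheKV branches a parent store: reads fall through, writes are
+// buffered until Write() is called, mirroring CacheMultiStore.
+type cacheKV struct {
+	parent kvStore
+	cache  kvStore
+}
+
+func branch(parent kvStore) *cacheKV {
+	return &cacheKV{parent: parent, cache: kvStore{}}
+}
+
+func (c *cacheKV) Get(k string) string {
+	if v, ok := c.cache[k]; ok {
+		return v
+	}
+	return c.parent[k]
+}
+
+func (c *cacheKV) Set(k, v string) { c.cache[k] = v }
+
+// Write flushes the buffered changes back to the parent store.
+func (c *cacheKV) Write() {
+	for k, v := range c.cache {
+		c.parent[k] = v
+	}
+}
+
+// runMsgs stands in for message execution against the branched store.
+func runMsgs(s *cacheKV, ok bool) bool {
+	s.Set("balance/alice", "90")
+	return ok
+}
+
+func main() {
+	store := kvStore{"balance/alice": "100"}
+
+	// Failure: the branch is simply discarded, the parent is untouched.
+	msCache := branch(store)
+	if runMsgs(msCache, false) {
+		msCache.Write()
+	}
+	fmt.Println(store["balance/alice"]) // 100
+
+	// Success: the branch is written back to the parent.
+	msCache = branch(store)
+	if runMsgs(msCache, true) {
+		msCache.Write()
+	}
+	fmt.Println(store["balance/alice"]) // 90
+}
+```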
diff --git a/copy-of-sdk-docs/learn/advanced/03-node.md b/copy-of-sdk-docs/learn/advanced/03-node.md
new file mode 100644
index 00000000..375dedb0
--- /dev/null
+++ b/copy-of-sdk-docs/learn/advanced/03-node.md
@@ -0,0 +1,96 @@
+---
+sidebar_position: 1
+---
+
+# Node Client (Daemon)
+
+:::note Synopsis
+The main endpoint of a Cosmos SDK application is the daemon client, otherwise known as the full-node client. The full-node runs the state-machine, starting from a genesis file. It connects to peers running the same client in order to receive and relay transactions, block proposals and signatures. The full-node is constituted of the application, defined with the Cosmos SDK, and of a consensus engine connected to the application via the ABCI.
+:::
+
+:::note Pre-requisite Readings
+
+* [Anatomy of an SDK application](../beginner/00-app-anatomy.md)
+
+:::
+
+## `main` function
+
+The full-node client of any Cosmos SDK application is built by running a `main` function. The client is generally named by appending the `d` suffix to the application name (e.g. `appd` for an application named `app`), and the `main` function is defined in a `./appd/cmd/main.go` file. Running this function creates an executable `appd` that comes with a set of commands. For an app named `app`, the main command is [`appd start`](#start-command), which starts the full-node.
+
+In general, developers will implement the `main.go` function with the following structure:
+
+* First, an [`encodingCodec`](./05-encoding.md) is instantiated for the application.
+* Then, the `config` is retrieved and config parameters are set. This mainly involves setting the Bech32 prefixes for [addresses](../beginner/03-accounts.md#addresses).
+
+```go reference
+https://github.com/cosmos/cosmos-sdk/blob/v0.53.0/types/config.go#L14-L29
+```
+
+* Using [cobra](https://github.com/spf13/cobra), the root command of the full-node client is created. After that, all the custom commands of the application are added using the `AddCommand()` method of `rootCmd`.
+* Add default server commands to `rootCmd` using the `server.AddCommands()` method. These commands are separated from the ones added above since they are standard and defined at Cosmos SDK level. They should be shared by all Cosmos SDK-based applications. They include the most important command: the [`start` command](#start-command).
+* Prepare and execute the `executor`.
+
+```go reference
+https://github.com/cometbft/cometbft/blob/v0.37.0/libs/cli/setup.go#L74-L78
+```
+
+See an example of `main` function from the `simapp` application, the Cosmos SDK's application for demo purposes:
+
+```go reference
+https://github.com/cosmos/cosmos-sdk/blob/v0.53.0/simapp/simd/main.go
+```
+
+## `start` command
+
+The `start` command is defined in the `/server` folder of the Cosmos SDK. It is added to the root command of the full-node client in the [`main` function](#main-function) and called by the end-user to start their node:
+
+```bash
+# For an example app named "app", the following command starts the full-node.
+appd start
+
+# Using the Cosmos SDK's own simapp, the following commands start the simapp node.
+simd start
+```
+
+As a reminder, the full-node is composed of three conceptual layers: the networking layer, the consensus layer and the application layer. The first two are generally bundled together in an entity called the consensus engine (CometBFT by default), while the third is the state-machine defined with the help of the Cosmos SDK. Currently, the Cosmos SDK uses CometBFT as the default consensus engine, meaning the start command is implemented to boot up a CometBFT node.
+
+The flow of the `start` command is pretty straightforward. First, it retrieves the `config` from the `context` in order to open the `db` (a [`leveldb`](https://github.com/syndtr/goleveldb) instance by default). This `db` contains the latest known state of the application (empty if the application is started for the first time).
+
+With the `db`, the `start` command creates a new instance of the application using an `appCreator` function:
+
+```go reference
+https://github.com/cosmos/cosmos-sdk/blob/v0.53.0/server/start.go#L1007
+```
+
+Note that an `appCreator` is a function that fulfills the `AppCreator` signature:
+
+```go reference
+https://github.com/cosmos/cosmos-sdk/blob/v0.53.0/server/types/app.go#L69
+```
+
+In practice, the [constructor of the application](../beginner/00-app-anatomy.md#constructor-function) is passed as the `appCreator`.
+
+```go reference
+https://github.com/cosmos/cosmos-sdk/blob/v0.53.0/simapp/simd/cmd/root_v2.go#L294-L308
+```
+
+Then, the instance of `app` is used to instantiate a new CometBFT node:
+
+```go reference
+https://github.com/cosmos/cosmos-sdk/blob/v0.53.0/server/start.go#L361-L400
+```
+
+The CometBFT node can be created with `app` because the latter satisfies the [`abci.Application` interface](https://github.com/cometbft/cometbft/blob/v0.37.0/abci/types/application.go#L9-L35) (given that `app` extends [`baseapp`](./00-baseapp.md)). As part of the `node.New` method, CometBFT makes sure that the height of the application (i.e. number of blocks since genesis) is equal to the height of the CometBFT node. The difference between these two heights should always be negative or null. If it is strictly negative, `node.New` will replay blocks until the height of the application reaches the height of the CometBFT node. Finally, if the height of the application is `0`, the CometBFT node will call [`InitChain`](./00-baseapp.md#initchain) on the application to initialize the state from the genesis file.
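+
+A simplified sketch of this height-reconciliation logic (the real logic lives in CometBFT's handshake; `syncHeights` and its return values are illustrative):
+
+```go
+package main
+
+import "fmt"
+
+// syncHeights sketches the handshake: if the application lags behind
+// the consensus node, blocks are replayed until the heights match; a
+// fresh application (height 0) is initialized via InitChain instead.
+func syncHeights(appHeight, nodeHeight int64, replay func(h int64)) string {
+	if appHeight == 0 {
+		return "InitChain"
+	}
+	for h := appHeight + 1; h <= nodeHeight; h++ {
+		replay(h)
+	}
+	return "in sync"
+}
+
+func main() {
+	fmt.Println(syncHeights(0, 10, func(h int64) {})) // InitChain
+
+	var replayed []int64
+	fmt.Println(syncHeights(7, 10, func(h int64) { replayed = append(replayed, h) })) // in sync
+	fmt.Println(replayed) // [8 9 10]
+}
+```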
+
+Once the CometBFT node is instantiated and in sync with the application, the node can be started:
+
+```go reference
+https://github.com/cosmos/cosmos-sdk/blob/v0.53.0/server/start.go#L373-L374
+```
+
+Upon starting, the node bootstraps its RPC and P2P servers and starts dialing peers. During the handshake with its peers, if the node realizes it is behind, it queries all the missing blocks sequentially in order to catch up. Then, it waits for new block proposals and block signatures from validators in order to make progress.
+
+## Other commands
+
+To discover how to concretely run a node and interact with it, please refer to our [Running a Node, API and CLI](../../user/run-node/01-run-node.md) guide.
diff --git a/copy-of-sdk-docs/learn/advanced/04-store.md b/copy-of-sdk-docs/learn/advanced/04-store.md
new file mode 100644
index 00000000..860bb3d0
--- /dev/null
+++ b/copy-of-sdk-docs/learn/advanced/04-store.md
@@ -0,0 +1,288 @@
+---
+sidebar_position: 1
+---
+
+# Store
+
+:::note Synopsis
+A store is a data structure that holds the state of the application.
+:::
+
+:::note Pre-requisite Readings
+
+* [Anatomy of a Cosmos SDK application](../beginner/00-app-anatomy.md)
+
+:::
+
+## Introduction to Cosmos SDK Stores
+
+The Cosmos SDK comes with a large set of stores to persist the state of applications. By default, the main store of Cosmos SDK applications is a `multistore`, i.e. a store of stores. Developers can add any number of key-value stores to the multistore, depending on their application needs. The multistore exists to support the modularity of the Cosmos SDK, as it lets each module declare and manage their own subset of the state. Key-value stores in the multistore can only be accessed with a specific capability `key`, which is typically held in the [`keeper`](../../build/building-modules/06-keeper.md) of the module that declared the store.
+
+```text
++-----------------------------------------------------+
+|                                                     |
+|  +--------------------------------------------+     |
+|  |                                            |     |
+|  |  KVStore 1 - Managed by keeper of Module 1 |     |
+|  |                                            |     |
+|  +--------------------------------------------+     |
+|                                                     |
+|  +--------------------------------------------+     |
+|  |                                            |     |
+|  |  KVStore 2 - Managed by keeper of Module 2 |     |
+|  |                                            |     |
+|  +--------------------------------------------+     |
+|                                                     |
+|  +--------------------------------------------+     |
+|  |                                            |     |
+|  |  KVStore 3 - Managed by keeper of Module 2 |     |
+|  |                                            |     |
+|  +--------------------------------------------+     |
+|                                                     |
+|  +--------------------------------------------+     |
+|  |                                            |     |
+|  |  KVStore 4 - Managed by keeper of Module 3 |     |
+|  |                                            |     |
+|  +--------------------------------------------+     |
+|                                                     |
+|  +--------------------------------------------+     |
+|  |                                            |     |
+|  |  KVStore 5 - Managed by keeper of Module 4 |     |
+|  |                                            |     |
+|  +--------------------------------------------+     |
+|                                                     |
+|                   Main Multistore                   |
+|                                                     |
++-----------------------------------------------------+
+
+                  Application's State
+```
+
+### Store Interface
+
+At its very core, a Cosmos SDK `store` is an object that holds a `CacheWrapper` and has a `GetStoreType()` method:
+
+```go reference
+https://github.com/cosmos/cosmos-sdk/blob/v0.53.0/store/types/store.go#L17-L20
+```
+
+`GetStoreType()` is a simple method that returns the type of the store, whereas a `CacheWrapper` is a simple interface that implements store read caching and write branching through its `Write` method:
+
+```go reference
+https://github.com/cosmos/cosmos-sdk/blob/v0.53.0/store/types/store.go#L285-L317
+```
+
+Branching and caching are used ubiquitously in the Cosmos SDK and are required to be implemented on every store type. A storage branch creates an isolated, ephemeral branch of a store that can be passed around and updated without affecting the main underlying store. This is used to trigger temporary state-transitions that may be reverted later should an error occur. Read more about it in [context](./02-context.md#Store-branching).
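To illustrate the idea, a branch can be modeled as a write buffer over a parent store. This is a toy sketch, not the SDK's actual implementation:

```go
package main

import "fmt"

// branchStore is a toy model of store branching: reads fall through to
// the parent, writes are buffered, and nothing touches the parent
// until Write() is called.
type branchStore struct {
	parent map[string]string
	cache  map[string]string
}

func branch(parent map[string]string) *branchStore {
	return &branchStore{parent: parent, cache: map[string]string{}}
}

func (s *branchStore) Get(k string) string {
	if v, ok := s.cache[k]; ok {
		return v
	}
	return s.parent[k]
}

func (s *branchStore) Set(k, v string) { s.cache[k] = v }

// Write flushes buffered writes to the parent. If it is never called
// (e.g. the state transition failed), the parent is left untouched.
func (s *branchStore) Write() {
	for k, v := range s.cache {
		s.parent[k] = v
	}
}

func main() {
	state := map[string]string{"balance": "10"}
	b := branch(state)
	b.Set("balance", "7")
	fmt.Println(state["balance"], b.Get("balance")) // 10 7
	b.Write()
	fmt.Println(state["balance"]) // 7
}
```

Discarding the branch without calling `Write()` is exactly how a failed state-transition is reverted.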
+
+### Commit Store
+
+A commit store is a store that has the ability to commit changes made to the underlying tree or db. The Cosmos SDK differentiates simple stores from commit stores by extending the basic store interfaces with a `Committer`:
+
+```go reference
+https://github.com/cosmos/cosmos-sdk/blob/v0.53.0/store/types/store.go#L34-L38
+```
+
+The `Committer` is an interface that defines methods to persist changes to disk:
+
+```go reference
+https://github.com/cosmos/cosmos-sdk/blob/v0.53.0/store/types/store.go#L22-L32
+```
+
+The `CommitID` is a deterministic commit of the state tree. Its hash is returned to the underlying consensus engine and stored in the block header. Note that commit store interfaces exist for various purposes, one of which is to make sure not every object can commit the store. As part of the [object-capabilities model](./10-ocap.md) of the Cosmos SDK, only `baseapp` should have the ability to commit stores. For example, this is the reason why the `ctx.KVStore()` method by which modules typically access stores returns a `KVStore` and not a `CommitKVStore`.
+
+The Cosmos SDK comes with many types of stores, the most used being [`CommitMultiStore`](#multistore), [`KVStore`](#kvstore) and [`GasKv` store](#gaskv-store). [Other types of stores](#other-stores) include `Transient` and `TraceKV` stores.
+
+## Multistore
+
+### Multistore Interface
+
+Each Cosmos SDK application holds a multistore at its root to persist its state. The multistore is a store of `KVStores` that follows the `Multistore` interface:
+
+```go reference
+https://github.com/cosmos/cosmos-sdk/blob/v0.53.0/store/types/store.go#L115-L147
+```
+
+If tracing is enabled, then branching the multistore will first wrap all the underlying `KVStore`s in [`TraceKv.Store`](#tracekv-store).
+
+### CommitMultiStore
+
+The main type of `Multistore` used in the Cosmos SDK is `CommitMultiStore`, which is an extension of the `Multistore` interface:
+
+```go reference
+https://github.com/cosmos/cosmos-sdk/blob/v0.53.0/store/types/store.go#L155-L225
+```
+
+The [`rootMulti.Store`](https://github.com/cosmos/cosmos-sdk/blob/v0.53.0/store/rootmulti/store.go) is the go-to concrete implementation of the `CommitMultiStore` interface:
+
+```go reference
+https://github.com/cosmos/cosmos-sdk/blob/v0.53.0/store/rootmulti/store.go#L56-L82
+```
+
+The `rootMulti.Store` is a base-layer multistore built around a `db`, on top of which multiple `KVStores` can be mounted. It is the default multistore used in [`baseapp`](./00-baseapp.md).
+
+### CacheMultiStore
+
+Whenever the `rootMulti.Store` needs to be branched, a [`cachemulti.Store`](https://github.com/cosmos/cosmos-sdk/blob/v0.53.0/store/cachemulti/store.go) is used.
+
+```go reference
+https://github.com/cosmos/cosmos-sdk/blob/v0.53.0/store/cachemulti/store.go#L20-L34
+```
+
+`cachemulti.Store` branches all substores in its constructor (creating a virtual store for each substore) and holds them in `Store.stores`; it also caches all read queries. `Store.GetKVStore()` returns the store from `Store.stores`, and `Store.Write()` recursively calls `CacheWrap.Write()` on all the substores.
+
+## Base-layer KVStores
+
+### `KVStore` and `CommitKVStore` Interfaces
+
+A `KVStore` is a simple key-value store used to store and retrieve data. A `CommitKVStore` is a `KVStore` that also implements a `Committer`. By default, stores mounted in `baseapp`'s main `CommitMultiStore` are `CommitKVStore`s. The `KVStore` interface is primarily used to restrict modules from accessing the committer.
+
+Individual `KVStore`s are used by modules to manage a subset of the global state. `KVStores` can be accessed by objects that hold a specific key. This `key` should only be exposed to the [`keeper`](../../build/building-modules/06-keeper.md) of the module that defines the store.
+
+`CommitKVStore`s are declared by proxy of their respective `key` and mounted on the application's [multistore](#multistore) in the [main application file](../beginner/00-app-anatomy.md#core-application-file). In the same file, the `key` is also passed to the module's `keeper` that is responsible for managing the store.
+
+```go reference
+https://github.com/cosmos/cosmos-sdk/blob/v0.53.0/store/types/store.go#L227-L264
+```
+
+Apart from the traditional `Get` and `Set` methods that a `KVStore` must implement via the `BasicKVStore` interface, a `KVStore` must provide an `Iterator(start, end)` method which returns an `Iterator` object. It is used to iterate over a range of keys, typically keys that share a common prefix. Below is an example from the bank module's keeper, used to iterate over all account balances:
+
+```go reference
+https://github.com/cosmos/cosmos-sdk/blob/v0.53.0/x/bank/keeper/view.go#L121-L137
+```
+
+### `IAVL` Store
+
+The default implementation of `KVStore` and `CommitKVStore` used in `baseapp` is the `iavl.Store`.
+
+```go reference
+https://github.com/cosmos/cosmos-sdk/blob/v0.53.0/store/iavl/store.go#L36-L41
+```
+
+`iavl` stores are based around an [IAVL Tree](https://github.com/cosmos/iavl), a self-balancing binary tree which guarantees that:
+
+* `Get` and `Set` operations are O(log n), where n is the number of elements in the tree.
+* Iteration efficiently returns the sorted elements within the range.
+* Each tree version is immutable and can be retrieved even after a commit (depending on the pruning settings).
+
+The documentation on the IAVL Tree is located [here](https://github.com/cosmos/iavl/blob/master/docs/overview.md).
+
+### `DbAdapter` Store
+
+`dbadapter.Store` is an adapter for `dbm.DB` that makes it fulfill the `KVStore` interface.
+
+```go reference
+https://github.com/cosmos/cosmos-sdk/blob/v0.53.0/store/dbadapter/store.go#L13-L16
+```
+
+`dbadapter.Store` embeds `dbm.DB`, meaning most of the `KVStore` interface functions are implemented. The remaining (mostly miscellaneous) functions are implemented manually. This store is primarily used within [Transient Stores](#transient-store).
+
+### `Transient` Store
+
+`Transient.Store` is a base-layer `KVStore` which is automatically discarded at the end of the block.
+
+```go reference
+https://github.com/cosmos/cosmos-sdk/blob/v0.53.0/store/transient/store.go#L16-L19
+```
+
+`Transient.Store` is a `dbadapter.Store` backed by a `dbm.NewMemDB()`. All `KVStore` methods are reused. When `Store.Commit()` is called, a new `dbadapter.Store` is assigned, discarding the previous reference so it can be garbage collected.
+
+This type of store is useful to persist information that is only relevant per-block. One example would be to store parameter changes (i.e. a bool set to `true` if a parameter changed in a block).
+
+```go reference
+https://github.com/cosmos/cosmos-sdk/blob/v0.53.0/x/params/types/subspace.go#L22-L32
+```
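The discard-on-commit behaviour can be sketched as a store that swaps in a fresh backing map on `Commit()`. This is a toy model, not the actual `transient.Store`:

```go
package main

import "fmt"

// transientStore is a toy model of a transient store: Commit() simply
// swaps in a fresh backing map, discarding everything written during
// the block.
type transientStore struct {
	kv map[string]string
}

func newTransient() *transientStore { return &transientStore{kv: map[string]string{}} }

func (s *transientStore) Set(k, v string)     { s.kv[k] = v }
func (s *transientStore) Get(k string) string { return s.kv[k] }

// Commit discards the current contents instead of persisting them.
func (s *transientStore) Commit() { s.kv = map[string]string{} }

func main() {
	ts := newTransient()
	ts.Set("param_changed", "true")
	fmt.Println(ts.Get("param_changed")) // true
	ts.Commit()                          // end of block
	fmt.Println(ts.Get("param_changed") == "") // true
}
```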
+
+Transient stores are typically accessed through the `TransientStore()` method of the [`context`](./02-context.md):
+
+```go reference
+https://github.com/cosmos/cosmos-sdk/blob/v0.53.0/types/context.go#L347-L350
+```
+
+## KVStore Wrappers
+
+### CacheKVStore
+
+`cachekv.Store` is a wrapper `KVStore` which provides buffered writing / cached reading functionalities over the underlying `KVStore`.
+
+```go reference
+https://github.com/cosmos/cosmos-sdk/blob/v0.53.0/store/cachekv/store.go#L26-L36
+```
+
+This is the type used whenever an IAVL Store needs to be branched to create an isolated store (typically when we need to mutate a state that might be reverted later).
+
+#### `Get`
+
+`Store.Get()` first checks whether `Store.cache` has a value associated with the key. If it does, the function returns it. If not, the function calls `Store.parent.Get()`, caches the result in `Store.cache`, and returns it.
+
+#### `Set`
+
+`Store.Set()` writes the key-value pair to `Store.cache`. `cValue` has a `dirty` boolean field indicating whether the cached value differs from the underlying value. When `Store.Set()` caches a new pair, `cValue.dirty` is set to `true`, so that when `Store.Write()` is called the pair can be written to the underlying store.
+
+#### `Iterator`
+
+`Store.Iterator()` has to traverse both the cached items and the original items. In `Store.iterator()`, an iterator is generated for each, and the two are merged. `memIterator` is essentially a slice of `KVPair`s, used for cached items. `mergeIterator` is a combination of two iterators, where traversal happens in order across both iterators.
+
+### `GasKv` Store
+
+Cosmos SDK applications use [`gas`](../beginner/04-gas-fees.md) to track resource usage and prevent spam. [`GasKv.Store`](https://github.com/cosmos/cosmos-sdk/blob/v0.53.0/store/gaskv/store.go) is a `KVStore` wrapper that enables automatic gas consumption each time a read or write to the store is made. It is the solution of choice to track storage usage in Cosmos SDK applications.
+
+```go reference
+https://github.com/cosmos/cosmos-sdk/blob/v0.53.0/store/gaskv/store.go#L11-L17
+```
+
+When methods of the parent `KVStore` are called, `GasKv.Store` automatically consumes an appropriate amount of gas depending on the `Store.gasConfig`:
+
+```go reference
+https://github.com/cosmos/cosmos-sdk/blob/v0.53.0/store/types/gas.go#L219-L228
+```
+
+By default, all `KVStores` are wrapped in `GasKv.Stores` when retrieved. This is done in the `KVStore()` method of the [`context`](./02-context.md):
+
+```go reference
+https://github.com/cosmos/cosmos-sdk/blob/v0.53.0/types/context.go#L342-L345
+```
+
+In this case, the gas configuration set in the `context` is used. The gas configuration can be set using the `WithKVGasConfig` method of the `context`; otherwise, the following default is used:
+
+```go reference
+https://github.com/cosmos/cosmos-sdk/blob/v0.53.0/store/types/gas.go#L230-L241
+```
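The charging scheme can be sketched as a flat cost plus a per-byte cost on every access. The field names below mirror the shape of `GasConfig`; the numeric values are illustrative, not necessarily the SDK defaults:

```go
package main

import "fmt"

// gasConfig mirrors the shape of the SDK's GasConfig (illustrative values).
type gasConfig struct {
	ReadCostFlat, ReadCostPerByte   uint64
	WriteCostFlat, WriteCostPerByte uint64
}

// gasStore charges gas on every access to the underlying map, the way
// GasKv.Store wraps a parent KVStore.
type gasStore struct {
	parent   map[string]string
	cfg      gasConfig
	consumed uint64
}

func (s *gasStore) Get(k string) string {
	v := s.parent[k]
	s.consumed += s.cfg.ReadCostFlat + s.cfg.ReadCostPerByte*uint64(len(k)+len(v))
	return v
}

func (s *gasStore) Set(k, v string) {
	s.consumed += s.cfg.WriteCostFlat + s.cfg.WriteCostPerByte*uint64(len(k)+len(v))
	s.parent[k] = v
}

func main() {
	s := &gasStore{
		parent: map[string]string{},
		cfg:    gasConfig{ReadCostFlat: 1000, ReadCostPerByte: 3, WriteCostFlat: 2000, WriteCostPerByte: 30},
	}
	s.Set("key", "value") // 2000 + 30*(3+5) = 2240
	_ = s.Get("key")      // 1000 + 3*(3+5)  = 1024
	fmt.Println(s.consumed) // 3264
}
```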
+
+### `TraceKv` Store
+
+`tracekv.Store` is a wrapper `KVStore` which provides operation tracing functionalities over the underlying `KVStore`. It is applied automatically by the Cosmos SDK on all `KVStore`s if tracing is enabled on the parent `MultiStore`.
+
+```go reference
+https://github.com/cosmos/cosmos-sdk/blob/v0.53.0/store/tracekv/store.go#L20-L43
+```
+
+Whenever a `KVStore` method is called, `tracekv.Store` automatically logs the `traceOperation` to `Store.writer`. `traceOperation.Metadata` is filled with `Store.context` when it is not nil. `TraceContext` is a `map[string]interface{}`.
+
+### `Prefix` Store
+
+`prefix.Store` is a wrapper `KVStore` which provides automatic key-prefixing functionalities over the underlying `KVStore`.
+
+```go reference
+https://github.com/cosmos/cosmos-sdk/blob/v0.53.0/store/prefix/store.go#L15-L21
+```
+
+When `Store.{Get, Set}()` is called, the store forwards the call to its parent, with the key prefixed with the `Store.prefix`.
+
+When `Store.Iterator()` is called, the store cannot simply prefix the range bounds with `Store.prefix`, since that does not work as intended: some elements would be traversed even though they do not start with the prefix.
+
+### `ListenKv` Store
+
+`listenkv.Store` is a wrapper `KVStore` which provides state listening capabilities over the underlying `KVStore`.
+It is applied automatically by the Cosmos SDK on any `KVStore` whose `StoreKey` is specified during state streaming configuration.
+Additional information about state streaming configuration can be found in the [store/streaming/README.md](https://github.com/cosmos/cosmos-sdk/tree/v0.53.0/store/streaming).
+
+```go reference
+https://github.com/cosmos/cosmos-sdk/blob/v0.53.0/store/listenkv/store.go#L11-L18
+```
+
+When `KVStore.Set` or `KVStore.Delete` methods are called, `listenkv.Store` automatically writes the operations to the set of `Store.listeners`.
+
+## `BasicKVStore` interface
+
+An interface providing only the basic CRUD functionality (`Get`, `Set`, `Has`, and `Delete` methods), without iteration or caching. This is used to partially expose components of a larger store.
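As a sketch, the interface and a minimal map-backed implementation might look like this (see `store/types` in the SDK for the authoritative definition):

```go
package main

import "fmt"

// BasicKVStore mirrors the minimal CRUD surface described above.
type BasicKVStore interface {
	Get(key []byte) []byte
	Has(key []byte) bool
	Set(key, value []byte)
	Delete(key []byte)
}

// memStore is a toy map-backed implementation.
type memStore map[string][]byte

func (m memStore) Get(key []byte) []byte { return m[string(key)] }
func (m memStore) Has(key []byte) bool   { _, ok := m[string(key)]; return ok }
func (m memStore) Set(key, value []byte) { m[string(key)] = value }
func (m memStore) Delete(key []byte)     { delete(m, string(key)) }

func main() {
	var s BasicKVStore = memStore{}
	s.Set([]byte("k"), []byte("v"))
	fmt.Println(s.Has([]byte("k")), string(s.Get([]byte("k")))) // true v
	s.Delete([]byte("k"))
	fmt.Println(s.Has([]byte("k"))) // false
}
```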
diff --git a/copy-of-sdk-docs/learn/advanced/05-encoding.md b/copy-of-sdk-docs/learn/advanced/05-encoding.md
new file mode 100644
index 00000000..3c730741
--- /dev/null
+++ b/copy-of-sdk-docs/learn/advanced/05-encoding.md
@@ -0,0 +1,285 @@
+---
+sidebar_position: 1
+---
+
+# Encoding
+
+:::note Synopsis
+While encoding in the Cosmos SDK used to be handled mainly by the `go-amino` codec, the Cosmos SDK is moving towards using `gogoprotobuf` for both state and client-side encoding.
+:::
+
+:::note Pre-requisite Readings
+
+* [Anatomy of a Cosmos SDK application](../beginner/00-app-anatomy.md)
+
+:::
+
+## Encoding
+
+The Cosmos SDK utilizes two binary wire encoding protocols: [Amino](https://github.com/tendermint/go-amino/), an object encoding specification, and [Protocol Buffers](https://developers.google.com/protocol-buffers), a subset of Proto3 with an extension for interface support. See the [Proto3 spec](https://developers.google.com/protocol-buffers/docs/proto3) for more information on Proto3, with which Amino is largely compatible (but not with Proto2).
+
+Due to Amino having significant performance drawbacks, being reflection-based, and not having any meaningful cross-language/client support, Protocol Buffers, specifically [gogoprotobuf](https://github.com/cosmos/gogoproto/), is being used in place of Amino. Note that this migration from Amino to Protocol Buffers is still ongoing.
+
+Binary wire encoding of types in the Cosmos SDK can be broken down into two main
+categories, client encoding and store encoding. Client encoding mainly revolves
+around transaction processing and signing, whereas store encoding revolves around
+types used in state-machine transitions and what is ultimately stored in the Merkle
+tree.
+
+For store encoding, protobuf definitions can exist for any type and will typically have an Amino-based "intermediary" type. Specifically, the protobuf-based type definition is used for serialization and persistence, whereas the Amino-based type is used for business logic in the state-machine, where the two may be converted back and forth. Note that the Amino-based types may slowly be phased out in the future, so developers should take care to use the protobuf message definitions where possible.
+
+In the `codec` package, there exist two core interfaces, `BinaryCodec` and `JSONCodec`, where the former encapsulates the current Amino interface, except it operates on types implementing the latter instead of generic `interface{}` types.
+
+The `ProtoCodec` is the codec whose binary and JSON serialization are both handled via Protobuf. This means that modules may use Protobuf encoding, but the types must implement `ProtoMarshaler`. If modules wish to avoid implementing this interface for their types, the implementation can be autogenerated via [buf](https://buf.build/).
+
+If modules use [Collections](../../build/packages/02-collections.md), encoding and decoding are handled automatically; marshalling and unmarshalling should not be done manually, except in specific cases identified by the developer.
+
+### Gogoproto
+
+Modules are encouraged to utilize Protobuf encoding for their respective types. In the Cosmos SDK, we use the [Gogoproto](https://github.com/cosmos/gogoproto) specific implementation of the Protobuf spec that offers speed and DX improvements compared to the official [Google protobuf implementation](https://github.com/protocolbuffers/protobuf).
+
+### Guidelines for protobuf message definitions
+
+In addition to [following official Protocol Buffer guidelines](https://developers.google.com/protocol-buffers/docs/proto3#simple), we recommend using these annotations in .proto files when dealing with interfaces:
+
+* use `cosmos_proto.accepts_interface` to annotate `Any` fields that accept interfaces
+ * pass the same fully qualified name as `protoName` to `InterfaceRegistry.RegisterInterface`
+ * example: `(cosmos_proto.accepts_interface) = "cosmos.gov.v1beta1.Content"` (and not just `Content`)
+* annotate interface implementations with `cosmos_proto.implements_interface`
+ * pass the same fully qualified name as `protoName` to `InterfaceRegistry.RegisterInterface`
+ * example: `(cosmos_proto.implements_interface) = "cosmos.authz.v1beta1.Authorization"` (and not just `Authorization`)
+
+Code generators can then match the `accepts_interface` and `implements_interface` annotations to know whether some Protobuf messages are allowed to be packed in a given `Any` field or not.
+
+### Transaction Encoding
+
+Another important use of Protobuf is the encoding and decoding of
+[transactions](./01-transactions.md). Transactions are defined by the application or
+the Cosmos SDK but are then passed to the underlying consensus engine to be relayed to
+other peers. Since the underlying consensus engine is agnostic to the application,
+the consensus engine accepts only transactions in the form of raw bytes.
+
+* The `TxEncoder` object performs the encoding.
+* The `TxDecoder` object performs the decoding.
+
+```go reference
+https://github.com/cosmos/cosmos-sdk/blob/v0.53.0-rc.2/types/tx_msg.go#L109-L113
+```
+
+A standard implementation of both these objects can be found in the [`auth/tx` module](../../build/modules/auth/2-tx.md):
+
+```go reference
+https://github.com/cosmos/cosmos-sdk/blob/v0.53.0-rc.2/x/auth/tx/decoder.go
+```
+
+```go reference
+https://github.com/cosmos/cosmos-sdk/blob/v0.53.0-rc.2/x/auth/tx/encoder.go
+```
+
+See [ADR-020](https://github.com/cosmos/cosmos-sdk/blob/release/v0.53.x/docs/architecture/adr-020-protobuf-transaction-encoding.md) for details of how a transaction is encoded.
+
+### Interface Encoding and Usage of `Any`
+
+The Protobuf DSL is strongly typed, which can make inserting variable-typed fields difficult. Imagine we want to create a `Profile` protobuf message that serves as a wrapper over [an account](../beginner/03-accounts.md):
+
+```protobuf
+message Profile {
+ // account is the account associated to a profile.
+ cosmos.auth.v1beta1.BaseAccount account = 1;
+ // bio is a short description of the account.
+ string bio = 4;
+}
+```
+
+In this `Profile` example, we hardcoded `account` as a `BaseAccount`. However, there are several other types of [user accounts related to vesting](../../build/modules/auth/1-vesting.md), such as `BaseVestingAccount` or `ContinuousVestingAccount`. All of these accounts are different, but they all implement the `AccountI` interface. How would you create a `Profile` that allows all these types of accounts with an `account` field that accepts an `AccountI` interface?
+
+```go reference
+https://github.com/cosmos/cosmos-sdk/blob/v0.53.0-rc.2/types/account.go#L15-L32
+```
+
+In [ADR-019](https://github.com/cosmos/cosmos-sdk/blob/release/v0.53.x/docs/architecture/adr-019-protobuf-state-encoding.md), it has been decided to use [`Any`](https://github.com/protocolbuffers/protobuf/blob/master/src/google/protobuf/any.proto)s to encode interfaces in protobuf. An `Any` contains an arbitrary serialized message as bytes, along with a URL that acts as a globally unique identifier for and resolves to that message's type. This strategy allows us to pack arbitrary Go types inside protobuf messages. Our new `Profile` then looks like:
+
+```protobuf
+message Profile {
+ // account is the account associated to a profile.
+ google.protobuf.Any account = 1 [
+ (cosmos_proto.accepts_interface) = "cosmos.auth.v1beta1.AccountI"; // Asserts that this field only accepts Go types implementing `AccountI`. It is purely informational for now.
+ ];
+ // bio is a short description of the account.
+ string bio = 4;
+}
+```
+
+To add an account inside a profile, we need to "pack" it inside an `Any` first, using `codectypes.NewAnyWithValue`:
+
+```go
+var myAccount AccountI
+myAccount = ... // Can be a BaseAccount, a ContinuousVestingAccount or any struct implementing `AccountI`
+
+// Pack the account into an Any
+accAny, err := codectypes.NewAnyWithValue(myAccount)
+if err != nil {
+ return nil, err
+}
+
+// Create a new Profile with the any.
+profile := Profile {
+ Account: accAny,
+ Bio: "some bio",
+}
+
+// We can then marshal the profile as usual.
+bz, err := cdc.Marshal(profile)
+jsonBz, err := cdc.MarshalJSON(profile)
+```
+
+To summarize, to encode an interface, you must 1/ pack the interface into an `Any` and 2/ marshal the `Any`. For convenience, the Cosmos SDK provides a `MarshalInterface` method to bundle these two steps. Have a look at [a real-life example in the x/auth module](https://github.com/cosmos/cosmos-sdk/blob/v0.53.0-rc.2/x/auth/keeper/keeper.go#L239-L242).
+
+The reverse operation of retrieving the concrete Go type from inside an `Any`, called "unpacking", is done with the `GetCachedValue()` on `Any`.
+
+```go
+profileBz := ... // The proto-encoded bytes of a Profile, e.g. retrieved through gRPC.
+var myProfile Profile
+// Unmarshal the bytes into the myProfile struct.
+err := cdc.Unmarshal(profileBz, &myProfile)
+
+// Let's see the type of the Account field.
+fmt.Printf("%T\n", myProfile.Account)                  // Prints "Any"
+fmt.Printf("%T\n", myProfile.Account.GetCachedValue()) // Prints "BaseAccount", "ContinuousVestingAccount" or whatever was initially packed in the Any.
+
+// Get the address of the account.
+accAddr := myProfile.Account.GetCachedValue().(AccountI).GetAddress()
+```
+
+It is important to note that for `GetCachedValue()` to work, `Profile` (and any other structs embedding `Profile`) must implement the `UnpackInterfaces` method:
+
+```go
+func (p *Profile) UnpackInterfaces(unpacker codectypes.AnyUnpacker) error {
+ if p.Account != nil {
+ var account AccountI
+ return unpacker.UnpackAny(p.Account, &account)
+ }
+
+ return nil
+}
+```
+
+`UnpackInterfaces` gets called recursively on all structs implementing this method, allowing all `Any`s to have their `GetCachedValue()` correctly populated.
+
+For more information about interface encoding, and especially on `UnpackInterfaces` and how the `Any`'s `type_url` gets resolved using the `InterfaceRegistry`, please refer to [ADR-019](https://github.com/cosmos/cosmos-sdk/blob/release/v0.53.x/docs/architecture/adr-019-protobuf-state-encoding.md).
+
+#### `Any` Encoding in the Cosmos SDK
+
+The above `Profile` example is a fictional example used for educational purposes. In the Cosmos SDK, we use `Any` encoding in several places (non-exhaustive list):
+
+* the `cryptotypes.PubKey` interface for encoding different types of public keys,
+* the `sdk.Msg` interface for encoding different `Msg`s in a transaction,
+* the `AccountI` interface for encoding different types of accounts (similar to the above example) in the x/auth query responses,
+* the `EvidenceI` interface for encoding different types of evidence in the x/evidence module,
+* the `AuthorizationI` interface for encoding different types of x/authz authorizations,
+* the [`Validator`](https://github.com/cosmos/cosmos-sdk/blob/v0.53.0-rc.2/x/staking/types/staking.pb.go#L340-L375) struct that contains information about a validator.
+
+A real-life example of encoding the pubkey as `Any` inside the Validator struct in x/staking is shown in the following example:
+
+```go reference
+https://github.com/cosmos/cosmos-sdk/blob/v0.53.0-rc.2/x/staking/types/validator.go#L43-L66
+```
+
+#### `Any`'s TypeURL
+
+When packing a protobuf message inside an `Any`, the message's type is uniquely defined by its type URL, which is the message's fully qualified name prefixed by a `/` (slash) character. In some implementations of `Any`, like the gogoproto one, there's generally [a resolvable prefix, e.g. `type.googleapis.com`](https://github.com/gogo/protobuf/blob/b03c65ea87cdc3521ede29f62fe3ce239267c1bc/protobuf/google/protobuf/any.proto#L87-L91). However, in the Cosmos SDK, we made the decision to not include such prefix, to have shorter type URLs. The Cosmos SDK's own `Any` implementation can be found in `github.com/cosmos/cosmos-sdk/codec/types`.
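For illustration, deriving a Cosmos SDK-style type URL is simply a matter of prefixing the fully qualified protobuf name with a slash:

```go
package main

import "fmt"

// typeURL sketches how the Cosmos SDK derives an Any's type URL: a "/"
// followed by the message's fully qualified protobuf name, with no
// resolvable host prefix such as type.googleapis.com.
func typeURL(fullyQualifiedName string) string {
	return "/" + fullyQualifiedName
}

func main() {
	fmt.Println(typeURL("cosmos.bank.v1beta1.MsgSend")) // /cosmos.bank.v1beta1.MsgSend
}
```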
+
+The Cosmos SDK is also switching away from gogoproto to the official `google.golang.org/protobuf` (known as the Protobuf API v2). Its default `Any` implementation also contains the [`type.googleapis.com`](https://github.com/protocolbuffers/protobuf-go/blob/v1.28.1/types/known/anypb/any.pb.go#L266) prefix. To maintain compatibility with the SDK, the following methods from `"google.golang.org/protobuf/types/known/anypb"` should not be used:
+
+* `anypb.New`
+* `anypb.MarshalFrom`
+* `anypb.Any#MarshalFrom`
+
+Instead, the Cosmos SDK provides helper functions in `"github.com/cosmos/cosmos-proto/anyutil"`, which create an official `anypb.Any` without inserting the prefixes:
+
+* `anyutil.New`
+* `anyutil.MarshalFrom`
+
+For example, to pack a `sdk.Msg` called `internalMsg`, use:
+
+```diff
+import (
+- "google.golang.org/protobuf/types/known/anypb"
++ "github.com/cosmos/cosmos-proto/anyutil"
+)
+
+- anyMsg, err := anypb.New(internalMsg.Message().Interface())
++ anyMsg, err := anyutil.New(internalMsg.Message().Interface())
+
+- fmt.Println(anyMsg.TypeURL) // type.googleapis.com/cosmos.bank.v1beta1.MsgSend
++ fmt.Println(anyMsg.TypeURL) // /cosmos.bank.v1beta1.MsgSend
+```
+
+## FAQ
+
+### How to create modules using protobuf encoding
+
+#### Defining module types
+
+Protobuf types can be defined to encode:
+
+* state
+* [`Msg`s](../../build/building-modules/02-messages-and-queries.md#messages)
+* [Query services](../../build/building-modules/04-query-services.md)
+* [genesis](../../build/building-modules/08-genesis.md)
+
+#### Naming and conventions
+
+We encourage developers to follow industry guidelines: the [Protocol Buffers style guide](https://developers.google.com/protocol-buffers/docs/style) and [Buf](https://buf.build/docs/style-guide); see more details in [ADR 023](https://github.com/cosmos/cosmos-sdk/blob/release/v0.53.x/docs/architecture/adr-023-protobuf-naming.md).
+
+### How to update modules to protobuf encoding
+
+If modules do not contain any interfaces (e.g. `Account` or `Content`), then they may simply migrate any existing types that are encoded and persisted via their concrete Amino codec to Protobuf (see 1. for further guidelines) and accept a `Marshaler` as the codec, which is implemented via the `ProtoCodec`, without any further customization.
+
+However, if a module type composes an interface, it must wrap it in the `sdk.Any` type (from the `/types` package). To do that, a module-level `.proto` file must use [`google.protobuf.Any`](https://github.com/protocolbuffers/protobuf/blob/master/src/google/protobuf/any.proto) for the respective interface-typed message fields.
+
+For example, the `x/evidence` module defines an `Evidence` interface, which is used by `MsgSubmitEvidence`. The structure definition must use `sdk.Any` to wrap the evidence. In the proto file we define it as follows:
+
+```protobuf
+// proto/cosmos/evidence/v1beta1/tx.proto
+
+message MsgSubmitEvidence {
+ string submitter = 1;
+ google.protobuf.Any evidence = 2 [(cosmos_proto.accepts_interface) = "cosmos.evidence.v1beta1.Evidence"];
+}
+```
+
+The Cosmos SDK `codec.Codec` interface provides support methods `MarshalInterface` and `UnmarshalInterface` for easy encoding of state to `Any`.
+
+Modules should register interfaces using the `InterfaceRegistry`, which provides a mechanism for registering interfaces, `RegisterInterface(protoName string, iface interface{}, impls ...proto.Message)`, and implementations, `RegisterImplementations(iface interface{}, impls ...proto.Message)`, that can be safely unpacked from an `Any`, similarly to type registration with Amino:
+
+```go reference
+https://github.com/cosmos/cosmos-sdk/blob/v0.53.0-rc.2/codec/types/interface_registry.go#L40-L87
+```
+
+In addition, an `UnpackInterfaces` phase should be introduced to deserialization to unpack interfaces before they're needed. Protobuf types that contain a protobuf `Any` either directly or via one of their members should implement the `UnpackInterfacesMessage` interface:
+
+```go
+type UnpackInterfacesMessage interface {
+ UnpackInterfaces(InterfaceUnpacker) error
+}
+```
diff --git a/copy-of-sdk-docs/learn/advanced/06-grpc_rest.md b/copy-of-sdk-docs/learn/advanced/06-grpc_rest.md
new file mode 100644
index 00000000..d3ab827a
--- /dev/null
+++ b/copy-of-sdk-docs/learn/advanced/06-grpc_rest.md
@@ -0,0 +1,105 @@
+---
+sidebar_position: 1
+---
+
+# gRPC, REST, and CometBFT Endpoints
+
+:::note Synopsis
+This document presents an overview of all the endpoints a node exposes: gRPC and REST, as well as some other endpoints.
+:::
+
+## An Overview of All Endpoints
+
+Each node exposes the following endpoints for users to interact with it, each served on a different port. Details on how to configure each endpoint are provided in the endpoint's own section.
+
+* the gRPC server (default port: `9090`),
+* the REST server (default port: `1317`),
+* the CometBFT RPC endpoint (default port: `26657`).
+
+:::tip
+The node also exposes some other endpoints, such as the CometBFT P2P endpoint, or the [Prometheus endpoint](https://docs.cometbft.com/v0.37/core/metrics), which are not directly related to the Cosmos SDK. Please refer to the [CometBFT documentation](https://docs.cometbft.com/v0.37/core/configuration) for more information about these endpoints.
+:::
+
+:::note
+All endpoints bind to localhost by default and must be reconfigured to be exposed to the public internet.
+:::
+
+## gRPC Server
+
+In the Cosmos SDK, Protobuf is the main [encoding](./05-encoding.md) library. This brings a wide range of Protobuf-based tools that can be plugged into the Cosmos SDK. One such tool is [gRPC](https://grpc.io), a modern open-source high-performance RPC framework with solid client support in several languages.
+
+Each module exposes a [Protobuf `Query` service](../../build/building-modules/02-messages-and-queries.md#queries) that defines state queries. The `Query` services and a transaction service used to broadcast transactions are hooked up to the gRPC server via the following function inside the application:
+
+```go reference
+https://github.com/cosmos/cosmos-sdk/blob/v0.53.0-rc.2/server/types/app.go#L46-L48
+```
+
+Note: It is not possible to expose any [Protobuf `Msg` service](../../build/building-modules/02-messages-and-queries.md#messages) endpoints via gRPC. Transactions must be generated and signed using the CLI or programmatically before they can be broadcasted using gRPC. See [Generating, Signing, and Broadcasting Transactions](../../user/run-node/03-txs.md) for more information.
+
+The `grpc.Server` is a concrete gRPC server, which spawns and serves all gRPC query requests and broadcast transaction requests. This server can be configured inside `~/.simapp/config/app.toml`:
+
+* `grpc.enable = true|false` field defines if the gRPC server should be enabled. Defaults to `true`.
+* `grpc.address = {string}` field defines the `ip:port` the server should bind to. Defaults to `localhost:9090`.
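+
+Taken together, the gRPC section of `app.toml` looks like this (a sketch showing the default values; the generated file also carries explanatory comments):
+
+```toml
+[grpc]
+# Enable defines if the gRPC server should be enabled.
+enable = true
+# Address defines the gRPC server address to bind to.
+address = "localhost:9090"
+```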
+
+:::tip
+`~/.simapp` is the directory where the node's configuration and databases are stored. By default, it's set to `~/.{app_name}`.
+:::
+
+Once the gRPC server is started, you can send requests to it using a gRPC client. Some examples are given in our [Interact with the Node](../../user/run-node/02-interact-node.md#using-grpc) tutorial.
+
+An overview of all available gRPC endpoints shipped with the Cosmos SDK is available in the [Protobuf documentation](https://buf.build/cosmos/cosmos-sdk).
+
+## REST Server
+
+Cosmos SDK supports REST routes via gRPC-gateway.
+
+All routes are configured under the following fields in `~/.simapp/config/app.toml`:
+
+* `api.enable = true|false` field defines if the REST server should be enabled. Defaults to `false`.
+* `api.address = {string}` field defines the `ip:port` the server should bind to. Defaults to `tcp://localhost:1317`.
+* some additional API configuration options are defined in `~/.simapp/config/app.toml`, along with comments; please refer to that file directly.
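+
+For reference, the corresponding section of a generated `app.toml` looks roughly like this (a sketch with default values; the generated file documents the full set of options):
+
+```toml
+[api]
+# Enable defines if the API server should be enabled.
+enable = false
+# Address defines the API server to listen on.
+address = "tcp://localhost:1317"
+# Swagger defines if swagger documentation should automatically be registered.
+swagger = false
+```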
+
+### gRPC-gateway REST Routes
+
+If, for various reasons, you cannot use gRPC (for example, you are building a web application, and browsers don't support HTTP2 on which gRPC is built), then the Cosmos SDK offers REST routes via gRPC-gateway.
+
+[gRPC-gateway](https://grpc-ecosystem.github.io/grpc-gateway/) is a tool to expose gRPC endpoints as REST endpoints. For each gRPC endpoint defined in a Protobuf `Query` service, the Cosmos SDK offers a REST equivalent. For instance, querying a balance could be done via the `/cosmos.bank.v1beta1.QueryAllBalances` gRPC endpoint, or alternatively via the gRPC-gateway `"/cosmos/bank/v1beta1/balances/{address}"` REST endpoint: both will return the same result. For each RPC method defined in a Protobuf `Query` service, the corresponding REST endpoint is defined as an option:
+
+```protobuf reference
+https://github.com/cosmos/cosmos-sdk/blob/v0.53.0-rc.2/proto/cosmos/bank/v1beta1/query.proto#L23-L30
+```
+
+For application developers, gRPC-gateway REST routes need to be wired up to the REST server. This is done by calling the `RegisterGRPCGatewayRoutes` function on the ModuleManager.
+
+### Swagger
+
+A [Swagger](https://swagger.io/) (or OpenAPIv2) specification file is exposed under the `/swagger` route on the API server. Swagger is an open specification describing the API endpoints a server serves, including description, input arguments, return types and much more about each endpoint.
+
+Enabling the `/swagger` endpoint is configurable inside `~/.simapp/config/app.toml` via the `api.swagger` field, which is set to false by default.
+
+As an application developer, you may want to generate your own Swagger definitions based on your custom modules.
+The Cosmos SDK's [Swagger generation script](https://github.com/cosmos/cosmos-sdk/blob/v0.53.0-rc.2/scripts/protoc-swagger-gen.sh) is a good place to start.
+
+## CometBFT RPC
+
+Independently from the Cosmos SDK, CometBFT also exposes an RPC server. This RPC server can be configured by tuning parameters under the `rpc` table in `~/.simapp/config/config.toml`; the default listening address is `tcp://localhost:26657`. An OpenAPI specification of all CometBFT RPC endpoints is available [here](https://docs.cometbft.com/main/rpc/).
+
+Some CometBFT RPC endpoints are directly related to the Cosmos SDK:
+
+* `/abci_query`: this endpoint will query the application for state. As the `path` parameter, you can send the following strings:
+ * any Protobuf fully-qualified service method, such as `/cosmos.bank.v1beta1.Query/AllBalances`. The `data` field should then include the method's request parameter(s) encoded as bytes using Protobuf.
+ * `/app/simulate`: this will simulate a transaction, and return some information such as gas used.
+ * `/app/version`: this will return the application's version.
+ * `/store/{storeName}/key`: this will directly query the named store for data associated with the key represented in the `data` parameter.
+ * `/store/{storeName}/subspace`: this will directly query the named store for key/value pairs in which the key has the value of the `data` parameter as a prefix.
+ * `/p2p/filter/addr/{port}`: this will return a filtered list of the node's P2P peers by address port.
+ * `/p2p/filter/id/{id}`: this will return a filtered list of the node's P2P peers by ID.
+* `/broadcast_tx_{sync,async,commit}`: these 3 endpoints will broadcast a transaction to other peers. CLI, gRPC and REST expose [a way to broadcast transactions](./01-transactions.md#broadcasting-the-transaction), but they all use these 3 CometBFT RPCs under the hood.
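+
+For illustration, calling a module query through `/abci_query` as a JSON-RPC request looks roughly like this (the `data` value here is a placeholder; it must be the Protobuf-encoded request message for the method named in `path`):
+
+```json
+{
+  "jsonrpc": "2.0",
+  "id": "0",
+  "method": "abci_query",
+  "params": {
+    "path": "/cosmos.bank.v1beta1.Query/AllBalances",
+    "data": "0A2D...",
+    "prove": false
+  }
+}
+```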
+
+## Comparison Table
+
+| Name | Advantages | Disadvantages |
+| -------------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------- |
+| gRPC           | • can use code-generated stubs in various languages<br/>• supports streaming and bidirectional communication (HTTP2)<br/>• small wire binary sizes, faster transmission | • based on HTTP2, not available in browsers<br/>• learning curve (mostly due to Protobuf)                      |
+| REST           | • ubiquitous<br/>• client libraries in all languages, faster implementation                                                                                             | • only supports unary request-response communication (HTTP1.1)<br/>• bigger over-the-wire message sizes (JSON) |
+| CometBFT RPC   | • easy to use                                                                                                                                                           | • bigger over-the-wire message sizes (JSON)                                                                    |
diff --git a/copy-of-sdk-docs/learn/advanced/07-cli.md b/copy-of-sdk-docs/learn/advanced/07-cli.md
new file mode 100644
index 00000000..cd9e34de
--- /dev/null
+++ b/copy-of-sdk-docs/learn/advanced/07-cli.md
@@ -0,0 +1,211 @@
+---
+sidebar_position: 1
+---
+
+# Command-Line Interface
+
+:::note Synopsis
+This document describes how the command-line interface (CLI) works at a high level for an [**application**](../beginner/00-app-anatomy.md). A separate document for implementing a CLI for a Cosmos SDK [**module**](../../build/building-modules/00-intro.md) can be found [here](../../build/building-modules/09-module-interfaces.md#cli).
+:::
+
+## Command-Line Interface
+
+### Example Command
+
+There is no set way to create a CLI, but Cosmos SDK modules typically use the [Cobra Library](https://github.com/spf13/cobra). Building a CLI with Cobra entails defining commands, arguments, and flags. [**Commands**](#root-command) indicate the actions users wish to take, such as `tx` for creating a transaction and `query` for querying the application. Each command can also have nested subcommands, necessary for naming the specific transaction type. Users also supply **Arguments**, such as account numbers to send coins to, and [**Flags**](#flags) to modify various aspects of the commands, such as gas prices or which node to broadcast to.
+
+Here is an example of a command a user might enter to interact with the simapp CLI `simd` in order to send some tokens:
+
+```bash
+simd tx bank send $MY_VALIDATOR_ADDRESS $RECIPIENT 1000stake --gas auto --gas-prices <gasPrices>
+```
+
+The first four strings specify the command:
+
+* The root command for the entire application `simd`.
+* The subcommand `tx`, which contains all commands that let users create transactions.
+* The subcommand `bank` to indicate which module to route the command to ([`x/bank`](../../build/modules/bank/README.md) module in this case).
+* The type of transaction `send`.
+
+The next three strings are arguments: the `from_address` the user wishes to send from, the `to_address` of the recipient, and the `amount` they want to send. Finally, the last few strings of the command are optional flags to indicate how much the user is willing to pay in fees (calculated using the amount of gas used to execute the transaction and the gas prices provided by the user).
+
+The CLI interacts with a [node](./03-node.md) to handle this command. The interface itself is defined in a `main.go` file.
+
+### Building the CLI
+
+The `main.go` file needs to have a `main()` function that creates a root command, to which all the application commands will be added as subcommands. The root command additionally handles:
+
+* **setting configurations** by reading in configuration files (e.g. the Cosmos SDK config file).
+* **adding any flags** to it, such as `--chain-id`.
+* **instantiating the `codec`** by injecting the application codecs. The [`codec`](./05-encoding.md) is used to encode and decode data structures for the application - stores can only persist `[]byte`s so the developer must define a serialization format for their data structures or use the default, Protobuf.
+* **adding subcommands** for all the possible user interactions, including [transaction commands](#transaction-commands) and [query commands](#query-commands).
+
+The `main()` function finally creates an executor and [executes](https://pkg.go.dev/github.com/spf13/cobra#Command.Execute) the root command. See an example of the `main()` function from the `simapp` application:
+
+```go reference
+https://github.com/cosmos/cosmos-sdk/blob/v0.53.0/simapp/simd/main.go#L14-L24
+```
+
+The rest of the document will detail what needs to be implemented for each step and include smaller portions of code from the `simapp` CLI files.
+
+## Adding Commands to the CLI
+
+Every application CLI first constructs a root command, then adds functionality by aggregating subcommands (often with further nested subcommands) using `rootCmd.AddCommand()`. The bulk of an application's unique capabilities lies in its transaction and query commands, called `TxCmd` and `QueryCmd` respectively.
+
+### Root Command
+
+The root command (called `rootCmd`) is what the user first types into the command line to indicate which application they wish to interact with. The string used to invoke the command (the "Use" field) is typically the name of the application suffixed with `-d`, e.g. `simd` or `gaiad`. The root command typically includes the following commands to support basic functionality in the application.
+
+* **Status** command from the Cosmos SDK rpc client tools, which prints information about the status of the connected [`Node`](./03-node.md). The Status of a node includes `NodeInfo`,`SyncInfo` and `ValidatorInfo`.
+* **Keys** [commands](https://github.com/cosmos/cosmos-sdk/blob/v0.53.0/client/keys) from the Cosmos SDK client tools, which include a collection of subcommands for using the key functions in the Cosmos SDK crypto tools, including adding a new key and saving it to the keyring, listing all public keys stored in the keyring, and deleting a key. For example, users can type `simd keys add` to add a new key and save an encrypted copy to the keyring, using the flag `--recover` to recover a private key from a seed phrase or the flag `--multisig` to group multiple keys together to create a multisig key. For full details on the `add` key command, see the code [here](https://github.com/cosmos/cosmos-sdk/blob/v0.53.0/client/keys/add.go). For more details about the usage of `--keyring-backend` for the storage of key credentials, see the [keyring docs](../../user/run-node/00-keyring.md).
+* **Server** commands from the Cosmos SDK server package. These commands are responsible for providing the mechanisms necessary to start an ABCI CometBFT application and provide the CLI framework (based on [cobra](https://github.com/spf13/cobra)) necessary to fully bootstrap an application. The package exposes two core functions, `StartCmd` and `ExportCmd`, which create commands to start the application and export state, respectively.
+Learn more [here](https://github.com/cosmos/cosmos-sdk/blob/v0.53.0/server).
+* [**Transaction**](#transaction-commands) commands.
+* [**Query**](#query-commands) commands.
+
+Next is an example `rootCmd` function from the `simapp` application. It instantiates the root command, adds a [*persistent* flag](#flags) and `PreRun` function to be run before every execution, and adds all of the necessary subcommands.
+
+```go reference
+https://github.com/cosmos/cosmos-sdk/blob/v0.53.0/simapp/simd/cmd/root_v2.go#L47-L130
+```
+
+:::tip
+Use `EnhanceRootCommand()` from the AutoCLI options to automatically add auto-generated commands from the modules to the root command.
+Additionally, it adds all manually defined module commands (`tx` and `query`) as well.
+Read more about [AutoCLI](https://docs.cosmos.network/main/core/autocli) in its dedicated section.
+:::
+
+`rootCmd` has a function called `initAppConfig()` which is useful for setting the application's custom configs.
+By default, the app uses the CometBFT app config template from the Cosmos SDK, which can be overwritten via `initAppConfig()`.
+Here is example code that overrides the default `app.toml` template:
+
+```go reference
+https://github.com/cosmos/cosmos-sdk/blob/v0.53.0/simapp/simd/cmd/root_v2.go#L144-L199
+```
+
+The `initAppConfig()` also allows overriding the default Cosmos SDK's [server config](https://github.com/cosmos/cosmos-sdk/blob/v0.53.0/server/config/config.go#L231). One example is the `min-gas-prices` config, which defines the minimum gas prices a validator is willing to accept for processing a transaction. By default, the Cosmos SDK sets this parameter to `""` (empty string), which forces all validators to tweak their own `app.toml` and set a non-empty value, or else the node will halt on startup. This might not be the best UX for validators, so the chain developer can set a default `app.toml` value for validators inside this `initAppConfig()` function.
+
+```go reference
+https://github.com/cosmos/cosmos-sdk/blob/v0.53.0/simapp/simd/cmd/root_v2.go#L164-L180
+```
+
+The root-level `status` and `keys` subcommands are common across most applications and do not interact with application state. The bulk of an application's functionality - what users can actually *do* with it - is enabled by its `tx` and `query` commands.
+
+### Transaction Commands
+
+[Transactions](./01-transactions.md) are objects wrapping [`Msg`s](../../build/building-modules/02-messages-and-queries.md#messages) that trigger state changes. To enable the creation of transactions using the CLI interface, a function `txCommand` is generally added to the `rootCmd`:
+
+```go reference
+https://github.com/cosmos/cosmos-sdk/blob/v0.53.0/simapp/simd/cmd/root_v2.go#L222-L229
+```
+
+This `txCommand` function adds all the transaction commands available to end-users for the application. This typically includes:
+
+* **Sign command** from the [`auth`](../../build/modules/auth/README.md) module that signs messages in a transaction. To enable multisig, add the `auth` module's `MultiSign` command. Since every transaction requires some sort of signature in order to be valid, the signing command is necessary for every application.
+* **Broadcast command** from the Cosmos SDK client tools, to broadcast transactions.
+* **All [module transaction commands](../../build/building-modules/09-module-interfaces.md#transaction-commands)** the application is dependent on, retrieved by using the [basic module manager's](../../build/building-modules/01-module-manager.md#basic-manager) `AddTxCommands()` function, or enhanced by [AutoCLI](https://docs.cosmos.network/main/core/autocli).
+
+Here is an example of a `txCommand` aggregating these subcommands from the `simapp` application:
+
+```go reference
+https://github.com/cosmos/cosmos-sdk/blob/v0.53.0/simapp/simd/cmd/root_v2.go#L270-L292
+```
+
+:::tip
+When using AutoCLI to generate module transaction commands, `EnhanceRootCommand()` automatically adds the module `tx` command to the root command.
+Read more about [AutoCLI](https://docs.cosmos.network/main/core/autocli) in its dedicated section.
+:::
+
+### Query Commands
+
+[**Queries**](../../build/building-modules/02-messages-and-queries.md#queries) are objects that allow users to retrieve information about the application's state. To enable the creation of queries using the CLI interface, a function `queryCommand` is generally added to the `rootCmd`:
+
+```go reference
+https://github.com/cosmos/cosmos-sdk/blob/v0.53.0/simapp/simd/cmd/root_v2.go#L222-L229
+```
+
+This `queryCommand` function adds all the queries available to end-users for the application. This typically includes:
+
+* **QueryTx** and/or other transaction query commands from the `auth` module, which allow the user to search for a transaction by inputting its hash, a list of tags, or a block height. These queries allow users to see if transactions have been included in a block.
+* **Account command** from the `auth` module, which displays the state (e.g. account balance) of an account given an address.
+* **Validator command** from the Cosmos SDK rpc client tools, which displays the validator set of a given height.
+* **Block command** from the Cosmos SDK RPC client tools, which displays the block data for a given height.
+* **All [module query commands](../../build/building-modules/09-module-interfaces.md#query-commands)** the application is dependent on, retrieved by using the [basic module manager's](../../build/building-modules/01-module-manager.md#basic-manager) `AddQueryCommands()` function, or enhanced by [AutoCLI](https://docs.cosmos.network/main/core/autocli).
+
+Here is an example of a `queryCommand` aggregating subcommands from the `simapp` application:
+
+```go reference
+https://github.com/cosmos/cosmos-sdk/blob/v0.53.0/simapp/simd/cmd/root_v2.go#L249-L268
+```
+
+:::tip
+When using AutoCLI to generate module query commands, `EnhanceRootCommand()` automatically adds the module `query` command to the root command.
+Read more about [AutoCLI](https://docs.cosmos.network/main/core/autocli) in its dedicated section.
+:::
+
+## Flags
+
+Flags are used to modify commands; developers can include them in a `flags.go` file with their CLI. Users can explicitly include them in commands or pre-configure them inside their [`app.toml`](../../user/run-node/01-run-node.md#configuring-the-node-using-apptoml-and-configtoml). Commonly pre-configured flags include the `--node` to connect to and the `--chain-id` of the blockchain the user wishes to interact with.
+
+A *persistent* flag (as opposed to a *local* flag) added to a command transcends all of its children: subcommands will inherit the configured values for these flags. Additionally, all flags have default values when they are added to commands; some toggle an option off but others are empty values that the user needs to override to create valid commands. A flag can be explicitly marked as *required* so that an error is automatically thrown if the user does not provide a value, but it is also acceptable to handle unexpected missing flags differently.
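+
+The default-versus-required behavior can be sketched with the standard library's `flag` package. Cobra provides this natively (e.g. via `MarkFlagRequired`); the sketch below is only an illustration of the idea, not the SDK's actual wiring:
+
+```go
+package main
+
+import (
+	"flag"
+	"fmt"
+)
+
+// validate mimics a *required* flag: an empty default value that the
+// user must override for the command to be valid.
+func validate(chainID string) error {
+	if chainID == "" {
+		return fmt.Errorf("required flag --chain-id not set")
+	}
+	return nil
+}
+
+func main() {
+	fs := flag.NewFlagSet("simd", flag.ContinueOnError)
+	// --node ships with a sensible default; --chain-id defaults to empty.
+	node := fs.String("node", "tcp://localhost:26657", "node to connect to")
+	chainID := fs.String("chain-id", "", "chain ID (required)")
+
+	// Simulate a user invocation that only sets the required flag.
+	if err := fs.Parse([]string{"--chain-id", "testchain-1"}); err != nil {
+		panic(err)
+	}
+	if err := validate(*chainID); err != nil {
+		panic(err)
+	}
+	fmt.Println("connecting to", *node, "on chain", *chainID)
+}
+```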
+
+Flags are added to commands directly (generally in the [module's CLI file](../../build/building-modules/09-module-interfaces.md#flags) where module commands are defined) and no flag except for the `rootCmd` persistent flags has to be added at application level. It is common to add a *persistent* flag for `--chain-id`, the unique identifier of the blockchain the application pertains to, to the root command. Adding this flag can be done in the `main()` function. Adding this flag makes sense as the chain ID should not be changing across commands in this application CLI.
+
+## Environment variables
+
+Each flag is bound to its respective named environment variable. The name of the environment variable consists of two parts: the upper-case `basename`, followed by the name of the flag, with `-` substituted by `_`. For example, the flag `--node` for an application with basename `GAIA` is bound to `GAIA_NODE`. This allows reducing the number of flags typed for routine operations. For example, instead of:
+
+```shell
+gaia --home=./ --node= --chain-id="testchain-1" --keyring-backend=test tx ... --from=
+```
+
+this will be more convenient:
+
+```shell
+# define env variables in .env, .envrc etc
+GAIA_HOME=
+GAIA_NODE=
+GAIA_CHAIN_ID="testchain-1"
+GAIA_KEYRING_BACKEND="test"
+
+# and later just use
+gaia tx ... --from=
+```
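+
+The naming rule above can be captured in a few lines of Go (a sketch of the rule itself; the SDK applies it internally when binding flags):
+
+```go
+package main
+
+import (
+	"fmt"
+	"strings"
+)
+
+// envVarName applies the documented rule: upper-case basename, then the
+// flag name, joined by "_", with every "-" replaced by "_".
+func envVarName(basename, flagName string) string {
+	name := strings.ToUpper(basename) + "_" + strings.ToUpper(flagName)
+	return strings.ReplaceAll(name, "-", "_")
+}
+
+func main() {
+	fmt.Println(envVarName("gaia", "node"))            // GAIA_NODE
+	fmt.Println(envVarName("gaia", "keyring-backend")) // GAIA_KEYRING_BACKEND
+}
+```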
+
+## Configurations
+
+It is vital that the root command of an application use the `PersistentPreRun()` cobra command property for executing the command, so all child commands have access to the server and client contexts. These contexts are set as their default values initially and may be modified, scoped to the command, in their respective `PersistentPreRun()` functions. Note that the `client.Context` is typically pre-populated with "default" values that may be useful for all commands to inherit and override if necessary.
+
+Here is an example of a `PersistentPreRun()` function from `simapp`:
+
+```go reference
+https://github.com/cosmos/cosmos-sdk/blob/v0.53.0/simapp/simd/cmd/root_v2.go#L81-L120
+```
+
+The `SetCmdClientContextHandler` call reads persistent flags via `ReadPersistentCommandFlags` which creates a `client.Context` and sets that on the root command's `Context`.
+
+The `InterceptConfigsPreRunHandler` call creates a viper literal, default `server.Context`, and a logger and sets that on the root command's `Context`. The `server.Context` will be modified and saved to disk. The internal `interceptConfigs` call reads or creates a CometBFT configuration based on the home path provided. In addition, `interceptConfigs` also reads and loads the application configuration, `app.toml`, and binds that to the `server.Context` viper literal. This is vital so the application can get access to not only the CLI flags, but also to the application configuration values provided by this file.
+
+:::tip
+To configure which logger is used, do not use `InterceptConfigsPreRunHandler`, which sets the default SDK logger; instead, use `InterceptConfigsAndCreateContext` and set the server context and the logger manually:
+
+```diff
+-return server.InterceptConfigsPreRunHandler(cmd, customAppTemplate, customAppConfig, customCMTConfig)
+
++serverCtx, err := server.InterceptConfigsAndCreateContext(cmd, customAppTemplate, customAppConfig, customCMTConfig)
++if err != nil {
++ return err
++}
+
++// overwrite default server logger
++logger, err := server.CreateSDKLogger(serverCtx, cmd.OutOrStdout())
++if err != nil {
++ return err
++}
++serverCtx.Logger = logger.With(log.ModuleKey, "server")
+
++// set server context
++return server.SetCmdServerContext(cmd, serverCtx)
+```
+
+:::
diff --git a/copy-of-sdk-docs/learn/advanced/08-events.md b/copy-of-sdk-docs/learn/advanced/08-events.md
new file mode 100644
index 00000000..52d02641
--- /dev/null
+++ b/copy-of-sdk-docs/learn/advanced/08-events.md
@@ -0,0 +1,159 @@
+---
+sidebar_position: 1
+---
+# Events
+
+:::note Synopsis
+`Event`s are objects that contain information about the execution of the application. They are mainly used by service providers like block explorers and wallets to track the execution of various messages and to index transactions.
+:::
+
+:::note Pre-requisite Readings
+
+* [Anatomy of a Cosmos SDK application](../beginner/00-app-anatomy.md)
+* [CometBFT Documentation on Events](https://docs.cometbft.com/v0.37/spec/abci/abci++_basic_concepts#events)
+
+:::
+
+## Events
+
+Events are implemented in the Cosmos SDK as an alias of the ABCI `Event` type and
+take the form of: `{eventType}.{attributeKey}={attributeValue}`.
+
+```protobuf reference
+https://github.com/cometbft/cometbft/blob/v0.37.0/proto/tendermint/abci/types.proto#L334-L343
+```
+
+An Event contains:
+
+* A `type` to categorize the Event at a high-level; for example, the Cosmos SDK uses the `"message"` type to filter Events by `Msg`s.
+* A list of `attributes`, which are key-value pairs that give more information about the categorized Event. For example, for the `"message"` type, we can filter Events by key-value pairs using `message.action={some_action}`, `message.module={some_module}` or `message.sender={some_sender}`.
+* A `msg_index` to identify which messages relate to the same transaction.
+
+:::tip
+To parse the attribute values as strings, make sure to add `'` (single quotes) around each attribute value.
+:::
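+
+The `{eventType}.{attributeKey}={attributeValue}` form, together with the quoting rule above, can be sketched as a small query builder. The `eventQuery` helper below is hypothetical, not an SDK API:
+
+```go
+package main
+
+import "fmt"
+
+// eventQuery renders a single event condition in the
+// {eventType}.{attributeKey}='{attributeValue}' form,
+// single-quoting the value so it is parsed as a string.
+func eventQuery(eventType, key, value string) string {
+	return fmt.Sprintf("%s.%s='%s'", eventType, key, value)
+}
+
+func main() {
+	fmt.Println(eventQuery("message", "action", "/cosmos.bank.v1beta1.Msg/Send"))
+	// message.action='/cosmos.bank.v1beta1.Msg/Send'
+}
+```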
+
+_Typed Events_ are Protobuf-defined [messages](../../../architecture/adr-032-typed-events.md) used by the Cosmos SDK
+for emitting and querying Events. They are defined in an `event.proto` file on a **per-module basis** and are read as `proto.Message`s.
+_Legacy Events_ are defined on a **per-module basis** in the module's `/types/events.go` file.
+They are triggered from the module's Protobuf [`Msg` service](../../build/building-modules/03-msg-services.md)
+by using the [`EventManager`](#eventmanager).
+
+In addition, each module documents its events under the `Events` section of its spec (x/{moduleName}/`README.md`).
+
+Lastly, Events are returned to the underlying consensus engine in the response of the following ABCI messages:
+
+* [`BeginBlock`](./00-baseapp.md#beginblock)
+* [`EndBlock`](./00-baseapp.md#endblock)
+* [`CheckTx`](./00-baseapp.md#checktx)
+* [`Transaction Execution`](./00-baseapp.md#transactionexecution)
+
+### Examples
+
+The following examples show how to query Events using the Cosmos SDK.
+
+| Event | Description |
+| ------------------------------------------------ | -------------------------------------------------------------------------------------------------------------------------------------------------------- |
+| `tx.height=23` | Query all transactions at height 23 |
+| `message.action='/cosmos.bank.v1beta1.Msg/Send'` | Query all transactions containing a x/bank `Send` [Service `Msg`](../../build/building-modules/03-msg-services.md). Note the `'`s around the value. |
+| `message.module='bank'` | Query all transactions containing messages from the x/bank module. Note the `'`s around the value. |
+| `create_validator.validator='cosmosval1...'` | x/staking-specific Event, see [x/staking SPEC](../../../../x/staking/README.md). |
+
+## EventManager
+
+In Cosmos SDK applications, Events are managed by an abstraction called the `EventManager`.
+Internally, the `EventManager` tracks a list of Events for the entire execution flow of `FinalizeBlock`
+(i.e. transaction execution, `BeginBlock`, `EndBlock`).
+
+```go reference
+https://github.com/cosmos/cosmos-sdk/blob/v0.53.0-rc.2/types/events.go#L18-L25
+```
+
+The `EventManager` comes with a set of useful methods to manage Events. The methods
+used most by module and application developers are `EmitTypedEvent` and `EmitEvent`, which track
+an Event in the `EventManager`.
+
+```go reference
+https://github.com/cosmos/cosmos-sdk/blob/v0.53.0-rc.2/types/events.go#L51-L60
+```
+
+Module developers should handle Event emission via the `EventManager#EmitTypedEvent` or `EventManager#EmitEvent` in each message
+`Handler` and in each `BeginBlock`/`EndBlock` handler. The `EventManager` is accessed via
+the [`Context`](./02-context.md), where Events should already be registered, and emitted like this:
+
+
+**Typed events:**
+
+```go reference
+https://github.com/cosmos/cosmos-sdk/blob/v0.53.0-rc.2/x/group/keeper/msg_server.go#L95-L97
+```
+
+**Legacy events:**
+
+```go
+ctx.EventManager().EmitEvent(
+ sdk.NewEvent(eventType, sdk.NewAttribute(attributeKey, attributeValue)),
+)
+```
+
+Where the `EventManager` is accessed via the [`Context`](./02-context.md).
+
+See the [`Msg` services](../../build/building-modules/03-msg-services.md) concept doc for a more detailed
+view on how to typically implement Events and use the `EventManager` in modules.
+
+## Subscribing to Events
+
+You can use CometBFT's [Websocket](https://docs.cometbft.com/v0.37/core/subscription) to subscribe to Events by calling the `subscribe` RPC method:
+
+```json
+{
+ "jsonrpc": "2.0",
+ "method": "subscribe",
+ "id": "0",
+ "params": {
+ "query": "tm.event='eventCategory' AND eventType.eventAttribute='attributeValue'"
+ }
+}
+```
+
+The main `eventCategory` values you can subscribe to are:
+
+* `NewBlock`: Contains Events triggered during `BeginBlock` and `EndBlock`.
+* `Tx`: Contains Events triggered during `DeliverTx` (i.e. transaction processing).
+* `ValidatorSetUpdates`: Contains validator set updates for the block.
+
+These Events are triggered from the `state` package after a block is committed. You can get the
+full list of Event categories [on the CometBFT Go documentation](https://pkg.go.dev/github.com/cometbft/cometbft/types#pkg-constants).
+
+The `type` and `attribute` value of the `query` allow you to filter the specific Event you are looking for. For example, a `Mint` transaction triggers an Event of type `EventMint` and has an `Id` and an `Owner` as `attributes` (as defined in the [`events.proto` file of the `NFT` module](https://github.com/cosmos/cosmos-sdk/blob/v0.53.0-rc.2/proto/cosmos/nft/v1beta1/event.proto#L21-L31)).
+
+Subscribing to this Event would be done like so:
+
+```json
+{
+ "jsonrpc": "2.0",
+ "method": "subscribe",
+ "id": "0",
+ "params": {
+ "query": "tm.event='Tx' AND mint.owner='ownerAddress'"
+ }
+}
+```
+
+where `ownerAddress` is an address following the [`AccAddress`](../beginner/03-accounts.md#addresses) format.
+
+The same method can be used to subscribe to [legacy events](https://github.com/cosmos/cosmos-sdk/blob/v0.53.0-rc.2/x/bank/types/events.go).
+
+## Default Events
+
+There are a few events that are automatically emitted for all messages, directly from `baseapp`.
+
+* `message.action`: The name of the message type.
+* `message.sender`: The address of the message signer.
+* `message.module`: The name of the module that emitted the message.
+
+:::tip
+`baseapp` assumes the module name is the second element of the message route: `"cosmos.bank.v1beta1.MsgSend" -> "bank"`.
+If a module does not follow the standard message path (e.g. IBC), it is advised to keep emitting the module name event.
+`baseapp` only emits that event if the module has not already done so.
+:::
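+
+The route-to-module-name convention described in the tip above can be sketched as a small helper (illustrative only, not part of the SDK):
+
+```go
+package main
+
+import (
+    "fmt"
+    "strings"
+)
+
+// moduleNameFromRoute takes the second dot-separated element of a fully
+// qualified message type URL, e.g. "cosmos.bank.v1beta1.MsgSend" -> "bank".
+func moduleNameFromRoute(route string) string {
+    parts := strings.Split(route, ".")
+    if len(parts) < 2 {
+        return route
+    }
+    return parts[1]
+}
+
+func main() {
+    fmt.Println(moduleNameFromRoute("cosmos.bank.v1beta1.MsgSend")) // bank
+}
+```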
diff --git a/copy-of-sdk-docs/learn/advanced/09-telemetry.md b/copy-of-sdk-docs/learn/advanced/09-telemetry.md
new file mode 100644
index 00000000..14d1aa7c
--- /dev/null
+++ b/copy-of-sdk-docs/learn/advanced/09-telemetry.md
@@ -0,0 +1,128 @@
+---
+sidebar_position: 1
+---
+
+# Telemetry
+
+:::note Synopsis
+Gather relevant insights about your application and modules with custom metrics and telemetry.
+:::
+
+The Cosmos SDK enables operators and developers to gain insight into the performance and behavior of
+their application through the use of the `telemetry` package. To enable telemetry, set `telemetry.enabled = true` in the app.toml config file.
+
+The Cosmos SDK currently supports enabling in-memory and Prometheus as telemetry sinks. The in-memory sink is always attached (when telemetry is enabled) with a 10-second interval and 1-minute retention. This means that metrics are aggregated over 10 seconds and kept alive for 1 minute.
+
+To query active metrics (see the retention note above), you must enable the API server (`api.enabled = true` in the app.toml). A single API endpoint is exposed: `http://localhost:1317/metrics?format={text|prometheus}`, the default being `text`.
+
+## Emitting metrics
+
+If telemetry is enabled via configuration, a single global metrics collector is registered via the
+[go-metrics](https://github.com/hashicorp/go-metrics) library. This allows emitting and collecting
+metrics through a simple [API](https://github.com/cosmos/cosmos-sdk/blob/v0.53.0-rc.2/telemetry/wrapper.go). Example:
+
+```go
+func EndBlocker(ctx sdk.Context, k keeper.Keeper) {
+ defer telemetry.ModuleMeasureSince(types.ModuleName, time.Now(), telemetry.MetricKeyEndBlocker)
+
+ // ...
+}
+```
+
+Developers may use the `telemetry` package directly, which provides wrappers around metric APIs
+that include adding useful labels, or they may use the `go-metrics` library directly. It is preferable
+to add as much context and adequate dimensionality to metrics as possible, so the `telemetry` package
+is advised. Regardless of the package or method used, the Cosmos SDK supports the following metric
+types:
+
+* gauges
+* summaries
+* counters
+
+## Labels
+
+Certain components of modules will have their name automatically added as a label (e.g. `BeginBlock`).
+Operators may also supply the application with a global set of labels that will be applied to all
+metrics emitted using the `telemetry` package (e.g. chain-id). Global labels are supplied as a list
+of [name, value] tuples.
+
+Example:
+
+```toml
+global-labels = [
+ ["chain_id", "chain-OfXo4V"],
+]
+```
+
+## Cardinality
+
+Cardinality is key, specifically label and key cardinality. Cardinality is the number of unique values
+of something. So there is naturally a tradeoff between granularity and how much stress is put
+on the telemetry sink in terms of indexing, scrape, and query performance.
+
+Developers should take care to support metrics with enough dimensionality and granularity to be
+useful, but not increase the cardinality beyond the sink's limits. A general rule of thumb is to not
+exceed a cardinality of 10.
+
+Consider the following examples with enough granularity and adequate cardinality:
+
+* begin/end blocker time
+* tx gas used
+* block gas used
+* amount of tokens minted
+* amount of accounts created
+
+The following examples expose too much cardinality and may not even prove to be useful:
+
+* transfers between accounts with amount
+* voting/deposit amount from unique addresses
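+
+To see why the examples above explode cardinality: the number of distinct time series a metric produces is the product of each label's unique-value count. A rough sketch (illustrative, not SDK code):
+
+```go
+package main
+
+import "fmt"
+
+// seriesCount estimates how many distinct time series a metric produces:
+// the product of the number of unique values of each label.
+func seriesCount(labelCardinalities ...int) int {
+    total := 1
+    for _, c := range labelCardinalities {
+        total *= c
+    }
+    return total
+}
+
+func main() {
+    // e.g. a metric labeled by denom (5 values) and message type (3 values)
+    fmt.Println(seriesCount(5, 3)) // 15
+    // labeling by unique sender address (say 100k values) explodes the count
+    fmt.Println(seriesCount(5, 100000))
+}
+```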
+
+## Supported Metrics
+
+| Metric | Description | Unit | Type |
+|:--------------------------------|:------------------------------------------------------------------------------------------|:----------------|:--------|
+| `tx_count` | Total number of txs processed via `DeliverTx` | tx | counter |
+| `tx_successful` | Total number of successful txs processed via `DeliverTx` | tx | counter |
+| `tx_failed` | Total number of failed txs processed via `DeliverTx` | tx | counter |
+| `tx_gas_used` | The total amount of gas used by a tx | gas | gauge |
+| `tx_gas_wanted` | The total amount of gas requested by a tx | gas | gauge |
+| `tx_msg_send` | The total amount of tokens sent in a `MsgSend` (per denom) | token | gauge |
+| `tx_msg_withdraw_reward` | The total amount of tokens withdrawn in a `MsgWithdrawDelegatorReward` (per denom) | token | gauge |
+| `tx_msg_withdraw_commission` | The total amount of tokens withdrawn in a `MsgWithdrawValidatorCommission` (per denom) | token | gauge |
+| `tx_msg_delegate` | The total amount of tokens delegated in a `MsgDelegate` | token | gauge |
+| `tx_msg_begin_unbonding` | The total amount of tokens undelegated in a `MsgUndelegate` | token | gauge |
+| `tx_msg_begin_redelegate` | The total amount of tokens redelegated in a `MsgBeginRedelegate` | token | gauge |
+| `tx_msg_ibc_transfer` | The total amount of tokens transferred via IBC in a `MsgTransfer` (source or sink chain) | token | gauge |
+| `ibc_transfer_packet_receive` | The total amount of tokens received in a `FungibleTokenPacketData` (source or sink chain) | token | gauge |
+| `new_account` | Total number of new accounts created | account | counter |
+| `gov_proposal` | Total number of governance proposals | proposal | counter |
+| `gov_vote` | Total number of governance votes for a proposal | vote | counter |
+| `gov_deposit` | Total number of governance deposits for a proposal | deposit | counter |
+| `staking_delegate` | Total number of delegations | delegation | counter |
+| `staking_undelegate` | Total number of undelegations | undelegation | counter |
+| `staking_redelegate` | Total number of redelegations | redelegation | counter |
+| `ibc_transfer_send` | Total number of IBC transfers sent from a chain (source or sink) | transfer | counter |
+| `ibc_transfer_receive` | Total number of IBC transfers received to a chain (source or sink) | transfer | counter |
+| `ibc_client_create` | Total number of clients created | create | counter |
+| `ibc_client_update` | Total number of client updates | update | counter |
+| `ibc_client_upgrade` | Total number of client upgrades | upgrade | counter |
+| `ibc_client_misbehaviour` | Total number of client misbehaviours | misbehaviour | counter |
+| `ibc_connection_open-init` | Total number of connection `OpenInit` handshakes | handshake | counter |
+| `ibc_connection_open-try` | Total number of connection `OpenTry` handshakes | handshake | counter |
+| `ibc_connection_open-ack` | Total number of connection `OpenAck` handshakes | handshake | counter |
+| `ibc_connection_open-confirm` | Total number of connection `OpenConfirm` handshakes | handshake | counter |
+| `ibc_channel_open-init` | Total number of channel `OpenInit` handshakes | handshake | counter |
+| `ibc_channel_open-try` | Total number of channel `OpenTry` handshakes | handshake | counter |
+| `ibc_channel_open-ack` | Total number of channel `OpenAck` handshakes | handshake | counter |
+| `ibc_channel_open-confirm` | Total number of channel `OpenConfirm` handshakes | handshake | counter |
+| `ibc_channel_close-init` | Total number of channel `CloseInit` handshakes | handshake | counter |
+| `ibc_channel_close-confirm` | Total number of channel `CloseConfirm` handshakes | handshake | counter |
+| `tx_msg_ibc_recv_packet` | Total number of IBC packets received | packet | counter |
+| `tx_msg_ibc_acknowledge_packet` | Total number of IBC packets acknowledged | acknowledgement | counter |
+| `ibc_timeout_packet` | Total number of IBC timeout packets | timeout | counter |
+| `store_iavl_get` | Duration of an IAVL `Store#Get` call | ms | summary |
+| `store_iavl_set` | Duration of an IAVL `Store#Set` call | ms | summary |
+| `store_iavl_has` | Duration of an IAVL `Store#Has` call | ms | summary |
+| `store_iavl_delete` | Duration of an IAVL `Store#Delete` call | ms | summary |
+| `store_iavl_commit` | Duration of an IAVL `Store#Commit` call | ms | summary |
+| `store_iavl_query` | Duration of an IAVL `Store#Query` call | ms | summary |
diff --git a/copy-of-sdk-docs/learn/advanced/10-ocap.md b/copy-of-sdk-docs/learn/advanced/10-ocap.md
new file mode 100644
index 00000000..62076172
--- /dev/null
+++ b/copy-of-sdk-docs/learn/advanced/10-ocap.md
@@ -0,0 +1,76 @@
+---
+sidebar_position: 1
+---
+
+# Object-Capability Model
+
+## Intro
+
+When thinking about security, it is good to start with a specific threat model. Our threat model is the following:
+
+> We assume that a thriving ecosystem of Cosmos SDK modules that are easy to compose into a blockchain application will contain faulty or malicious modules.
+
+The Cosmos SDK is designed to address this threat by being the
+foundation of an object capability system.
+
+> The structural properties of object capability systems favor
+> modularity in code design and ensure reliable encapsulation in
+> code implementation.
+>
+> These structural properties facilitate the analysis of some
+> security properties of an object-capability program or operating
+> system. Some of these — in particular, information flow properties
+> — can be analyzed at the level of object references and
+> connectivity, independent of any knowledge or analysis of the code
+> that determines the behavior of the objects.
+>
+> As a consequence, these security properties can be established
+> and maintained in the presence of new objects that contain unknown
+> and possibly malicious code.
+>
+> These structural properties stem from the two rules governing
+> access to existing objects:
+>
+> 1. An object A can send a message to B only if object A holds a
+> reference to B.
+> 2. An object A can obtain a reference to C only
+> if object A receives a message containing a reference to C. As a
+> consequence of these two rules, an object can obtain a reference
+> to another object only through a preexisting chain of references.
+> In short, "Only connectivity begets connectivity."
+
+For an introduction to object-capabilities, see this [Wikipedia article](https://en.wikipedia.org/wiki/Object-capability_model).
+
+## Ocaps in practice
+
+The idea is to only reveal what is necessary to get the work done.
+
+For example, the following code snippet violates the object capabilities
+principle:
+
+```go
+type AppAccount struct {...}
+account := &AppAccount{
+ Address: pub.Address(),
+ Coins: sdk.Coins{sdk.NewInt64Coin("ATM", 100)},
+}
+sumValue := externalModule.ComputeSumValue(account)
+```
+
+The method name `ComputeSumValue` implies a pure function, yet accepting a pointer
+grants the capability to modify the pointed-to value. The preferred method signature
+takes a copy instead.
+
+```go
+sumValue := externalModule.ComputeSumValue(*account)
+```
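+
+The difference between the two signatures can be demonstrated with a self-contained sketch (hypothetical types, not the SDK's):
+
+```go
+package main
+
+import "fmt"
+
+type Account struct {
+    Coins int64
+}
+
+// sumByPointer receives the capability to mutate the caller's account.
+func sumByPointer(a *Account) int64 {
+    a.Coins = 0 // a buggy or malicious module can drain the account
+    return a.Coins
+}
+
+// sumByValue receives only a copy; the caller's state is safe.
+func sumByValue(a Account) int64 {
+    a.Coins = 0 // mutates the copy only
+    return a.Coins
+}
+
+func main() {
+    acc := Account{Coins: 100}
+    sumByValue(acc)
+    fmt.Println(acc.Coins) // 100: unchanged
+
+    sumByPointer(&acc)
+    fmt.Println(acc.Coins) // 0: the callee mutated it
+}
+```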
+
+In the Cosmos SDK, you can see the application of this principle in simapp.
+
+```go reference
+https://github.com/cosmos/cosmos-sdk/blob/v0.53.0-rc.2/simapp/app.go
+```
+
+The following diagram shows the current dependencies between keepers.
+
+
diff --git a/copy-of-sdk-docs/learn/advanced/11-runtx_middleware.md b/copy-of-sdk-docs/learn/advanced/11-runtx_middleware.md
new file mode 100644
index 00000000..bb8c04aa
--- /dev/null
+++ b/copy-of-sdk-docs/learn/advanced/11-runtx_middleware.md
@@ -0,0 +1,67 @@
+---
+sidebar_position: 1
+---
+
+# RunTx recovery middleware
+
+The `BaseApp.runTx()` function handles Go panics that might occur during transaction execution, for example, when a keeper has faced an invalid state and panicked.
+Depending on the panic type, a different handler is used; for instance, the default one prints an error log message.
+Recovery middleware is used to add custom panic recovery for Cosmos SDK application developers.
+
+More context can be found in the corresponding [ADR-022](../../build/architecture/adr-022-custom-panic-handling.md) and the implementation in [recovery.go](https://github.com/cosmos/cosmos-sdk/blob/v0.53.0-rc.2/baseapp/recovery.go).
+
+## Interface
+
+```go reference
+https://github.com/cosmos/cosmos-sdk/blob/v0.53.0-rc.2/baseapp/recovery.go#L14-L17
+```
+
+`recoveryObj` is the return value of the `recover()` function from the `builtin` Go package.
+
+**Contract:**
+
+* RecoveryHandler returns `nil` if `recoveryObj` wasn't handled and should be passed to the next recovery middleware;
+* RecoveryHandler returns a non-nil `error` if `recoveryObj` was handled;
+
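+The contract above can be sketched as a chain that tries each handler in turn (a simplified model, not the SDK's implementation):
+
+```go
+package main
+
+import (
+    "errors"
+    "fmt"
+)
+
+// RecoveryHandler mirrors the contract: nil means "not handled, pass
+// recoveryObj to the next middleware"; a non-nil error means handled.
+type RecoveryHandler func(recoveryObj interface{}) error
+
+// chain tries each handler in order and returns the first non-nil error.
+func chain(recoveryObj interface{}, handlers ...RecoveryHandler) error {
+    for _, h := range handlers {
+        if err := h(recoveryObj); err != nil {
+            return err
+        }
+    }
+    return errors.New("unhandled panic")
+}
+
+func main() {
+    onlyStrings := func(obj interface{}) error {
+        if s, ok := obj.(string); ok {
+            return fmt.Errorf("string panic: %s", s)
+        }
+        return nil // not handled, fall through to the next handler
+    }
+    fmt.Println(chain("boom", onlyStrings)) // string panic: boom
+    fmt.Println(chain(42, onlyStrings))     // unhandled panic
+}
+```
+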
+## Custom RecoveryHandler register
+
+The `BaseApp.AddRunTxRecoveryHandler(handlers ...RecoveryHandler)` method adds recovery middleware to the default recovery chain.
+
+## Example
+
+Let's assume we want to halt the chain with a "Consensus failure" if some particular error occurs.
+
+We have a module keeper that panics:
+
+```go
+func (k FooKeeper) Do(obj interface{}) {
+ if obj == nil {
+ // that shouldn't happen, we need to crash the app
+ err := errorsmod.Wrap(fooTypes.InternalError, "obj is nil")
+ panic(err)
+ }
+}
+```
+
+By default, that panic would be recovered and an error message printed to the log. To override that behavior, we should register a custom RecoveryHandler:
+
+```go
+// Cosmos SDK application constructor
+customHandler := func(recoveryObj interface{}) error {
+ err, ok := recoveryObj.(error)
+ if !ok {
+ return nil
+ }
+
+ if fooTypes.InternalError.Is(err) {
+ panic(fmt.Errorf("FooKeeper did panic with error: %w", err))
+ }
+
+ return nil
+}
+
+baseApp := baseapp.NewBaseApp(...)
+baseApp.AddRunTxRecoveryHandler(customHandler)
+```
diff --git a/copy-of-sdk-docs/learn/advanced/12-simulation.md b/copy-of-sdk-docs/learn/advanced/12-simulation.md
new file mode 100644
index 00000000..709ce176
--- /dev/null
+++ b/copy-of-sdk-docs/learn/advanced/12-simulation.md
@@ -0,0 +1,94 @@
+---
+sidebar_position: 1
+---
+
+# Cosmos Blockchain Simulator
+
+The Cosmos SDK offers a full-fledged simulation framework to fuzz test every
+message defined by a module.
+
+On the Cosmos SDK, this functionality is provided by [`SimApp`](https://github.com/cosmos/cosmos-sdk/blob/v0.53.0-rc.2/simapp/app_di.go), which is a
+`Baseapp` application that is used for running the [`simulation`](https://github.com/cosmos/cosmos-sdk/blob/v0.53.0-rc.2/x/simulation) module.
+This module defines all the simulation logic as well as the operations for
+randomized parameters like accounts, balances etc.
+
+## Goals
+
+The blockchain simulator tests how the blockchain application would behave under
+real life circumstances by generating and sending randomized messages.
+The goal of this is to detect and debug failures that could halt a live chain,
+by providing logs and statistics about the operations run by the simulator as
+well as exporting the latest application state when a failure was found.
+
+Its main difference from integration testing is that the simulator app allows
+you to pass parameters to customize the chain being simulated.
+This comes in handy when trying to reproduce bugs that were generated in the
+provided operations (randomized or not).
+
+## Simulation commands
+
+The simulation app has different commands, each of which tests a different
+failure type:
+
+* `AppImportExport`: The simulator exports the initial app state and then it
+ creates a new app with the exported `genesis.json` as an input, checking for
+ inconsistencies between the stores.
+* `AppSimulationAfterImport`: Queues two simulations together. The first one provides the app state (_i.e_ genesis) to the second. Useful to test software upgrades or hard-forks from a live chain.
+* `AppStateDeterminism`: Checks that all the nodes return the same values, in the same order.
+* `FullAppSimulation`: General simulation mode. Runs the chain and the specified operations for a given number of blocks. Tests that there are no `panics` in the simulation.
+
+Each simulation must receive a set of inputs (_i.e._ flags), such as the number of
+blocks to run, the seed, and the block size.
+Check the full list of flags [here](https://github.com/cosmos/cosmos-sdk/blob/v0.53.0-rc.2/x/simulation/client/cli/flags.go#L43-L70).
+
+## Simulator Modes
+
+In addition to the various inputs and commands, the simulator runs in three modes:
+
+1. Completely random where the initial state, module parameters and simulation
+ parameters are **pseudo-randomly generated**.
+2. From a `genesis.json` file where the initial state and the module parameters are defined.
+   This mode is helpful for running simulations on a known state, such as a live network export, where a new (most likely breaking) version of the application needs to be tested.
+3. From a `params.json` file where the initial state is pseudo-randomly generated but the module and simulation parameters can be provided manually.
+ This allows for a more controlled and deterministic simulation setup while allowing the state space to still be pseudo-randomly simulated.
+ The list of available parameters are listed [here](https://github.com/cosmos/cosmos-sdk/blob/v0.53.0-rc.2/x/simulation/client/cli/flags.go#L72-L90).
+
+:::tip
+These modes are not mutually exclusive. So you can for example run a randomly
+generated genesis state (`1`) with manually generated simulation params (`3`).
+:::
+
+## Usage
+
+This is a general example of how simulations are run. For more specific examples
+check the Cosmos SDK [Makefile](https://github.com/cosmos/cosmos-sdk/blob/v0.53.0-rc.2/Makefile#L285-L320).
+
+```bash
+ $ go test -mod=readonly github.com/cosmos/cosmos-sdk/simapp \
+ -run=TestApp \
+ ...
+ -v -timeout 24h
+```
+
+## Debugging Tips
+
+Here are some suggestions when encountering a simulation failure:
+
+* Export the app state at the height where the failure was found. You can do this
+ by passing the `-ExportStatePath` flag to the simulator.
+* Use `-Verbose` logs. They could give you a better hint on all the operations
+ involved.
+* Try using another `-Seed`. If it can reproduce the same error and if it fails
+ sooner, you will spend less time running the simulations.
+* Reduce the `-NumBlocks`. How does the app state look at the height prior to the
+  failure?
+* Try adding logs to operations that are not logged. You will have to define a
+ [Logger](https://github.com/cosmos/cosmos-sdk/blob/v0.53.0-rc.2/x/staking/keeper/keeper.go#L77-L81) on your `Keeper`.
+
+## Use simulation in your Cosmos SDK-based application
+
+Learn how you can build the simulation into your Cosmos SDK-based application:
+
+* Application Simulation Manager
+* [Building modules: Simulator](../../build/building-modules/14-simulator.md)
+* Simulator tests
diff --git a/copy-of-sdk-docs/learn/advanced/13-proto-docs.md b/copy-of-sdk-docs/learn/advanced/13-proto-docs.md
new file mode 100644
index 00000000..6c857446
--- /dev/null
+++ b/copy-of-sdk-docs/learn/advanced/13-proto-docs.md
@@ -0,0 +1,7 @@
+---
+sidebar_position: 1
+---
+
+# Protobuf Documentation
+
+See [Cosmos SDK Buf Proto-docs](https://buf.build/cosmos/cosmos-sdk/docs/main)
diff --git a/copy-of-sdk-docs/learn/advanced/15-upgrade.md b/copy-of-sdk-docs/learn/advanced/15-upgrade.md
new file mode 100644
index 00000000..e2332bd1
--- /dev/null
+++ b/copy-of-sdk-docs/learn/advanced/15-upgrade.md
@@ -0,0 +1,162 @@
+---
+sidebar_position: 1
+---
+
+# In-Place Store Migrations
+
+:::warning
+Read and understand all the in-place store migration documentation before you run a migration on a live chain.
+:::
+
+:::note Synopsis
+Upgrade your app modules smoothly with custom in-place store migration logic.
+:::
+
+The Cosmos SDK uses two methods to perform upgrades:
+
+* Exporting the entire application state to a JSON file using the `export` CLI command, making changes, and then starting a new binary with the changed JSON file as the genesis file.
+
+* Perform upgrades in place, which significantly decrease the upgrade time for chains with a larger state. Use the [Module Upgrade Guide](../../build/building-modules/13-upgrade.md) to set up your application modules to take advantage of in-place upgrades.
+
+This document provides steps to use the In-Place Store Migrations upgrade method.
+
+## Tracking Module Versions
+
+Each module gets assigned a consensus version by the module developer. The consensus version serves as the breaking change version of the module. The Cosmos SDK keeps track of all module consensus versions in the x/upgrade `VersionMap` store. During an upgrade, the difference between the old `VersionMap` stored in state and the new `VersionMap` is calculated by the Cosmos SDK. For each identified difference, the module-specific migrations are run and the respective consensus version of each upgraded module is incremented.
+
+### Consensus Version
+
+The consensus version is defined on each app module by the module developer and serves as the breaking change version of the module. The consensus version informs the Cosmos SDK on which modules need to be upgraded. For example, if the bank module was version 2 and an upgrade introduces bank module 3, the Cosmos SDK upgrades the bank module and runs the "version 2 to 3" migration script.
+
+### Version Map
+
+The version map is a mapping of module names to consensus versions. The map is persisted to x/upgrade's state for use during in-place migrations. When migrations finish, the updated version map is persisted in the state.
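+
+The diff between the stored and new version maps, which drives the upgrade, can be sketched as follows (a simplified model of what `RunMigrations` does, not the SDK code):
+
+```go
+package main
+
+import "fmt"
+
+// VersionMap maps module names to consensus versions, as in x/upgrade.
+type VersionMap map[string]uint64
+
+// modulesToMigrate returns, for each known module, the stored (from) version
+// when it is lower than the version in the new binary. Modules missing from
+// the old map are new and get InitGenesis instead of migrations.
+func modulesToMigrate(from, to VersionMap) (migrate map[string]uint64, fresh []string) {
+    migrate = make(map[string]uint64)
+    for name, toVersion := range to {
+        fromVersion, ok := from[name]
+        switch {
+        case !ok:
+            fresh = append(fresh, name) // new module: run InitGenesis
+        case fromVersion < toVersion:
+            migrate[name] = fromVersion // run fromVersion -> toVersion scripts
+        }
+    }
+    return migrate, fresh
+}
+
+func main() {
+    from := VersionMap{"bank": 2, "staking": 3}
+    to := VersionMap{"bank": 3, "staking": 3, "nft": 1}
+    migrate, fresh := modulesToMigrate(from, to)
+    fmt.Println(migrate, fresh) // map[bank:2] [nft]
+}
+```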
+
+## Upgrade Handlers
+
+Upgrades use an `UpgradeHandler` to facilitate migrations. The `UpgradeHandler` functions implemented by the app developer must conform to the following function signature. These functions retrieve the `VersionMap` from x/upgrade's state and return the new `VersionMap` to be stored in x/upgrade after the upgrade. The diff between the two `VersionMap`s determines which modules need upgrading.
+
+```go
+type UpgradeHandler func(ctx sdk.Context, plan Plan, fromVM VersionMap) (VersionMap, error)
+```
+
+Inside these functions, you must perform any upgrade logic to include in the provided `plan`. All upgrade handler functions must end with the following line of code:
+
+```go
+ return app.mm.RunMigrations(ctx, cfg, fromVM)
+```
+
+## Running Migrations
+
+Migrations are run inside of an `UpgradeHandler` using `app.mm.RunMigrations(ctx, cfg, vm)`. The `UpgradeHandler` functions describe the functionality to occur during an upgrade. The `RunMigrations` function loops through the `VersionMap` argument and runs the migration scripts for all versions that are less than the versions of the new binary app module. After the migrations are finished, a new `VersionMap` is returned to persist the upgraded module versions to state.
+
+```go
+cfg := module.NewConfigurator(...)
+app.UpgradeKeeper.SetUpgradeHandler("my-plan", func(ctx sdk.Context, plan upgradetypes.Plan, fromVM module.VersionMap) (module.VersionMap, error) {
+
+ // ...
+ // additional upgrade logic
+ // ...
+
+ // returns a VersionMap with the updated module ConsensusVersions
+ return app.mm.RunMigrations(ctx, cfg, fromVM)
+})
+```
+
+To learn more about configuring migration scripts for your modules, see the [Module Upgrade Guide](../../build/building-modules/13-upgrade.md).
+
+### Order Of Migrations
+
+By default, all migrations are run in module name alphabetical ascending order, except `x/auth`, which is run last. The reason is state dependencies between `x/auth` and other modules (you can read more in [issue #10606](https://github.com/cosmos/cosmos-sdk/issues/10606)).
+
+If you want to change the order of migration, then you should call `app.mm.SetOrderMigrations(module1, module2, ...)` in your app.go file. The function will panic if you forget to include a module in the argument list.
+
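+The default ordering described above can be sketched as a small helper (illustrative only, not the SDK's implementation):
+
+```go
+package main
+
+import (
+    "fmt"
+    "sort"
+)
+
+// defaultMigrationOrder sorts module names alphabetically,
+// then moves x/auth (if present) to the end.
+func defaultMigrationOrder(modules []string) []string {
+    out := make([]string, 0, len(modules))
+    hasAuth := false
+    for _, m := range modules {
+        if m == "auth" {
+            hasAuth = true
+            continue
+        }
+        out = append(out, m)
+    }
+    sort.Strings(out)
+    if hasAuth {
+        out = append(out, "auth")
+    }
+    return out
+}
+
+func main() {
+    fmt.Println(defaultMigrationOrder([]string{"staking", "auth", "bank"}))
+    // [bank staking auth]
+}
+```
+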
+## Adding New Modules During Upgrades
+
+You can introduce entirely new modules to the application during an upgrade. New modules are recognized because they have not yet been registered in `x/upgrade`'s `VersionMap` store. In this case, `RunMigrations` calls the `InitGenesis` function from the corresponding module to set up its initial state.
+
+### Add StoreUpgrades for New Modules
+
+All chains preparing to run in-place store migrations will need to manually add store upgrades for new modules and then configure the store loader to apply those upgrades. This ensures that the new module's stores are added to the multistore before the migrations begin.
+
+```go
+upgradeInfo, err := app.UpgradeKeeper.ReadUpgradeInfoFromDisk()
+if err != nil {
+ panic(err)
+}
+
+if upgradeInfo.Name == "my-plan" && !app.UpgradeKeeper.IsSkipHeight(upgradeInfo.Height) {
+ storeUpgrades := storetypes.StoreUpgrades{
+ // add store upgrades for new modules
+ // Example:
+ // Added: []string{"foo", "bar"},
+ // ...
+ }
+
+ // configure store loader that checks if version == upgradeHeight and applies store upgrades
+ app.SetStoreLoader(upgradetypes.UpgradeStoreLoader(upgradeInfo.Height, &storeUpgrades))
+}
+```
+
+## Genesis State
+
+When starting a new chain, the consensus version of each module MUST be saved to state during the application's genesis. To save the consensus version, add the following line to the `InitChainer` method in `app.go`:
+
+```diff
+func (app *MyApp) InitChainer(ctx sdk.Context, req abci.InitChainRequest) abci.InitChainResponse {
+ ...
++ app.UpgradeKeeper.SetModuleVersionMap(ctx, app.mm.GetVersionMap())
+ ...
+}
+```
+
+This information is used by the Cosmos SDK to detect when modules with newer versions are introduced to the app.
+
+For a new module `foo`, `InitGenesis` is called by `RunMigrations` only when `foo` is registered in the module manager but is not set in the `fromVM`. Therefore, if you want to skip `InitGenesis` when a new module is added to the app, you should set its module version in `fromVM` to the module's consensus version:
+
+```go
+app.UpgradeKeeper.SetUpgradeHandler("my-plan", func(ctx sdk.Context, plan upgradetypes.Plan, fromVM module.VersionMap) (module.VersionMap, error) {
+ // ...
+
+ // Set foo's version to the latest ConsensusVersion in the VersionMap.
+ // This will skip running InitGenesis on Foo
+ fromVM[foo.ModuleName] = foo.AppModule{}.ConsensusVersion()
+
+ return app.mm.RunMigrations(ctx, cfg, fromVM)
+})
+```
+
+### Overwriting Genesis Functions
+
+The Cosmos SDK offers modules that the application developer can import in their app. These modules often have an `InitGenesis` function already defined.
+
+You can write your own `InitGenesis` function for an imported module. To do this, manually trigger your custom genesis function in the upgrade handler.
+
+:::warning
+You MUST manually set the consensus version in the version map passed to the `UpgradeHandler` function. Without this, the SDK will run the Module's existing `InitGenesis` code even if you triggered your custom function in the `UpgradeHandler`.
+:::
+
+```go
+import foo "github.com/my/module/foo"
+
+app.UpgradeKeeper.SetUpgradeHandler("my-plan", func(ctx sdk.Context, plan upgradetypes.Plan, fromVM module.VersionMap) (module.VersionMap, error) {
+
+ // Register the consensus version in the version map
+ // to avoid the SDK from triggering the default
+ // InitGenesis function.
+ fromVM["foo"] = foo.AppModule{}.ConsensusVersion()
+
+ // Run custom InitGenesis for foo
+ app.mm["foo"].InitGenesis(ctx, app.appCodec, myCustomGenesisState)
+
+ return app.mm.RunMigrations(ctx, cfg, fromVM)
+})
+```
+
+## Syncing a Full Node to an Upgraded Blockchain
+
+You can sync a full node to an existing blockchain which has been upgraded by using Cosmovisor.
+
+To successfully sync, you must start with the initial binary that the blockchain started with at genesis. If all software upgrade plans contain binary instructions, then you can run Cosmovisor with the auto-download option to automatically handle downloading and switching to the binaries associated with each sequential upgrade. Otherwise, you need to manually provide all binaries to Cosmovisor.
+
+To learn more about Cosmovisor, see the [Cosmovisor Quick Start](../../../../tools/cosmovisor/README.md).
diff --git a/copy-of-sdk-docs/learn/advanced/16-config.md b/copy-of-sdk-docs/learn/advanced/16-config.md
new file mode 100644
index 00000000..03aa55a2
--- /dev/null
+++ b/copy-of-sdk-docs/learn/advanced/16-config.md
@@ -0,0 +1,24 @@
+---
+sidebar_position: 1
+---
+
+# Configuration
+
+This documentation refers to the app.toml. If you'd like to read about the config.toml, please visit the [CometBFT docs](https://docs.cometbft.com/v0.37/).
+
+
+```python reference
+https://github.com/cosmos/cosmos-sdk/blob/main/tools/confix/data/v0.47-app.toml
+```
+
+## inter-block-cache
+
+If enabled, this feature will consume more RAM than a normal node.
+
+## iavl-cache-size
+
+Using this feature will increase RAM consumption.
+
+## iavl-lazy-loading
+
+This feature is meant for archive nodes, allowing them to have a faster start-up time.
diff --git a/copy-of-sdk-docs/learn/advanced/17-autocli.md b/copy-of-sdk-docs/learn/advanced/17-autocli.md
new file mode 100644
index 00000000..41688309
--- /dev/null
+++ b/copy-of-sdk-docs/learn/advanced/17-autocli.md
@@ -0,0 +1,258 @@
+---
+sidebar_position: 1
+---
+
+# AutoCLI
+
+:::note Synopsis
+This document details how to build CLI and REST interfaces for a module. Examples from various Cosmos SDK modules are included.
+:::
+
+:::note Pre-requisite Readings
+
+* [CLI](https://docs.cosmos.network/main/core/cli)
+
+:::
+
+The `autocli` (also known as `client/v2`) package is a [Go library](https://pkg.go.dev/cosmossdk.io/client/v2/autocli) for generating CLI (command line interface) interfaces for Cosmos SDK-based applications. It provides a simple way to add CLI commands to your application by generating them automatically based on your gRPC service definitions. Autocli generates CLI commands and flags directly from your protobuf messages, including options, input parameters, and output parameters. This means that you can easily add a CLI interface to your application without having to manually create and manage commands.
+
+## Overview
+
+`autocli` generates CLI commands and flags for each method defined in your gRPC service. By default, it generates commands for each gRPC service. The commands are named based on the name of the service method.
+
+For example, given the following protobuf definition for a service:
+
+```protobuf
+service MyService {
+ rpc MyMethod(MyRequest) returns (MyResponse) {}
+}
+```
+
+Here, `autocli` would generate a command named `my-method` for the `MyMethod` method. The command will have flags for each field in the `MyRequest` message.
+
+It is possible to customize the generation of transactions and queries by defining options for each service.
+
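+The method-name-to-command-name convention can be sketched as a CamelCase-to-kebab-case conversion (an illustrative helper, not the actual autocli code):
+
+```go
+package main
+
+import (
+    "fmt"
+    "strings"
+    "unicode"
+)
+
+// kebabCase converts a CamelCase gRPC method name to the kebab-case
+// command name style shown above, e.g. "MyMethod" -> "my-method".
+func kebabCase(name string) string {
+    var b strings.Builder
+    for i, r := range name {
+        if unicode.IsUpper(r) {
+            if i > 0 {
+                b.WriteByte('-')
+            }
+            b.WriteRune(unicode.ToLower(r))
+        } else {
+            b.WriteRune(r)
+        }
+    }
+    return b.String()
+}
+
+func main() {
+    fmt.Println(kebabCase("MyMethod")) // my-method
+}
+```
+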
+## Application Wiring
+
+Here are the steps to use AutoCLI:
+
+1. Ensure your app's modules implement the `appmodule.AppModule` interface.
+2. (optional) Configure how `autocli` command generation behaves by implementing the `func (am AppModule) AutoCLIOptions() *autocliv1.ModuleOptions` method on the module.
+3. Use the `autocli.AppOptions` struct to specify the modules you defined. If you are using `depinject`, it can automatically create an instance of `autocli.AppOptions` based on your app's configuration.
+4. Use the `EnhanceRootCommand()` method provided by `autocli` to add the CLI commands for the specified modules to your root command.
+
+:::tip
+AutoCLI is additive only, meaning _enhancing_ the root command will only add subcommands that are not already registered. This means that you can use AutoCLI alongside other custom commands within your app.
+:::
+
+Here's an example of how to use `autocli` in your app:
+
+``` go
+// Define your app's modules
+testModules := map[string]appmodule.AppModule{
+ "testModule": &TestModule{},
+}
+
+// Define the autocli AppOptions
+autoCliOpts := autocli.AppOptions{
+ Modules: testModules,
+}
+
+// Create the root command
+rootCmd := &cobra.Command{
+ Use: "app",
+}
+
+if err := autoCliOpts.EnhanceRootCommand(rootCmd); err != nil {
+ return err
+}
+
+// Run the root command
+if err := rootCmd.Execute(); err != nil {
+ return err
+}
+```
+
+### Keyring
+
+`autocli` uses a keyring for resolving key names and signing transactions.
+
+:::tip
+AutoCLI provides a better UX than the normal CLI, as it allows resolving key names directly from the keyring in all transactions and commands.
+
+```sh
+ q bank balances alice
+ tx bank send alice bob 1000denom
+```
+
+:::
+
+The keyring used for resolving names and signing transactions is provided via the `client.Context`.
+The keyring is then converted to the `client/v2/autocli/keyring` interface.
+If no keyring is provided, the `autocli` generated command will not be able to sign transactions, but will still be able to query the chain.
+
+:::tip
+The Cosmos SDK keyring implements the `client/v2/autocli/keyring` interface, thanks to the following wrapper:
+
+```go
+keyring.NewAutoCLIKeyring(kb)
+```
+
+:::
+
+## Signing
+
+`autocli` supports signing transactions with the keyring.
+The [`cosmos.msg.v1.signer` protobuf annotation](https://docs.cosmos.network/main/build/building-modules/protobuf-annotations) defines the signer field of the message.
+This field is automatically filled when using the `--from` flag or defining the signer as a positional argument.
+
+:::warning
+AutoCLI currently supports only one signer per transaction.
+:::
+
+## Module wiring & Customization
+
+The `AutoCLIOptions()` method on your module allows you to specify custom commands, sub-commands or flags for each service, as if it were a `cobra.Command` instance, within the `RpcCommandOptions` struct. Defining such options customizes the behavior of the `autocli` command generation, which by default generates a command for each method in your gRPC service.
+
+```go
+*autocliv1.RpcCommandOptions{
+ RpcMethod: "Params", // The name of the gRPC service method
+ Use: "params", // Command usage that is displayed in the help
+ Short: "Query the parameters of the governance process", // Short description of the command
+ Long: "Query the parameters of the governance process. Specify specific param types (voting|tallying|deposit) to filter results.", // Long description of the command
+ PositionalArgs: []*autocliv1.PositionalArgDescriptor{
+ {ProtoField: "params_type", Optional: true}, // Transform a flag into a positional argument
+ },
+}
+```
+
+:::tip
+AutoCLI can wrap any tx in a gov proposal by simply setting the `GovProposal` field to `true` in the `autocli.RpcCommandOptions` struct.
+Users can however pass the `--no-proposal` flag to disable the proposal creation (which is useful if the authority isn't the gov module on a chain).
+:::
+
+### Specifying Subcommands
+
+By default, `autocli` generates a command for each method in your gRPC service. However, you can specify subcommands to group related commands together. To specify subcommands, use the `autocliv1.ServiceCommandDescriptor` struct.
+
+This example shows how to use the `autocliv1.ServiceCommandDescriptor` struct to group related commands together and specify subcommands in your gRPC service by defining an instance of `autocliv1.ModuleOptions` in your `autocli.go`.
+
+```go reference
+https://github.com/cosmos/cosmos-sdk/blob/v0.50.0-beta.0/x/gov/autocli.go#L94-L97
+```
+
+### Positional Arguments
+
+By default, `autocli` generates a flag for each field in your protobuf message. However, you can choose to use positional arguments instead of flags for certain fields.
+
+To add positional arguments to a command, use the `autocliv1.PositionalArgDescriptor` struct, as seen in the example below. Specify the `ProtoField` parameter, which is the name of the protobuf field that should be used as the positional argument. In addition, if the parameter is a variable-length argument, you can specify the `Varargs` parameter as `true`. This can only be applied to the last positional parameter, and the `ProtoField` must be a repeated field.
+
+Here's an example of how to define a positional argument for the `Account` method of the `auth` service:
+
+```go reference
+https://github.com/cosmos/cosmos-sdk/blob/v0.50.0-beta.0/x/auth/autocli.go#L25-L30
+```
+
+Then the command can be used as follows, instead of having to specify the `--address` flag:
+
+```bash
+ query auth account cosmos1abcd...xyz
+```
+
+#### Flattened Fields in Positional Arguments
+
+AutoCLI also supports flattening nested message fields as positional arguments. This means you can access nested fields
+using dot notation in the `ProtoField` parameter. This is particularly useful when you want to directly set nested
+message fields as positional arguments.
+
+For example, if you have a nested message structure like this:
+
+```protobuf
+message Permissions {
+ string level = 1;
+ repeated string limit_type_urls = 2;
+}
+
+message MsgAuthorizeCircuitBreaker {
+ string grantee = 1;
+ Permissions permissions = 2;
+}
+```
+
+You can flatten the fields in your AutoCLI configuration:
+
+```go
+{
+ RpcMethod: "AuthorizeCircuitBreaker",
+ Use: "authorize ",
+ PositionalArgs: []*autocliv1.PositionalArgDescriptor{
+ {ProtoField: "grantee"},
+ {ProtoField: "permissions.level"},
+ {ProtoField: "permissions.limit_type_urls"},
+ },
+}
+```
+
+This allows users to provide values for nested fields directly as positional arguments:
+
+```bash
+ tx circuit authorize cosmos1... super-admin "/cosmos.bank.v1beta1.MsgSend,/cosmos.bank.v1beta1.MsgMultiSend"
+```
+
+Instead of having to provide a complex JSON structure for nested fields, flattening makes the CLI more user-friendly by allowing direct access to nested fields.
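The dot-notation lookup can be pictured as a walk down the nested message. The sketch below models the message as a map for illustration only (hypothetical helper `setFlattened`; the real AutoCLI operates on protobuf reflection):

```go
package main

import (
	"fmt"
	"strings"
)

// setFlattened writes a positional-argument value into a nested message
// (modeled here as a map) using the dot notation that AutoCLI's
// ProtoField parameter accepts, e.g. "permissions.level".
func setFlattened(msg map[string]any, protoField, value string) {
	parts := strings.Split(protoField, ".")
	// Descend into (or create) each intermediate message.
	for _, p := range parts[:len(parts)-1] {
		child, ok := msg[p].(map[string]any)
		if !ok {
			child = map[string]any{}
			msg[p] = child
		}
		msg = child
	}
	// Set the leaf field.
	msg[parts[len(parts)-1]] = value
}

func main() {
	msg := map[string]any{}
	// Positional args in declaration order: grantee, then the nested field.
	setFlattened(msg, "grantee", "cosmos1...")
	setFlattened(msg, "permissions.level", "super-admin")
	fmt.Println(msg["permissions"].(map[string]any)["level"]) // super-admin
}
```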
+
+#### Customizing Flag Names
+
+By default, `autocli` generates flag names based on the names of the fields in your protobuf message. However, you can customize the flag names by providing `FlagOptions`. This parameter allows you to specify custom names for flags based on the names of the message fields.
+
+For example, if you have a message with the fields `test` and `test1`, you can use the following naming options to customize the flags:
+
+```go
+autocliv1.RpcCommandOptions{
+ FlagOptions: map[string]*autocliv1.FlagOptions{
+ "test": { Name: "custom_name", },
+ "test1": { Name: "other_name", },
+ },
+}
+```
+
+`FlagOptions` are defined, like subcommands, in the `AutoCLIOptions()` method on your module.
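The override is essentially a lookup with a fallback to the field name. A minimal sketch of that resolution logic (hypothetical `flagName` helper, with a simplified stand-in for the generated `FlagOptions` type):

```go
package main

import "fmt"

// FlagOptions mirrors the idea of autocli's per-field flag options:
// an optional override of the generated flag name (sketch only).
type FlagOptions struct{ Name string }

// flagName returns the custom name when an override exists,
// falling back to the protobuf field name otherwise.
func flagName(field string, opts map[string]*FlagOptions) string {
	if o, ok := opts[field]; ok && o.Name != "" {
		return o.Name
	}
	return field
}

func main() {
	opts := map[string]*FlagOptions{
		"test":  {Name: "custom_name"},
		"test1": {Name: "other_name"},
	}
	fmt.Println(flagName("test", opts))      // custom_name
	fmt.Println(flagName("untouched", opts)) // untouched
}
```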
+
+### Combining AutoCLI with Other Commands Within A Module
+
+AutoCLI can be used alongside other commands within a module. For example, the `gov` module uses AutoCLI to generate commands for the `query` subcommand, but also defines custom commands, such as the `proposer` subcommand.
+
+To enable this behavior, set the `EnhanceCustomCommand` field to `true` in `AutoCLIOptions()` for the command type (queries and/or transactions) you want to enhance.
+
+```go reference
+https://github.com/cosmos/cosmos-sdk/blob/fa4d87ef7e6d87aaccc94c337ffd2fe90fcb7a9d/x/gov/autocli.go#L98
+```
+
+If not set to `true`, `AutoCLI` will not generate commands for a module that already has commands registered (i.e. when `GetTxCmd()` or `GetQueryCmd()` are defined).
+
+### Skip a command
+
+AutoCLI automatically skips unsupported commands when [`cosmos_proto.method_added_in` protobuf annotation](https://docs.cosmos.network/main/build/building-modules/protobuf-annotations) is present.
+
+Additionally, a command can be manually skipped using the `autocliv1.RpcCommandOptions`:
+
+```go
+*autocliv1.RpcCommandOptions{
+ RpcMethod: "Params", // The name of the gRPC service method
+ Skip: true,
+}
+```
+
+### Use AutoCLI for non-module commands
+
+It is possible to use `AutoCLI` for non-module commands. The trick is to still implement the `appmodule.Module` interface and append it to the `appOptions.ModuleOptions` map.
+
+For example, here is how the SDK does it for `cometbft` gRPC commands:
+
+```go reference
+https://github.com/cosmos/cosmos-sdk/blob/client/v2.0.0-beta.1/client/grpc/cmtservice/autocli.go#L52-L71
+```
+
+## Summary
+
+`autocli` lets you generate a CLI for your Cosmos SDK-based applications without any cobra boilerplate. It generates CLI commands and flags from your protobuf messages, and provides many options for customizing the behavior of your CLI application.
diff --git a/copy-of-sdk-docs/learn/advanced/_category_.json b/copy-of-sdk-docs/learn/advanced/_category_.json
new file mode 100644
index 00000000..a49201e6
--- /dev/null
+++ b/copy-of-sdk-docs/learn/advanced/_category_.json
@@ -0,0 +1,5 @@
+{
+ "label": "Advanced",
+ "position": 3,
+ "link": null
+}
\ No newline at end of file
diff --git a/copy-of-sdk-docs/learn/advanced/baseapp_state-begin_block.png b/copy-of-sdk-docs/learn/advanced/baseapp_state-begin_block.png
new file mode 100644
index 00000000..745d4a5a
Binary files /dev/null and b/copy-of-sdk-docs/learn/advanced/baseapp_state-begin_block.png differ
diff --git a/copy-of-sdk-docs/learn/advanced/baseapp_state-checktx.png b/copy-of-sdk-docs/learn/advanced/baseapp_state-checktx.png
new file mode 100644
index 00000000..38b217ac
Binary files /dev/null and b/copy-of-sdk-docs/learn/advanced/baseapp_state-checktx.png differ
diff --git a/copy-of-sdk-docs/learn/advanced/baseapp_state-commit.png b/copy-of-sdk-docs/learn/advanced/baseapp_state-commit.png
new file mode 100644
index 00000000..b23c7312
Binary files /dev/null and b/copy-of-sdk-docs/learn/advanced/baseapp_state-commit.png differ
diff --git a/copy-of-sdk-docs/learn/advanced/baseapp_state-deliver_tx.png b/copy-of-sdk-docs/learn/advanced/baseapp_state-deliver_tx.png
new file mode 100644
index 00000000..f0a54b4e
Binary files /dev/null and b/copy-of-sdk-docs/learn/advanced/baseapp_state-deliver_tx.png differ
diff --git a/copy-of-sdk-docs/learn/advanced/baseapp_state-initchain.png b/copy-of-sdk-docs/learn/advanced/baseapp_state-initchain.png
new file mode 100644
index 00000000..167b4fad
Binary files /dev/null and b/copy-of-sdk-docs/learn/advanced/baseapp_state-initchain.png differ
diff --git a/copy-of-sdk-docs/learn/advanced/baseapp_state-prepareproposal.png b/copy-of-sdk-docs/learn/advanced/baseapp_state-prepareproposal.png
new file mode 100644
index 00000000..146e804b
Binary files /dev/null and b/copy-of-sdk-docs/learn/advanced/baseapp_state-prepareproposal.png differ
diff --git a/copy-of-sdk-docs/learn/advanced/baseapp_state-processproposal.png b/copy-of-sdk-docs/learn/advanced/baseapp_state-processproposal.png
new file mode 100644
index 00000000..fb601237
Binary files /dev/null and b/copy-of-sdk-docs/learn/advanced/baseapp_state-processproposal.png differ
diff --git a/copy-of-sdk-docs/learn/advanced/baseapp_state.png b/copy-of-sdk-docs/learn/advanced/baseapp_state.png
new file mode 100644
index 00000000..5cf54fdb
Binary files /dev/null and b/copy-of-sdk-docs/learn/advanced/baseapp_state.png differ
diff --git a/copy-of-sdk-docs/learn/advanced/blockprocessing-1.png b/copy-of-sdk-docs/learn/advanced/blockprocessing-1.png
new file mode 100644
index 00000000..d4167f33
Binary files /dev/null and b/copy-of-sdk-docs/learn/advanced/blockprocessing-1.png differ
diff --git a/copy-of-sdk-docs/learn/advanced/blockprocessing.excalidraw b/copy-of-sdk-docs/learn/advanced/blockprocessing.excalidraw
new file mode 100644
index 00000000..84e2d5db
Binary files /dev/null and b/copy-of-sdk-docs/learn/advanced/blockprocessing.excalidraw differ
diff --git a/copy-of-sdk-docs/learn/beginner/00-app-anatomy.md b/copy-of-sdk-docs/learn/beginner/00-app-anatomy.md
new file mode 100644
index 00000000..988c7242
--- /dev/null
+++ b/copy-of-sdk-docs/learn/beginner/00-app-anatomy.md
@@ -0,0 +1,279 @@
+---
+sidebar_position: 1
+---
+
+# Anatomy of a Cosmos SDK Application
+
+:::note Synopsis
+This document describes the core parts of a Cosmos SDK application, represented throughout the document as a placeholder application named `app`.
+:::
+
+## Node Client
+
+The Daemon, or [Full-Node Client](../advanced/03-node.md), is the core process of a Cosmos SDK-based blockchain. Participants in the network run this process to initialize their state-machine, connect with other full-nodes, and update their state-machine as new blocks come in.
+
+```text
+ ^ +-------------------------------+ ^
+ | | | |
+ | | State-machine = Application | |
+ | | | | Built with Cosmos SDK
+ | | ^ + | |
+ | +----------- | ABCI | ----------+ v
+ | | + v | ^
+ | | | |
+Blockchain Node | | Consensus | |
+ | | | |
+ | +-------------------------------+ | CometBFT
+ | | | |
+ | | Networking | |
+ | | | |
+ v +-------------------------------+ v
+```
+
+The blockchain full-node presents itself as a binary, generally suffixed by `-d` for "daemon" (e.g. `appd` for `app` or `gaiad` for `gaia`). This binary is built by running a simple [`main.go`](../advanced/03-node.md#main-function) function placed in `./cmd/appd/`. This operation usually happens through the [Makefile](#dependencies-and-makefile).
+
+Once the main binary is built, the node can be started by running the [`start` command](../advanced/03-node.md#start-command). This command function primarily does three things:
+
+1. Create an instance of the state-machine defined in [`app.go`](#core-application-file).
+2. Initialize the state-machine with the latest known state, extracted from the `db` stored in the `~/.app/data` folder. At this point, the state-machine is at height `appBlockHeight`.
+3. Create and start a new CometBFT instance. Among other things, the node performs a handshake with its peers. It gets the latest `blockHeight` from them and replays blocks to sync to this height if it is greater than the local `appBlockHeight`. If the node starts from genesis, CometBFT sends an `InitChain` message via the ABCI to the `app`, which triggers the [`InitChainer`](#initchainer).
+
+:::note
+When starting a CometBFT instance, the genesis file is considered height `0`, and the state within the genesis file is committed at block height `1`. When querying the state of the node, querying block height `0` will return an error.
+:::
+
+## Core Application File
+
+In general, the core of the state-machine is defined in a file called `app.go`. This file mainly contains the **type definition of the application** and functions to **create and initialize it**.
+
+### Type Definition of the Application
+
+The first thing defined in `app.go` is the `type` of the application. It is generally comprised of the following parts:
+
+* **Embedding [runtime.App](../../build/building-apps/00-runtime.md)** The runtime package manages the application's core components and modules through dependency injection. It provides declarative configuration for module management, state storage, and ABCI handling.
+ * `Runtime` wraps `BaseApp`, meaning when a transaction is relayed by CometBFT to the application, `app` uses `runtime`'s methods to route them to the appropriate module. `BaseApp` implements all the [ABCI methods](https://docs.cometbft.com/v0.38/spec/abci/) and the [routing logic](../advanced/00-baseapp.md#service-routers).
+ * It automatically configures the **[module manager](../../build/building-modules/01-module-manager.md#manager)** based on the app wiring configuration. The module manager facilitates operations related to these modules, like registering their [`Msg` service](../../build/building-modules/03-msg-services.md) and [gRPC `Query` service](#grpc-query-services), or setting the order of execution between modules for various functions like [`InitChainer`](#initchainer), [`PreBlocker`](#preblocker) and [`BeginBlocker` and `EndBlocker`](#beginblocker-and-endblocker).
+* [**An App Wiring configuration file**](../../build/building-apps/00-runtime.md) The app wiring configuration file contains the list of application's modules that `runtime` must instantiate. The instantiation of the modules is done using `depinject`. It also contains the order in which all modules' `InitGenesis` and `Pre/Begin/EndBlocker` methods should be executed.
+* **A reference to an [`appCodec`](../advanced/05-encoding.md).** The application's `appCodec` is used to serialize and deserialize data structures in order to store them, as stores can only persist `[]bytes`. The default codec is [Protocol Buffers](../advanced/05-encoding.md).
+* **A reference to a [`legacyAmino`](../advanced/05-encoding.md) codec.** Some parts of the Cosmos SDK have not been migrated to use the `appCodec` above, and are still hardcoded to use Amino. Other parts explicitly use Amino for backwards compatibility. For these reasons, the application still holds a reference to the legacy Amino codec. Please note that the Amino codec will be removed from the SDK in the upcoming releases.
+
+See an example of application type definition from `simapp`, the Cosmos SDK's own app used for demo and testing purposes:
+
+```go reference
+https://github.com/cosmos/cosmos-sdk/blob/v0.53.0/simapp/app_di.go#L57-L90
+```
+
+### Constructor Function
+
+Also defined in `app.go` is the constructor function, which constructs a new application of the type defined in the preceding section. The function must fulfill the `AppCreator` signature in order to be used in the [`start` command](../advanced/03-node.md#start-command) of the application's daemon command.
+
+```go reference
+https://github.com/cosmos/cosmos-sdk/blob/v0.53.0/server/types/app.go#L67-L69
+```
+
+Here are the main actions performed by this function:
+
+* Instantiate a new [`codec`](../advanced/05-encoding.md) and initialize the `codec` of each of the application's modules using the [basic manager](../../build/building-modules/01-module-manager.md#basicmanager).
+* Instantiate a new application with a reference to a `baseapp` instance, a codec, and all the appropriate store keys.
+* Instantiate all the [`keeper`](#keeper) objects defined in the application's `type` using the `NewKeeper` function of each of the application's modules. Note that keepers must be instantiated in the correct order, as the `NewKeeper` of one module might require a reference to another module's `keeper`.
+* Instantiate the application's [module manager](../../build/building-modules/01-module-manager.md#manager) with the [`AppModule`](#application-module-interface) object of each of the application's modules.
+* With the module manager, initialize the application's [`Msg` services](../advanced/00-baseapp.md#msg-services), [gRPC `Query` services](../advanced/00-baseapp.md#grpc-query-services), [legacy `Msg` routes](../advanced/00-baseapp.md#routing), and [legacy query routes](../advanced/00-baseapp.md#query-routing). When a transaction is relayed to the application by CometBFT via the ABCI, it is routed to the appropriate module's [`Msg` service](#msg-services) using the routes defined here. Likewise, when a gRPC query request is received by the application, it is routed to the appropriate module's [`gRPC query service`](#grpc-query-services) using the gRPC routes defined here. The Cosmos SDK still supports legacy `Msg`s and legacy CometBFT queries, which are routed using the legacy `Msg` routes and the legacy query routes, respectively.
+* With the module manager, register the [application's modules' invariants](../../build/building-modules/07-invariants.md). Invariants are variables (e.g. total supply of a token) that are evaluated at the end of each block. The process of checking invariants is done via a special module called the [`InvariantsRegistry`](../../build/building-modules/07-invariants.md#invariant-registry). The value of the invariant should be equal to a predicted value defined in the module. Should the value be different than the predicted one, special logic defined in the invariant registry is triggered (usually the chain is halted). This is useful to make sure that no critical bug goes unnoticed, producing long-lasting effects that are hard to fix.
+* With the module manager, set the order of execution between the `InitGenesis`, `PreBlocker`, `BeginBlocker`, and `EndBlocker` functions of each of the [application's modules](#application-module-interface). Note that not all modules implement these functions.
+* Set the remaining application parameters:
+ * [`InitChainer`](#initchainer): used to initialize the application when it is first started.
+ * [`PreBlocker`](#preblocker): called before BeginBlock.
+ * [`BeginBlocker`, `EndBlocker`](#beginblocker-and-endblocker): called at the beginning and at the end of every block.
+ * [`anteHandler`](../advanced/00-baseapp.md#antehandler): used to handle fees and signature verification.
+* Mount the stores.
+* Return the application.
+
+Note that the constructor function only creates an instance of the app, while the actual state is either carried over from the `~/.app/data` folder if the node is restarted, or generated from the genesis file if the node is started for the first time.
+
+See an example of application constructor from `simapp`:
+
+```go reference
+https://github.com/cosmos/cosmos-sdk/blob/v0.53.0/simapp/app.go#L190-L708
+```
+
+### InitChainer
+
+The `InitChainer` is a function that initializes the state of the application from a genesis file (i.e. token balances of genesis accounts). It is called when the application receives the `InitChain` message from the CometBFT engine, which happens when the node is started at `appBlockHeight == 0` (i.e. on genesis). The application must set the `InitChainer` in its [constructor](#constructor-function) via the [`SetInitChainer`](https://pkg.go.dev/github.com/cosmos/cosmos-sdk/baseapp#BaseApp.SetInitChainer) method.
+
+In general, the `InitChainer` is mostly composed of the [`InitGenesis`](../../build/building-modules/08-genesis.md#initgenesis) function of each of the application's modules. This is done by calling the `InitGenesis` function of the module manager, which in turn calls the `InitGenesis` function of each of the modules it contains. Note that the order in which the modules' `InitGenesis` functions must be called has to be set in the module manager using the [module manager's](../../build/building-modules/01-module-manager.md) `SetOrderInitGenesis` method. This is done in the [application's constructor](#constructor-function), and the `SetOrderInitGenesis` has to be called before the `SetInitChainer`.
+
+See an example of an `InitChainer` from `simapp`:
+
+```go reference
+https://github.com/cosmos/cosmos-sdk/blob/v0.53.0/simapp/app.go#L765-L773
+```
+
+### PreBlocker
+
+There are two semantics around the new lifecycle method:
+
+* It runs before the `BeginBlocker` of all modules
+* It can modify consensus parameters in storage, and signal the caller through the return value.
+
+When it returns `ConsensusParamsChanged=true`, the caller must refresh the consensus parameter in the finalize context:
+
+```go
+app.finalizeBlockState.ctx = app.finalizeBlockState.ctx.WithConsensusParams(app.GetConsensusParams())
+```
+
+The new ctx must be passed to all the other lifecycle methods.
+
+### BeginBlocker and EndBlocker
+
+The Cosmos SDK offers developers the possibility to implement automatic execution of code as part of their application. This is implemented through two functions called `BeginBlocker` and `EndBlocker`. They are called when the application receives the `FinalizeBlock` messages from the CometBFT consensus engine, which happens respectively at the beginning and at the end of each block. The application must set the `BeginBlocker` and `EndBlocker` in its [constructor](#constructor-function) via the [`SetBeginBlocker`](https://pkg.go.dev/github.com/cosmos/cosmos-sdk/baseapp#BaseApp.SetBeginBlocker) and [`SetEndBlocker`](https://pkg.go.dev/github.com/cosmos/cosmos-sdk/baseapp#BaseApp.SetEndBlocker) methods.
+
+In general, the `BeginBlocker` and `EndBlocker` functions are mostly composed of the [`BeginBlock` and `EndBlock`](../../build/building-modules/06-beginblock-endblock.md) functions of each of the application's modules. This is done by calling the `BeginBlock` and `EndBlock` functions of the module manager, which in turn calls the `BeginBlock` and `EndBlock` functions of each of the modules it contains. Note that the order in which the modules' `BeginBlock` and `EndBlock` functions must be called has to be set in the module manager using the `SetOrderBeginBlockers` and `SetOrderEndBlockers` methods, respectively. This is done via the [module manager](../../build/building-modules/01-module-manager.md) in the [application's constructor](#constructor-function), and the `SetOrderBeginBlockers` and `SetOrderEndBlockers` methods have to be called before the `SetBeginBlocker` and `SetEndBlocker` functions.
+
+As a sidenote, it is important to remember that application-specific blockchains are deterministic. Developers must be careful not to introduce non-determinism in `BeginBlocker` or `EndBlocker`, and must also be careful not to make them too computationally expensive, as [gas](./04-gas-fees.md) does not constrain the cost of `BeginBlocker` and `EndBlocker` execution.
+
+See an example of `BeginBlocker` and `EndBlocker` functions from `simapp`:
+
+```go reference
+https://github.com/cosmos/cosmos-sdk/blob/v0.53.0/simapp/app.go#L752-L759
+```
+
+### Register Codec
+
+The `EncodingConfig` structure is the last important part of the `app.go` file. The goal of this structure is to define the codecs that will be used throughout the app.
+
+```go reference
+https://github.com/cosmos/cosmos-sdk/blob/v0.53.0/simapp/params/encoding.go#L9-L16
+```
+
+Here are descriptions of what each of the four fields means:
+
+* `InterfaceRegistry`: The `InterfaceRegistry` is used by the Protobuf codec to handle interfaces that are encoded and decoded (we also say "unpacked") using [`google.protobuf.Any`](https://github.com/protocolbuffers/protobuf/blob/master/src/google/protobuf/any.proto). `Any` could be thought as a struct that contains a `type_url` (name of a concrete type implementing the interface) and a `value` (its encoded bytes). `InterfaceRegistry` provides a mechanism for registering interfaces and implementations that can be safely unpacked from `Any`. Each application module implements the `RegisterInterfaces` method that can be used to register the module's own interfaces and implementations.
+ * You can read more about `Any` in [ADR-019](../../build/architecture/adr-019-protobuf-state-encoding.md).
+ * To go more into details, the Cosmos SDK uses an implementation of the Protobuf specification called [`gogoprotobuf`](https://github.com/cosmos/gogoproto). By default, the [gogo protobuf implementation of `Any`](https://pkg.go.dev/github.com/cosmos/gogoproto/types) uses [global type registration](https://github.com/cosmos/gogoproto/blob/master/proto/properties.go#L540) to decode values packed in `Any` into concrete Go types. This introduces a vulnerability where any malicious module in the dependency tree could register a type with the global protobuf registry and cause it to be loaded and unmarshaled by a transaction that referenced it in the `type_url` field. For more information, please refer to [ADR-019](../../build/architecture/adr-019-protobuf-state-encoding.md).
+* `Codec`: The default codec used throughout the Cosmos SDK. It is composed of a `BinaryCodec` used to encode and decode state, and a `JSONCodec` used to output data to the users (for example, in the [CLI](#cli)). By default, the SDK uses Protobuf as `Codec`.
+* `TxConfig`: `TxConfig` defines an interface a client can utilize to generate an application-defined concrete transaction type. Currently, the SDK handles two transaction types: `SIGN_MODE_DIRECT` (which uses Protobuf binary as over-the-wire encoding) and `SIGN_MODE_LEGACY_AMINO_JSON` (which depends on Amino). Read more about transactions [here](../advanced/01-transactions.md).
+* `Amino`: Some legacy parts of the Cosmos SDK still use Amino for backwards-compatibility. Each module exposes a `RegisterLegacyAmino` method to register the module's specific types within Amino. This `Amino` codec should not be used by app developers anymore, and will be removed in future releases.
+
+An application should create its own encoding config.
+See an example of a `simappparams.EncodingConfig` from `simapp`:
+
+```go reference
+https://github.com/cosmos/cosmos-sdk/blob/v0.53.0/simapp/params/encoding.go#L11-L16
+```
+
+## Modules
+
+[Modules](../../build/building-modules/00-intro.md) are the heart and soul of Cosmos SDK applications. They can be considered as state-machines nested within the state-machine. When a transaction is relayed from the underlying CometBFT engine via the ABCI to the application, it is routed by [`baseapp`](../advanced/00-baseapp.md) to the appropriate module in order to be processed. This paradigm enables developers to easily build complex state-machines, as most of the modules they need often already exist. **For developers, most of the work involved in building a Cosmos SDK application revolves around building custom modules required by their application that do not exist yet, and integrating them with modules that do already exist into one coherent application**. In the application directory, the standard practice is to store modules in the `x/` folder (not to be confused with the Cosmos SDK's `x/` folder, which contains already-built modules).
+
+### Application Module Interface
+
+Modules must implement [interfaces](../../build/building-modules/01-module-manager.md#application-module-interfaces) defined in the Cosmos SDK, [`AppModuleBasic`](../../build/building-modules/01-module-manager.md#appmodulebasic) and [`AppModule`](../../build/building-modules/01-module-manager.md#appmodule). The former implements basic non-dependent elements of the module, such as the `codec`, while the latter handles the bulk of the module methods (including methods that require references to other modules' `keeper`s). Both the `AppModule` and `AppModuleBasic` types are, by convention, defined in a file called `module.go`.
+
+`AppModule` exposes a collection of useful methods on the module that facilitates the composition of modules into a coherent application. These methods are called from the [`module manager`](../../build/building-modules/01-module-manager.md#manager), which manages the application's collection of modules.
+
+### `Msg` Services
+
+Each application module defines two [Protobuf services](https://developers.google.com/protocol-buffers/docs/proto#services): one `Msg` service to handle messages, and one gRPC `Query` service to handle queries. If we consider the module as a state-machine, then a `Msg` service is a set of state transition RPC methods.
+Each Protobuf `Msg` service method is 1:1 related to a Protobuf request type, which must implement the `sdk.Msg` interface.
+Note that `sdk.Msg`s are bundled in [transactions](../advanced/01-transactions.md), and each transaction contains one or multiple messages.
+
+When a valid block of transactions is received by the full-node, CometBFT relays each one to the application via [`DeliverTx`](https://docs.cometbft.com/v0.37/spec/abci/abci++_app_requirements#specifics-of-responsedelivertx). Then, the application handles the transaction:
+
+1. Upon receiving the transaction, the application first unmarshals it from `[]byte`.
+2. Then, it verifies a few things about the transaction like [fee payment and signatures](./04-gas-fees.md#antehandler) before extracting the `Msg`(s) contained in the transaction.
+3. `sdk.Msg`s are encoded using Protobuf [`Any`s](#register-codec). By analyzing each `Any`'s `type_url`, baseapp's `msgServiceRouter` routes the `sdk.Msg` to the corresponding module's `Msg` service.
+4. If the message is successfully processed, the state is updated.
+
+For more details, see [transaction lifecycle](./01-tx-lifecycle.md).
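The routing step above (step 3) is essentially a lookup keyed by the `Any` envelope's `type_url`. A minimal sketch of that idea, assuming simplified handler and router types rather than baseapp's real signatures:

```go
package main

import (
	"errors"
	"fmt"
)

// handler processes one decoded message; modeled as a plain func here.
type handler func(msg any) error

// msgServiceRouter sketches baseapp's routing idea: the type_url carried
// in the Any envelope selects the module's Msg service handler.
type msgServiceRouter struct{ routes map[string]handler }

// route dispatches msg to the handler registered for typeURL.
func (r *msgServiceRouter) route(typeURL string, msg any) error {
	h, ok := r.routes[typeURL]
	if !ok {
		return errors.New("unrecognized message type: " + typeURL)
	}
	return h(msg)
}

func main() {
	r := &msgServiceRouter{routes: map[string]handler{
		"/cosmos.bank.v1beta1.MsgSend": func(any) error { return nil },
	}}
	fmt.Println(r.route("/cosmos.bank.v1beta1.MsgSend", struct{}{}))
	fmt.Println(r.route("/unknown.Msg", struct{}{}))
}
```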
+
+Module developers create custom `Msg` services when they build their own module. The general practice is to define the `Msg` Protobuf service in a `tx.proto` file. For example, the `x/bank` module defines a service with two methods to transfer tokens:
+
+```protobuf reference
+https://github.com/cosmos/cosmos-sdk/blob/v0.53.0/proto/cosmos/bank/v1beta1/tx.proto#L13-L36
+```
+
+Service methods use the module's `keeper` in order to update the module state.
+
+Each module should also implement the `RegisterServices` method as part of the [`AppModule` interface](#application-module-interface). This method should call the `RegisterMsgServer` function provided by the generated Protobuf code.
+
+### gRPC `Query` Services
+
+gRPC `Query` services allow users to query the state using [gRPC](https://grpc.io). They are enabled by default, and can be configured under the `grpc.enable` and `grpc.address` fields inside [`app.toml`](../../user/run-node/01-run-node.md#configuring-the-node-using-apptoml-and-configtoml).
+
+gRPC `Query` services are defined in the module's Protobuf definition files, specifically inside `query.proto`. The `query.proto` definition file exposes a single `Query` [Protobuf service](https://developers.google.com/protocol-buffers/docs/proto#services). Each gRPC query endpoint corresponds to a service method, starting with the `rpc` keyword, inside the `Query` service.
+
+Protobuf generates a `QueryServer` interface for each module, containing all the service methods. A module's [`keeper`](#keeper) then needs to implement this `QueryServer` interface, by providing the concrete implementation of each service method. This concrete implementation is the handler of the corresponding gRPC query endpoint.
+
+Finally, each module should also implement the `RegisterServices` method as part of the [`AppModule` interface](#application-module-interface). This method should call the `RegisterQueryServer` function provided by the generated Protobuf code.
+
+### Keeper
+
+[`Keepers`](../../build/building-modules/06-keeper.md) are the gatekeepers of their module's store(s). To read or write in a module's store, it is mandatory to go through one of its `keeper`'s methods. This is ensured by the [object-capabilities](../advanced/10-ocap.md) model of the Cosmos SDK. Only objects that hold the key to a store can access it, and only the module's `keeper` should hold the key(s) to the module's store(s).
+
+`Keepers` are generally defined in a file called `keeper.go`. It contains the `keeper`'s type definition and methods.
+
+The `keeper` type definition generally consists of the following:
+
+* **Key(s)** to the module's store(s) in the multistore.
+* References to **other modules' `keeper`s**. Only needed if the `keeper` needs to access other modules' store(s) (either to read or write from them).
+* A reference to the application's **codec**. The `keeper` needs it to marshal structs before storing them, and to unmarshal them when it retrieves them, because stores only accept `[]byte` as values.
+
+Along with the type definition, the next important component of the `keeper.go` file is the `keeper`'s constructor function, `NewKeeper`. This function instantiates a new `keeper` of the type defined above, taking a `codec`, store `keys`, and potentially references to other modules' `keeper`s as parameters. The `NewKeeper` function is called from the [application's constructor](#constructor-function). The rest of the file defines the `keeper`'s methods, which are primarily getters and setters.
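+
+A minimal sketch of this layout, with the SDK types stubbed out (in a real module, the store key and codec types come from the Cosmos SDK; the names below are simplified stand-ins):
+
+```go
+package main
+
+import "fmt"
+
+// Stand-ins for the SDK's store key and codec types.
+type StoreKey string
+
+type Codec interface{ Name() string }
+
+type jsonCodec struct{}
+
+func (jsonCodec) Name() string { return "json" }
+
+// Keeper follows the shape described above: a store key, a codec, and
+// (optionally) references to other modules' keepers.
+type Keeper struct {
+	storeKey StoreKey
+	cdc      Codec
+	// bankKeeper BankKeeper // only if this module reads/writes x/bank state
+}
+
+// NewKeeper is the constructor called from the application's constructor.
+func NewKeeper(cdc Codec, key StoreKey) Keeper {
+	return Keeper{storeKey: key, cdc: cdc}
+}
+
+func main() {
+	k := NewKeeper(jsonCodec{}, StoreKey("mymodule"))
+	fmt.Println(k.storeKey, k.cdc.Name())
+}
+```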
+
+### Command-Line, gRPC Services and REST Interfaces
+
+Each module defines command-line commands, gRPC services, and REST routes to be exposed to the end-user via the [application's interfaces](#application-interfaces). This enables end-users to create messages of the types defined in the module, or to query the subset of the state managed by the module.
+
+#### CLI
+
+Generally, the [commands related to a module](../../build/building-modules/09-module-interfaces.md#cli) are defined in a folder called `client/cli` in the module's folder. The CLI divides commands into two categories, transactions and queries, defined in `client/cli/tx.go` and `client/cli/query.go`, respectively. Both categories of commands are built on top of the [Cobra Library](https://github.com/spf13/cobra):
+
+* Transaction commands let users generate new transactions so that they can be included in a block and eventually update the state. One command should be created for each [message type](#message-types) defined in the module. The command calls the constructor of the message with the parameters provided by the end-user, and wraps it into a transaction. The Cosmos SDK handles signing and the addition of other transaction metadata.
+* Queries let users query the subset of the state defined by the module. Query commands forward queries to the [application's query router](../advanced/00-baseapp.md#query-routing), which routes them to the appropriate [querier](#querier) using the `queryRoute` parameter supplied.
+
+#### gRPC
+
+[gRPC](https://grpc.io) is a modern open-source high performance RPC framework that has support in multiple languages. It is the recommended way for external clients (such as wallets, browsers and other backend services) to interact with a node.
+
+Each module can expose gRPC endpoints called [service methods](https://grpc.io/docs/what-is-grpc/core-concepts/#service-definition), which are defined in the [module's Protobuf `query.proto` file](#grpc-query-services). A service method is defined by its name, input arguments, and output response. The module then needs to perform the following actions:
+
+* Define a `RegisterGRPCGatewayRoutes` method on `AppModuleBasic` to wire the client gRPC requests to the correct handler inside the module.
+* For each service method, define a corresponding handler. The handler implements the core logic necessary to serve the gRPC request, and is located in the `keeper/grpc_query.go` file.
+
+#### gRPC-gateway REST Endpoints
+
+Some external clients may not wish to use gRPC. In this case, the Cosmos SDK provides a gRPC gateway service, which exposes each gRPC service as a corresponding REST endpoint. Please refer to the [grpc-gateway](https://grpc-ecosystem.github.io/grpc-gateway/) documentation to learn more.
+
+The REST endpoints are defined in the Protobuf files, along with the gRPC services, using Protobuf annotations. Modules that want to expose REST queries should add `google.api.http` annotations to their `rpc` methods. By default, all REST endpoints defined in the SDK have a URL starting with the `/cosmos/` prefix.
+
+The Cosmos SDK also provides a development endpoint to generate [Swagger](https://swagger.io/) definition files for these REST endpoints. This endpoint can be enabled inside the [`app.toml`](../../user/run-node/01-run-node.md#configuring-the-node-using-apptoml-and-configtoml) config file, under the `api.swagger` key.
+
+## Application Interface
+
+[Interfaces](#command-line-grpc-services-and-rest-interfaces) let end-users interact with full-node clients. This means querying data from the full-node or creating and sending new transactions to be relayed by the full-node and eventually included in a block.
+
+The main interface is the [Command-Line Interface](../advanced/07-cli.md). The CLI of a Cosmos SDK application is built by aggregating [CLI commands](#cli) defined in each of the modules used by the application. The CLI of an application is the same as the daemon (e.g. `appd`), and is defined in a file called `appd/main.go`. The file contains the following:
+
+* **A `main()` function**, which is executed to build the `appd` interface client. This function prepares each command and adds them to the `rootCmd` before building them. At the root of `appd`, the function adds generic commands like `status`, `keys`, and `config`, query commands, tx commands, and `rest-server`.
+* **Query commands**, which are added by calling the `queryCmd` function. This function returns a Cobra command that contains the query commands defined in each of the application's modules (passed as an array of `sdk.ModuleClients` from the `main()` function), as well as some other lower-level query commands such as block or validator queries. Query commands are invoked with `appd query [query]`.
+* **Transaction commands**, which are added by calling the `txCmd` function. Similar to `queryCmd`, the function returns a Cobra command that contains the tx commands defined in each of the application's modules, as well as lower-level tx commands like transaction signing or broadcasting. Tx commands are invoked with `appd tx [tx]`.
+
+See an example of an application's main command-line file from the [Cosmos Hub](https://github.com/cosmos/gaia).
+
+```go reference
+https://github.com/cosmos/gaia/blob/26ae7c2/cmd/gaiad/cmd/root.go#L39-L80
+```
+
+## Dependencies and Makefile
+
+This section is optional, as developers are free to choose their dependency manager and project building method. That said, the most widely used mechanism for dependency version control is [`go.mod`](https://github.com/golang/go/wiki/Modules). It ensures each of the libraries used throughout the application is imported with the correct version.
+
+The following is the `go.mod` of the [Cosmos Hub](https://github.com/cosmos/gaia), provided as an example.
+
+```go reference
+https://github.com/cosmos/gaia/blob/26ae7c2/go.mod#L1-L28
+```
+
+For building the application, a [Makefile](https://en.wikipedia.org/wiki/Makefile) is generally used. The Makefile primarily ensures that the dependencies listed in `go.mod` are resolved before building the two entrypoints to the application, the [`Node Client`](#node-client) and the [`Application Interface`](#application-interface).
+
+Here is an example of the [Cosmos Hub Makefile](https://github.com/cosmos/gaia/blob/main/Makefile).
diff --git a/copy-of-sdk-docs/learn/beginner/01-tx-lifecycle.md b/copy-of-sdk-docs/learn/beginner/01-tx-lifecycle.md
new file mode 100644
index 00000000..b004b355
--- /dev/null
+++ b/copy-of-sdk-docs/learn/beginner/01-tx-lifecycle.md
@@ -0,0 +1,284 @@
+---
+sidebar_position: 1
+---
+
+# Transaction Lifecycle
+
+:::note Synopsis
+This document describes the lifecycle of a transaction from creation to committed state changes. Transaction definition is described in a [different doc](../advanced/01-transactions.md). The transaction is referred to as `Tx`.
+:::
+
+:::note Pre-requisite Readings
+
+* [Anatomy of a Cosmos SDK Application](./00-app-anatomy.md)
+:::
+
+## Creation
+
+### Transaction Creation
+
+One of the main application interfaces is the command-line interface. The transaction `Tx` can be created by the user inputting a command in the following format from the [command-line](../advanced/07-cli.md), providing the type of transaction in `[command]`, arguments in `[args]`, and configurations such as gas prices in `[flags]`:
+
+```bash
+[appname] tx [command] [args] [flags]
+```
+
+This command automatically **creates** the transaction, **signs** it using the account's private key, and **broadcasts** it to the specified peer node.
+
+There are several required and optional flags for transaction creation. The `--from` flag specifies which [account](./03-accounts.md) the transaction is originating from. For example, if the transaction is sending coins, the funds are drawn from the specified `from` address.
+
+#### Gas and Fees
+
+Additionally, there are several [flags](../advanced/07-cli.md) users can use to indicate how much they are willing to pay in [fees](./04-gas-fees.md):
+
+* `--gas` refers to how much [gas](./04-gas-fees.md), which represents computational resources, `Tx` consumes. Gas is dependent on the transaction and is not precisely calculated until execution, but can be estimated by providing `auto` as the value for `--gas`.
+* `--gas-adjustment` (optional) can be used to scale `gas` up in order to avoid underestimating. For example, users can specify their gas adjustment as 1.5 to use 1.5 times the estimated gas.
+* `--gas-prices` specifies how much the user is willing to pay per unit of gas, which can be one or multiple denominations of tokens. For example, `--gas-prices=0.025uatom,0.025upho` means the user is willing to pay 0.025uatom AND 0.025upho per unit of gas.
+* `--fees` specifies how much in fees the user is willing to pay in total.
+* `--timeout-height` specifies a block timeout height to prevent the tx from being committed past a certain height.
+
+The ultimate value of the fees paid is equal to the gas multiplied by the gas prices. In other words, `fees = ceil(gas * gasPrices)`. Since fees can be calculated from gas prices and vice versa, users specify only one of the two.
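+
+As a worked example of `fees = ceil(gas * gasPrices)`, here is a sketch using integer arithmetic with the gas price written as a fraction (the SDK itself uses arbitrary-precision decimal types; this helper is only illustrative):
+
+```go
+package main
+
+import "fmt"
+
+// feeFor computes fees = ceil(gasLimit * gasPrice) where the gas price is
+// given as an exact fraction priceNum/priceDen (e.g. 0.025 = 25/1000),
+// avoiding floating-point rounding errors.
+func feeFor(gasLimit, priceNum, priceDen uint64) uint64 {
+	return (gasLimit*priceNum + priceDen - 1) / priceDen
+}
+
+func main() {
+	// 200000 gas at 0.025uatom per gas unit -> 5000uatom in fees.
+	fmt.Printf("%duatom\n", feeFor(200000, 25, 1000))
+}
+```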
+
+Later, validators decide whether to include the transaction in their block by comparing the given or calculated `gas-prices` to their local `min-gas-prices`. `Tx` is rejected if its `gas-prices` is not high enough, so users are incentivized to pay more.
+
+#### Unordered Transactions
+
+With Cosmos SDK v0.53.0, users may send unordered transactions to chains that have this feature enabled.
+The following flags allow a user to build an unordered transaction from the CLI.
+
+* `--unordered` specifies that this transaction should be unordered. (transaction sequence must be unset)
+* `--timeout-duration` specifies the amount of time the unordered transaction should be valid in the mempool. The transaction's unordered nonce will be set to the time of transaction creation + timeout duration.
+
+:::warning
+
+Unordered transactions MUST leave sequence values unset. When a transaction is both unordered and contains a non-zero sequence value,
+the transaction will be rejected. External services that operate on prior assumptions about transaction sequence values should be updated to handle unordered transactions.
+Services should be aware that when the transaction is unordered, the transaction sequence will always be zero.
+
+:::
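+
+The timeout rule above - the unordered nonce is the creation time plus `--timeout-duration` - amounts to a one-line computation, sketched here with a hypothetical helper name:
+
+```go
+package main
+
+import (
+	"fmt"
+	"time"
+)
+
+// unorderedNonce returns the timestamp used as the transaction's unordered
+// nonce: creation time plus the --timeout-duration value.
+func unorderedNonce(created time.Time, timeout time.Duration) time.Time {
+	return created.Add(timeout)
+}
+
+func main() {
+	created := time.Date(2024, 1, 1, 12, 0, 0, 0, time.UTC)
+	fmt.Println(unorderedNonce(created, 30*time.Minute))
+}
+```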
+
+#### CLI Example
+
+Users of the application `app` can enter the following command into their CLI to generate a transaction to send 1000uatom from a `senderAddress` to a `recipientAddress`. The command specifies how much gas they are willing to pay: an automatic estimate scaled up by 1.5 times, with a gas price of 0.025uatom per unit gas.
+
+```bash
+appd tx send <recipientAddress> 1000uatom --from <senderAddress> --gas auto --gas-adjustment 1.5 --gas-prices 0.025uatom
+```
+
+#### Other Transaction Creation Methods
+
+The command-line is an easy way to interact with an application, but `Tx` can also be created using a [gRPC or REST interface](../advanced/06-grpc_rest.md) or some other entry point defined by the application developer. From the user's perspective, the interaction depends on the web interface or wallet they are using (e.g. creating `Tx` using [Lunie.io](https://lunie.io/#/) and signing it with a Ledger Nano S).
+
+## Addition to Mempool
+
+Each full-node (running CometBFT) that receives a `Tx` sends an [ABCI message](https://docs.cometbft.com/v0.37/spec/p2p/legacy-docs/messages/),
+`CheckTx`, to the application layer to check for validity, and receives an `abci.CheckTxResponse`. If the `Tx` passes the checks, it is held in the node's
+[**Mempool**](https://docs.cometbft.com/v0.37/spec/p2p/legacy-docs/messages/mempool), an in-memory pool of transactions unique to each node, pending inclusion in a block - honest nodes discard a `Tx` if it is found to be invalid. Prior to consensus, nodes continuously check incoming transactions and gossip them to their peers.
+
+### Types of Checks
+
+The full-nodes perform stateless, then stateful checks on `Tx` during `CheckTx`, with the goal of
+identifying and rejecting invalid transactions as early as possible to avoid wasted computation.
+
+**_Stateless_** checks do not require nodes to access state - light clients or offline nodes can do
+them - and are thus less computationally expensive. Stateless checks include making sure addresses
+are not empty, enforcing nonnegative numbers, and other logic specified in the definitions.
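+
+A stateless check of this kind can be sketched as follows; the `MsgSend` type and `validateBasic` helper are simplified stand-ins, not the real `x/bank` definitions:
+
+```go
+package main
+
+import (
+	"errors"
+	"fmt"
+)
+
+// MsgSend is a toy message; the real x/bank MsgSend is a Protobuf type.
+type MsgSend struct {
+	FromAddress string
+	ToAddress   string
+	Amount      int64
+}
+
+// validateBasic performs only stateless checks: it never touches the
+// store, so even an offline node could run it.
+func validateBasic(m MsgSend) error {
+	if m.FromAddress == "" || m.ToAddress == "" {
+		return errors.New("address cannot be empty")
+	}
+	if m.Amount <= 0 {
+		return errors.New("amount must be positive")
+	}
+	return nil
+}
+
+func main() {
+	err := validateBasic(MsgSend{FromAddress: "cosmos1...", ToAddress: "", Amount: 10})
+	fmt.Println(err) // the empty recipient is rejected without any state access
+}
+```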
+
+**_Stateful_** checks validate transactions and messages based on a committed state. Examples
+include checking that the relevant values exist and can be transacted with, the address
+has sufficient funds, and the sender is authorized or has the correct ownership to transact.
+At any given moment, full-nodes typically have [multiple versions](../advanced/00-baseapp.md#state-updates)
+of the application's internal state for different purposes. For example, nodes execute state
+changes while in the process of verifying transactions, but still need a copy of the last committed
+state in order to answer queries - they should not respond using state with uncommitted changes.
+
+In order to verify a `Tx`, full-nodes call `CheckTx`, which includes both _stateless_ and _stateful_
+checks. Further validation happens later in the [`DeliverTx`](#delivertx) stage. `CheckTx` goes
+through several steps, beginning with decoding `Tx`.
+
+### Decoding
+
+When `Tx` is received by the application from the underlying consensus engine (e.g. CometBFT), it is still in its [encoded](../advanced/05-encoding.md) `[]byte` form and needs to be unmarshaled in order to be processed. Then, the [`runTx`](../advanced/00-baseapp.md#runtx-antehandler-runmsgs-posthandler) function is called to run in `runTxModeCheck` mode, meaning the function runs all checks but exits before executing messages and writing state changes.
+
+### ValidateBasic (deprecated)
+
+Messages ([`sdk.Msg`](../advanced/01-transactions.md#messages)) are extracted from transactions (`Tx`). The `ValidateBasic` method of the `sdk.Msg` interface, implemented by the module developer, is run on each message.
+To discard obviously invalid messages, the `BaseApp` type calls the `ValidateBasic` method very early in the processing of the message, during [`CheckTx`](../advanced/00-baseapp.md#checktx) and [`DeliverTx`](../advanced/00-baseapp.md#delivertx).
+`ValidateBasic` can include only **stateless** checks (the checks that do not require access to the state).
+
+:::warning
+The `ValidateBasic` method on messages has been deprecated in favor of validating messages directly in their respective [`Msg` services](../../build/building-modules/03-msg-services.md#Validation).
+
+Read [RFC 001](https://docs.cosmos.network/main/rfc/rfc-001-tx-validation) for more details.
+:::
+
+:::note
+`BaseApp` still calls `ValidateBasic` on messages that implement that method for backwards compatibility.
+:::
+
+#### Guideline
+
+`ValidateBasic` should not be used anymore. Message validation should be performed in the `Msg` service when [handling a message](../../build/building-modules/03-msg-services.md#Validation) in a module's Msg Server.
+
+### AnteHandler
+
+Although optional, `AnteHandler`s are in practice very often used to perform signature verification, gas calculation, fee deduction, and other core operations related to blockchain transactions.
+
+A copy of the cached context is provided to the `AnteHandler`, which performs limited checks specified for the transaction type. Using a copy allows the `AnteHandler` to do stateful checks for `Tx` without modifying the last committed state, and revert back to the original if the execution fails.
+
+For example, the [`auth`](https://github.com/cosmos/cosmos-sdk/blob/main/x/auth/README.md) module `AnteHandler` checks and increments sequence numbers, checks signatures and account numbers, and deducts fees from the first signer of the transaction - all state changes are made using the `checkState`.
+
+:::warning
+Ante handlers only run on a transaction. If a transaction embeds multiple messages (like some x/authz, x/gov transactions for instance), the ante handlers only have awareness of the outer message. Inner messages are mostly directly routed to the [message router](https://docs.cosmos.network/main/learn/advanced/baseapp#msg-service-router) and will skip the chain of ante handlers. Keep that in mind when designing your own ante handler.
+:::
+
+### Gas
+
+The [`Context`](../advanced/02-context.md), which keeps a `GasMeter` that tracks how much gas is used during the execution of `Tx`, is initialized. The user-provided amount of gas for `Tx` is known as `GasWanted`. If `GasConsumed`, the amount of gas consumed during execution, ever exceeds `GasWanted`, the execution stops and the changes made to the cached copy of the state are not committed. Otherwise, `CheckTx` sets `GasUsed` equal to `GasConsumed` and returns it in the result. After calculating the gas and fee values, validator-nodes check that the user-specified `gas-prices` is greater than their locally defined `min-gas-prices`.
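+
+The `GasWanted`/`GasConsumed` bookkeeping can be sketched as a tiny meter - a simplified stand-in for the SDK's `GasMeter`, kept only to show the rule that execution must stop once consumption exceeds the limit:
+
+```go
+package main
+
+import (
+	"errors"
+	"fmt"
+)
+
+// gasMeter tracks GasConsumed against a GasWanted limit.
+type gasMeter struct {
+	wanted   uint64
+	consumed uint64
+}
+
+// consume adds gas usage and fails once the limit is exceeded.
+func (g *gasMeter) consume(amount uint64) error {
+	g.consumed += amount
+	if g.consumed > g.wanted {
+		return errors.New("out of gas")
+	}
+	return nil
+}
+
+func main() {
+	m := &gasMeter{wanted: 100}
+	fmt.Println(m.consume(60)) // within the limit: no error
+	fmt.Println(m.consume(50)) // 110 > 100: out of gas
+}
+```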
+
+### Discard or Addition to Mempool
+
+If at any point during `CheckTx` the `Tx` fails, it is discarded and the transaction lifecycle ends
+there. Otherwise, if it passes `CheckTx` successfully, the default protocol is to relay it to peer
+nodes and add it to the Mempool so that the `Tx` becomes a candidate to be included in the next block.
+
+The **mempool** serves the purpose of keeping track of transactions seen by all full-nodes.
+Full-nodes keep a **mempool cache** of the last `mempool.cache_size` transactions they have seen, as a first line of
+defense to prevent replay attacks. Ideally, `mempool.cache_size` is large enough to encompass all
+of the transactions in the full mempool. If the mempool cache is too small to keep track of all
+the transactions, `CheckTx` is responsible for identifying and rejecting replayed transactions.
+
+Currently existing preventative measures include fees and a `sequence` (nonce) counter to distinguish
+replayed transactions from identical but valid ones. If an attacker tries to spam nodes with many
+copies of a `Tx`, full-nodes keeping a mempool cache reject all identical copies instead of running
+`CheckTx` on them. Even if the copies have incremented `sequence` numbers, attackers are
+disincentivized by the need to pay fees.
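+
+The cache-based replay defense described above can be sketched as a bounded, first-in-first-out set of transaction hashes - a toy model of the mempool cache, not CometBFT's actual implementation:
+
+```go
+package main
+
+import (
+	"crypto/sha256"
+	"fmt"
+)
+
+// seenCache remembers the hashes of the last `size` transactions seen,
+// evicting the oldest entry when full (cf. mempool.cache_size).
+type seenCache struct {
+	size  int
+	order [][32]byte // FIFO eviction order
+	seen  map[[32]byte]bool
+}
+
+func newSeenCache(size int) *seenCache {
+	return &seenCache{size: size, seen: map[[32]byte]bool{}}
+}
+
+// add returns false if the tx was already in the cache (a replay).
+func (c *seenCache) add(tx []byte) bool {
+	h := sha256.Sum256(tx)
+	if c.seen[h] {
+		return false
+	}
+	if len(c.order) == c.size { // evict the oldest entry
+		delete(c.seen, c.order[0])
+		c.order = c.order[1:]
+	}
+	c.order = append(c.order, h)
+	c.seen[h] = true
+	return true
+}
+
+func main() {
+	c := newSeenCache(2)
+	fmt.Println(c.add([]byte("tx1"))) // true: first sighting
+	fmt.Println(c.add([]byte("tx1"))) // false: identical copy rejected
+}
+```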
+
+Validator nodes keep a mempool to prevent replay attacks, just as full-nodes do, but also use it as
+a pool of unconfirmed transactions in preparation of block inclusion. Note that even if a `Tx`
+passes all checks at this stage, it is still possible to be found invalid later on, because
+`CheckTx` does not fully validate the transaction (that is, it does not actually execute the messages).
+
+## Inclusion in a Block
+
+Consensus, the process through which validator nodes come to agreement on which transactions to
+accept, happens in **rounds**. Each round begins with a proposer creating a block of the most
+recent transactions and ends with **validators**, special full-nodes with voting power responsible
+for consensus, agreeing to accept the block or go with a `nil` block instead. Validator nodes
+execute the consensus algorithm, such as [CometBFT](https://docs.cometbft.com/v0.37/spec/consensus/),
+confirming the transactions using ABCI requests to the application, in order to come to this agreement.
+
+The first step of consensus is the **block proposal**. One proposer amongst the validators is chosen
+by the consensus algorithm to create and propose a block - in order for a `Tx` to be included, it
+must be in this proposer's mempool.
+
+## State Changes
+
+The next step of consensus is to execute the transactions to fully validate them. All full-nodes
+that receive a block proposal from the correct proposer execute the transactions by calling the ABCI function `FinalizeBlock`.
+As mentioned throughout the documentation, `BeginBlock`, `ExecuteTx`, and `EndBlock` are called within `FinalizeBlock`.
+Although every full-node operates individually and locally, the outcome is always consistent and unequivocal. This is because the state changes brought about by the messages are predictable, and the transactions are specifically sequenced in the proposed block.
+
+```text
+ --------------------------
+ | Receive Block Proposal |
+ --------------------------
+ |
+ v
+ -------------------------
+ | FinalizeBlock |
+ -------------------------
+ |
+ v
+ -------------------
+ | BeginBlock |
+ -------------------
+ |
+ v
+ --------------------
+ | ExecuteTx(tx0) |
+ | ExecuteTx(tx1) |
+ | ExecuteTx(tx2) |
+ | ExecuteTx(tx3) |
+ | . |
+ | . |
+ | . |
+                      --------------------
+ |
+ v
+ --------------------
+ | EndBlock |
+ --------------------
+ |
+ v
+ -------------------------
+ | Consensus |
+ -------------------------
+ |
+ v
+ -------------------------
+ | Commit |
+ -------------------------
+```
+
+### Transaction Execution
+
+The `FinalizeBlock` ABCI function defined in [`BaseApp`](../advanced/00-baseapp.md) does the bulk of the
+state transitions: it is run for each transaction in the block in sequential order as committed
+to during consensus. Under the hood, transaction execution is almost identical to `CheckTx` but calls the
+[`runTx`](../advanced/00-baseapp.md#runtx) function in deliver mode instead of check mode.
+Instead of using their `checkState`, full-nodes use `finalizeBlockState`:
+
+* **Decoding:** Since `FinalizeBlock` is an ABCI call, `Tx` is received in the encoded `[]byte` form.
+ Nodes first unmarshal the transaction, using the [`TxConfig`](./00-app-anatomy.md#register-codec) defined in the app, then call `runTx` in `execModeFinalize`, which is very similar to `CheckTx` but also executes and writes state changes.
+
+* **Checks and `AnteHandler`:** Full-nodes call `validateBasicMsgs` and `AnteHandler` again. This second check
+ happens because they may not have seen the same transactions during the addition to Mempool stage
+ and a malicious proposer may have included invalid ones. One difference here is that the
+ `AnteHandler` does not compare `gas-prices` to the node's `min-gas-prices` since that value is local
+ to each node - differing values across nodes yield nondeterministic results.
+
+* **`MsgServiceRouter`:** After `CheckTx` exits, `FinalizeBlock` continues to run
+ [`runMsgs`](../advanced/00-baseapp.md#runtx-antehandler-runmsgs-posthandler) to fully execute each `Msg` within the transaction.
+  Since the transaction may have messages from different modules, `BaseApp` needs to know which
+  module to route each message to in order to find the appropriate handler. This is achieved using `BaseApp`'s `MsgServiceRouter` so that it can be processed by the module's Protobuf [`Msg` service](../../build/building-modules/03-msg-services.md).
+ For `LegacyMsg` routing, the `Route` function is called via the [module manager](../../build/building-modules/01-module-manager.md) to retrieve the route name and find the legacy [`Handler`](../../build/building-modules/03-msg-services.md#handler-type) within the module.
+
+* **`Msg` service:** Protobuf `Msg` service is responsible for executing each message in the `Tx` and causes state transitions to persist in `finalizeBlockState`.
+
+* **PostHandlers:** [`PostHandler`](../advanced/00-baseapp.md#posthandler)s run after the execution of the message. If they fail, the state changes from `runMsgs`, as well as those from the `PostHandler`s, are reverted.
+
+* **Gas:** While a `Tx` is being delivered, a `GasMeter` is used to keep track of how much
+ gas is being used; if execution completes, `GasUsed` is set and returned in the
+ `abci.ExecTxResult`. If execution halts because `BlockGasMeter` or `GasMeter` has run out or something else goes
+ wrong, a deferred function at the end appropriately errors or panics.
+
+If there are any failed state changes resulting from a `Tx` being invalid or `GasMeter` running out,
+the transaction processing terminates and any state changes are reverted. Invalid transactions in a
+block proposal cause validator nodes to reject the block and vote for a `nil` block instead.
+
+### Commit
+
+The final step is for nodes to commit the block and state changes. Validator nodes
+perform the previous step of executing state transitions in order to validate the transactions,
+then sign the block to confirm it. Full nodes that are not validators do not
+participate in consensus - i.e. they cannot vote - but listen for votes to understand whether or
+not they should commit the state changes.
+
+When they receive enough validator votes (2/3+ _precommits_ weighted by voting power), full nodes commit to a new block to be added to the blockchain and
+finalize the state transitions in the application layer. A new state root is generated to serve as
+a merkle proof for the state transitions. Applications use the [`Commit`](../advanced/00-baseapp.md#commit)
+ABCI method inherited from [Baseapp](../advanced/00-baseapp.md); it syncs all the state transitions by
+writing the `finalizeBlockState` into the application's internal state. As soon as the state changes are
+committed, `checkState` starts afresh from the most recently committed state and `finalizeBlockState`
+resets to `nil` in order to stay consistent and reflect the changes.
+
+Note that not all blocks have the same number of transactions, and it is possible for consensus to
+result in a `nil` block or one with no transactions at all. In a public blockchain network, it is also possible
+for validators to be **byzantine**, or malicious, which may prevent a `Tx` from being committed in
+the blockchain. Possible malicious behaviors include the proposer deciding to censor a `Tx` by
+excluding it from the block or a validator voting against the block.
+
+At this point, the transaction lifecycle of a `Tx` is over: nodes have verified its validity,
+delivered it by executing its state changes, and committed those changes. The `Tx` itself,
+in `[]byte` form, is stored in a block and appended to the blockchain.
diff --git a/copy-of-sdk-docs/learn/beginner/02-query-lifecycle.md b/copy-of-sdk-docs/learn/beginner/02-query-lifecycle.md
new file mode 100644
index 00000000..4b11bfed
--- /dev/null
+++ b/copy-of-sdk-docs/learn/beginner/02-query-lifecycle.md
@@ -0,0 +1,147 @@
+---
+sidebar_position: 1
+---
+
+# Query Lifecycle
+
+:::note Synopsis
+This document describes the lifecycle of a query in a Cosmos SDK application, from the user interface to application stores and back. The query is referred to as `MyQuery`.
+:::
+
+:::note Pre-requisite Readings
+
+* [Transaction Lifecycle](./01-tx-lifecycle.md)
+:::
+
+## Query Creation
+
+A [**query**](../../build/building-modules/02-messages-and-queries.md#queries) is a request for information made by end-users of applications through an interface and processed by a full-node. Users can query information about the network, the application itself, and application state directly from the application's stores or modules. Note that queries are different from [transactions](../advanced/01-transactions.md) (view the lifecycle [here](./01-tx-lifecycle.md)), particularly in that they do not require consensus to be processed (as they do not trigger state-transitions); they can be fully handled by one full-node.
+
+For the purpose of explaining the query lifecycle, let's say the query, `MyQuery`, is requesting a list of delegations made by a certain delegator address in the application called `simapp`. As is to be expected, the [`staking`](../../../../x/staking/README.md) module handles this query. But first, there are a few ways `MyQuery` can be created by users.
+
+### CLI
+
+The main interface for an application is the command-line interface. Users connect to a full-node and run the CLI directly from their machines - the CLI interacts directly with the full-node. To create `MyQuery` from their terminal, users type the following command:
+
+```bash
+simd query staking delegations <delegatorAddress>
+```
+
+This query command was defined by the [`staking`](../../../../x/staking/README.md) module developer and added to the list of subcommands by the application developer when creating the CLI.
+
+Note that the general format is as follows:
+
+```bash
+simd query [moduleName] [command] --flag
+```
+
+To provide values such as `--node` (the full-node the CLI connects to), the user can use the [`app.toml`](../../user/run-node/01-run-node.md#configuring-the-node-using-apptoml-and-configtoml) config file to set them or provide them as flags.
+
+The CLI understands a specific set of commands, defined in a hierarchical structure by the application developer: from the [root command](../advanced/07-cli.md#root-command) (`simd`), to the type of command (`query`), to the module that contains the command (`staking`), to the command itself (`delegations`). Thus, the CLI knows exactly which module handles this command and directly passes the call there.
+
+### gRPC
+
+Another interface through which users can make queries is [gRPC](https://grpc.io) requests to a [gRPC server](../advanced/06-grpc_rest.md#grpc-server). The endpoints are defined as [Protocol Buffers](https://developers.google.com/protocol-buffers) service methods inside `.proto` files, written in Protobuf's own language-agnostic interface definition language (IDL). The Protobuf ecosystem has developed tools for code generation from `*.proto` files into various languages. These tools make it easy to build gRPC clients.
+
+One such tool is [grpcurl](https://github.com/fullstorydev/grpcurl), and a gRPC request for `MyQuery` using this client looks like:
+
+```bash
+grpcurl \
+ -plaintext # We want results in plain text
+ -import-path ./proto \ # Import these .proto files
+ -proto ./proto/cosmos/staking/v1beta1/query.proto \ # Look into this .proto file for the Query protobuf service
+ -d '{"address":"$MY_DELEGATOR"}' \ # Query arguments
+ localhost:9090 \ # gRPC server endpoint
+ cosmos.staking.v1beta1.Query/Delegations # Fully-qualified service method name
+```
+
+### REST
+
+Another interface through which users can make queries is through HTTP Requests to a [REST server](../advanced/06-grpc_rest.md#rest-server). The REST server is fully auto-generated from Protobuf services, using [gRPC-gateway](https://github.com/grpc-ecosystem/grpc-gateway).
+
+An example HTTP request for `MyQuery` looks like:
+
+```bash
+GET http://localhost:1317/cosmos/staking/v1beta1/delegators/{delegatorAddr}/delegations
+```
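+
+As a sketch of this interaction, the following Go program issues the same request against a stub server that stands in for a running node's REST endpoint (normally exposed on port 1317); the handler path, the placeholder address, and the empty response body are assumptions for illustration only:
+
+```go
+package main
+
+import (
+	"fmt"
+	"io"
+	"net/http"
+	"net/http/httptest"
+)
+
+func main() {
+	// Stub server standing in for a node's REST endpoint. The handler
+	// path mirrors the gRPC-gateway route for delegator delegations.
+	srv := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
+		fmt.Fprint(w, `{"delegation_responses":[]}`)
+	}))
+	defer srv.Close()
+
+	delegator := "cosmos1..." // hypothetical address placeholder
+	url := srv.URL + "/cosmos/staking/v1beta1/delegators/" + delegator + "/delegations"
+	resp, err := http.Get(url)
+	if err != nil {
+		panic(err)
+	}
+	defer resp.Body.Close()
+	body, _ := io.ReadAll(resp.Body)
+	fmt.Println(string(body))
+}
+```
+
+Against a real node, the same `GET` would return the delegator's delegations serialized as JSON by the gRPC-gateway.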
+
+## How Queries are Handled by the CLI
+
+The preceding examples show how an external user can interact with a node by querying its state. To understand in more detail the exact lifecycle of a query, let's dig into how the CLI prepares the query, and how the node handles it. The interactions from the users' perspective are a bit different, but the underlying functions are almost identical because they are implementations of the same command defined by the module developer. This step of processing happens within the CLI, gRPC, or REST server, and heavily involves a `client.Context`.
+
+### Context
+
+The first thing that is created in the execution of a CLI command is a `client.Context`. A `client.Context` is an object that stores all the data needed to process a request on the user side. In particular, a `client.Context` stores the following:
+
+* **Codec**: The [encoder/decoder](../advanced/05-encoding.md) used by the application, used to marshal the parameters and query before making the CometBFT RPC request and unmarshal the returned response into a JSON object. The default codec used by the CLI is Protobuf.
+* **Account Decoder**: The account decoder from the [`auth`](../../../../x/auth/README.md) module, which translates `[]byte`s into accounts.
+* **RPC Client**: The CometBFT RPC Client, or node, to which requests are relayed.
+* **Keyring**: A [Key Manager](../beginner/03-accounts.md#keyring) used to sign transactions and handle other operations with keys.
+* **Output Writer**: A [Writer](https://pkg.go.dev/io/#Writer) used to output the response.
+* **Configurations**: The flags configured by the user for this command, including `--height`, which specifies the height of the blockchain to query, and `--indent`, which adds indentation to the JSON response.
+
+The `client.Context` also contains various functions such as `Query()`, which retrieves the RPC Client and makes an ABCI call to relay a query to a full-node.
+
+```go reference
+https://github.com/cosmos/cosmos-sdk/blob/v0.53.0/client/context.go#L27-L70
+```
+
+The `client.Context`'s primary role is to store data used during interactions with the end-user and provide methods to interact with this data - it is used before and after the query is processed by the full-node. Specifically, in handling `MyQuery`, the `client.Context` is utilized to encode the query parameters, retrieve the full-node, and write the output. Prior to being relayed to a full-node, the query needs to be encoded into a `[]byte` form, as full-nodes are application-agnostic and do not understand specific types. The full-node (RPC Client) itself is retrieved using the `client.Context`, which knows which node the user CLI is connected to. The query is relayed to this full-node to be processed. Finally, the `client.Context` contains a `Writer` to write output when the response is returned. These steps are further described in later sections.
+
+### Arguments and Route Creation
+
+At this point in the lifecycle, the user has created a CLI command with all of the data they wish to include in their query. A `client.Context` exists to assist in the rest of the `MyQuery`'s journey. Now, the next step is to parse the command or request, extract the arguments, and encode everything. These steps all happen on the user side within the interface they are interacting with.
+
+#### Encoding
+
+In our case (querying an address's delegations), `MyQuery` contains an [address](./03-accounts.md#addresses) `delegatorAddress` as its only argument. However, the request can only contain `[]byte`s, as it is ultimately relayed to a consensus engine (e.g. CometBFT) of a full-node that has no inherent knowledge of the application types. Thus, the `codec` of `client.Context` is used to marshal the address.
+
+Here is what the code looks like for the CLI command:
+
+```go reference
+https://github.com/cosmos/cosmos-sdk/blob/v0.53.0/x/staking/client/cli/query.go#L315-L318
+```
+
+#### gRPC Query Client Creation
+
+The Cosmos SDK leverages code generated from Protobuf services to make queries. The `staking` module's `MyQuery` service generates a `queryClient`, which the CLI uses to make queries. Here is the relevant code:
+
+```go reference
+https://github.com/cosmos/cosmos-sdk/blob/v0.53.0/x/staking/client/cli/query.go#L308-L343
+```
+
+Under the hood, the `client.Context` has a `Query()` function used to retrieve the pre-configured node and relay a query to it; the function takes the query's fully-qualified service method name as the path (in our case: `/cosmos.staking.v1beta1.Query/Delegations`), and arguments as parameters. It first retrieves the RPC Client (called the [**node**](../advanced/03-node.md)) configured by the user to relay this query to, and creates the `ABCIQueryOptions` (parameters formatted for the ABCI call). The node is then used to make the ABCI call, `ABCIQueryWithOptions()`.
+
+Here is what the code looks like:
+
+```go reference
+https://github.com/cosmos/cosmos-sdk/blob/v0.53.0/client/query.go#L79-L113
+```
+
+## RPC
+
+With a call to `ABCIQueryWithOptions()`, `MyQuery` is received by a [full-node](../advanced/03-node.md) which then processes the request. Note that, while the RPC is made to the consensus engine (e.g. CometBFT) of a full-node, queries are not part of consensus and so are not broadcast to the rest of the network, as they do not require anything the network needs to agree upon.
+
+Read more about ABCI Clients and CometBFT RPC in the [CometBFT documentation](https://docs.cometbft.com/v0.37/spec/rpc/).
+
+## Application Query Handling
+
+When a query is received by the full-node after it has been relayed from the underlying consensus engine, it is handled within an environment that understands application-specific types and has a copy of the state. [`baseapp`](../advanced/00-baseapp.md) implements the ABCI [`Query()`](../advanced/00-baseapp.md#query) function and handles gRPC queries. The query route is parsed, and if it matches the fully-qualified service method name of an existing service method (most likely in one of the modules), `baseapp` relays the request to the relevant module.
+
+Since `MyQuery` has a Protobuf fully-qualified service method name from the `staking` module (recall `/cosmos.staking.v1beta1.Query/Delegations`), `baseapp` first parses the path, then uses its own internal `GRPCQueryRouter` to retrieve the corresponding gRPC handler, and routes the query to the module. The gRPC handler is responsible for recognizing this query, retrieving the appropriate values from the application's stores, and returning a response. Read more about query services [here](../../build/building-modules/04-query-services.md).
+
+Once a result is received from the querier, `baseapp` begins the process of returning a response to the user.
+
+## Response
+
+Since `Query()` is an ABCI function, `baseapp` returns the response as an [`abci.QueryResponse`](https://docs.cometbft.com/main/spec/abci/abci++_methods#query) type. The `client.Context` `Query()` routine receives the response and processes it.
+
+### CLI Response
+
+The application [`codec`](../advanced/05-encoding.md) is used to unmarshal the response to JSON, and the `client.Context` prints the output to the command line, applying any configurations such as the output type (text, JSON or YAML).
+
+```go reference
+https://github.com/cosmos/cosmos-sdk/blob/v0.53.0/client/context.go#L350-L357
+```
+
+And that's a wrap! The result of the query is output to the console by the CLI.
diff --git a/copy-of-sdk-docs/learn/beginner/03-accounts.md b/copy-of-sdk-docs/learn/beginner/03-accounts.md
new file mode 100644
index 00000000..150436b9
--- /dev/null
+++ b/copy-of-sdk-docs/learn/beginner/03-accounts.md
@@ -0,0 +1,281 @@
+---
+sidebar_position: 1
+---
+
+# Accounts
+
+:::note Synopsis
+This document describes the built-in account and public key system of the Cosmos SDK.
+:::
+
+:::note Pre-requisite Readings
+
+* [Anatomy of a Cosmos SDK Application](./00-app-anatomy.md)
+
+:::
+
+## Account Definition
+
+In the Cosmos SDK, an _account_ designates a pair of _public key_ `PubKey` and _private key_ `PrivKey`. The `PubKey` can be derived to generate various `Addresses`, which are used to identify users (among other parties) in the application. `Addresses` are also associated with [`message`s](../../build/building-modules/02-messages-and-queries.md#messages) to identify the sender of the `message`. The `PrivKey` is used to generate [digital signatures](#signatures) to prove that an `Address` associated with the `PrivKey` approved of a given `message`.
+
+For HD key derivation the Cosmos SDK uses a standard called [BIP32](https://github.com/bitcoin/bips/blob/master/bip-0032.mediawiki). BIP32 allows users to create an HD wallet (as specified in [BIP44](https://github.com/bitcoin/bips/blob/master/bip-0044.mediawiki)) - a set of accounts derived from an initial secret seed. A seed is usually created from a 12- or 24-word mnemonic. A single seed can derive any number of `PrivKey`s using a one-way cryptographic function. Then, a `PubKey` can be derived from the `PrivKey`. Naturally, the mnemonic is the most sensitive information, as private keys can always be re-generated if the mnemonic is preserved.
+
+```text
+ Account 0 Account 1 Account 2
+
++------------------+ +------------------+ +------------------+
+| | | | | |
+| Address 0 | | Address 1 | | Address 2 |
+| ^ | | ^ | | ^ |
+| | | | | | | | |
+| | | | | | | | |
+| | | | | | | | |
+| + | | + | | + |
+| Public key 0 | | Public key 1 | | Public key 2 |
+| ^ | | ^ | | ^ |
+| | | | | | | | |
+| | | | | | | | |
+| | | | | | | | |
+| + | | + | | + |
+| Private key 0 | | Private key 1 | | Private key 2 |
+| ^ | | ^ | | ^ |
++------------------+ +------------------+ +------------------+
+ | | |
+ | | |
+ | | |
+ +--------------------------------------------------------------------+
+ |
+ |
+ +---------+---------+
+ | |
+ | Master PrivKey |
+ | |
+ +-------------------+
+ |
+ |
+ +---------+---------+
+ | |
+ | Mnemonic (Seed) |
+ | |
+ +-------------------+
+```
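+
+The one-way derivation pictured above can be illustrated with a simplified sketch. This is not the actual BIP32 algorithm (it omits chain codes and hardened derivation paths); it only demonstrates that derivation from a seed is deterministic and one-way:
+
+```go
+package main
+
+import (
+	"crypto/hmac"
+	"crypto/sha512"
+	"encoding/binary"
+	"encoding/hex"
+	"fmt"
+)
+
+// derivePrivKey illustrates the one-way nature of HD derivation: the same
+// seed and index always yield the same key material, and the seed cannot
+// be recovered from a derived key. Simplified HMAC-SHA512 sketch only.
+func derivePrivKey(seed []byte, index uint32) []byte {
+	mac := hmac.New(sha512.New, seed)
+	var idx [4]byte
+	binary.BigEndian.PutUint32(idx[:], index)
+	mac.Write(idx[:])
+	return mac.Sum(nil)[:32] // left 32 bytes as the private key scalar
+}
+
+func main() {
+	seed := []byte("example seed derived from a mnemonic")
+	key0 := derivePrivKey(seed, 0)
+	key1 := derivePrivKey(seed, 1)
+	fmt.Println(hex.EncodeToString(key0) == hex.EncodeToString(derivePrivKey(seed, 0))) // deterministic
+	fmt.Println(hex.EncodeToString(key0) != hex.EncodeToString(key1))                   // distinct per index
+}
+```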
+
+In the Cosmos SDK, keys are stored and managed by using an object called a [`Keyring`](#keyring).
+
+## Keys, accounts, addresses, and signatures
+
+The principal way of authenticating a user is through [digital signatures](https://en.wikipedia.org/wiki/Digital_signature). Users sign transactions using their own private key, and signature verification is done with the associated public key. For on-chain signature verification purposes, we store the public key in an `Account` object (alongside other data required for proper transaction validation).
+
+In the node, all data is stored using Protocol Buffers serialization.
+
+The Cosmos SDK supports the following digital key schemes for creating digital signatures:
+
+* `secp256k1`, as implemented in the [Cosmos SDK's `crypto/keys/secp256k1` package](https://github.com/cosmos/cosmos-sdk/blob/v0.53.0/crypto/keys/secp256k1/secp256k1.go).
+* `secp256r1`, as implemented in the [Cosmos SDK's `crypto/keys/secp256r1` package](https://github.com/cosmos/cosmos-sdk/blob/v0.53.0/crypto/keys/secp256r1/pubkey.go).
+* `tm-ed25519`, as implemented in the [Cosmos SDK `crypto/keys/ed25519` package](https://github.com/cosmos/cosmos-sdk/blob/v0.53.0/crypto/keys/ed25519/ed25519.go). This scheme is supported only for the consensus validation.
+
+| | Address length in bytes | Public key length in bytes | Used for transaction authentication | Used for consensus (cometbft) |
+| :----------: | :---------------------: | :------------------------: | :---------------------------------: | :-----------------------------: |
+| `secp256k1` | 20 | 33 | yes | no |
+| `secp256r1` | 32 | 33 | yes | no |
+| `tm-ed25519` | -- not used -- | 32 | no | yes |
+
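+The `tm-ed25519` row above can be checked against Go's standard library, whose ed25519 public keys are 32 bytes:
+
+```go
+package main
+
+import (
+	"crypto/ed25519"
+	"crypto/rand"
+	"fmt"
+)
+
+func main() {
+	// ed25519 public keys are 32 bytes, matching the tm-ed25519 row above.
+	pub, _, err := ed25519.GenerateKey(rand.Reader)
+	if err != nil {
+		panic(err)
+	}
+	fmt.Println(len(pub))
+}
+```
+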
+## Addresses
+
+`Addresses` and `PubKey`s are both public information that identifies actors in the application. `Account` is used to store authentication information. The basic account implementation is provided by a `BaseAccount` object.
+
+Each account is identified using an `Address`, which is a sequence of bytes derived from a public key. In the Cosmos SDK, we define 3 types of addresses that specify the context in which an account is used:
+
+* `AccAddress` identifies users (the sender of a `message`).
+* `ValAddress` identifies validator operators.
+* `ConsAddress` identifies validator nodes that are participating in consensus. Validator nodes are derived using the **`ed25519`** curve.
+
+These types implement the `Address` interface:
+
+```go reference
+https://github.com/cosmos/cosmos-sdk/blob/v0.53.0/types/address.go#L126-L134
+```
+
+The address construction algorithm is defined in [ADR-28](https://github.com/cosmos/cosmos-sdk/blob/main/docs/architecture/adr-028-public-key-addresses.md).
+Here is the standard way to obtain an account address from a `pub` public key:
+
+```go
+sdk.AccAddress(pub.Address().Bytes())
+```
+
+Of note, the `Marshal()` and `Bytes()` methods both return the same raw `[]byte` form of the address; `Marshal()` is required for Protobuf compatibility.
+
+For user interaction, addresses are formatted using [Bech32](https://en.bitcoin.it/wiki/Bech32) and implemented by the `String` method. Bech32 is the only supported format to use when interacting with a blockchain. The Bech32 human-readable part (Bech32 prefix) is used to denote an address type. Example:
+
+```go reference
+https://github.com/cosmos/cosmos-sdk/blob/v0.53.0/types/address.go#L299-L316
+```
+
+| | Address Bech32 Prefix |
+| ------------------ | --------------------- |
+| Accounts | cosmos |
+| Validator Operator | cosmosvaloper |
+| Consensus Nodes | cosmosvalcons |
+
+### Public Keys
+
+Public keys in the Cosmos SDK are defined by the `cryptotypes.PubKey` interface. Since public keys are saved in a store, `cryptotypes.PubKey` extends the `proto.Message` interface:
+
+```go reference
+https://github.com/cosmos/cosmos-sdk/blob/v0.53.0/crypto/types/types.go#L8-L17
+```
+
+A compressed format is used for `secp256k1` and `secp256r1` serialization.
+
+* The first byte is `0x02` if the `y`-coordinate is even.
+* Otherwise, the first byte is `0x03`.
+
+This prefix is followed by the `x`-coordinate.
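+
+This compressed encoding can be demonstrated with Go's standard library using the `secp256r1` (P-256) curve; `secp256k1` follows the same SEC1 rule but requires a third-party library:
+
+```go
+package main
+
+import (
+	"crypto/ecdsa"
+	"crypto/elliptic"
+	"crypto/rand"
+	"fmt"
+)
+
+func main() {
+	// SEC1 compressed encoding: 1 prefix byte (0x02 or 0x03, from the
+	// parity of y) followed by the 32-byte x-coordinate = 33 bytes total.
+	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
+	if err != nil {
+		panic(err)
+	}
+	compressed := elliptic.MarshalCompressed(elliptic.P256(), key.X, key.Y)
+	fmt.Println(len(compressed))
+	fmt.Println(compressed[0] == 0x02 || compressed[0] == 0x03)
+}
+```
+
+The 33-byte length matches the public key lengths listed in the table above.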
+
+Public Keys are not used to reference accounts (or users) and in general are not used when composing transaction messages (with a few exceptions: `MsgCreateValidator`, `Validator` and `Multisig` messages).
+For user interactions, `PubKey` is formatted using Protobufs JSON ([ProtoMarshalJSON](https://github.com/cosmos/cosmos-sdk/blob/v0.53.0/codec/json.go#L14-L34) function). Example:
+
+```go reference
+https://github.com/cosmos/cosmos-sdk/blob/v0.53.0/client/keys/output.go#L23-L39
+```
+
+## Keyring
+
+A `Keyring` is an object that stores and manages accounts. In the Cosmos SDK, a `Keyring` implementation follows the `Keyring` interface:
+
+```go reference
+https://github.com/cosmos/cosmos-sdk/blob/v0.53.0/crypto/keyring/keyring.go#L58-L106
+```
+
+The default implementation of `Keyring` comes from the third-party [`99designs/keyring`](https://github.com/99designs/keyring) library.
+
+A few notes on the `Keyring` methods:
+
+* `Sign(uid string, msg []byte) ([]byte, types.PubKey, error)` strictly deals with the signature of the `msg` bytes. You must prepare and encode the transaction into a canonical `[]byte` form. Because protobuf serialization is not deterministic, it has been decided in [ADR-020](../../build/architecture/adr-020-protobuf-transaction-encoding.md) that the canonical `payload` to sign is the `SignDoc` struct, deterministically encoded using [ADR-027](../../build/architecture/adr-027-deterministic-protobuf-serialization.md). Note that signature verification is not implemented in the Cosmos SDK by default; it is deferred to the [`anteHandler`](../advanced/00-baseapp.md#antehandler).
+
+```protobuf reference
+https://github.com/cosmos/cosmos-sdk/blob/v0.53.0/proto/cosmos/tx/v1beta1/tx.proto#L50-L67
+```
+
+* `NewAccount(uid, mnemonic, bip39Passphrase, hdPath string, algo SignatureAlgo) (*Record, error)` creates a new account based on the [`bip44 path`](https://github.com/bitcoin/bips/blob/master/bip-0044.mediawiki) and persists it on disk. The `PrivKey` is **never stored unencrypted**; instead, it is [encrypted with a passphrase](https://github.com/cosmos/cosmos-sdk/blob/v0.53.0/crypto/armor.go) before being persisted. In the context of this method, the key type and sequence number refer to the segment of the BIP44 derivation path (for example, `0`, `1`, `2`, ...) that is used to derive a private and a public key from the mnemonic. Using the same mnemonic and derivation path, the same `PrivKey`, `PubKey` and `Address` are generated. The following keys are supported by the keyring:
+
+* `secp256k1`
+* `ed25519`
+
+* `ExportPrivKeyArmor(uid, encryptPassphrase string) (armor string, err error)` exports a private key in ASCII-armored encrypted format using the given passphrase. You can then either import the private key again into the keyring using the `ImportPrivKey(uid, armor, passphrase string)` function or decrypt it into a raw private key using the `UnarmorDecryptPrivKey(armorStr string, passphrase string)` function.
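+
+As an illustration of the BIP44 path consumed by `NewAccount`, the following sketch builds a Cosmos-style derivation path; coin type `118` is the SLIP-44 coin type registered for Cosmos, and the helper function is hypothetical, not an SDK API:
+
+```go
+package main
+
+import "fmt"
+
+// bip44Path builds a BIP44 derivation path as used for Cosmos accounts.
+// Coin type 118 is the registered Cosmos coin type; account and index are
+// the varying segments mentioned in NewAccount.
+func bip44Path(account, index uint32) string {
+	return fmt.Sprintf("m/44'/118'/%d'/0/%d", account, index)
+}
+
+func main() {
+	fmt.Println(bip44Path(0, 0))
+}
+```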
+
+### Create New Key Type
+
+To create a new key type for use in the keyring, the `keyring.SignatureAlgo` interface must be implemented.
+
+```go reference
+https://github.com/cosmos/cosmos-sdk/blob/v0.53.0/crypto/keyring/signing_algorithms.go#L11-L16
+```
+
+The interface consists of three methods where `Name()` returns the name of the algorithm as a `hd.PubKeyType` and `Derive()` and `Generate()` must return the following functions respectively:
+
+```go reference
+https://github.com/cosmos/cosmos-sdk/blob/v0.53.0/crypto/hd/algo.go#L28-L31
+```
+
+Once the `keyring.SignatureAlgo` has been implemented it must be added to the [list of supported algos](https://github.com/cosmos/cosmos-sdk/blob/v0.53.0/crypto/keyring/keyring.go#L209) of the keyring.
+
+For simplicity, the implementation of a new key type should be done inside the `crypto/hd` package.
+There is a working `secp256k1` implementation in [algo.go](https://github.com/cosmos/cosmos-sdk/blob/v0.53.0/crypto/hd/algo.go#L38).
+
+#### Implementing secp256r1 algo
+
+Here is an example of how secp256r1 could be implemented.
+
+First, a new function to create a private key from a secret number is needed in the `secp256r1` package. It could look like this:
+
+```go
+// cosmos-sdk/crypto/keys/secp256r1/privkey.go
+
+// NewPrivKeyFromSecret creates a private key derived from the secret number
+// represented in big-endian. The `secret` must be a valid ECDSA field element,
+// i.e. in the range [1, N-1].
+func NewPrivKeyFromSecret(secret []byte) (*PrivKey, error) {
+    var d = new(big.Int).SetBytes(secret)
+    if d.Sign() == 0 || d.Cmp(secp256r1.Params().N) >= 0 {
+        return nil, errorsmod.Wrap(errors.ErrInvalidRequest, "secret not in the curve base field")
+    }
+    sk := new(ecdsa.PrivKey) // sketch: initialize the key material from d here
+    return &PrivKey{&ecdsaSK{*sk}}, nil
+}
+```
+
+After that `secp256r1Algo` can be implemented.
+
+```go
+// cosmos-sdk/crypto/hd/secp256r1Algo.go
+
+package hd
+
+import (
+ "github.com/cosmos/go-bip39"
+
+ "github.com/cosmos/cosmos-sdk/crypto/keys/secp256r1"
+ "github.com/cosmos/cosmos-sdk/crypto/types"
+)
+
+// Secp256r1Type uses the secp256r1 ECDSA parameters.
+const Secp256r1Type = PubKeyType("secp256r1")
+
+var Secp256r1 = secp256r1Algo{}
+
+type secp256r1Algo struct{}
+
+func (s secp256r1Algo) Name() PubKeyType {
+ return Secp256r1Type
+}
+
+// Derive derives and returns the secp256r1 private key for the given seed and HD path.
+func (s secp256r1Algo) Derive() DeriveFn {
+ return func(mnemonic string, bip39Passphrase, hdPath string) ([]byte, error) {
+ seed, err := bip39.NewSeedWithErrorChecking(mnemonic, bip39Passphrase)
+ if err != nil {
+ return nil, err
+ }
+
+ masterPriv, ch := ComputeMastersFromSeed(seed)
+ if len(hdPath) == 0 {
+ return masterPriv[:], nil
+ }
+ derivedKey, err := DerivePrivateKeyForPath(masterPriv, ch, hdPath)
+
+ return derivedKey, err
+ }
+}
+
+// Generate generates a secp256r1 private key from the given bytes.
+func (s secp256r1Algo) Generate() GenerateFn {
+ return func(bz []byte) types.PrivKey {
+ key, err := secp256r1.NewPrivKeyFromSecret(bz)
+ if err != nil {
+ panic(err)
+ }
+ return key
+ }
+}
+```
+
+Finally, the algo must be added to the list of [supported algos](https://github.com/cosmos/cosmos-sdk/blob/v0.53.0/crypto/keyring/keyring.go#L209) by the keyring.
+
+```go
+// cosmos-sdk/crypto/keyring/keyring.go
+
+func newKeystore(kr keyring.Keyring, cdc codec.Codec, backend string, opts ...Option) keystore {
+ // Default options for keybase, these can be overwritten using the
+ // Option function
+ options := Options{
+ SupportedAlgos: SigningAlgoList{hd.Secp256k1, hd.Secp256r1}, // added here
+ SupportedAlgosLedger: SigningAlgoList{hd.Secp256k1},
+ }
+...
+```
+
+From then on, to create new keys using your algo, you must specify it with the `--algo` flag:
+
+`simd keys add myKey --algo secp256r1`
diff --git a/copy-of-sdk-docs/learn/beginner/04-gas-fees.md b/copy-of-sdk-docs/learn/beginner/04-gas-fees.md
new file mode 100644
index 00000000..5aea1238
--- /dev/null
+++ b/copy-of-sdk-docs/learn/beginner/04-gas-fees.md
@@ -0,0 +1,101 @@
+---
+sidebar_position: 1
+---
+
+# Gas and Fees
+
+:::note Synopsis
+This document describes the default strategies to handle gas and fees within a Cosmos SDK application.
+:::
+
+:::note Pre-requisite Readings
+
+* [Anatomy of a Cosmos SDK Application](./00-app-anatomy.md)
+
+:::
+
+## Introduction to `Gas` and `Fees`
+
+In the Cosmos SDK, `gas` is a special unit that is used to track the consumption of resources during execution. `gas` is typically consumed whenever reads and writes are made to the store, but it can also be consumed if expensive computation needs to be done. It serves two main purposes:
+
+* Make sure blocks are not consuming too many resources and are finalized. This is implemented by default in the Cosmos SDK via the [block gas meter](#block-gas-meter).
+* Prevent spam and abuse from end-users. To this end, `gas` consumed during [`message`](../../build/building-modules/02-messages-and-queries.md#messages) execution is typically priced, resulting in a `fee` (`fees = gas * gas-prices`). `fees` generally have to be paid by the sender of the `message`. Note that the Cosmos SDK does not enforce `gas` pricing by default, as there may be other ways to prevent spam (e.g. bandwidth schemes). Still, most applications implement `fee` mechanisms to prevent spam by using the [`AnteHandler`](#antehandler).
+
+## Gas Meter
+
+In the Cosmos SDK, `gas` is a simple alias for `uint64`, and is managed by an object called a _gas meter_. Gas meters implement the `GasMeter` interface:
+
+```go reference
+https://github.com/cosmos/cosmos-sdk/blob/v0.53.0-rc.2/store/types/gas.go#L40-L51
+```
+
+where:
+
+* `GasConsumed()` returns the amount of gas that was consumed by the gas meter instance.
+* `GasConsumedToLimit()` returns the amount of gas that was consumed by the gas meter instance, or the limit if it is reached.
+* `GasRemaining()` returns the gas left in the GasMeter.
+* `Limit()` returns the limit of the gas meter instance. `0` if the gas meter is infinite.
+* `ConsumeGas(amount Gas, descriptor string)` consumes the amount of `gas` provided. If the `gas` overflows, it panics with the `descriptor` message. If the gas meter is not infinite, it panics if `gas` consumed goes above the limit.
+* `RefundGas()` deducts the given amount from the gas consumed. This functionality enables refunding gas to the transaction or block gas pools so that EVM-compatible chains can fully support the go-ethereum StateDB interface.
+* `IsPastLimit()` returns `true` if the amount of gas consumed by the gas meter instance is strictly above the limit, `false` otherwise.
+* `IsOutOfGas()` returns `true` if the amount of gas consumed by the gas meter instance is above or equal to the limit, `false` otherwise.
+
+The gas meter is generally held in [`ctx`](../advanced/02-context.md), and consuming gas is done with the following pattern:
+
+```go
+ctx.GasMeter().ConsumeGas(amount, "description")
+```
+
+By default, the Cosmos SDK makes use of two different gas meters, the [main gas meter](#main-gas-meter) and the [block gas meter](#block-gas-meter).
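+
+As an illustration of these semantics, here is a minimal finite gas meter sketch; it mirrors the interface described above but is not the SDK's actual `storetypes` implementation:
+
+```go
+package main
+
+import "fmt"
+
+type Gas = uint64
+
+// basicGasMeter is a minimal finite gas meter mirroring the semantics
+// described above: consuming past the limit panics, as in the SDK.
+type basicGasMeter struct {
+	limit    Gas
+	consumed Gas
+}
+
+func (g *basicGasMeter) GasConsumed() Gas { return g.consumed }
+func (g *basicGasMeter) GasRemaining() Gas {
+	if g.consumed >= g.limit {
+		return 0
+	}
+	return g.limit - g.consumed
+}
+func (g *basicGasMeter) IsOutOfGas() bool { return g.consumed >= g.limit }
+func (g *basicGasMeter) ConsumeGas(amount Gas, descriptor string) {
+	g.consumed += amount
+	if g.consumed > g.limit {
+		panic(fmt.Sprintf("out of gas: %s", descriptor))
+	}
+}
+
+func main() {
+	meter := &basicGasMeter{limit: 100}
+	meter.ConsumeGas(40, "store read")
+	meter.ConsumeGas(60, "store write")
+	fmt.Println(meter.GasConsumed(), meter.IsOutOfGas())
+
+	defer func() {
+		if r := recover(); r != nil {
+			fmt.Println("panic:", r)
+		}
+	}()
+	meter.ConsumeGas(1, "one too many")
+}
+```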
+
+### Main Gas Meter
+
+`ctx.GasMeter()` is the main gas meter of the application. The main gas meter is initialized in `FinalizeBlock` via `setFinalizeBlockState`, and then tracks gas consumption during execution sequences that lead to state-transitions, i.e. those originally triggered by [`FinalizeBlock`](../advanced/00-baseapp.md#finalizeblock). At the beginning of each transaction execution, the main gas meter **must be set to 0** in the [`AnteHandler`](#antehandler), so that it can track gas consumption per-transaction.
+
+Gas consumption can be done manually, generally by the module developer in the [`BeginBlocker`, `EndBlocker`](../../build/building-modules/06-beginblock-endblock.md) or [`Msg` service](../../build/building-modules/03-msg-services.md), but most of the time it is done automatically whenever there is a read or write to the store. This automatic gas consumption logic is implemented in a special store called [`GasKv`](../advanced/04-store.md#gaskv-store).
+
+### Block Gas Meter
+
+`ctx.BlockGasMeter()` is the gas meter used to track gas consumption per block and make sure it does not go above a certain limit.
+
+During the genesis phase, gas consumption is unlimited to accommodate initialization transactions.
+
+```go
+app.finalizeBlockState.SetContext(app.finalizeBlockState.Context().WithBlockGasMeter(storetypes.NewInfiniteGasMeter()))
+```
+
+Following the genesis block, the block gas meter is set to a finite value by the SDK. This transition is facilitated by the consensus engine (e.g., CometBFT) calling the `RequestFinalizeBlock` function, which in turn triggers the SDK's `FinalizeBlock` method. Within `FinalizeBlock`, `internalFinalizeBlock` is executed, performing necessary state updates and function executions. The block gas meter, initialized with a finite limit for each block, is then incorporated into the context for transaction execution, ensuring gas consumption does not exceed the block's gas limit and is reset at the end of each block.
+
+Modules within the Cosmos SDK can consume block gas at any point during their execution by utilizing the `ctx`. This gas consumption primarily occurs during state read/write operations and transaction processing. The block gas meter, accessible via `ctx.BlockGasMeter()`, monitors the total gas usage within a block, enforcing the gas limit to prevent excessive computation. This ensures that gas limits are adhered to on a per-block basis, starting from the first block post-genesis.
+
+```go
+gasMeter := app.getBlockGasMeter(app.finalizeBlockState.Context())
+app.finalizeBlockState.SetContext(app.finalizeBlockState.Context().WithBlockGasMeter(gasMeter))
+```
+
+The above shows the general mechanism for setting the block gas meter with a finite limit based on the block's consensus parameters.
+
+## AnteHandler
+
+The `AnteHandler` is run for every transaction during `CheckTx` and `FinalizeBlock`, before the Protobuf `Msg` service method is executed for each `sdk.Msg` in the transaction.
+
+The anteHandler is not implemented in the core Cosmos SDK but in a module. That said, most applications today use the default implementation defined in the [`auth` module](https://github.com/cosmos/cosmos-sdk/tree/main/x/auth). Here is what the `anteHandler` is intended to do in a normal Cosmos SDK application:
+
+* Verify that the transactions are of the correct type. Transaction types are defined in the module that implements the `anteHandler`, and they follow the transaction interface:
+
+```go reference
+https://github.com/cosmos/cosmos-sdk/blob/v0.53.0-rc.2/types/tx_msg.go#L53-L58
+```
+
+ This enables developers to use various transaction types for their application. In the default `auth` module, the default transaction type is `Tx`:
+
+```protobuf reference
+https://github.com/cosmos/cosmos-sdk/blob/v0.53.0-rc.2/proto/cosmos/tx/v1beta1/tx.proto#L15-L28
+```
+
+* Verify signatures for each [`message`](../../build/building-modules/02-messages-and-queries.md#messages) contained in the transaction. Each `message` should be signed by one or multiple sender(s), and these signatures must be verified in the `anteHandler`.
+* During `CheckTx`, verify that the gas prices provided with the transaction are greater than the local `min-gas-prices` (as a reminder, gas-prices can be deduced from the following equation: `fees = gas * gas-prices`). `min-gas-prices` is a parameter local to each full-node and used during `CheckTx` to discard transactions that do not provide a minimum amount of fees. This ensures that the mempool cannot be spammed with garbage transactions.
+* Verify that the sender of the transaction has enough funds to cover the `fees`. When the end-user generates a transaction, they must indicate 2 of the 3 following parameters (the third one being implicit): `fees`, `gas` and `gas-prices`. This signals how much they are willing to pay for nodes to execute their transaction. The provided `gas` value is stored in a parameter called `GasWanted` for later use.
+* Set `newCtx.GasMeter` to 0, with a limit of `GasWanted`. **This step is crucial**, as it not only makes sure the transaction cannot consume infinite gas, but also that `ctx.GasMeter` is reset in-between each transaction (`ctx` is set to `newCtx` after `anteHandler` is run, and the `anteHandler` is run each time a transaction executes).
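+
+The fee check described above boils down to simple arithmetic. The following sketch uses plain integer units for illustration; the SDK itself uses decimal types and per-denomination coin arithmetic:
+
+```go
+package main
+
+import "fmt"
+
+// requiredFee computes the minimum fee a node accepts for a transaction,
+// following fees = gas * gas-prices, in integer micro-denom units for
+// illustration only.
+func requiredFee(gasWanted, minGasPrice uint64) uint64 {
+	return gasWanted * minGasPrice
+}
+
+func main() {
+	gasWanted := uint64(200000)
+	minGasPrice := uint64(25) // e.g. 25 micro-units per unit of gas
+
+	providedFee := uint64(5_000_000)
+	fmt.Println(providedFee >= requiredFee(gasWanted, minGasPrice))
+}
+```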
+
+As explained above, the `anteHandler` returns a maximum limit of `gas` the transaction can consume during execution, called `GasWanted`. The actual amount consumed in the end is denoted `GasUsed`, and we must therefore have `GasUsed <= GasWanted`. Both `GasWanted` and `GasUsed` are relayed to the underlying consensus engine when [`FinalizeBlock`](../advanced/00-baseapp.md#finalizeblock) returns.
diff --git a/copy-of-sdk-docs/learn/beginner/_category_.json b/copy-of-sdk-docs/learn/beginner/_category_.json
new file mode 100644
index 00000000..d09097fa
--- /dev/null
+++ b/copy-of-sdk-docs/learn/beginner/_category_.json
@@ -0,0 +1,5 @@
+{
+ "label": "Beginner",
+ "position": 2,
+ "link": null
+}
\ No newline at end of file
diff --git a/copy-of-sdk-docs/learn/intro/00-overview.md b/copy-of-sdk-docs/learn/intro/00-overview.md
new file mode 100644
index 00000000..f1e896f3
--- /dev/null
+++ b/copy-of-sdk-docs/learn/intro/00-overview.md
@@ -0,0 +1,43 @@
+---
+sidebar_position: 1
+---
+
+# What is the Cosmos SDK
+
+The [Cosmos SDK](https://github.com/cosmos/cosmos-sdk) is an open-source toolkit for building multi-asset public Proof-of-Stake (PoS) blockchains, like the Cosmos Hub, as well as permissioned Proof-of-Authority (PoA) blockchains. Blockchains built with the Cosmos SDK are generally referred to as **application-specific blockchains**.
+
+The goal of the Cosmos SDK is to allow developers to easily create custom blockchains from scratch that can natively interoperate with other blockchains.
+We further this modular approach by allowing developers to plug and play with different consensus engines, ranging from [CometBFT](https://github.com/cometbft/cometbft) to [Rollkit](https://rollkit.dev/).
+
+SDK-based blockchains can choose to use predefined modules or to build their own. This means developers can build a blockchain tailored to their specific use case, without having to worry about the low-level details of building a blockchain from scratch. Predefined modules include staking, governance, and token issuance, among others.
+
+What's more, the Cosmos SDK is a capabilities-based system that allows developers to better reason about the security of interactions between modules. For a deeper look at capabilities, jump to [Object-Capability Model](../advanced/10-ocap.md).
+
+One way to think about this is to imagine the SDK as a Lego kit: you can build the basic house from the instructions, or you can modify your house and add more floors, doors, and windows. The choice is yours.
+
+## What are Application-Specific Blockchains
+
+One development paradigm in the blockchain world today is that of virtual-machine blockchains like Ethereum, where development generally revolves around building decentralized applications on top of an existing blockchain as a set of smart contracts. While smart contracts can be very good for some use cases like single-use applications (e.g. ICOs), they often fall short for building complex decentralized platforms. More generally, smart contracts can be limiting in terms of flexibility, sovereignty and performance.
+
+Application-specific blockchains offer a radically different development paradigm than virtual-machine blockchains. An application-specific blockchain is a blockchain customized to operate a single application: developers have all the freedom to make the design decisions required for the application to run optimally. They can also provide better sovereignty, security and performance.
+
+Learn more about [application-specific blockchains](./01-why-app-specific.md).
+
+## What is Modularity
+
+Today there is a lot of discussion around modularity, often framed as monolithic versus modular designs. The Cosmos SDK was built with modularity in mind from the start. Modularity comes from splitting a blockchain into customizable layers of execution, consensus, settlement and data availability, which is exactly what the Cosmos SDK enables. This means developers can plug and play, customizing their blockchain by using different software for different layers. For example, you can build a vanilla chain using the Cosmos SDK with CometBFT: CometBFT is your consensus layer, and the chain itself is the settlement and execution layer. Another route is to use the SDK with Rollkit and Celestia as your consensus and data availability layers. The benefit of modularity is that you can customize your chain to your specific use case.
+
+## Why the Cosmos SDK
+
+The Cosmos SDK is the most advanced framework for building custom modular application-specific blockchains today. Here are a few reasons why you might want to consider building your decentralized application with the Cosmos SDK:
+
+* It allows you to plug and play and customize your consensus layer. As mentioned above, you can use Rollkit and Celestia as your consensus and data availability layers, which offers a lot of flexibility and customization.
+* The default consensus engine available within the Cosmos SDK is [CometBFT](https://github.com/cometbft/cometbft). CometBFT is the most mature BFT consensus engine in existence. It is widely used across the industry and is considered the gold standard consensus engine for building Proof-of-Stake systems.
+* The Cosmos SDK is open-source and designed to make it easy to build blockchains out of composable [modules](../../build/modules). As the ecosystem of open-source Cosmos SDK modules grows, it will become increasingly easier to build complex decentralized platforms with it.
+* The Cosmos SDK is inspired by capabilities-based security, and informed by years of wrestling with blockchain state-machines. This makes the Cosmos SDK a very secure environment to build blockchains.
+* Most importantly, the Cosmos SDK has already been used to build many application-specific blockchains that are already in production. Among others, we can cite [Cosmos Hub](https://hub.cosmos.network), [IRIS Hub](https://irisnet.org), [Binance Chain](https://docs.binance.org/), [Terra](https://terra.money/) or [Kava](https://www.kava.io/). [Many more](https://cosmos.network/ecosystem) are building on the Cosmos SDK.
+
+## Getting started with the Cosmos SDK
+
+* Learn more about the [architecture of a Cosmos SDK application](./02-sdk-app-architecture.md)
+* Learn how to build an application-specific blockchain from scratch with the [Cosmos SDK Tutorial](https://cosmos.network/docs/tutorial)
diff --git a/copy-of-sdk-docs/learn/intro/01-why-app-specific.md b/copy-of-sdk-docs/learn/intro/01-why-app-specific.md
new file mode 100644
index 00000000..df16c19a
--- /dev/null
+++ b/copy-of-sdk-docs/learn/intro/01-why-app-specific.md
@@ -0,0 +1,79 @@
+---
+sidebar_position: 1
+---
+
+# Application-Specific Blockchains
+
+:::note Synopsis
+This document explains what application-specific blockchains are, and why developers would want to build one as opposed to writing Smart Contracts.
+:::
+
+## What are application-specific blockchains
+
+Application-specific blockchains are blockchains customized to operate a single application. Instead of building a decentralized application on top of an underlying blockchain like Ethereum, developers build their own blockchain from the ground up. This means building a full-node client, a light-client, and all the necessary interfaces (CLI, REST, ...) to interact with the nodes.
+
+```text
+ ^ +-------------------------------+ ^
+ | | | | Built with Cosmos SDK
+ | | State-machine = Application | |
+ | | | v
+ | +-------------------------------+
+ | | | ^
+Blockchain node | | Consensus | |
+ | | | |
+ | +-------------------------------+ | CometBFT
+ | | | |
+ | | Networking | |
+ | | | |
+ v +-------------------------------+ v
+```
+
+## What are the shortcomings of Smart Contracts
+
+Virtual-machine blockchains like Ethereum addressed the demand for more programmability back in 2014. At the time, the options available for building decentralized applications were quite limited. Most developers would build on top of the complex and limited Bitcoin scripting language, or fork the Bitcoin codebase which was hard to work with and customize.
+
+Virtual-machine blockchains came in with a new value proposition. Their state-machine incorporates a virtual-machine that is able to interpret Turing-complete programs called Smart Contracts. These Smart Contracts are very good for use cases like one-time events (e.g. ICOs), but they can fall short for building complex decentralized platforms. Here is why:
+
+* Smart Contracts are generally developed with specific programming languages that can be interpreted by the underlying virtual-machine. These programming languages are often immature and inherently limited by the constraints of the virtual-machine itself. For example, the Ethereum Virtual Machine does not allow developers to implement automatic execution of code. Developers are also limited to the account-based system of the EVM, and they can only choose from a limited set of functions for their cryptographic operations. These are examples, but they hint at the lack of **flexibility** that a smart contract environment often entails.
+* Smart Contracts are all run by the same virtual machine. This means that they compete for resources, which can severely restrain **performance**. And even if the state-machine were to be split in multiple subsets (e.g. via sharding), Smart Contracts would still need to be interpreted by a virtual machine, which would limit performance compared to a native application implemented at state-machine level (our benchmarks show an improvement on the order of 10x in performance when the virtual-machine is removed).
+* Another issue with the fact that Smart Contracts share the same underlying environment is the resulting limitation in **sovereignty**. A decentralized application is an ecosystem that involves multiple players. If the application is built on a general-purpose virtual-machine blockchain, stakeholders have very limited sovereignty over their application, and are ultimately superseded by the governance of the underlying blockchain. If there is a bug in the application, very little can be done about it.
+
+Application-Specific Blockchains are designed to address these shortcomings.
+
+## Application-Specific Blockchains Benefits
+
+### Flexibility
+
+Application-specific blockchains give maximum flexibility to developers:
+
+* In Cosmos blockchains, the state-machine is typically connected to the underlying consensus engine via an interface called the [ABCI](https://docs.cometbft.com/v0.37/spec/abci/). This interface can be wrapped in any programming language, meaning developers can build their state-machine in the programming language of their choice.
+
+* Developers can choose among multiple frameworks to build their state-machine. The most widely used today is the Cosmos SDK, but others exist (e.g. [Lotion](https://github.com/nomic-io/lotion), [Weave](https://github.com/iov-one/weave), ...). Typically the choice will be made based on the programming language they want to use (Cosmos SDK and Weave are in Golang, Lotion is in Javascript, ...).
+* The ABCI also allows developers to swap the consensus engine of their application-specific blockchain. Today, only CometBFT is production-ready, but in the future other consensus engines are expected to emerge.
+* Even when they settle for a framework and consensus engine, developers still have the freedom to tweak them if they don't perfectly match their requirements in their pristine forms.
+* Developers are free to explore the full spectrum of tradeoffs (e.g. number of validators vs transaction throughput, safety vs availability in asynchrony, ...) and design choices (DB or IAVL tree for storage, UTXO or account model, ...).
+* Developers can implement automatic execution of code. In the Cosmos SDK, logic can be automatically triggered at the beginning and the end of each block. They are also free to choose the cryptographic library used in their application, as opposed to being constrained by what is made available by the underlying environment in the case of virtual-machine blockchains.
+
+The list above contains a few examples that show how much flexibility application-specific blockchains give to developers. The goal of Cosmos and the Cosmos SDK is to make developer tooling as generic and composable as possible, so that each part of the stack can be forked, tweaked and improved without losing compatibility. As the community grows, more alternatives for each of the core building blocks will emerge, giving more options to developers.
+
+### Performance
+
+Decentralized applications built with Smart Contracts are inherently capped in performance by the underlying environment. For a decentralized application to optimize performance, it needs to be built as an application-specific blockchain. Here are some of the benefits an application-specific blockchain brings in terms of performance:
+
+* Developers of application-specific blockchains can choose to operate with a novel consensus engine such as CometBFT. Compared to Proof-of-Work (used by most virtual-machine blockchains today), it offers significant gains in throughput.
+* An application-specific blockchain only operates a single application, so that the application does not compete with others for computation and storage. This is the opposite of most non-sharded virtual-machine blockchains today, where smart contracts all compete for computation and storage.
+* Even if a virtual-machine blockchain offered application-based sharding coupled with an efficient consensus algorithm, performance would still be limited by the virtual-machine itself. The real throughput bottleneck is the state-machine, and requiring transactions to be interpreted by a virtual-machine significantly increases the computational complexity of processing them.
+
+### Security
+
+Security is hard to quantify, and greatly varies from platform to platform. That said, here are some important benefits an application-specific blockchain can bring in terms of security:
+
+* Developers can choose proven programming languages like Go when building their application-specific blockchains, as opposed to smart contract programming languages that are often more immature.
+* Developers are not constrained by the cryptographic functions made available by the underlying virtual-machines. They can use their own custom cryptography, and rely on well-audited crypto libraries.
+* Developers do not have to worry about potential bugs or exploitable mechanisms in the underlying virtual-machine, making it easier to reason about the security of the application.
+
+### Sovereignty
+
+One of the major benefits of application-specific blockchains is sovereignty. A decentralized application is an ecosystem that involves many actors: users, developers, third-party services, and more. When developers build on a virtual-machine blockchain where many decentralized applications coexist, the community of the application is different than the community of the underlying blockchain, and the latter supersedes the former in the governance process. If there is a bug or if a new feature is needed, stakeholders of the application have very little leeway to upgrade the code. If the community of the underlying blockchain refuses to act, nothing can happen.
+
+The fundamental issue here is that the governance of the application and the governance of the network are not aligned. This issue is solved by application-specific blockchains. Because application-specific blockchains specialize to operate a single application, stakeholders of the application have full control over the entire chain. This ensures that the community will not be stuck if a bug is discovered, and that it has the freedom to choose how it is going to evolve.
diff --git a/copy-of-sdk-docs/learn/intro/02-sdk-app-architecture.md b/copy-of-sdk-docs/learn/intro/02-sdk-app-architecture.md
new file mode 100644
index 00000000..532c2743
--- /dev/null
+++ b/copy-of-sdk-docs/learn/intro/02-sdk-app-architecture.md
@@ -0,0 +1,93 @@
+---
+sidebar_position: 1
+---
+
+# Blockchain Architecture
+
+## State machine
+
+At its core, a blockchain is a [replicated deterministic state machine](https://en.wikipedia.org/wiki/State_machine_replication).
+
+A state machine is a computer science concept whereby a machine can have multiple states, but only one at any given time. There is a `state`, which describes the current state of the system, and `transactions` that trigger state transitions.
+
+Given a state S and a transaction T, the state machine will return a new state S'.
+
+```text
++--------+ +--------+
+| | | |
+| S +---------------->+ S' |
+| | apply(T) | |
++--------+ +--------+
+```
+
+In practice, the transactions are bundled in blocks to make the process more efficient. Given a state S and a block of transactions B, the state machine will return a new state S'.
+
+```text
++--------+ +--------+
+| | | |
+| S +----------------------------> | S' |
+| | For each T in B: apply(T) | |
++--------+ +--------+
+```
+
+In a blockchain context, the state machine is deterministic. This means that if a node is started at a given state and replays the same sequence of transactions, it will always end up with the same final state.
+
+The Cosmos SDK gives developers maximum flexibility to define the state of their application, transaction types and state transition functions. The process of building state-machines with the Cosmos SDK will be described more in depth in the following sections. But first, let us see how the state-machine is replicated using **CometBFT**.
+
+## CometBFT
+
+Thanks to the Cosmos SDK, developers just have to define the state machine, and [*CometBFT*](https://docs.cometbft.com/v0.37/introduction/what-is-cometbft) will handle replication over the network for them.
+
+```text
+ ^ +-------------------------------+ ^
+ | | | | Built with Cosmos SDK
+ | | State-machine = Application | |
+ | | | v
+ | +-------------------------------+
+ | | | ^
+Blockchain node | | Consensus | |
+ | | | |
+ | +-------------------------------+ | CometBFT
+ | | | |
+ | | Networking | |
+ | | | |
+ v +-------------------------------+ v
+```
+
+[CometBFT](https://docs.cometbft.com/v0.37/introduction/what-is-cometbft) is an application-agnostic engine that is responsible for handling the *networking* and *consensus* layers of a blockchain. In practice, this means that CometBFT is responsible for propagating and ordering transaction bytes. CometBFT relies on an eponymous Byzantine-Fault-Tolerant (BFT) algorithm to reach consensus on the order of transactions.
+
+The CometBFT [consensus algorithm](https://docs.cometbft.com/v0.37/introduction/what-is-cometbft#consensus-overview) works with a set of special nodes called *Validators*. Validators are responsible for adding blocks of transactions to the blockchain. At any given block, there is a validator set V. A validator in V is chosen by the algorithm to be the proposer of the next block. This block is considered valid if more than two thirds of V signed a `prevote` and a `precommit` on it, and if all the transactions that it contains are valid. The validator set can be changed by rules written in the state-machine.
+
+## ABCI
+
+CometBFT passes transactions to the application through an interface called the [ABCI](https://docs.cometbft.com/v0.37/spec/abci/), which the application must implement.
+
+```text
+ +---------------------+
+ | |
+ | Application |
+ | |
+ +--------+---+--------+
+ ^ |
+ | | ABCI
+ | v
+ +--------+---+--------+
+ | |
+ | |
+ | CometBFT |
+ | |
+ | |
+ +---------------------+
+```
+
+Note that **CometBFT only handles transaction bytes**. It has no knowledge of what these bytes mean. All CometBFT does is order these transaction bytes deterministically. CometBFT passes the bytes to the application via the ABCI, and expects a return code to inform it if the messages contained in the transactions were successfully processed or not.
+
+Here are the most important messages of the ABCI:
+
+* `CheckTx`: When a transaction is received by CometBFT, it is passed to the application to check if a few basic requirements are met. `CheckTx` is used to protect the mempool of full-nodes against spam transactions. A special handler called the [`AnteHandler`](../beginner/04-gas-fees.md#antehandler) is used to execute a series of validation steps such as checking for sufficient fees and validating the signatures. If the checks are valid, the transaction is added to the [mempool](https://docs.cometbft.com/v0.37/spec/p2p/legacy-docs/messages/mempool) and relayed to peer nodes. Note that transactions are not processed (i.e. no modification of the state occurs) with `CheckTx` since they have not been included in a block yet.
+* `DeliverTx`: When a [valid block](https://docs.cometbft.com/v0.37/spec/core/data_structures#block) is received by CometBFT, each transaction in the block is passed to the application via `DeliverTx` in order to be processed. It is during this stage that the state transitions occur. The `AnteHandler` executes again, along with the actual [`Msg` service](../../build/building-modules/03-msg-services.md) RPC for each message in the transaction.
+* `BeginBlock`/`EndBlock`: These messages are executed at the beginning and the end of each block, whether the block contains transactions or not. They are useful for triggering the automatic execution of logic. Proceed with caution though, as computationally expensive loops could slow down your blockchain, or even freeze it if the loop is infinite.
+
+Find a more detailed view of the ABCI methods from the [CometBFT docs](https://docs.cometbft.com/v0.37/spec/abci/).
+
+Any application built on CometBFT needs to implement the ABCI interface in order to communicate with the underlying local CometBFT engine. Fortunately, you do not have to implement the ABCI interface. The Cosmos SDK provides a boilerplate implementation of it in the form of [baseapp](./03-sdk-design.md#baseapp).
diff --git a/copy-of-sdk-docs/learn/intro/03-sdk-design.md b/copy-of-sdk-docs/learn/intro/03-sdk-design.md
new file mode 100644
index 00000000..6ecffbe0
--- /dev/null
+++ b/copy-of-sdk-docs/learn/intro/03-sdk-design.md
@@ -0,0 +1,64 @@
+---
+sidebar_position: 1
+---
+
+# Main Components of the Cosmos SDK
+
+The Cosmos SDK is a framework that facilitates the development of secure state-machines on top of CometBFT. At its core, the Cosmos SDK is a boilerplate implementation of the [ABCI](./02-sdk-app-architecture.md#abci) in Golang. It comes with a [`multistore`](../advanced/04-store.md#multistore) to persist data and a [`router`](../advanced/00-baseapp.md#routing) to handle transactions.
+
+Here is a simplified view of how transactions are handled by an application built on top of the Cosmos SDK when transferred from CometBFT via `DeliverTx`:
+
+1. Decode `transactions` received from the CometBFT consensus engine (remember that CometBFT only deals with `[]bytes`).
+2. Extract `messages` from `transactions` and do basic sanity checks.
+3. Route each message to the appropriate module so that it can be processed.
+4. Commit state changes.
+
+## `baseapp`
+
+`baseapp` is the boilerplate implementation of a Cosmos SDK application. It comes with an implementation of the ABCI to handle the connection with the underlying consensus engine. Typically, a Cosmos SDK application extends `baseapp` by embedding it in [`app.go`](../beginner/00-app-anatomy.md#core-application-file).
+
+Here is an example of this from `simapp`, the Cosmos SDK demonstration app:
+
+```go reference
+https://github.com/cosmos/cosmos-sdk/blob/v0.53.0/simapp/app.go#L137-L180
+```
+
+The goal of `baseapp` is to provide a secure interface between the store and the extensible state machine while defining as little about the state machine as possible (staying true to the ABCI).
+
+For more on `baseapp`, please click [here](../advanced/00-baseapp.md).
+
+## Multistore
+
+The Cosmos SDK provides a [`multistore`](../advanced/04-store.md#multistore) for persisting state. The multistore allows developers to declare any number of [`KVStores`](../advanced/04-store.md#base-layer-kvstores). These `KVStores` only accept the `[]byte` type as value and therefore any custom structure needs to be marshalled using [a codec](../advanced/05-encoding.md) before being stored.
+
+The multistore abstraction is used to divide the state into distinct compartments, each managed by its own module. For more on the multistore, click [here](../advanced/04-store.md#multistore).
+
+## Modules
+
+The power of the Cosmos SDK lies in its modularity. Cosmos SDK applications are built by aggregating a collection of interoperable modules. Each module defines a subset of the state and contains its own message/transaction processor, while the Cosmos SDK is responsible for routing each message to its respective module.
+
+Here is a simplified view of how a transaction is processed by the application of each full-node when it is received in a valid block:
+
+```mermaid
+ flowchart TD
+ A[Transaction relayed from the full-node's CometBFT engine to the node's application via DeliverTx] --> B[APPLICATION]
+ B -->|"Using baseapp's methods: Decode the Tx, extract and route the message(s)"| C[Message routed to the correct module to be processed]
+ C --> D1[AUTH MODULE]
+ C --> D2[BANK MODULE]
+ C --> D3[STAKING MODULE]
+ C --> D4[GOV MODULE]
+ D1 -->|Handle message, Update state| E["Return result to CometBFT (0=Ok, 1=Err)"]
+    D2 -->|Handle message, Update state| E
+    D3 -->|Handle message, Update state| E
+    D4 -->|Handle message, Update state| E
+```
+
+Each module can be seen as a little state-machine. Developers need to define the subset of the state handled by the module, as well as custom message types that modify the state (*Note:* `messages` are extracted from `transactions` by `baseapp`). In general, each module declares its own `KVStore` in the `multistore` to persist the subset of the state it defines. Most developers will need to access other 3rd party modules when building their own modules. Given that the Cosmos SDK is an open framework, some of the modules may be malicious, which means there is a need for security principles to reason about inter-module interactions. These principles are based on [object-capabilities](../advanced/10-ocap.md). In practice, this means that instead of having each module keep an access control list for other modules, each module implements special objects called `keepers` that can be passed to other modules to grant a pre-defined set of capabilities.
+
+Cosmos SDK modules are defined in the `x/` folder of the Cosmos SDK. Some core modules include:
+
+* `x/auth`: Used to manage accounts and signatures.
+* `x/bank`: Used to enable tokens and token transfers.
+* `x/staking` + `x/slashing`: Used to build Proof-of-Stake blockchains.
+
+In addition to the already existing modules in `x/`, which anyone can use in their app, the Cosmos SDK lets you build your own custom modules. You can check an [example of that in the tutorial](https://tutorials.cosmos.network/).
diff --git a/copy-of-sdk-docs/learn/intro/Maincomps.excalidraw b/copy-of-sdk-docs/learn/intro/Maincomps.excalidraw
new file mode 100644
index 00000000..289d1010
--- /dev/null
+++ b/copy-of-sdk-docs/learn/intro/Maincomps.excalidraw
@@ -0,0 +1,603 @@
+{
+ "type": "excalidraw",
+ "version": 2,
+ "source": "https://excalidraw.com",
+ "elements": [
+ {
+ "id": "TT806C8wYC1giNDrB3j0H",
+ "type": "rectangle",
+ "x": 392.3992464191551,
+ "y": 377.59281643418194,
+ "width": 368.5810298094963,
+ "height": 300.3445584269905,
+ "angle": 0,
+ "strokeColor": "#1e1e1e",
+ "backgroundColor": "#ffec99",
+ "fillStyle": "hachure",
+ "strokeWidth": 1,
+ "strokeStyle": "solid",
+ "roughness": 1,
+ "opacity": 100,
+ "groupIds": [],
+ "frameId": null,
+ "index": "b20",
+ "roundness": {
+ "type": 3
+ },
+ "seed": 1095376796,
+ "version": 379,
+ "versionNonce": 395388196,
+ "isDeleted": false,
+ "boundElements": null,
+ "updated": 1717946215725,
+ "link": null,
+ "locked": false
+ },
+ {
+ "id": "sTDd-IcaEk93yvorkOjjx",
+ "type": "rectangle",
+ "x": 425.6105707309967,
+ "y": 407.3907865247813,
+ "width": 291.7422935286128,
+ "height": 57.093323969660304,
+ "angle": 0,
+ "strokeColor": "#1e1e1e",
+ "backgroundColor": "#ebfbee",
+ "fillStyle": "solid",
+ "strokeWidth": 1,
+ "strokeStyle": "solid",
+ "roughness": 1,
+ "opacity": 100,
+ "groupIds": [],
+ "frameId": null,
+ "index": "b21",
+ "roundness": {
+ "type": 3
+ },
+ "seed": 534261156,
+ "version": 200,
+ "versionNonce": 320694564,
+ "isDeleted": false,
+ "boundElements": [
+ {
+ "type": "text",
+ "id": "DfQ_v0mZK9I65EtQ6glTr"
+ }
+ ],
+ "updated": 1717946141898,
+ "link": null,
+ "locked": false
+ },
+ {
+ "id": "DfQ_v0mZK9I65EtQ6glTr",
+ "type": "text",
+ "x": 540.1377462428617,
+ "y": 425.93744850961144,
+ "width": 62.68794250488281,
+ "height": 20,
+ "angle": 0,
+ "strokeColor": "#1e1e1e",
+ "backgroundColor": "#b2f2bb",
+ "fillStyle": "solid",
+ "strokeWidth": 1,
+ "strokeStyle": "solid",
+ "roughness": 1,
+ "opacity": 100,
+ "groupIds": [],
+ "frameId": null,
+ "index": "b22",
+ "roundness": null,
+ "seed": 1825368092,
+ "version": 129,
+ "versionNonce": 1358928420,
+ "isDeleted": false,
+ "boundElements": null,
+ "updated": 1717945861493,
+ "link": null,
+ "locked": false,
+ "text": "baseapp",
+ "fontSize": 16,
+ "fontFamily": 1,
+ "textAlign": "center",
+ "verticalAlign": "middle",
+ "containerId": "sTDd-IcaEk93yvorkOjjx",
+ "originalText": "baseapp",
+ "autoResize": true,
+ "lineHeight": 1.25
+ },
+ {
+ "id": "0eOjlptq2QPkgMZD4ilw_",
+ "type": "rectangle",
+ "x": 423.5441903728455,
+ "y": 483.4335837047473,
+ "width": 305.81281311550566,
+ "height": 100.72456256899451,
+ "angle": 0,
+ "strokeColor": "#1e1e1e",
+ "backgroundColor": "#e7f5ff",
+ "fillStyle": "solid",
+ "strokeWidth": 1,
+ "strokeStyle": "solid",
+ "roughness": 1,
+ "opacity": 100,
+ "groupIds": [],
+ "frameId": null,
+ "index": "b23",
+ "roundness": {
+ "type": 3
+ },
+ "seed": 774424100,
+ "version": 711,
+ "versionNonce": 1241388444,
+ "isDeleted": false,
+ "boundElements": [
+ {
+ "type": "text",
+ "id": "To8Ifauc4u3pXYXE-BuBm"
+ },
+ {
+ "id": "5U3m__cEk0384Je1xS8Lt",
+ "type": "arrow"
+ }
+ ],
+ "updated": 1717946136493,
+ "link": null,
+ "locked": false
+ },
+ {
+ "id": "To8Ifauc4u3pXYXE-BuBm",
+ "type": "text",
+ "x": 537.3546267767897,
+ "y": 488.4335837047473,
+ "width": 78.19194030761719,
+ "height": 20,
+ "angle": 0,
+ "strokeColor": "#1e1e1e",
+ "backgroundColor": "#b2f2bb",
+ "fillStyle": "solid",
+ "strokeWidth": 1,
+ "strokeStyle": "solid",
+ "roughness": 1,
+ "opacity": 100,
+ "groupIds": [],
+ "frameId": null,
+ "index": "b24",
+ "roundness": null,
+ "seed": 268281380,
+ "version": 653,
+ "versionNonce": 240902940,
+ "isDeleted": false,
+ "boundElements": null,
+ "updated": 1717946115508,
+ "link": null,
+ "locked": false,
+ "text": "multistore",
+ "fontSize": 16,
+ "fontFamily": 1,
+ "textAlign": "center",
+ "verticalAlign": "top",
+ "containerId": "0eOjlptq2QPkgMZD4ilw_",
+ "originalText": "multistore",
+ "autoResize": true,
+ "lineHeight": 1.25
+ },
+ {
+ "id": "6ZMBBGC0e67HCiZuw1ZGQ",
+ "type": "rectangle",
+ "x": 433.0074470871197,
+ "y": 611.2583420078661,
+ "width": 296.0816922807304,
+ "height": 40.43217567449267,
+ "angle": 0,
+ "strokeColor": "#1e1e1e",
+ "backgroundColor": "#ebfbee",
+ "fillStyle": "solid",
+ "strokeWidth": 1,
+ "strokeStyle": "solid",
+ "roughness": 1,
+ "opacity": 100,
+ "groupIds": [],
+ "frameId": null,
+ "index": "b25",
+ "roundness": {
+ "type": 3
+ },
+ "seed": 73209500,
+ "version": 210,
+ "versionNonce": 506281508,
+ "isDeleted": false,
+ "boundElements": [
+ {
+ "type": "text",
+ "id": "lDvSHg5T_n2nFJyxXar85"
+ },
+ {
+ "id": "5U3m__cEk0384Je1xS8Lt",
+ "type": "arrow"
+ }
+ ],
+ "updated": 1717946145151,
+ "link": null,
+ "locked": false
+ },
+ {
+ "id": "lDvSHg5T_n2nFJyxXar85",
+ "type": "text",
+ "x": 550.5683127587349,
+ "y": 621.4744298451124,
+ "width": 60.9599609375,
+ "height": 20,
+ "angle": 0,
+ "strokeColor": "#1e1e1e",
+ "backgroundColor": "#b2f2bb",
+ "fillStyle": "solid",
+ "strokeWidth": 1,
+ "strokeStyle": "solid",
+ "roughness": 1,
+ "opacity": 100,
+ "groupIds": [],
+ "frameId": null,
+ "index": "b26",
+ "roundness": null,
+ "seed": 169830436,
+ "version": 101,
+ "versionNonce": 99685404,
+ "isDeleted": false,
+ "boundElements": null,
+ "updated": 1717946143284,
+ "link": null,
+ "locked": false,
+ "text": "Modules",
+ "fontSize": 16,
+ "fontFamily": 1,
+ "textAlign": "center",
+ "verticalAlign": "middle",
+ "containerId": "6ZMBBGC0e67HCiZuw1ZGQ",
+ "originalText": "Modules",
+ "autoResize": true,
+ "lineHeight": 1.25
+ },
+ {
+ "id": "5U3m__cEk0384Je1xS8Lt",
+ "type": "arrow",
+ "x": 730.0891393678501,
+ "y": 627.8029150748303,
+ "width": 33.89886827099872,
+ "height": 77.8473208768944,
+ "angle": 0,
+ "strokeColor": "#1e1e1e",
+ "backgroundColor": "#b2f2bb",
+ "fillStyle": "solid",
+ "strokeWidth": 1,
+ "strokeStyle": "solid",
+ "roughness": 1,
+ "opacity": 100,
+ "groupIds": [],
+ "frameId": null,
+ "index": "b27",
+ "roundness": {
+ "type": 2
+ },
+ "seed": 2017356060,
+ "version": 847,
+ "versionNonce": 601341212,
+ "isDeleted": false,
+ "boundElements": null,
+ "updated": 1717946143287,
+ "link": null,
+ "locked": false,
+ "points": [
+ [
+ 0,
+ 0
+ ],
+ [
+ 33.89886827099872,
+ -59.624776904124815
+ ],
+ [
+ 0.2678641205010308,
+ -77.8473208768944
+ ]
+ ],
+ "lastCommittedPoint": null,
+ "startBinding": {
+ "elementId": "6ZMBBGC0e67HCiZuw1ZGQ",
+ "focus": 0.9211394284163724,
+ "gap": 1
+ },
+ "endBinding": {
+ "elementId": "0eOjlptq2QPkgMZD4ilw_",
+ "focus": -0.504700685555249,
+ "gap": 1
+ },
+ "startArrowhead": null,
+ "endArrowhead": "arrow"
+ },
+ {
+ "id": "ECiME4kCyLcElqpESHieN",
+ "type": "text",
+ "x": 779.3728577032684,
+ "y": 549.0028937731206,
+ "width": 230.17587280273438,
+ "height": 40,
+ "angle": 0,
+ "strokeColor": "#1e1e1e",
+ "backgroundColor": "#b2f2bb",
+ "fillStyle": "solid",
+ "strokeWidth": 1,
+ "strokeStyle": "solid",
+ "roughness": 1,
+ "opacity": 100,
+ "groupIds": [],
+ "frameId": null,
+ "index": "b28",
+ "roundness": null,
+ "seed": 1031090332,
+ "version": 173,
+ "versionNonce": 153810724,
+ "isDeleted": false,
+ "boundElements": null,
+ "updated": 1717946206425,
+ "link": null,
+ "locked": false,
+ "text": "Each KVstore \nmanaged by keeper of Module",
+ "fontSize": 16,
+ "fontFamily": 1,
+ "textAlign": "center",
+ "verticalAlign": "top",
+ "containerId": null,
+ "originalText": "Each KVstore \nmanaged by keeper of Module",
+ "autoResize": true,
+ "lineHeight": 1.25
+ },
+ {
+ "id": "9gSP2Ihxnhrj8VPzU3iMs",
+ "type": "rectangle",
+ "x": 440.01400715336973,
+ "y": 528.7255798511883,
+ "width": 82.2687246664696,
+ "height": 43.508786429962356,
+ "angle": 0,
+ "strokeColor": "#1e1e1e",
+ "backgroundColor": "#fff5f5",
+ "fillStyle": "solid",
+ "strokeWidth": 1,
+ "strokeStyle": "solid",
+ "roughness": 1,
+ "opacity": 100,
+ "groupIds": [],
+ "frameId": null,
+ "index": "b29",
+ "roundness": {
+ "type": 3
+ },
+ "seed": 862728356,
+ "version": 81,
+ "versionNonce": 2003221028,
+ "isDeleted": false,
+ "boundElements": [
+ {
+ "type": "text",
+ "id": "bo-ZnZOJ2RMYEwiQDJwhQ"
+ }
+ ],
+ "updated": 1717946171042,
+ "link": null,
+ "locked": false
+ },
+ {
+ "id": "bo-ZnZOJ2RMYEwiQDJwhQ",
+ "type": "text",
+ "x": 451.95639103201466,
+ "y": 540.4799730661695,
+ "width": 58.38395690917969,
+ "height": 20,
+ "angle": 0,
+ "strokeColor": "#1e1e1e",
+ "backgroundColor": "#fff5f5",
+ "fillStyle": "solid",
+ "strokeWidth": 1,
+ "strokeStyle": "solid",
+ "roughness": 1,
+ "opacity": 100,
+ "groupIds": [],
+ "frameId": null,
+ "index": "b29V",
+ "roundness": null,
+ "seed": 1054504484,
+ "version": 32,
+ "versionNonce": 374592932,
+ "isDeleted": false,
+ "boundElements": null,
+ "updated": 1717946171043,
+ "link": null,
+ "locked": false,
+ "text": "kvstore",
+ "fontSize": 16,
+ "fontFamily": 1,
+ "textAlign": "center",
+ "verticalAlign": "middle",
+ "containerId": "9gSP2Ihxnhrj8VPzU3iMs",
+ "originalText": "kvstore",
+ "autoResize": true,
+ "lineHeight": 1.25
+ },
+ {
+ "id": "sS09HXQCLT5o584RLcoh0",
+ "type": "rectangle",
+ "x": 535.7029587057802,
+ "y": 526.7472119897728,
+ "width": 85.49840063365426,
+ "height": 45.291996146440965,
+ "angle": 0,
+ "strokeColor": "#1e1e1e",
+ "backgroundColor": "#fff5f5",
+ "fillStyle": "solid",
+ "strokeWidth": 1,
+ "strokeStyle": "solid",
+ "roughness": 1,
+ "opacity": 100,
+ "groupIds": [],
+ "frameId": null,
+ "index": "b2A",
+ "roundness": {
+ "type": 3
+ },
+ "seed": 1969890340,
+ "version": 163,
+ "versionNonce": 795200668,
+ "isDeleted": false,
+ "boundElements": null,
+ "updated": 1717946178372,
+ "link": null,
+ "locked": false
+ },
+ {
+ "type": "rectangle",
+ "version": 243,
+ "versionNonce": 1959742876,
+ "index": "b2B",
+ "isDeleted": false,
+ "id": "dOSADw14E7lwG6QVycTWj",
+ "fillStyle": "solid",
+ "strokeWidth": 1,
+ "strokeStyle": "solid",
+ "roughness": 1,
+ "opacity": 100,
+ "angle": 0,
+ "x": 634.8832415027643,
+ "y": 525.0060952065161,
+ "strokeColor": "#1e1e1e",
+ "backgroundColor": "#fff5f5",
+ "width": 81.61054425609542,
+ "height": 44.80601409924611,
+ "seed": 964534684,
+ "groupIds": [],
+ "frameId": null,
+ "roundness": {
+ "type": 3
+ },
+ "boundElements": [],
+ "updated": 1717946186317,
+ "link": null,
+ "locked": false
+ },
+ {
+ "id": "Jn2VZB4Laog2zIHreQ13v",
+ "type": "text",
+ "x": 550.053971904952,
+ "y": 541.2988719488441,
+ "width": 58.38395690917969,
+ "height": 20,
+ "angle": 0,
+ "strokeColor": "#1e1e1e",
+ "backgroundColor": "#fff5f5",
+ "fillStyle": "solid",
+ "strokeWidth": 1,
+ "strokeStyle": "solid",
+ "roughness": 1,
+ "opacity": 100,
+ "groupIds": [],
+ "frameId": null,
+ "index": "b2C",
+ "roundness": null,
+ "seed": 268605596,
+ "version": 81,
+ "versionNonce": 271008028,
+ "isDeleted": false,
+ "boundElements": null,
+ "updated": 1717946183225,
+ "link": null,
+ "locked": false,
+ "text": "kvstore",
+ "fontSize": 16,
+ "fontFamily": 1,
+ "textAlign": "left",
+ "verticalAlign": "top",
+ "containerId": null,
+ "originalText": "kvstore",
+ "autoResize": true,
+ "lineHeight": 1.25
+ },
+ {
+ "id": "bmEWq6ldGd19BN7P3CPgk",
+ "type": "text",
+ "x": 649.2096160538688,
+ "y": 540.0169508007317,
+ "width": 58.38395690917969,
+ "height": 20,
+ "angle": 0,
+ "strokeColor": "#1e1e1e",
+ "backgroundColor": "#fff5f5",
+ "fillStyle": "solid",
+ "strokeWidth": 1,
+ "strokeStyle": "solid",
+ "roughness": 1,
+ "opacity": 100,
+ "groupIds": [],
+ "frameId": null,
+ "index": "b2D",
+ "roundness": null,
+ "seed": 1351980700,
+ "version": 78,
+ "versionNonce": 1793931548,
+ "isDeleted": false,
+ "boundElements": null,
+ "updated": 1717946190092,
+ "link": null,
+ "locked": false,
+ "text": "kvstore",
+ "fontSize": 16,
+ "fontFamily": 1,
+ "textAlign": "left",
+ "verticalAlign": "top",
+ "containerId": null,
+ "originalText": "kvstore",
+ "autoResize": true,
+ "lineHeight": 1.25
+ },
+ {
+ "id": "W3LH6VESuV13qvhxI7mcM",
+ "type": "text",
+ "x": 458.21179209642423,
+ "y": 348.25404197872706,
+ "width": 219.0238800048828,
+ "height": 20,
+ "angle": 0,
+ "strokeColor": "#1e1e1e",
+ "backgroundColor": "#fff5f5",
+ "fillStyle": "solid",
+ "strokeWidth": 1,
+ "strokeStyle": "solid",
+ "roughness": 1,
+ "opacity": 100,
+ "groupIds": [],
+ "frameId": null,
+ "index": "b2E",
+ "roundness": null,
+ "seed": 100014108,
+ "version": 34,
+ "versionNonce": 554727332,
+ "isDeleted": false,
+ "boundElements": null,
+ "updated": 1717946232701,
+ "link": null,
+ "locked": false,
+ "text": "Main components of the sdk",
+ "fontSize": 16,
+ "fontFamily": 1,
+ "textAlign": "center",
+ "verticalAlign": "top",
+ "containerId": null,
+ "originalText": "Main components of the sdk",
+ "autoResize": true,
+ "lineHeight": 1.25
+ }
+ ],
+ "appState": {
+ "gridSize": null,
+ "viewBackgroundColor": "#ffffff"
+ },
+ "files": {}
+}
\ No newline at end of file
diff --git a/copy-of-sdk-docs/learn/intro/_category_.json b/copy-of-sdk-docs/learn/intro/_category_.json
new file mode 100644
index 00000000..bb0bcd14
--- /dev/null
+++ b/copy-of-sdk-docs/learn/intro/_category_.json
@@ -0,0 +1,5 @@
+{
+ "label": "Introduction",
+ "position": 1,
+ "link": null
+}
\ No newline at end of file
diff --git a/copy-of-sdk-docs/learn/intro/main-components.png b/copy-of-sdk-docs/learn/intro/main-components.png
new file mode 100644
index 00000000..fa82eb9b
Binary files /dev/null and b/copy-of-sdk-docs/learn/intro/main-components.png differ
diff --git a/copy-of-sdk-docs/learn/learn.md b/copy-of-sdk-docs/learn/learn.md
new file mode 100644
index 00000000..ff14d726
--- /dev/null
+++ b/copy-of-sdk-docs/learn/learn.md
@@ -0,0 +1,11 @@
+---
+sidebar_position: 0
+---
+# Learn
+
+* [Introduction](./intro/00-overview.md) - Dive into the fundamentals of the Cosmos SDK with an insightful introduction,
+laying the groundwork for understanding blockchain development. In this section we provide a high-level overview of the SDK, then dive deeper into core concepts such as application-specific blockchains and blockchain architecture, and finally begin to explore the main components of the SDK.
+* [Beginner](./beginner/00-app-anatomy.md) - Start your journey with beginner-friendly resources in the Cosmos SDK's "Learn"
+section, providing a gentle entry point for newcomers to blockchain development. Here we go into a little more detail, covering the Anatomy of a Cosmos SDK Application, Transaction Lifecycles, Accounts, and lastly Gas and Fees.
+* [Advanced](./advanced/00-baseapp.md) - Level up your Cosmos SDK expertise with advanced topics, tailored for experienced
+developers diving into intricate blockchain application development. We cover the Cosmos SDK at a lower level as we dive into the core of the SDK with BaseApp, Transactions, Context, Node Client (Daemon), Store, Encoding, gRPC, REST, and CometBFT Endpoints, CLI, Events, Telemetry, Object-Capability Model, RunTx recovery middleware, Cosmos Blockchain Simulator, Protobuf Documentation, In-Place Store Migrations, Configuration, and AutoCLI.
diff --git a/copy-of-sdk-docs/tutorials/_category_.json b/copy-of-sdk-docs/tutorials/_category_.json
new file mode 100644
index 00000000..f27bca92
--- /dev/null
+++ b/copy-of-sdk-docs/tutorials/_category_.json
@@ -0,0 +1,5 @@
+{
+ "label": "Advanced Tutorials",
+ "position": 2,
+ "link": null
+}
\ No newline at end of file
diff --git a/copy-of-sdk-docs/tutorials/transactions/00-building-a-transaction.md b/copy-of-sdk-docs/tutorials/transactions/00-building-a-transaction.md
new file mode 100644
index 00000000..3751a2c2
--- /dev/null
+++ b/copy-of-sdk-docs/tutorials/transactions/00-building-a-transaction.md
@@ -0,0 +1,190 @@
+# Building a Transaction
+
+These are the steps to build, sign and broadcast a transaction using v2 semantics.
+
+1. Correctly set up imports
+
+```go
+import (
+ "context"
+ "fmt"
+ "log"
+
+ "google.golang.org/grpc"
+ "google.golang.org/grpc/credentials/insecure"
+
+ apisigning "cosmossdk.io/api/cosmos/tx/signing/v1beta1"
+ "cosmossdk.io/client/v2/broadcast/comet"
+ "cosmossdk.io/client/v2/tx"
+ "cosmossdk.io/core/transaction"
+ "cosmossdk.io/math"
+ banktypes "cosmossdk.io/x/bank/types"
+ codectypes "github.com/cosmos/cosmos-sdk/codec/types"
+ cryptocodec "github.com/cosmos/cosmos-sdk/crypto/codec"
+ "github.com/cosmos/cosmos-sdk/crypto/keyring"
+ authtypes "github.com/cosmos/cosmos-sdk/x/auth/types"
+
+ "github.com/cosmos/cosmos-sdk/codec"
+ addrcodec "github.com/cosmos/cosmos-sdk/codec/address"
+ sdk "github.com/cosmos/cosmos-sdk/types"
+)
+
+```
+
+2. Create a gRPC connection
+
+```go
+clientConn, err := grpc.NewClient("127.0.0.1:9090", grpc.WithTransportCredentials(insecure.NewCredentials()))
+if err != nil {
+ log.Fatal(err)
+}
+```
+
+3. Set up the codec and interface registry
+
+```go
+ // Setup interface registry and register necessary interfaces
+ interfaceRegistry := codectypes.NewInterfaceRegistry()
+ banktypes.RegisterInterfaces(interfaceRegistry)
+ authtypes.RegisterInterfaces(interfaceRegistry)
+ cryptocodec.RegisterInterfaces(interfaceRegistry)
+
+ // Create a ProtoCodec for encoding/decoding
+ protoCodec := codec.NewProtoCodec(interfaceRegistry)
+
+```
+
+4. Initialize keyring
+
+```go
+
+    // `home` is the path to the node's home directory (defined by your application)
+ ckr, err := keyring.New("autoclikeyring", "test", home, nil, protoCodec)
+ if err != nil {
+ log.Fatal("error creating keyring", err)
+ }
+ kr, err := keyring.NewAutoCLIKeyring(ckr, addrcodec.NewBech32Codec("cosmos"))
+ if err != nil {
+ log.Fatal("error creating auto cli keyring", err)
+ }
+
+
+```
+
+5. Set up transaction parameters
+
+```go
+
+ // Setup transaction parameters
+ txParams := tx.TxParameters{
+ ChainID: "simapp-v2-chain",
+ SignMode: apisigning.SignMode_SIGN_MODE_DIRECT,
+ AccountConfig: tx.AccountConfig{
+ FromAddress: "cosmos1t0fmn0lyp2v99ga55mm37mpnqrlnc4xcs2hhhy",
+ FromName: "alice",
+ },
+ }
+
+ // Configure gas settings
+ gasConfig, err := tx.NewGasConfig(100, 100, "0stake")
+ if err != nil {
+ log.Fatal("error creating gas config: ", err)
+ }
+ txParams.GasConfig = gasConfig
+
+ // Create auth query client
+ authClient := authtypes.NewQueryClient(clientConn)
+
+ // Retrieve account information for the sender
+ fromAccount, err := getAccount("cosmos1t0fmn0lyp2v99ga55mm37mpnqrlnc4xcs2hhhy", authClient, protoCodec)
+ if err != nil {
+ log.Fatal("error getting from account: ", err)
+ }
+
+ // Update txParams with the correct account number and sequence
+ txParams.AccountConfig.AccountNumber = fromAccount.GetAccountNumber()
+ txParams.AccountConfig.Sequence = fromAccount.GetSequence()
+
+ // Retrieve account information for the recipient
+ toAccount, err := getAccount("cosmos1e2wanzh89mlwct7cs7eumxf7mrh5m3ykpsh66m", authClient, protoCodec)
+ if err != nil {
+ log.Fatal("error getting to account: ", err)
+ }
+
+ // Configure transaction settings
+    txConf, err := tx.NewTxConfig(tx.ConfigOptions{
+        AddressCodec:          addrcodec.NewBech32Codec("cosmos"),
+        Cdc:                   protoCodec,
+        ValidatorAddressCodec: addrcodec.NewBech32Codec("cosmosval"),
+        EnabledSignModes:      []apisigning.SignMode{apisigning.SignMode_SIGN_MODE_DIRECT},
+    })
+    if err != nil {
+        log.Fatal("error creating tx config: ", err)
+    }
+```
+
+6. Build the transaction
+
+```go
+// Create a transaction factory
+ f, err := tx.NewFactory(kr, codec.NewProtoCodec(codectypes.NewInterfaceRegistry()), nil, txConf, addrcodec.NewBech32Codec("cosmos"), clientConn, txParams)
+ if err != nil {
+ log.Fatal("error creating factory", err)
+ }
+
+ // Define the transaction message
+ msgs := []transaction.Msg{
+ &banktypes.MsgSend{
+ FromAddress: fromAccount.GetAddress().String(),
+ ToAddress: toAccount.GetAddress().String(),
+ Amount: sdk.Coins{
+ sdk.NewCoin("stake", math.NewInt(1000000)),
+ },
+ },
+ }
+
+ // Build and sign the transaction
+ tx, err := f.BuildsSignedTx(context.Background(), msgs...)
+ if err != nil {
+ log.Fatal("error building signed tx", err)
+ }
+
+
+```
+
+7. Broadcast the transaction
+
+```go
+// Create a broadcaster for the transaction
+ c, err := comet.NewCometBFTBroadcaster("http://127.0.0.1:26657", comet.BroadcastSync, protoCodec)
+ if err != nil {
+ log.Fatal("error creating comet broadcaster", err)
+ }
+
+ // Broadcast the transaction
+ res, err := c.Broadcast(context.Background(), tx.Bytes())
+ if err != nil {
+ log.Fatal("error broadcasting tx", err)
+ }
+
+```
+
+8. Helpers
+
+```go
+// getAccount retrieves account information using the provided address
+func getAccount(address string, authClient authtypes.QueryClient, codec codec.Codec) (sdk.AccountI, error) {
+ // Query account info
+ accountQuery, err := authClient.Account(context.Background(), &authtypes.QueryAccountRequest{
+        Address: address,
+ })
+ if err != nil {
+ return nil, fmt.Errorf("error getting account: %w", err)
+ }
+
+ // Unpack the account information
+ var account sdk.AccountI
+ err = codec.InterfaceRegistry().UnpackAny(accountQuery.Account, &account)
+ if err != nil {
+ return nil, fmt.Errorf("error unpacking account: %w", err)
+ }
+
+ return account, nil
+}
+```
\ No newline at end of file
diff --git a/copy-of-sdk-docs/tutorials/transactions/_category_.json b/copy-of-sdk-docs/tutorials/transactions/_category_.json
new file mode 100644
index 00000000..5b0cdfc1
--- /dev/null
+++ b/copy-of-sdk-docs/tutorials/transactions/_category_.json
@@ -0,0 +1,5 @@
+{
+ "label": "Transaction Tutorials",
+ "position": 2,
+ "link": null
+}
\ No newline at end of file
diff --git a/copy-of-sdk-docs/tutorials/tutorials.md b/copy-of-sdk-docs/tutorials/tutorials.md
new file mode 100644
index 00000000..e6828c9f
--- /dev/null
+++ b/copy-of-sdk-docs/tutorials/tutorials.md
@@ -0,0 +1,12 @@
+---
+sidebar_position: 0
+---
+# Tutorials
+
+## Advanced Tutorials
+
+This section provides a concise overview of tutorials focused on implementing vote extensions in the Cosmos SDK. Vote extensions are a powerful feature for enhancing the security and fairness of blockchain applications, particularly in scenarios like implementing oracles and mitigating auction front-running.
+
+* **Implementing Oracle with Vote Extensions** - This tutorial details how to use vote extensions for the implementation of a secure and reliable oracle within a blockchain application. It demonstrates the use of vote extensions to securely include oracle data submissions in blocks, ensuring the data's integrity and reliability for the blockchain.
+
+* **Mitigating Auction Front-Running with Vote Extensions** - Explore how to prevent auction front-running using vote extensions. This tutorial outlines the creation of a module aimed at mitigating front-running in nameservice auctions, emphasising the `ExtendVote`, `PrepareProposal`, and `ProcessProposal` functions to facilitate a fair auction process.
\ No newline at end of file
diff --git a/copy-of-sdk-docs/tutorials/vote-extensions/_category_.json b/copy-of-sdk-docs/tutorials/vote-extensions/_category_.json
new file mode 100644
index 00000000..a2aecebd
--- /dev/null
+++ b/copy-of-sdk-docs/tutorials/vote-extensions/_category_.json
@@ -0,0 +1,5 @@
+{
+ "label": "Vote Extensions Tutorials",
+ "position": 1,
+ "link": null
+}
\ No newline at end of file
diff --git a/copy-of-sdk-docs/tutorials/vote-extensions/auction-frontrunning/00-getting-started.md b/copy-of-sdk-docs/tutorials/vote-extensions/auction-frontrunning/00-getting-started.md
new file mode 100644
index 00000000..a68a6e15
--- /dev/null
+++ b/copy-of-sdk-docs/tutorials/vote-extensions/auction-frontrunning/00-getting-started.md
@@ -0,0 +1,40 @@
+# Getting Started
+
+## Table of Contents
+
+- [Getting Started](#overview-of-the-project)
+- [Understanding Front-Running](./01-understanding-frontrunning.md)
+- [Mitigating Front-running with Vote Extensions](./02-mitigating-front-running-with-vote-extensions.md)
+- [Demo of Mitigating Front-Running](./03-demo-of-mitigating-front-running.md)
+
+## Getting Started
+
+### Overview of the Project
+
+This tutorial outlines the development of a module designed to mitigate front-running in nameservice auctions. The following functions are central to this module:
+
+* `ExtendVote`: Gathers bids from the mempool and includes them in the vote extension to ensure a fair and transparent auction process.
+* `PrepareProposal`: Processes the vote extensions from the previous block, creating a special transaction that encapsulates bids to be included in the current proposal.
+* `ProcessProposal`: Validates that the first transaction in the proposal is the special transaction containing the vote extensions and ensures the integrity of the bids.
+
+In this advanced tutorial, we will be working with an example application that facilitates the auctioning of nameservices. To learn what front-running and nameservices are, see [Understanding Front-Running](./01-understanding-frontrunning.md). This application provides a practical use case to explore the prevention of auction front-running, also known as "bid sniping", where a validator takes advantage of seeing a bid in the mempool to place their own higher bid before the original bid is processed.
+
+The tutorial will guide you through using the Cosmos SDK to mitigate front-running using vote extensions. The module will be built on top of the base blockchain provided in the `tutorials/base` directory and will use the `auction` module as a foundation. By the end of this tutorial, you will have a better understanding of how to prevent front-running in blockchain auctions, specifically in the context of nameservice auctioning.
+
+## What are Vote extensions?
+
+Vote extensions are arbitrary data that can be inserted into a block. This feature is part of ABCI 2.0, which is available for use in the SDK v0.50 release and is part of the CometBFT v0.38 release.
+
+More information about vote extensions can be found [here](https://docs.cosmos.network/main/build/abci/vote-extensions).
+
+## Requirements and Setup
+
+Before diving into the advanced tutorial on auction front-running simulation, ensure you meet the following requirements:
+
+* [Go 1.21.5 or later](https://golang.org/doc/install) installed
+* Familiarity with the concepts of front-running and MEV, as detailed in [Understanding Front-Running](./01-understanding-frontrunning.md)
+* Understanding of Vote Extensions as described [here](https://docs.cosmos.network/main/build/abci/vote-extensions)
+
+You will also need a foundational blockchain to build upon, coupled with your own module. The `tutorials/base` directory has the necessary blockchain code to start your custom project with the Cosmos SDK. For the module, you can use the `auction` module provided in the `tutorials/auction/x/auction` directory as a reference, but please be aware that all of the code needed to implement vote extensions is already implemented in this module.
+
+This will set up a strong base for your blockchain, enabling the integration of advanced features such as auction front-running simulation.
diff --git a/copy-of-sdk-docs/tutorials/vote-extensions/auction-frontrunning/01-understanding-frontrunning.md b/copy-of-sdk-docs/tutorials/vote-extensions/auction-frontrunning/01-understanding-frontrunning.md
new file mode 100644
index 00000000..31602b0e
--- /dev/null
+++ b/copy-of-sdk-docs/tutorials/vote-extensions/auction-frontrunning/01-understanding-frontrunning.md
@@ -0,0 +1,41 @@
+# Understanding Front-Running and MEV
+
+## Introduction
+
+Blockchain technology is vulnerable to practices that can affect the fairness and security of the network. Two such practices are front-running and Maximal Extractable Value (MEV), which are important for blockchain participants to understand.
+
+## What is Front-Running?
+
+Front-running is when someone, such as a validator, uses their ability to see pending transactions to execute their own transactions first, benefiting from the knowledge of upcoming transactions. In nameservice auctions, a front-runner might place a higher bid before the original bid is confirmed, unfairly winning the auction.
+
+## Nameservices and Nameservice Auctions
+
+Nameservices are human-readable identifiers on a blockchain, akin to internet domain names, that correspond to specific addresses or resources. They simplify interactions with typically long and complex blockchain addresses, allowing users to have a memorable and unique identifier for their blockchain address or smart contract.
+
+Nameservice auctions are the process by which these identifiers are bid on and acquired. To combat front-running—where someone might use knowledge of pending bids to place a higher bid first—mechanisms such as commit-reveal schemes, auction extensions, and fair sequencing are implemented. These strategies ensure a transparent and fair bidding process, reducing the potential for Maximal Extractable Value (MEV) exploitation.
+
+## What is Maximal Extractable Value (MEV)?
+
+MEV is the highest value that can be extracted by manipulating the order of transactions within a block, beyond the standard block rewards and fees. This has become more prominent with the growth of decentralised finance (DeFi), where transaction order can greatly affect profits.
+
+## Implications of MEV
+
+MEV can affect:
+
+- **Network Security**: Potential centralisation, as those with more computational power might dominate the process, increasing the risk of attacks.
+- **Market Fairness**: An uneven playing field where only a few can gain at the expense of the majority.
+- **User Experience**: Higher fees and network congestion due to the competition for MEV.
+
+## Mitigating MEV and Front-Running
+
+Several solutions are being developed to mitigate MEV and front-running, including:
+
+- **Time-delayed Transactions**: Random delays to make transaction timing unpredictable.
+- **Private Transaction Pools**: Concealing transactions until they are mined.
+- **Fair Sequencing Services**: Processing transactions in the order they are received.
+
+For this tutorial, we will be exploring the last solution, fair sequencing services, in the context of nameservice auctions.
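+
+As a toy illustration of the fair-sequencing idea, the sketch below orders pending transactions strictly by arrival time, which removes the proposer's freedom to reorder them for profit. The `pendingTx` type and the arrival timestamps are assumptions made for this example; a real fair sequencing service operates at the mempool or consensus layer:
+
+```go
+package main
+
+import (
+	"fmt"
+	"sort"
+	"time"
+)
+
+// pendingTx is a toy stand-in for a mempool transaction tagged with its arrival time.
+type pendingTx struct {
+	id       string
+	received time.Time
+}
+
+// fairSequence orders transactions strictly by arrival time.
+func fairSequence(txs []pendingTx) []pendingTx {
+	sort.SliceStable(txs, func(i, j int) bool {
+		return txs[i].received.Before(txs[j].received)
+	})
+	return txs
+}
+
+func main() {
+	base := time.Now()
+	txs := []pendingTx{
+		{id: "frontrun-bid", received: base.Add(2 * time.Second)}, // submitted later
+		{id: "original-bid", received: base.Add(1 * time.Second)}, // submitted first
+	}
+	// The original bid is processed first regardless of the order bids appear in the pool.
+	for _, tx := range fairSequence(txs) {
+		fmt.Println(tx.id)
+	}
+}
+```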
+
+## Conclusion
+
+MEV and front-running are challenges to blockchain integrity and fairness. Ongoing innovation and implementation of mitigation strategies are crucial for the ecosystem's health and success.
diff --git a/copy-of-sdk-docs/tutorials/vote-extensions/auction-frontrunning/02-mitigating-front-running-with-vote-extensions.md b/copy-of-sdk-docs/tutorials/vote-extensions/auction-frontrunning/02-mitigating-front-running-with-vote-extensions.md
new file mode 100644
index 00000000..a3d7549e
--- /dev/null
+++ b/copy-of-sdk-docs/tutorials/vote-extensions/auction-frontrunning/02-mitigating-front-running-with-vote-extensions.md
@@ -0,0 +1,331 @@
+# Mitigating Front-running with Vote Extensions
+
+## Table of Contents
+
+* [Prerequisites](#prerequisites)
+* [Implementing Structs for Vote Extensions](#implementing-structs-for-vote-extensions)
+* [Implementing Handlers and Configuring Handlers](#implementing-handlers-and-configuring-handlers)
+
+## Prerequisites
+
+Before implementing vote extensions to mitigate front-running, ensure you have a module ready to implement the vote extensions with. If you need to create or reference a similar module, see `x/auction` for guidance.
+
+In this section, we will discuss the steps to mitigate front-running using vote extensions. We will introduce new types within the `abci/types.go` file. These types will be used to handle the process of preparing proposals, processing proposals, and handling vote extensions.
+
+### Implementing Structs for Vote Extensions
+
+First, copy the following structs into `abci/types.go`. Each of these structs serves a specific purpose in the process of mitigating front-running using vote extensions:
+
+```go
+package abci
+
+import (
+    // import the necessary packages
+)
+
+type PrepareProposalHandler struct {
+ logger log.Logger
+ txConfig client.TxConfig
+ cdc codec.Codec
+ mempool *mempool.ThresholdMempool
+ txProvider provider.TxProvider
+ keyname string
+ runProvider bool
+}
+```
+
+The `PrepareProposalHandler` struct is used to handle the preparation of a proposal in the consensus process. It contains several fields:
+
+* `logger`: logs information and errors
+* `txConfig`: the transaction configuration
+* `cdc`: the codec for encoding and decoding transactions
+* `mempool`: references the set of unconfirmed transactions
+* `txProvider`: builds the proposal with transactions
+* `keyname`: the name of the key used for signing transactions
+* `runProvider`: a boolean flag indicating whether the provider should be run to build the proposal
+
+```go
+type ProcessProposalHandler struct {
+ TxConfig client.TxConfig
+ Codec codec.Codec
+ Logger log.Logger
+}
+```
+
+After the proposal has been prepared and vote extensions have been included, the `ProcessProposalHandler` is used to process the proposal. This includes validating the proposal and the included vote extensions. The `ProcessProposalHandler` allows you to access the transaction configuration and codec, which are necessary for processing the vote extensions.
+
+```go
+type VoteExtHandler struct {
+ logger log.Logger
+ currentBlock int64
+ mempool *mempool.ThresholdMempool
+ cdc codec.Codec
+}
+```
+
+This struct is used to handle vote extensions. It contains a logger for logging events, the current block number, a mempool for storing transactions, and a codec for encoding and decoding. Vote extensions are a key part of the process to mitigate front-running, as they allow for additional information to be included with each vote.
+
+```go
+type InjectedVoteExt struct {
+ VoteExtSigner []byte
+ Bids [][]byte
+}
+
+type InjectedVotes struct {
+ Votes []InjectedVoteExt
+}
+```
+
+These structs are used to handle injected vote extensions. They include the signer of the vote extension and the bids associated with the vote extension. Each byte array in Bids is a serialised form of a bid transaction. Injected vote extensions are used to add additional information to a vote after it has been created, which can be useful for adding context or additional data to a vote. The serialised bid transactions provide a way to include complex transaction data in a compact, efficient format.
+
+```go
+type AppVoteExtension struct {
+ Height int64
+ Bids [][]byte
+}
+```
+
+This struct is used for application vote extensions. It includes the height of the block and the bids associated with the vote extension. Application vote extensions are used to add additional information to a vote at the application level, which can be useful for adding context or additional data to a vote that is specific to the application.
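+
+To make the encoding concrete, here is a minimal, self-contained sketch of how an `AppVoteExtension` round-trips through `encoding/json`, the encoding the `ExtendVoteHandler` uses when it marshals the extension. The sample height and bid bytes are made up for illustration:
+
+```go
+package main
+
+import (
+	"encoding/json"
+	"fmt"
+)
+
+// AppVoteExtension mirrors the struct defined above.
+type AppVoteExtension struct {
+	Height int64
+	Bids   [][]byte
+}
+
+func main() {
+	// Encode the vote extension, as ExtendVoteHandler does before returning it.
+	ext := AppVoteExtension{Height: 3, Bids: [][]byte{[]byte("bid-1")}}
+	bz, err := json.Marshal(ext)
+	if err != nil {
+		panic(err)
+	}
+
+	// Decode it back, as a later handler would when reading the extension bytes.
+	var decoded AppVoteExtension
+	if err := json.Unmarshal(bz, &decoded); err != nil {
+		panic(err)
+	}
+	fmt.Println(decoded.Height, string(decoded.Bids[0]))
+}
+```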
+
+```go
+type SpecialTransaction struct {
+ Height int
+ Bids [][]byte
+}
+```
+
+This struct is used for special transactions. It includes the height of the block and the bids associated with the transaction. Special transactions are used for transactions that need to be handled differently from regular transactions, such as transactions that are part of the process to mitigate front-running.
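+
+To make this concrete, here is an illustrative, self-contained sketch of how a proposal handler might check that the first transaction of a proposal decodes as a `SpecialTransaction`. The helper `validateFirstTx` and the JSON encoding used here are assumptions made for the sketch, not part of the module's API; in the module the transaction would be decoded with the application's codec:
+
+```go
+package main
+
+import (
+	"encoding/json"
+	"fmt"
+)
+
+// SpecialTransaction mirrors the struct defined in abci/types.go above.
+type SpecialTransaction struct {
+	Height int
+	Bids   [][]byte
+}
+
+// validateFirstTx is a hypothetical helper: it rejects proposals whose first
+// transaction is missing or does not decode as a SpecialTransaction.
+func validateFirstTx(txs [][]byte) (*SpecialTransaction, error) {
+	if len(txs) == 0 {
+		return nil, fmt.Errorf("proposal contains no transactions")
+	}
+	var st SpecialTransaction
+	if err := json.Unmarshal(txs[0], &st); err != nil {
+		return nil, fmt.Errorf("first tx is not a special transaction: %w", err)
+	}
+	return &st, nil
+}
+
+func main() {
+	bz, _ := json.Marshal(SpecialTransaction{Height: 3, Bids: [][]byte{[]byte("bid-1")}})
+	st, err := validateFirstTx([][]byte{bz})
+	if err != nil {
+		panic(err)
+	}
+	fmt.Println(st.Height, len(st.Bids))
+}
+```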
+
+### Implementing Handlers and Configuring Handlers
+
+To establish the `VoteExtensionHandler`, follow these steps:
+
+1. Navigate to the `abci/proposal.go` file. This is where we will implement the `VoteExtensionHandler`.
+
+2. Implement the `NewVoteExtensionHandler` function. This function is a constructor for the `VoteExtHandler` struct. It takes a logger, a mempool, and a codec as parameters and returns a new instance of `VoteExtHandler`.
+
+```go
+func NewVoteExtensionHandler(lg log.Logger, mp *mempool.ThresholdMempool, cdc codec.Codec) *VoteExtHandler {
+ return &VoteExtHandler{
+ logger: lg,
+ mempool: mp,
+ cdc: cdc,
+ }
+}
+```
+
+3. Implement the `ExtendVoteHandler()` method. This method should handle the logic of extending votes, including inspecting the mempool and submitting a list of all pending bids. This will allow you to access the list of unconfirmed transactions in `abci.RequestPrepareProposal` during the ensuing block.
+
+```go
+func (h *VoteExtHandler) ExtendVoteHandler() sdk.ExtendVoteHandler {
+ return func(ctx sdk.Context, req *abci.RequestExtendVote) (*abci.ResponseExtendVote, error) {
+ h.logger.Info(fmt.Sprintf("Extending votes at block height : %v", req.Height))
+
+ voteExtBids := [][]byte{}
+
+ // Get mempool txs
+ itr := h.mempool.SelectPending(context.Background(), nil)
+ for itr != nil {
+ tmptx := itr.Tx()
+ sdkMsgs := tmptx.GetMsgs()
+
+ // Iterate through msgs, check for any bids
+ for _, msg := range sdkMsgs {
+ switch msg := msg.(type) {
+ case *nstypes.MsgBid:
+ // Marshal sdk bids to []byte
+ bz, err := h.cdc.Marshal(msg)
+ if err != nil {
+ h.logger.Error(fmt.Sprintf("Error marshalling VE Bid : %v", err))
+ break
+ }
+ voteExtBids = append(voteExtBids, bz)
+ default:
+ }
+ }
+
+ // Move tx to ready pool
+ err := h.mempool.Update(context.Background(), tmptx)
+
+ // Remove tx from app side mempool
+ if err != nil {
+ h.logger.Info(fmt.Sprintf("Unable to update mempool tx: %v", err))
+ }
+
+ itr = itr.Next()
+ }
+
+ // Create vote extension
+ voteExt := AppVoteExtension{
+ Height: req.Height,
+ Bids: voteExtBids,
+ }
+
+ // Encode Vote Extension
+ bz, err := json.Marshal(voteExt)
+ if err != nil {
+ return nil, fmt.Errorf("Error marshalling VE: %w", err)
+ }
+
+ return &abci.ResponseExtendVote{VoteExtension: bz}, nil
+	}
+}
+```
+
+4. Configure the handler in `app/app.go` as shown below
+
+```go
+bApp := baseapp.NewBaseApp(AppName, logger, db, txConfig.TxDecoder(), baseAppOptions...)
+voteExtHandler := abci2.NewVoteExtensionHandler(logger, mempool, appCodec)
+bApp.SetExtendVoteHandler(voteExtHandler.ExtendVoteHandler())
+```
+
+To give a bit of context on what is happening above, we first create a new instance of `VoteExtensionHandler` with the necessary dependencies (logger, mempool, and codec). Then, we set this handler as the `ExtendVoteHandler` for our application. This means that whenever a vote needs to be extended, our custom `ExtendVoteHandler()` method will be called.
+
+To test if vote extensions have been propagated, add the following to the `PrepareProposalHandler`:
+
+```go
+if req.Height > 2 {
+ voteExt := req.GetLocalLastCommit()
+ h.logger.Info(fmt.Sprintf(" :: Get vote extensions: %v", voteExt))
+}
+```
+
+This is how the whole function should look:
+
+```go
+func (h *PrepareProposalHandler) PrepareProposalHandler() sdk.PrepareProposalHandler {
+ return func(ctx sdk.Context, req *abci.RequestPrepareProposal) (*abci.ResponsePrepareProposal, error) {
+ h.logger.Info(fmt.Sprintf(" :: Prepare Proposal"))
+ var proposalTxs [][]byte
+
+ var txs []sdk.Tx
+
+ // Get Vote Extensions
+ if req.Height > 2 {
+ voteExt := req.GetLocalLastCommit()
+ h.logger.Info(fmt.Sprintf(" :: Get vote extensions: %v", voteExt))
+ }
+
+ itr := h.mempool.Select(context.Background(), nil)
+ for itr != nil {
+ tmptx := itr.Tx()
+
+ txs = append(txs, tmptx)
+ itr = itr.Next()
+ }
+ h.logger.Info(fmt.Sprintf(" :: Number of Transactions available from mempool: %v", len(txs)))
+
+ if h.runProvider {
+ tmpMsgs, err := h.txProvider.BuildProposal(ctx, txs)
+ if err != nil {
+ h.logger.Error(fmt.Sprintf(" :: Error Building Custom Proposal: %v", err))
+ }
+ txs = tmpMsgs
+ }
+
+ for _, sdkTxs := range txs {
+ txBytes, err := h.txConfig.TxEncoder()(sdkTxs)
+ if err != nil {
+ h.logger.Info(fmt.Sprintf("~Error encoding transaction: %v", err.Error()))
+ }
+ proposalTxs = append(proposalTxs, txBytes)
+ }
+
+ h.logger.Info(fmt.Sprintf(" :: Number of Transactions in proposal: %v", len(proposalTxs)))
+
+ return &abci.ResponsePrepareProposal{Txs: proposalTxs}, nil
+ }
+}
+```
+
+As mentioned above, we check whether vote extensions have been propagated. You can do this by checking the logs for relevant messages such as ` :: Get vote extensions:`. If the logs do not provide enough information, you can also reinitialise your local testing environment by running the `./scripts/single_node/setup.sh` script again.
+
+5. Implement the `ProcessProposalHandler()`. This function is responsible for processing the proposal. It should handle the logic of processing vote extensions, including inspecting the proposal and validating the bids.
+
+```go
+func (h *ProcessProposalHandler) ProcessProposalHandler() sdk.ProcessProposalHandler {
+ return func(ctx sdk.Context, req *abci.RequestProcessProposal) (resp *abci.ResponseProcessProposal, err error) {
+ h.Logger.Info(fmt.Sprintf(" :: Process Proposal"))
+
+ // The first transaction will always be the Special Transaction
+ numTxs := len(req.Txs)
+
+ h.Logger.Info(fmt.Sprintf(":: Number of transactions :: %v", numTxs))
+
+ if numTxs >= 1 {
+ var st SpecialTransaction
+ err = json.Unmarshal(req.Txs[0], &st)
+ if err != nil {
+ h.Logger.Error(fmt.Sprintf(":: Error unmarshalling special Tx in Process Proposal :: %v", err))
+ }
+ if len(st.Bids) > 0 {
+ h.Logger.Info(fmt.Sprintf(":: There are bids in the Special Transaction"))
+ var bids []nstypes.MsgBid
+ for i, b := range st.Bids {
+ var bid nstypes.MsgBid
+ if err := h.Codec.Unmarshal(b, &bid); err != nil {
+ h.Logger.Error(fmt.Sprintf(":: Error unmarshalling bid in Special Transaction :: %v", err))
+ continue
+ }
+ h.Logger.Info(fmt.Sprintf(":: Special Transaction Bid No %v :: %v", i, bid))
+ bids = append(bids, bid)
+ }
+ // Validate Bids in Tx
+ txs := req.Txs[1:]
+ ok, err := ValidateBids(h.TxConfig, bids, txs, h.Logger)
+ if err != nil {
+ h.Logger.Error(fmt.Sprintf(":: Error validating bids in Process Proposal :: %v", err))
+ return &abci.ResponseProcessProposal{Status: abci.ResponseProcessProposal_REJECT}, nil
+ }
+ if !ok {
+ h.Logger.Error(":: Unable to validate bids in Process Proposal")
+ return &abci.ResponseProcessProposal{Status: abci.ResponseProcessProposal_REJECT}, nil
+ }
+ h.Logger.Info(":: Successfully validated bids in Process Proposal")
+ }
+ }
+
+ return &abci.ResponseProcessProposal{Status: abci.ResponseProcessProposal_ACCEPT}, nil
+ }
+}
+```
+
+6. Implement the `processVoteExtensions()` function. This function should handle the logic of processing vote extensions, including validating the bids.
+
+```go
+func processVoteExtensions(req *abci.RequestPrepareProposal, log log.Logger) (SpecialTransaction, error) {
+ log.Info(fmt.Sprintf(" :: Process Vote Extensions"))
+
+ // Create empty response
+ st := SpecialTransaction{
+ 0,
+ [][]byte{},
+ }
+
+ // Get Vote Ext for H-1 from Req
+ voteExt := req.GetLocalLastCommit()
+ votes := voteExt.Votes
+
+ // Iterate through votes
+ var ve AppVoteExtension
+ for _, vote := range votes {
+ // Unmarshal to AppExt
+ err := json.Unmarshal(vote.VoteExtension, &ve)
+ if err != nil {
+ log.Error(fmt.Sprintf(" :: Error unmarshalling Vote Extension"))
+ }
+
+ st.Height = int(ve.Height)
+
+ // If Bids in VE, append to Special Transaction
+ if len(ve.Bids) > 0 {
+ log.Info(" :: Bids in VE")
+ for _, b := range ve.Bids {
+ st.Bids = append(st.Bids, b)
+ }
+ }
+ }
+
+ return st, nil
+}
+```
+
+7. Configure the `ProcessProposalHandler()` in `app/app.go`:
+
+```go
+processPropHandler := abci2.ProcessProposalHandler{TxConfig: app.txConfig, Codec: appCodec, Logger: logger}
+bApp.SetProcessProposal(processPropHandler.ProcessProposalHandler())
+```
+
+This sets the `ProcessProposalHandler()` for our application. This means that whenever a proposal needs to be processed, our custom `ProcessProposalHandler()` method will be called.
+
+To test if the proposal processing and vote extensions are working correctly, you can check the logs for any relevant messages. If the logs do not provide enough information, you can also reinitialize your local testing environment by running the `./scripts/single_node/setup.sh` script.
diff --git a/copy-of-sdk-docs/tutorials/vote-extensions/auction-frontrunning/02-mitigating-front-running-with-vote-extensions.md.bak b/copy-of-sdk-docs/tutorials/vote-extensions/auction-frontrunning/02-mitigating-front-running-with-vote-extensions.md.bak
new file mode 100644
index 00000000..421b6ed8
--- /dev/null
+++ b/copy-of-sdk-docs/tutorials/vote-extensions/auction-frontrunning/02-mitigating-front-running-with-vote-extensions.md.bak
@@ -0,0 +1,331 @@
+# Mitigating Front-running with Vote Extensions
+
+## Table of Contents
+
+* [Prerequisites](#prerequisites)
+* [Implementing Structs for Vote Extensions](#implementing-structs-for-vote-extensions)
+* [Implementing Handlers and Configuring Handlers](#implementing-handlers-and-configuring-handlers)
+
+## Prerequisites
+
+Before implementing vote extensions to mitigate front-running, ensure you have a module ready to implement the vote extensions with. If you need to create or reference a similar module, see `x/auction` for guidance.
+
+In this section, we will discuss the steps to mitigate front-running using vote extensions. We will introduce new types within the `abci/types.go` file. These types will be used to handle the process of preparing proposals, processing proposals, and handling vote extensions.
+
+### Implementing Structs for Vote Extensions
+
+First, copy the following structs into `abci/types.go`. Each of these structs serves a specific purpose in the process of mitigating front-running using vote extensions:
+
+```go
+package abci
+
+import (
+ //import the necessary files
+)
+
+type PrepareProposalHandler struct {
+ logger log.Logger
+ txConfig client.TxConfig
+ cdc codec.Codec
+ mempool *mempool.ThresholdMempool
+ txProvider provider.TxProvider
+ keyname string
+ runProvider bool
+}
+```
+
+The `PrepareProposalHandler` struct is used to handle the preparation of a proposal in the consensus process. It contains several fields:
+
+* `logger`: logs information and errors
+* `txConfig`: the transaction configuration
+* `cdc`: the codec used for encoding and decoding transactions
+* `mempool`: a reference to the set of unconfirmed transactions
+* `txProvider`: builds the proposal with transactions
+* `keyname`: the name of the key used for signing transactions
+* `runProvider`: a boolean flag indicating whether the provider should be run to build the proposal
+
+```go
+type ProcessProposalHandler struct {
+ TxConfig client.TxConfig
+ Codec codec.Codec
+ Logger log.Logger
+}
+```
+
+After the proposal has been prepared and vote extensions have been included, the `ProcessProposalHandler` is used to process the proposal. This includes validating the proposal and the included vote extensions. The `ProcessProposalHandler` allows you to access the transaction configuration and codec, which are necessary for processing the vote extensions.
+
+```go
+type VoteExtHandler struct {
+ logger log.Logger
+ currentBlock int64
+ mempool *mempool.ThresholdMempool
+ cdc codec.Codec
+}
+```
+
+This struct is used to handle vote extensions. It contains a logger for logging events, the current block number, a mempool for storing transactions, and a codec for encoding and decoding. Vote extensions are a key part of the process to mitigate front-running, as they allow for additional information to be included with each vote.
+
+```go
+type InjectedVoteExt struct {
+ VoteExtSigner []byte
+ Bids [][]byte
+}
+
+type InjectedVotes struct {
+ Votes []InjectedVoteExt
+}
+```
+
+These structs are used to handle injected vote extensions. They include the signer of the vote extension and the bids associated with the vote extension. Each byte array in Bids is a serialised form of a bid transaction. Injected vote extensions are used to add additional information to a vote after it has been created, which can be useful for adding context or additional data to a vote. The serialised bid transactions provide a way to include complex transaction data in a compact, efficient format.
+
+```go
+type AppVoteExtension struct {
+ Height int64
+ Bids [][]byte
+}
+```
+
+This struct is used for application vote extensions. It includes the height of the block and the bids associated with the vote extension. Application vote extensions are used to add additional information to a vote at the application level, which can be useful for adding context or additional data to a vote that is specific to the application.
+
+```go
+type SpecialTransaction struct {
+ Height int
+ Bids [][]byte
+}
+```
+
+This struct is used for special transactions. It includes the height of the block and the bids associated with the transaction. Special transactions are used for transactions that need to be handled differently from regular transactions, such as transactions that are part of the process to mitigate front-running.
+
+### Implementing Handlers and Configuring Handlers
+
+To establish the `VoteExtensionHandler`, follow these steps:
+
+1. Navigate to the `abci/proposal.go` file. This is where we will implement the `VoteExtensionHandler`.
+
+2. Implement the `NewVoteExtensionHandler` function. This function is a constructor for the `VoteExtHandler` struct. It takes a logger, a mempool, and a codec as parameters and returns a new instance of `VoteExtHandler`.
+
+```go
+func NewVoteExtensionHandler(lg log.Logger, mp *mempool.ThresholdMempool, cdc codec.Codec) *VoteExtHandler {
+ return &VoteExtHandler{
+ logger: lg,
+ mempool: mp,
+ cdc: cdc,
+ }
+}
+```
+
+3. Implement the `ExtendVoteHandler()` method. This method should handle the logic of extending votes, including inspecting the mempool and submitting a list of all pending bids. This will allow you to access the list of unconfirmed transactions in `abci.RequestPrepareProposal` during the following block.
+
+```go
+func (h *VoteExtHandler) ExtendVoteHandler() sdk.ExtendVoteHandler {
+ return func(ctx sdk.Context, req *abci.RequestExtendVote) (*abci.ResponseExtendVote, error) {
+ h.logger.Info(fmt.Sprintf("Extending votes at block height : %v", req.Height))
+
+ voteExtBids := [][]byte{}
+
+ // Get mempool txs
+ itr := h.mempool.SelectPending(context.Background(), nil)
+ for itr != nil {
+ tmptx := itr.Tx()
+ sdkMsgs := tmptx.GetMsgs()
+
+ // Iterate through msgs, check for any bids
+ for _, msg := range sdkMsgs {
+ switch msg := msg.(type) {
+ case *nstypes.MsgBid:
+ // Marshal sdk bids to []byte
+ bz, err := h.cdc.Marshal(msg)
+ if err != nil {
+ h.logger.Error(fmt.Sprintf("Error marshalling VE Bid : %v", err))
+ break
+ }
+ voteExtBids = append(voteExtBids, bz)
+ default:
+ }
+ }
+
+ // Move tx to ready pool
+ err := h.mempool.Update(context.Background(), tmptx)
+
+ // Remove tx from app side mempool
+ if err != nil {
+ h.logger.Info(fmt.Sprintf("Unable to update mempool tx: %v", err))
+ }
+
+ itr = itr.Next()
+ }
+
+ // Create vote extension
+ voteExt := AppVoteExtension{
+ Height: req.Height,
+ Bids: voteExtBids,
+ }
+
+ // Encode Vote Extension
+ bz, err := json.Marshal(voteExt)
+ if err != nil {
+ return nil, fmt.Errorf("Error marshalling VE: %w", err)
+ }
+
+ return &abci.ResponseExtendVote{VoteExtension: bz}, nil
+ }
+}
+```
+
+4. Configure the handler in `app/app.go` as shown below:
+
+```go
+bApp := baseapp.NewBaseApp(AppName, logger, db, txConfig.TxDecoder(), baseAppOptions...)
+voteExtHandler := abci2.NewVoteExtensionHandler(logger, mempool, appCodec)
+bApp.SetExtendVoteHandler(voteExtHandler.ExtendVoteHandler())
+```
+
+To give a bit of context on what is happening above, we first create a new instance of `VoteExtensionHandler` with the necessary dependencies (logger, mempool, and codec). Then, we set this handler as the `ExtendVoteHandler` for our application. This means that whenever a vote needs to be extended, our custom `ExtendVoteHandler()` method will be called.
+
+To test if vote extensions have been propagated, add the following to the `PrepareProposalHandler`:
+
+```go
+if req.Height > 2 {
+ voteExt := req.GetLocalLastCommit()
+ h.logger.Info(fmt.Sprintf("🛠️ :: Get vote extensions: %v", voteExt))
+}
+```
+
+This is how the whole function should look:
+
+```go
+func (h *PrepareProposalHandler) PrepareProposalHandler() sdk.PrepareProposalHandler {
+ return func(ctx sdk.Context, req *abci.RequestPrepareProposal) (*abci.ResponsePrepareProposal, error) {
+ h.logger.Info(fmt.Sprintf("🛠️ :: Prepare Proposal"))
+ var proposalTxs [][]byte
+
+ var txs []sdk.Tx
+
+ // Get Vote Extensions
+ if req.Height > 2 {
+ voteExt := req.GetLocalLastCommit()
+ h.logger.Info(fmt.Sprintf("🛠️ :: Get vote extensions: %v", voteExt))
+ }
+
+ itr := h.mempool.Select(context.Background(), nil)
+ for itr != nil {
+ tmptx := itr.Tx()
+
+ txs = append(txs, tmptx)
+ itr = itr.Next()
+ }
+ h.logger.Info(fmt.Sprintf("🛠️ :: Number of Transactions available from mempool: %v", len(txs)))
+
+ if h.runProvider {
+ tmpMsgs, err := h.txProvider.BuildProposal(ctx, txs)
+ if err != nil {
+ h.logger.Error(fmt.Sprintf("❌️ :: Error Building Custom Proposal: %v", err))
+ }
+ txs = tmpMsgs
+ }
+
+ for _, sdkTxs := range txs {
+ txBytes, err := h.txConfig.TxEncoder()(sdkTxs)
+ if err != nil {
+ h.logger.Info(fmt.Sprintf("❌~Error encoding transaction: %v", err.Error()))
+ }
+ proposalTxs = append(proposalTxs, txBytes)
+ }
+
+ h.logger.Info(fmt.Sprintf("🛠️ :: Number of Transactions in proposal: %v", len(proposalTxs)))
+
+ return &abci.ResponsePrepareProposal{Txs: proposalTxs}, nil
+ }
+}
+```
+
+As mentioned above, we check whether vote extensions have been propagated. You can do this by checking the logs for relevant messages such as `🛠️ :: Get vote extensions:`. If the logs do not provide enough information, you can also reinitialise your local testing environment by running the `./scripts/single_node/setup.sh` script again.
+
+5. Implement the `ProcessProposalHandler()`. This function is responsible for processing the proposal. It should handle the logic of processing vote extensions, including inspecting the proposal and validating the bids.
+
+```go
+func (h *ProcessProposalHandler) ProcessProposalHandler() sdk.ProcessProposalHandler {
+ return func(ctx sdk.Context, req *abci.RequestProcessProposal) (resp *abci.ResponseProcessProposal, err error) {
+ h.Logger.Info(fmt.Sprintf("⚙️ :: Process Proposal"))
+
+ // The first transaction will always be the Special Transaction
+ numTxs := len(req.Txs)
+
+ h.Logger.Info(fmt.Sprintf("⚙️:: Number of transactions :: %v", numTxs))
+
+ if numTxs >= 1 {
+ var st SpecialTransaction
+ err = json.Unmarshal(req.Txs[0], &st)
+ if err != nil {
+ h.Logger.Error(fmt.Sprintf("❌️:: Error unmarshalling special Tx in Process Proposal :: %v", err))
+ }
+ if len(st.Bids) > 0 {
+ h.Logger.Info(fmt.Sprintf("⚙️:: There are bids in the Special Transaction"))
+ var bids []nstypes.MsgBid
+ for i, b := range st.Bids {
+ var bid nstypes.MsgBid
+ if err := h.Codec.Unmarshal(b, &bid); err != nil {
+ h.Logger.Error(fmt.Sprintf("❌️:: Error unmarshalling bid in Special Transaction :: %v", err))
+ continue
+ }
+ h.Logger.Info(fmt.Sprintf("⚙️:: Special Transaction Bid No %v :: %v", i, bid))
+ bids = append(bids, bid)
+ }
+ // Validate Bids in Tx
+ txs := req.Txs[1:]
+ ok, err := ValidateBids(h.TxConfig, bids, txs, h.Logger)
+ if err != nil {
+ h.Logger.Error(fmt.Sprintf("❌️:: Error validating bids in Process Proposal :: %v", err))
+ return &abci.ResponseProcessProposal{Status: abci.ResponseProcessProposal_REJECT}, nil
+ }
+ if !ok {
+ h.Logger.Error("❌️:: Unable to validate bids in Process Proposal")
+ return &abci.ResponseProcessProposal{Status: abci.ResponseProcessProposal_REJECT}, nil
+ }
+ h.Logger.Info("⚙️:: Successfully validated bids in Process Proposal")
+ }
+ }
+
+ return &abci.ResponseProcessProposal{Status: abci.ResponseProcessProposal_ACCEPT}, nil
+ }
+}
+```
+
+6. Implement the `processVoteExtensions()` function. This function should handle the logic of processing vote extensions, including validating the bids.
+
+```go
+func processVoteExtensions(req *abci.RequestPrepareProposal, log log.Logger) (SpecialTransaction, error) {
+ log.Info(fmt.Sprintf("🛠️ :: Process Vote Extensions"))
+
+ // Create empty response
+ st := SpecialTransaction{
+ 0,
+ [][]byte{},
+ }
+
+ // Get Vote Ext for H-1 from Req
+ voteExt := req.GetLocalLastCommit()
+ votes := voteExt.Votes
+
+ // Iterate through votes
+ var ve AppVoteExtension
+ for _, vote := range votes {
+ // Unmarshal to AppExt
+ err := json.Unmarshal(vote.VoteExtension, &ve)
+ if err != nil {
+ log.Error(fmt.Sprintf("❌ :: Error unmarshalling Vote Extension"))
+ }
+
+ st.Height = int(ve.Height)
+
+ // If Bids in VE, append to Special Transaction
+ if len(ve.Bids) > 0 {
+ log.Info("🛠️ :: Bids in VE")
+ for _, b := range ve.Bids {
+ st.Bids = append(st.Bids, b)
+ }
+ }
+ }
+
+ return st, nil
+}
+```
+
+7. Configure the `ProcessProposalHandler()` in `app/app.go`:
+
+```go
+processPropHandler := abci2.ProcessProposalHandler{TxConfig: app.txConfig, Codec: appCodec, Logger: logger}
+bApp.SetProcessProposal(processPropHandler.ProcessProposalHandler())
+```
+
+This sets the `ProcessProposalHandler()` for our application. This means that whenever a proposal needs to be processed, our custom `ProcessProposalHandler()` method will be called.
+
+To test if the proposal processing and vote extensions are working correctly, you can check the logs for any relevant messages. If the logs do not provide enough information, you can also reinitialize your local testing environment by running the `./scripts/single_node/setup.sh` script.
diff --git a/copy-of-sdk-docs/tutorials/vote-extensions/auction-frontrunning/02-mitigating-front-running-with-vote-extesions.md b/copy-of-sdk-docs/tutorials/vote-extensions/auction-frontrunning/02-mitigating-front-running-with-vote-extesions.md
new file mode 100644
index 00000000..55c84fa7
--- /dev/null
+++ b/copy-of-sdk-docs/tutorials/vote-extensions/auction-frontrunning/02-mitigating-front-running-with-vote-extesions.md
@@ -0,0 +1,331 @@
+# Mitigating Front-running with Vote Extensions
+
+## Table of Contents
+
+- [Prerequisites](#prerequisites)
+- [Implementing Structs for Vote Extensions](#implementing-structs-for-vote-extensions)
+- [Implementing Handlers and Configuring Handlers](#implementing-handlers-and-configuring-handlers)
+
+## Prerequisites
+
+Before implementing vote extensions to mitigate front-running, ensure you have a module ready to implement the vote extensions with. If you need to create or reference a similar module, see `x/auction` for guidance.
+
+In this section, we will discuss the steps to mitigate front-running using vote extensions. We will introduce new types within the `abci/types.go` file. These types will be used to handle the process of preparing proposals, processing proposals, and handling vote extensions.
+
+### Implementing Structs for Vote Extensions
+
+First, copy the following structs into `abci/types.go`. Each of these structs serves a specific purpose in the process of mitigating front-running using vote extensions:
+
+```go
+package abci
+
+import (
+ //import the necessary files
+)
+
+type PrepareProposalHandler struct {
+ logger log.Logger
+ txConfig client.TxConfig
+ cdc codec.Codec
+ mempool *mempool.ThresholdMempool
+ txProvider provider.TxProvider
+ keyname string
+ runProvider bool
+}
+```
+
+The `PrepareProposalHandler` struct is used to handle the preparation of a proposal in the consensus process. It contains several fields:
+
+- `logger`: logs information and errors
+- `txConfig`: the transaction configuration
+- `cdc`: the codec used for encoding and decoding transactions
+- `mempool`: a reference to the set of unconfirmed transactions
+- `txProvider`: builds the proposal with transactions
+- `keyname`: the name of the key used for signing transactions
+- `runProvider`: a boolean flag indicating whether the provider should be run to build the proposal
+
+```go
+type ProcessProposalHandler struct {
+ TxConfig client.TxConfig
+ Codec codec.Codec
+ Logger log.Logger
+}
+```
+
+After the proposal has been prepared and vote extensions have been included, the `ProcessProposalHandler` is used to process the proposal. This includes validating the proposal and the included vote extensions. The `ProcessProposalHandler` allows you to access the transaction configuration and codec, which are necessary for processing the vote extensions.
+
+```go
+type VoteExtHandler struct {
+ logger log.Logger
+ currentBlock int64
+ mempool *mempool.ThresholdMempool
+ cdc codec.Codec
+}
+```
+
+This struct is used to handle vote extensions. It contains a logger for logging events, the current block number, a mempool for storing transactions, and a codec for encoding and decoding. Vote extensions are a key part of the process to mitigate front-running, as they allow for additional information to be included with each vote.
+
+```go
+type InjectedVoteExt struct {
+ VoteExtSigner []byte
+ Bids [][]byte
+}
+
+type InjectedVotes struct {
+ Votes []InjectedVoteExt
+}
+```
+
+These structs are used to handle injected vote extensions. They include the signer of the vote extension and the bids associated with the vote extension. Each byte array in Bids is a serialised form of a bid transaction. Injected vote extensions are used to add additional information to a vote after it has been created, which can be useful for adding context or additional data to a vote. The serialised bid transactions provide a way to include complex transaction data in a compact, efficient format.
+
+```go
+type AppVoteExtension struct {
+ Height int64
+ Bids [][]byte
+}
+```
+
+This struct is used for application vote extensions. It includes the height of the block and the bids associated with the vote extension. Application vote extensions are used to add additional information to a vote at the application level, which can be useful for adding context or additional data to a vote that is specific to the application.
+
+```go
+type SpecialTransaction struct {
+ Height int
+ Bids [][]byte
+}
+```
+
+This struct is used for special transactions. It includes the height of the block and the bids associated with the transaction. Special transactions are used for transactions that need to be handled differently from regular transactions, such as transactions that are part of the process to mitigate front-running.
+
+### Implementing Handlers and Configuring Handlers
+
+To establish the `VoteExtensionHandler`, follow these steps:
+
+1. Navigate to the `abci/proposal.go` file. This is where we will implement the `VoteExtensionHandler`.
+
+2. Implement the `NewVoteExtensionHandler` function. This function is a constructor for the `VoteExtHandler` struct. It takes a logger, a mempool, and a codec as parameters and returns a new instance of `VoteExtHandler`.
+
+```go
+func NewVoteExtensionHandler(lg log.Logger, mp *mempool.ThresholdMempool, cdc codec.Codec) *VoteExtHandler {
+ return &VoteExtHandler{
+ logger: lg,
+ mempool: mp,
+ cdc: cdc,
+ }
+}
+```
+
+3. Implement the `ExtendVoteHandler()` method. This method should handle the logic of extending votes, including inspecting the mempool and submitting a list of all pending bids. This will allow you to access the list of unconfirmed transactions in `abci.RequestPrepareProposal` during the following block.
+
+```go
+func (h *VoteExtHandler) ExtendVoteHandler() sdk.ExtendVoteHandler {
+ return func(ctx sdk.Context, req *abci.RequestExtendVote) (*abci.ResponseExtendVote, error) {
+ h.logger.Info(fmt.Sprintf("Extending votes at block height : %v", req.Height))
+
+ voteExtBids := [][]byte{}
+
+ // Get mempool txs
+ itr := h.mempool.SelectPending(context.Background(), nil)
+ for itr != nil {
+ tmptx := itr.Tx()
+ sdkMsgs := tmptx.GetMsgs()
+
+ // Iterate through msgs, check for any bids
+ for _, msg := range sdkMsgs {
+ switch msg := msg.(type) {
+ case *nstypes.MsgBid:
+ // Marshal sdk bids to []byte
+ bz, err := h.cdc.Marshal(msg)
+ if err != nil {
+ h.logger.Error(fmt.Sprintf("Error marshalling VE Bid : %v", err))
+ break
+ }
+ voteExtBids = append(voteExtBids, bz)
+ default:
+ }
+ }
+
+ // Move tx to ready pool
+ err := h.mempool.Update(context.Background(), tmptx)
+
+ // Remove tx from app side mempool
+ if err != nil {
+ h.logger.Info(fmt.Sprintf("Unable to update mempool tx: %v", err))
+ }
+
+ itr = itr.Next()
+ }
+
+ // Create vote extension
+ voteExt := AppVoteExtension{
+ Height: req.Height,
+ Bids: voteExtBids,
+ }
+
+ // Encode Vote Extension
+ bz, err := json.Marshal(voteExt)
+ if err != nil {
+ return nil, fmt.Errorf("Error marshalling VE: %w", err)
+ }
+
+ return &abci.ResponseExtendVote{VoteExtension: bz}, nil
+ }
+}
+```
+
+4. Configure the handler in `app/app.go` as shown below:
+
+```go
+bApp := baseapp.NewBaseApp(AppName, logger, db, txConfig.TxDecoder(), baseAppOptions...)
+voteExtHandler := abci2.NewVoteExtensionHandler(logger, mempool, appCodec)
+bApp.SetExtendVoteHandler(voteExtHandler.ExtendVoteHandler())
+```
+
+To give a bit of context on what is happening above, we first create a new instance of `VoteExtensionHandler` with the necessary dependencies (logger, mempool, and codec). Then, we set this handler as the `ExtendVoteHandler` for our application. This means that whenever a vote needs to be extended, our custom `ExtendVoteHandler()` method will be called.
+
+To test if vote extensions have been propagated, add the following to the `PrepareProposalHandler`:
+
+```go
+if req.Height > 2 {
+ voteExt := req.GetLocalLastCommit()
+ h.logger.Info(fmt.Sprintf(" :: Get vote extensions: %v", voteExt))
+}
+```
+
+This is how the whole function should look:
+
+```go
+func (h *PrepareProposalHandler) PrepareProposalHandler() sdk.PrepareProposalHandler {
+ return func(ctx sdk.Context, req *abci.RequestPrepareProposal) (*abci.ResponsePrepareProposal, error) {
+ h.logger.Info(fmt.Sprintf(" :: Prepare Proposal"))
+ var proposalTxs [][]byte
+
+ var txs []sdk.Tx
+
+ // Get Vote Extensions
+ if req.Height > 2 {
+ voteExt := req.GetLocalLastCommit()
+ h.logger.Info(fmt.Sprintf(" :: Get vote extensions: %v", voteExt))
+ }
+
+ itr := h.mempool.Select(context.Background(), nil)
+ for itr != nil {
+ tmptx := itr.Tx()
+
+ txs = append(txs, tmptx)
+ itr = itr.Next()
+ }
+ h.logger.Info(fmt.Sprintf(" :: Number of Transactions available from mempool: %v", len(txs)))
+
+ if h.runProvider {
+ tmpMsgs, err := h.txProvider.BuildProposal(ctx, txs)
+ if err != nil {
+ h.logger.Error(fmt.Sprintf(" :: Error Building Custom Proposal: %v", err))
+ }
+ txs = tmpMsgs
+ }
+
+ for _, sdkTxs := range txs {
+ txBytes, err := h.txConfig.TxEncoder()(sdkTxs)
+ if err != nil {
+ h.logger.Info(fmt.Sprintf("~Error encoding transaction: %v", err.Error()))
+ }
+ proposalTxs = append(proposalTxs, txBytes)
+ }
+
+ h.logger.Info(fmt.Sprintf(" :: Number of Transactions in proposal: %v", len(proposalTxs)))
+
+ return &abci.ResponsePrepareProposal{Txs: proposalTxs}, nil
+ }
+}
+```
+
+As mentioned above, we check whether vote extensions have been propagated. You can do this by checking the logs for relevant messages such as ` :: Get vote extensions:`. If the logs do not provide enough information, you can also reinitialise your local testing environment by running the `./scripts/single_node/setup.sh` script again.
+
+5. Implement the `ProcessProposalHandler()`. This function is responsible for processing the proposal. It should handle the logic of processing vote extensions, including inspecting the proposal and validating the bids.
+
+```go
+func (h *ProcessProposalHandler) ProcessProposalHandler() sdk.ProcessProposalHandler {
+ return func(ctx sdk.Context, req *abci.RequestProcessProposal) (resp *abci.ResponseProcessProposal, err error) {
+ h.Logger.Info(fmt.Sprintf(" :: Process Proposal"))
+
+ // The first transaction will always be the Special Transaction
+ numTxs := len(req.Txs)
+
+ h.Logger.Info(fmt.Sprintf(":: Number of transactions :: %v", numTxs))
+
+ if numTxs >= 1 {
+ var st SpecialTransaction
+ err = json.Unmarshal(req.Txs[0], &st)
+ if err != nil {
+ h.Logger.Error(fmt.Sprintf(":: Error unmarshalling special Tx in Process Proposal :: %v", err))
+ }
+ if len(st.Bids) > 0 {
+ h.Logger.Info(fmt.Sprintf(":: There are bids in the Special Transaction"))
+ var bids []nstypes.MsgBid
+ for i, b := range st.Bids {
+ var bid nstypes.MsgBid
+ if err := h.Codec.Unmarshal(b, &bid); err != nil {
+ h.Logger.Error(fmt.Sprintf(":: Error unmarshalling bid in Special Transaction :: %v", err))
+ continue
+ }
+ h.Logger.Info(fmt.Sprintf(":: Special Transaction Bid No %v :: %v", i, bid))
+ bids = append(bids, bid)
+ }
+ // Validate Bids in Tx
+ txs := req.Txs[1:]
+ ok, err := ValidateBids(h.TxConfig, bids, txs, h.Logger)
+ if err != nil {
+ h.Logger.Error(fmt.Sprintf(":: Error validating bids in Process Proposal :: %v", err))
+ return &abci.ResponseProcessProposal{Status: abci.ResponseProcessProposal_REJECT}, nil
+ }
+ if !ok {
+ h.Logger.Error(":: Unable to validate bids in Process Proposal")
+ return &abci.ResponseProcessProposal{Status: abci.ResponseProcessProposal_REJECT}, nil
+ }
+ h.Logger.Info(":: Successfully validated bids in Process Proposal")
+ }
+ }
+
+ return &abci.ResponseProcessProposal{Status: abci.ResponseProcessProposal_ACCEPT}, nil
+ }
+}
+```
+
+6. Implement the `processVoteExtensions()` function. This function should handle the logic of processing vote extensions, including validating the bids.
+
+```go
+func processVoteExtensions(req *abci.RequestPrepareProposal, log log.Logger) (SpecialTransaction, error) {
+ log.Info(fmt.Sprintf(" :: Process Vote Extensions"))
+
+ // Create empty response
+ st := SpecialTransaction{
+ 0,
+ [][]byte{},
+ }
+
+ // Get Vote Ext for H-1 from Req
+ voteExt := req.GetLocalLastCommit()
+ votes := voteExt.Votes
+
+ // Iterate through votes
+ var ve AppVoteExtension
+ for _, vote := range votes {
+ // Unmarshal to AppExt
+ err := json.Unmarshal(vote.VoteExtension, &ve)
+ if err != nil {
+ log.Error(fmt.Sprintf(" :: Error unmarshalling Vote Extension"))
+ }
+
+ st.Height = int(ve.Height)
+
+ // If Bids in VE, append to Special Transaction
+ if len(ve.Bids) > 0 {
+ log.Info(" :: Bids in VE")
+ for _, b := range ve.Bids {
+ st.Bids = append(st.Bids, b)
+ }
+ }
+ }
+
+ return st, nil
+}
+```
+
+7. Configure the `ProcessProposalHandler()` in `app/app.go`:
+
+```go
+processPropHandler := abci2.ProcessProposalHandler{TxConfig: app.txConfig, Codec: appCodec, Logger: logger}
+bApp.SetProcessProposal(processPropHandler.ProcessProposalHandler())
+```
+
+This sets the `ProcessProposalHandler()` for our application. This means that whenever a proposal needs to be processed, our custom `ProcessProposalHandler()` method will be called.
+
+To test if the proposal processing and vote extensions are working correctly, you can check the logs for any relevant messages. If the logs do not provide enough information, you can also reinitialize your local testing environment by running the `./scripts/single_node/setup.sh` script.
diff --git a/copy-of-sdk-docs/tutorials/vote-extensions/auction-frontrunning/02-mitigating-front-running-with-vote-extesions.md.bak b/copy-of-sdk-docs/tutorials/vote-extensions/auction-frontrunning/02-mitigating-front-running-with-vote-extesions.md.bak
new file mode 100644
index 00000000..56c2d402
--- /dev/null
+++ b/copy-of-sdk-docs/tutorials/vote-extensions/auction-frontrunning/02-mitigating-front-running-with-vote-extesions.md.bak
@@ -0,0 +1,331 @@
+# Mitigating Front-running with Vote Extensions
+
+## Table of Contents
+
+- [Prerequisites](#prerequisites)
+- [Implementing Structs for Vote Extensions](#implementing-structs-for-vote-extensions)
+- [Implementing Handlers and Configuring Handlers](#implementing-handlers-and-configuring-handlers)
+
+## Prerequisites
+
+Before implementing vote extensions to mitigate front-running, ensure you have a module ready to implement the vote extensions with. If you need to create or reference a similar module, see `x/auction` for guidance.
+
+In this section, we will discuss the steps to mitigate front-running using vote extensions. We will introduce new types within the `abci/types.go` file. These types will be used to handle the process of preparing proposals, processing proposals, and handling vote extensions.
+
+### Implementing Structs for Vote Extensions
+
+First, copy the following structs into `abci/types.go`. Each of these structs serves a specific purpose in the process of mitigating front-running using vote extensions:
+
+```go
+package abci
+
+import (
+ //import the necessary files
+)
+
+type PrepareProposalHandler struct {
+ logger log.Logger
+ txConfig client.TxConfig
+ cdc codec.Codec
+ mempool *mempool.ThresholdMempool
+ txProvider provider.TxProvider
+ keyname string
+ runProvider bool
+}
+```
+
+The `PrepareProposalHandler` struct is used to handle the preparation of a proposal in the consensus process. It contains several fields:
+
+* `logger`: for logging information and errors
+* `txConfig`: for transaction configuration
+* `cdc`: the codec, for encoding and decoding transactions
+* `mempool`: for referencing the set of unconfirmed transactions
+* `txProvider`: for building the proposal with transactions
+* `keyname`: the name of the key used for signing transactions
+* `runProvider`: a boolean flag indicating whether the provider should be run to build the proposal
+
+```go
+type ProcessProposalHandler struct {
+ TxConfig client.TxConfig
+ Codec codec.Codec
+ Logger log.Logger
+}
+```
+
+After the proposal has been prepared and vote extensions have been included, the `ProcessProposalHandler` is used to process the proposal. This includes validating the proposal and the included vote extensions. The `ProcessProposalHandler` allows you to access the transaction configuration and codec, which are necessary for processing the vote extensions.
+
+```go
+type VoteExtHandler struct {
+ logger log.Logger
+ currentBlock int64
+ mempool *mempool.ThresholdMempool
+ cdc codec.Codec
+}
+```
+
+This struct is used to handle vote extensions. It contains a logger for logging events, the current block number, a mempool for storing transactions, and a codec for encoding and decoding. Vote extensions are a key part of the process to mitigate front-running, as they allow for additional information to be included with each vote.
+
+```go
+type InjectedVoteExt struct {
+ VoteExtSigner []byte
+ Bids [][]byte
+}
+
+type InjectedVotes struct {
+ Votes []InjectedVoteExt
+}
+```
+
+These structs are used to handle injected vote extensions. They include the signer of the vote extension and the bids associated with the vote extension. Each byte array in Bids is a serialised form of a bid transaction. Injected vote extensions are used to add additional information to a vote after it has been created, which can be useful for adding context or additional data to a vote. The serialised bid transactions provide a way to include complex transaction data in a compact, efficient format.
+
+```go
+type AppVoteExtension struct {
+ Height int64
+ Bids [][]byte
+}
+```
+
+This struct is used for application vote extensions. It includes the height of the block and the bids associated with the vote extension. Application vote extensions are used to add additional information to a vote at the application level, which can be useful for adding context or additional data to a vote that is specific to the application.
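+This struct is also what the tutorial encodes with `encoding/json` in `ExtendVoteHandler` and decodes again in `processVoteExtensions`. As a minimal, self-contained sketch of that round trip (the struct matches the one above; the height and bid bytes are made-up illustrative values):
+
+```go
+package main
+
+import (
+	"encoding/json"
+	"fmt"
+)
+
+// AppVoteExtension mirrors the struct defined above.
+type AppVoteExtension struct {
+	Height int64
+	Bids   [][]byte
+}
+
+func main() {
+	// Encode, as ExtendVoteHandler does before returning the extension.
+	ve := AppVoteExtension{Height: 42, Bids: [][]byte{[]byte("bid-1")}}
+	bz, err := json.Marshal(ve)
+	if err != nil {
+		panic(err)
+	}
+
+	// Decode, as processVoteExtensions does for each received vote.
+	var decoded AppVoteExtension
+	if err := json.Unmarshal(bz, &decoded); err != nil {
+		panic(err)
+	}
+	fmt.Println(decoded.Height, len(decoded.Bids))
+}
+```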
+
+```go
+type SpecialTransaction struct {
+ Height int
+ Bids [][]byte
+}
+```
+
+This struct is used for special transactions. It includes the height of the block and the bids associated with the transaction. Special transactions are used for transactions that need to be handled differently from regular transactions, such as transactions that are part of the process to mitigate front-running.
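+The special transaction travels as the first transaction in the proposal: the proposer injects it at index 0 and `ProcessProposal` reads it back from `req.Txs[0]`. A small, self-contained sketch of that convention (JSON encoding as in this tutorial; the transaction bytes are placeholders):
+
+```go
+package main
+
+import (
+	"encoding/json"
+	"fmt"
+)
+
+// SpecialTransaction mirrors the struct defined above.
+type SpecialTransaction struct {
+	Height int
+	Bids   [][]byte
+}
+
+func main() {
+	// Proposer side: marshal the special transaction and prepend it
+	// so it is always the first transaction in the proposal.
+	st := SpecialTransaction{Height: 7, Bids: [][]byte{[]byte("bid")}}
+	stBz, err := json.Marshal(st)
+	if err != nil {
+		panic(err)
+	}
+	regularTxs := [][]byte{[]byte("tx-1"), []byte("tx-2")} // placeholder txs
+	proposalTxs := append([][]byte{stBz}, regularTxs...)
+
+	// Validator side: recover the special transaction from index 0;
+	// the remaining entries are the regular transactions.
+	var decoded SpecialTransaction
+	if err := json.Unmarshal(proposalTxs[0], &decoded); err != nil {
+		panic(err)
+	}
+	fmt.Println(decoded.Height, len(proposalTxs[1:]))
+}
+```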
+
+### Implementing Handlers and Configuring Handlers
+
+To establish the `VoteExtensionHandler`, follow these steps:
+
+1. Navigate to the `abci/proposal.go` file. This is where we will implement the `VoteExtensionHandler`.
+
+2. Implement the `NewVoteExtensionHandler` function. This function is a constructor for the `VoteExtHandler` struct. It takes a logger, a mempool, and a codec as parameters and returns a new instance of `VoteExtHandler`.
+
+```go
+func NewVoteExtensionHandler(lg log.Logger, mp *mempool.ThresholdMempool, cdc codec.Codec) *VoteExtHandler {
+ return &VoteExtHandler{
+ logger: lg,
+ mempool: mp,
+ cdc: cdc,
+ }
+}
+```
+
+3. Implement the `ExtendVoteHandler()` method. This method should handle the logic of extending votes, including inspecting the mempool and submitting a list of all pending bids. This will allow you to access the list of unconfirmed transactions in the `abci.RequestPrepareProposal` during the ensuing block.
+
+```go
+func (h *VoteExtHandler) ExtendVoteHandler() sdk.ExtendVoteHandler {
+ return func(ctx sdk.Context, req *abci.RequestExtendVote) (*abci.ResponseExtendVote, error) {
+ h.logger.Info(fmt.Sprintf("Extending votes at block height : %v", req.Height))
+
+ voteExtBids := [][]byte{}
+
+ // Get mempool txs
+ itr := h.mempool.SelectPending(context.Background(), nil)
+ for itr != nil {
+ tmptx := itr.Tx()
+ sdkMsgs := tmptx.GetMsgs()
+
+ // Iterate through msgs, check for any bids
+ for _, msg := range sdkMsgs {
+ switch msg := msg.(type) {
+ case *nstypes.MsgBid:
+ // Marshal sdk bids to []byte
+ bz, err := h.cdc.Marshal(msg)
+ if err != nil {
+ h.logger.Error(fmt.Sprintf("Error marshalling VE Bid : %v", err))
+ break
+ }
+ voteExtBids = append(voteExtBids, bz)
+ default:
+ }
+ }
+
+ // Move tx to ready pool
+ err := h.mempool.Update(context.Background(), tmptx)
+
+ // Remove tx from app side mempool
+ if err != nil {
+ h.logger.Info(fmt.Sprintf("Unable to update mempool tx: %v", err))
+ }
+
+ itr = itr.Next()
+ }
+
+ // Create vote extension
+ voteExt := AppVoteExtension{
+ Height: req.Height,
+ Bids: voteExtBids,
+ }
+
+ // Encode Vote Extension
+ bz, err := json.Marshal(voteExt)
+ if err != nil {
+ return nil, fmt.Errorf("Error marshalling VE: %w", err)
+ }
+
+ return &abci.ResponseExtendVote{VoteExtension: bz}, nil
+ }
+}
+```
+
+4. Configure the handler in `app/app.go` as shown below:
+
+```go
+bApp := baseapp.NewBaseApp(AppName, logger, db, txConfig.TxDecoder(), baseAppOptions...)
+voteExtHandler := abci2.NewVoteExtensionHandler(logger, mempool, appCodec)
+bApp.SetExtendVoteHandler(voteExtHandler.ExtendVoteHandler())
+```
+
+To give a bit of context on what is happening above, we first create a new instance of `VoteExtensionHandler` with the necessary dependencies (logger, mempool, and codec). Then, we set this handler as the `ExtendVoteHandler` for our application. This means that whenever a vote needs to be extended, our custom `ExtendVoteHandler()` method will be called.
+
+To test if vote extensions have been propagated, add the following to the `PrepareProposalHandler`:
+
+```go
+if req.Height > 2 {
+ voteExt := req.GetLocalLastCommit()
+ h.logger.Info(fmt.Sprintf("🛠️ :: Get vote extensions: %v", voteExt))
+}
+```
+
+This is how the whole function should look:
+
+```go
+func (h *PrepareProposalHandler) PrepareProposalHandler() sdk.PrepareProposalHandler {
+ return func(ctx sdk.Context, req *abci.RequestPrepareProposal) (*abci.ResponsePrepareProposal, error) {
+ h.logger.Info("🛠️ :: Prepare Proposal")
+ var proposalTxs [][]byte
+
+ var txs []sdk.Tx
+
+ // Get Vote Extensions
+ if req.Height > 2 {
+ voteExt := req.GetLocalLastCommit()
+ h.logger.Info(fmt.Sprintf("🛠️ :: Get vote extensions: %v", voteExt))
+ }
+
+ itr := h.mempool.Select(context.Background(), nil)
+ for itr != nil {
+ tmptx := itr.Tx()
+
+ txs = append(txs, tmptx)
+ itr = itr.Next()
+ }
+ h.logger.Info(fmt.Sprintf("🛠️ :: Number of Transactions available from mempool: %v", len(txs)))
+
+ if h.runProvider {
+ tmpMsgs, err := h.txProvider.BuildProposal(ctx, txs)
+ if err != nil {
+ h.logger.Error(fmt.Sprintf("❌️ :: Error Building Custom Proposal: %v", err))
+ }
+ txs = tmpMsgs
+ }
+
+ for _, sdkTxs := range txs {
+ txBytes, err := h.txConfig.TxEncoder()(sdkTxs)
+ if err != nil {
+ h.logger.Info(fmt.Sprintf("❌~Error encoding transaction: %v", err.Error()))
+ }
+ proposalTxs = append(proposalTxs, txBytes)
+ }
+
+ h.logger.Info(fmt.Sprintf("🛠️ :: Number of Transactions in proposal: %v", len(proposalTxs)))
+
+ return &abci.ResponsePrepareProposal{Txs: proposalTxs}, nil
+ }
+}
+```
+
+As mentioned above, we check whether vote extensions have been propagated. You can do this by checking the logs for any relevant messages, such as `🛠️ :: Get vote extensions:`. If the logs do not provide enough information, you can also reinitialise your local testing environment by running the `./scripts/single_node/setup.sh` script again.
+
+5. Implement the `ProcessProposalHandler()`. This function is responsible for processing the proposal. It should handle the logic of processing vote extensions, including inspecting the proposal and validating the bids.
+
+```go
+func (h *ProcessProposalHandler) ProcessProposalHandler() sdk.ProcessProposalHandler {
+ return func(ctx sdk.Context, req *abci.RequestProcessProposal) (resp *abci.ResponseProcessProposal, err error) {
+ h.Logger.Info("⚙️ :: Process Proposal")
+
+ // The first transaction will always be the Special Transaction
+ numTxs := len(req.Txs)
+
+ h.Logger.Info(fmt.Sprintf("⚙️:: Number of transactions :: %v", numTxs))
+
+ if numTxs >= 1 {
+ var st SpecialTransaction
+ err = json.Unmarshal(req.Txs[0], &st)
+ if err != nil {
+ h.Logger.Error(fmt.Sprintf("❌️:: Error unmarshalling special Tx in Process Proposal :: %v", err))
+ }
+ if len(st.Bids) > 0 {
+ h.Logger.Info("⚙️:: There are bids in the Special Transaction")
+ var bids []nstypes.MsgBid
+ for i, b := range st.Bids {
+ var bid nstypes.MsgBid
+ if err := h.Codec.Unmarshal(b, &bid); err != nil {
+ h.Logger.Error(fmt.Sprintf("❌️:: Error unmarshalling bid in Process Proposal :: %v", err))
+ }
+ h.Logger.Info(fmt.Sprintf("⚙️:: Special Transaction Bid No %v :: %v", i, bid))
+ bids = append(bids, bid)
+ }
+ // Validate Bids in Tx
+ txs := req.Txs[1:]
+ ok, err := ValidateBids(h.TxConfig, bids, txs, h.Logger)
+ if err != nil {
+ h.Logger.Error(fmt.Sprintf("❌️:: Error validating bids in Process Proposal :: %v", err))
+ return &abci.ResponseProcessProposal{Status: abci.ResponseProcessProposal_REJECT}, nil
+ }
+ if !ok {
+ h.Logger.Error("❌️:: Unable to validate bids in Process Proposal")
+ return &abci.ResponseProcessProposal{Status: abci.ResponseProcessProposal_REJECT}, nil
+ }
+ h.Logger.Info("⚙️:: Successfully validated bids in Process Proposal")
+ }
+ }
+
+ return &abci.ResponseProcessProposal{Status: abci.ResponseProcessProposal_ACCEPT}, nil
+ }
+}
+```
+
+6. Implement the `processVoteExtensions()` function. This function should handle the logic of processing vote extensions, including validating the bids.
+
+```go
+func processVoteExtensions(req *abci.RequestPrepareProposal, log log.Logger) (SpecialTransaction, error) {
+ log.Info("🛠️ :: Process Vote Extensions")
+
+ // Create empty response
+ st := SpecialTransaction{
+ 0,
+ [][]byte{},
+ }
+
+ // Get Vote Ext for H-1 from Req
+ voteExt := req.GetLocalLastCommit()
+ votes := voteExt.Votes
+
+ // Iterate through votes
+ var ve AppVoteExtension
+ for _, vote := range votes {
+ // Unmarshal to AppExt
+ err := json.Unmarshal(vote.VoteExtension, &ve)
+ if err != nil {
+ log.Error("❌ :: Error unmarshalling Vote Extension")
+ }
+
+ st.Height = int(ve.Height)
+
+ // If Bids in VE, append to Special Transaction
+ if len(ve.Bids) > 0 {
+ log.Info("🛠️ :: Bids in VE")
+ for _, b := range ve.Bids {
+ st.Bids = append(st.Bids, b)
+ }
+ }
+ }
+
+ return st, nil
+}
+```
+
+7. Configure the `ProcessProposalHandler()` in `app/app.go`:
+
+```go
+processPropHandler := abci2.ProcessProposalHandler{app.txConfig, appCodec, logger}
+bApp.SetProcessProposal(processPropHandler.ProcessProposalHandler())
+```
+
+This sets the `ProcessProposalHandler()` for our application. This means that whenever a proposal needs to be processed, our custom `ProcessProposalHandler()` method will be called.
+
+To test if the proposal processing and vote extensions are working correctly, you can check the logs for any relevant messages. If the logs do not provide enough information, you can also reinitialize your local testing environment by running the `./scripts/single_node/setup.sh` script.
diff --git a/copy-of-sdk-docs/tutorials/vote-extensions/auction-frontrunning/03-demo-of-mitigating-front-running.md b/copy-of-sdk-docs/tutorials/vote-extensions/auction-frontrunning/03-demo-of-mitigating-front-running.md
new file mode 100644
index 00000000..24c688c9
--- /dev/null
+++ b/copy-of-sdk-docs/tutorials/vote-extensions/auction-frontrunning/03-demo-of-mitigating-front-running.md
@@ -0,0 +1,106 @@
+# Demo of Mitigating Front-Running with Vote Extensions
+
+The purpose of this demo is to test the implementation of the `VoteExtensionHandler` and `PrepareProposalHandler` that we have just added to the codebase. These handlers are designed to mitigate front-running by ensuring that all validators have a consistent view of the mempool when preparing proposals.
+
+In this demo, we are using a 3 validator network. The Beacon validator is special because it has a custom transaction provider enabled. This means that it can potentially manipulate the order of transactions in a proposal to its advantage (i.e., front-running).
+
+1. Bootstrap the validator network: This sets up a network with 3 validators. The script `./scripts/configure.sh` is used to configure the network and the validators.
+
+```shell
+cd scripts
+./configure.sh
+```
+
+If this doesn't work, please ensure you have run `make build` in the `tutorials/nameservice/base` directory.
+
+
+2. Have alice attempt to reserve `bob.cosmos`: This is a normal transaction that alice wants to execute. The script `./scripts/reserve.sh "bob.cosmos"` is used to send this transaction.
+
+```shell
+./reserve.sh "bob.cosmos"
+```
+
+3. Query to verify the name has been reserved: This is to check the result of the transaction. The script `./scripts/whois.sh "bob.cosmos"` is used to query the state of the blockchain.
+
+```shell
+./whois.sh "bob.cosmos"
+```
+
+It should return:
+
+```json
+{
+ "name": {
+ "name": "bob.cosmos",
+ "owner": "cosmos1nq9wuvuju4jdmpmzvxmg8zhhu2ma2y2l2pnu6w",
+ "resolve_address": "cosmos1h6zy2kn9efxtw5z22rc5k9qu7twl70z24kr3ht",
+ "amount": [
+ {
+ "denom": "uatom",
+ "amount": "1000"
+ }
+ ]
+ }
+}
+```
+
+To detect front-running attempts by the beacon, scrutinise the logs during the `ProcessProposal` stage. Open the logs for each validator, including the beacon, `val1`, and `val2`, to observe the following behavior. The log file location can vary depending on your setup, but it is typically in a directory like `$HOME/cosmos/nodes/#{validator}/logs`, where `#{validator}` is one of `beacon`, `val1`, or `val2`. Run the following to tail the logs of the validator or beacon:
+
+```shell
+tail -f $HOME/cosmos/nodes/#{validator}/logs
+```
+
+```shell
+2:47PM ERR :: Detected invalid proposal bid :: name:"bob.cosmos" resolveAddress:"cosmos1wmuwv38pdur63zw04t0c78r2a8dyt08hf9tpvd" owner:"cosmos1wmuwv38pdur63zw04t0c78r2a8dyt08hf9tpvd" amount: module=server
+2:47PM ERR :: Unable to validate bids in Process Proposal :: module=server
+2:47PM ERR prevote step: state machine rejected a proposed block; this should not happen:the proposer may be misbehaving; prevoting nil err=null height=142 module=consensus round=0
+```
+
+
+4. List the Beacon's keys: This is to verify the addresses of the validators. The script `./scripts/list-beacon-keys.sh` is used to list the keys.
+
+```shell
+./list-beacon-keys.sh
+```
+
+We should receive something similar to the following:
+
+```json
+[
+ {
+ "name": "alice",
+ "type": "local",
+ "address": "cosmos1h6zy2kn9efxtw5z22rc5k9qu7twl70z24kr3ht",
+ "pubkey": "{\"@type\":\"/cosmos.crypto.secp256k1.PubKey\",\"key\":\"A32cvBUkNJz+h2vld4A5BxvU5Rd+HyqpR3aGtvEhlm4C\"}"
+ },
+ {
+ "name": "barbara",
+ "type": "local",
+ "address": "cosmos1nq9wuvuju4jdmpmzvxmg8zhhu2ma2y2l2pnu6w",
+ "pubkey": "{\"@type\":\"/cosmos.crypto.secp256k1.PubKey\",\"key\":\"Ag9PFsNyTQPoJdbyCWia5rZH9CrvSrjMsk7Oz4L3rXQ5\"}"
+ },
+ {
+ "name": "beacon-key",
+ "type": "local",
+ "address": "cosmos1ez9a6x7lz4gvn27zr368muw8jeyas7sv84lfup",
+ "pubkey": "{\"@type\":\"/cosmos.crypto.secp256k1.PubKey\",\"key\":\"AlzJZMWyN7lass710TnAhyuFKAFIaANJyw5ad5P2kpcH\"}"
+ },
+ {
+ "name": "cindy",
+ "type": "local",
+ "address": "cosmos1m5j6za9w4qc2c5ljzxmm2v7a63mhjeag34pa3g",
+ "pubkey": "{\"@type\":\"/cosmos.crypto.secp256k1.PubKey\",\"key\":\"A6F1/3yot5OpyXoSkBbkyl+3rqBkxzRVSJfvSpm/AvW5\"}"
+ }
+]
+```
+
+This allows us to match up the addresses and see that the bid was not front-run by the beacon, as the resolve address is Alice's address and not the beacon's address.
+
+By running this demo, we can verify that the `VoteExtensionHandler` and `PrepareProposalHandler` are working as expected and that they are able to prevent front-running.
+
+## Conclusion
+
+In this tutorial, we've tackled front-running and MEV, focusing on nameservice auctions' vulnerability to these issues. We've explored vote extensions, a key feature of ABCI 2.0, and integrated them into a Cosmos SDK application.
+
+Through practical exercises, you've implemented vote extensions, and tested their effectiveness in creating a fair auction system. You've gained practical insights by configuring a validator network and analysing blockchain logs.
+
+Keep experimenting with these concepts, engage with the community, and stay updated on new advancements. The knowledge you've acquired here is crucial for developing secure and fair blockchain applications.
diff --git a/copy-of-sdk-docs/tutorials/vote-extensions/auction-frontrunning/03-demo-of-mitigating-front-running.md.bak b/copy-of-sdk-docs/tutorials/vote-extensions/auction-frontrunning/03-demo-of-mitigating-front-running.md.bak
new file mode 100644
index 00000000..63f37b4a
--- /dev/null
+++ b/copy-of-sdk-docs/tutorials/vote-extensions/auction-frontrunning/03-demo-of-mitigating-front-running.md.bak
@@ -0,0 +1,106 @@
+# Demo of Mitigating Front-Running with Vote Extensions
+
+The purpose of this demo is to test the implementation of the `VoteExtensionHandler` and `PrepareProposalHandler` that we have just added to the codebase. These handlers are designed to mitigate front-running by ensuring that all validators have a consistent view of the mempool when preparing proposals.
+
+In this demo, we are using a 3 validator network. The Beacon validator is special because it has a custom transaction provider enabled. This means that it can potentially manipulate the order of transactions in a proposal to its advantage (i.e., front-running).
+
+1. Bootstrap the validator network: This sets up a network with 3 validators. The script `./scripts/configure.sh` is used to configure the network and the validators.
+
+```shell
+cd scripts
+./configure.sh
+```
+
+If this doesn't work, please ensure you have run `make build` in the `tutorials/nameservice/base` directory.
+
+
+2. Have alice attempt to reserve `bob.cosmos`: This is a normal transaction that alice wants to execute. The script `./scripts/reserve.sh "bob.cosmos"` is used to send this transaction.
+
+```shell
+./reserve.sh "bob.cosmos"
+```
+
+3. Query to verify the name has been reserved: This is to check the result of the transaction. The script `./scripts/whois.sh "bob.cosmos"` is used to query the state of the blockchain.
+
+```shell
+./whois.sh "bob.cosmos"
+```
+
+It should return:
+
+```json
+{
+ "name": {
+ "name": "bob.cosmos",
+ "owner": "cosmos1nq9wuvuju4jdmpmzvxmg8zhhu2ma2y2l2pnu6w",
+ "resolve_address": "cosmos1h6zy2kn9efxtw5z22rc5k9qu7twl70z24kr3ht",
+ "amount": [
+ {
+ "denom": "uatom",
+ "amount": "1000"
+ }
+ ]
+ }
+}
+```
+
+To detect front-running attempts by the beacon, scrutinise the logs during the `ProcessProposal` stage. Open the logs for each validator, including the beacon, `val1`, and `val2`, to observe the following behavior. The log file location can vary depending on your setup, but it is typically in a directory like `$HOME/cosmos/nodes/#{validator}/logs`, where `#{validator}` is one of `beacon`, `val1`, or `val2`. Run the following to tail the logs of the validator or beacon:
+
+```shell
+tail -f $HOME/cosmos/nodes/#{validator}/logs
+```
+
+```shell
+2:47PM ERR ❌️:: Detected invalid proposal bid :: name:"bob.cosmos" resolveAddress:"cosmos1wmuwv38pdur63zw04t0c78r2a8dyt08hf9tpvd" owner:"cosmos1wmuwv38pdur63zw04t0c78r2a8dyt08hf9tpvd" amount: module=server
+2:47PM ERR ❌️:: Unable to validate bids in Process Proposal :: module=server
+2:47PM ERR prevote step: state machine rejected a proposed block; this should not happen:the proposer may be misbehaving; prevoting nil err=null height=142 module=consensus round=0
+```
+
+
+4. List the Beacon's keys: This is to verify the addresses of the validators. The script `./scripts/list-beacon-keys.sh` is used to list the keys.
+
+```shell
+./list-beacon-keys.sh
+```
+
+We should receive something similar to the following:
+
+```json
+[
+ {
+ "name": "alice",
+ "type": "local",
+ "address": "cosmos1h6zy2kn9efxtw5z22rc5k9qu7twl70z24kr3ht",
+ "pubkey": "{\"@type\":\"/cosmos.crypto.secp256k1.PubKey\",\"key\":\"A32cvBUkNJz+h2vld4A5BxvU5Rd+HyqpR3aGtvEhlm4C\"}"
+ },
+ {
+ "name": "barbara",
+ "type": "local",
+ "address": "cosmos1nq9wuvuju4jdmpmzvxmg8zhhu2ma2y2l2pnu6w",
+ "pubkey": "{\"@type\":\"/cosmos.crypto.secp256k1.PubKey\",\"key\":\"Ag9PFsNyTQPoJdbyCWia5rZH9CrvSrjMsk7Oz4L3rXQ5\"}"
+ },
+ {
+ "name": "beacon-key",
+ "type": "local",
+ "address": "cosmos1ez9a6x7lz4gvn27zr368muw8jeyas7sv84lfup",
+ "pubkey": "{\"@type\":\"/cosmos.crypto.secp256k1.PubKey\",\"key\":\"AlzJZMWyN7lass710TnAhyuFKAFIaANJyw5ad5P2kpcH\"}"
+ },
+ {
+ "name": "cindy",
+ "type": "local",
+ "address": "cosmos1m5j6za9w4qc2c5ljzxmm2v7a63mhjeag34pa3g",
+ "pubkey": "{\"@type\":\"/cosmos.crypto.secp256k1.PubKey\",\"key\":\"A6F1/3yot5OpyXoSkBbkyl+3rqBkxzRVSJfvSpm/AvW5\"}"
+ }
+]
+```
+
+This allows us to match up the addresses and see that the bid was not front-run by the beacon, as the resolve address is Alice's address and not the beacon's address.
+
+By running this demo, we can verify that the `VoteExtensionHandler` and `PrepareProposalHandler` are working as expected and that they are able to prevent front-running.
+
+## Conclusion
+
+In this tutorial, we've tackled front-running and MEV, focusing on nameservice auctions' vulnerability to these issues. We've explored vote extensions, a key feature of ABCI 2.0, and integrated them into a Cosmos SDK application.
+
+Through practical exercises, you've implemented vote extensions, and tested their effectiveness in creating a fair auction system. You've gained practical insights by configuring a validator network and analysing blockchain logs.
+
+Keep experimenting with these concepts, engage with the community, and stay updated on new advancements. The knowledge you've acquired here is crucial for developing secure and fair blockchain applications.
diff --git a/copy-of-sdk-docs/tutorials/vote-extensions/auction-frontrunning/_category_.json b/copy-of-sdk-docs/tutorials/vote-extensions/auction-frontrunning/_category_.json
new file mode 100644
index 00000000..aab0cfdf
--- /dev/null
+++ b/copy-of-sdk-docs/tutorials/vote-extensions/auction-frontrunning/_category_.json
@@ -0,0 +1,5 @@
+{
+ "label": " Mitigating Auction Front-Running Tutorial",
+ "position": 0,
+ "link": null
+}
\ No newline at end of file
diff --git a/copy-of-sdk-docs/tutorials/vote-extensions/oracle/00-getting-started.md b/copy-of-sdk-docs/tutorials/vote-extensions/oracle/00-getting-started.md
new file mode 100644
index 00000000..59ea65be
--- /dev/null
+++ b/copy-of-sdk-docs/tutorials/vote-extensions/oracle/00-getting-started.md
@@ -0,0 +1,36 @@
+# Getting Started
+
+## Table of Contents
+
+* [What is an Oracle?](./01-what-is-an-oracle.md)
+* [Implementing Vote Extensions](./02-implementing-vote-extensions.md)
+* [Testing the Oracle Module](./03-testing-oracle.md)
+
+## Prerequisites
+
+Before you start with this tutorial, make sure you have:
+
+* A working chain project. This tutorial won't cover the steps of creating a new chain/module.
+* Familiarity with the Cosmos SDK. If you're not familiar with it, we suggest you start with [Cosmos SDK Tutorials](https://tutorials.cosmos.network), as ABCI++ is considered an advanced topic.
+* Read and understood [What is an Oracle?](01-what-is-an-oracle.md). This provides necessary background information for understanding the Oracle module.
+* Basic understanding of Go programming language.
+
+## What are Vote extensions?
+
+Vote extensions are arbitrary information that can be inserted into a block. This feature is part of ABCI 2.0, which is available for use in the SDK 0.50 release and is part of the CometBFT 0.38 release.
+
+More information about vote extensions can be found [here](https://docs.cosmos.network/main/build/abci/vote-extensions).
+
+## Overview of the project
+
+We’ll go through the creation of a simple price oracle module focusing on the vote extensions implementation, ignoring the details inside the price oracle itself.
+
+We’ll go through the implementation of:
+
+* `ExtendVote` to get information from external price APIs.
+* `VerifyVoteExtension` to check that the format of the provided votes is correct.
+* `PrepareProposal` to process the vote extensions from the previous block and include them into the proposal as a transaction.
+* `ProcessProposal` to check that the first transaction in the proposal is actually a “special tx” that contains the price information.
+* `PreBlocker` to make price information available during `FinalizeBlock`.
+
+If you would like to see the complete working oracle module, please see [here](https://github.com/cosmos/sdk-tutorials/blob/master/tutorials/oracle/base/x/oracle).
diff --git a/copy-of-sdk-docs/tutorials/vote-extensions/oracle/01-what-is-an-oracle.md b/copy-of-sdk-docs/tutorials/vote-extensions/oracle/01-what-is-an-oracle.md
new file mode 100644
index 00000000..9d50ddb3
--- /dev/null
+++ b/copy-of-sdk-docs/tutorials/vote-extensions/oracle/01-what-is-an-oracle.md
@@ -0,0 +1,13 @@
+# What is an Oracle?
+
+An oracle in blockchain technology is a system that provides external data to a blockchain network. It acts as a source of information that is not natively accessible within the blockchain's closed environment. This can range from financial market prices to real-world events, making it crucial for decentralised applications.
+
+## Oracle in the Cosmos SDK
+
+In the Cosmos SDK, an oracle module can be implemented to provide external data to the blockchain. This module can use features like vote extensions to submit additional data during the consensus process, which can then be used by the blockchain to update its state with information from the outside world.
+
+For instance, a price oracle module in the Cosmos SDK could supply timely and accurate asset price information, which is vital for various financial operations within the blockchain ecosystem.
+
+## Conclusion
+
+Oracles are essential for blockchains to interact with external data, enabling them to respond to real-world information and events. Their implementation is key to the reliability and robustness of blockchain networks.
diff --git a/copy-of-sdk-docs/tutorials/vote-extensions/oracle/02-implementing-vote-extensions.md b/copy-of-sdk-docs/tutorials/vote-extensions/oracle/02-implementing-vote-extensions.md
new file mode 100644
index 00000000..aa610b5d
--- /dev/null
+++ b/copy-of-sdk-docs/tutorials/vote-extensions/oracle/02-implementing-vote-extensions.md
@@ -0,0 +1,219 @@
+# Implementing Vote Extensions
+
+## Implement ExtendVote
+
+First we’ll create the `OracleVoteExtension` struct; this is the object that will be marshaled as bytes and signed by the validator.
+
+In our example we’ll use JSON to marshal the vote extension for simplicity, but we recommend finding an encoding that produces a smaller output, given that large vote extensions could impact CometBFT’s performance. Custom encodings and compressed bytes can be used out of the box.
+
+```go
+// OracleVoteExtension defines the canonical vote extension structure.
+type OracleVoteExtension struct {
+ Height int64
+ Prices map[string]math.LegacyDec
+}
+```
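+To get a feel for the size concern mentioned above, here is a small illustrative sketch (not part of the module) that compares the plain JSON encoding of a price map with a gzip-compressed copy. Repetitive JSON keys compress well, but you should benchmark with your own realistic data:
+
+```go
+package main
+
+import (
+	"bytes"
+	"compress/gzip"
+	"encoding/json"
+	"fmt"
+)
+
+// voteExt is a simplified stand-in for OracleVoteExtension that uses
+// strings for prices so the example has no SDK dependency.
+type voteExt struct {
+	Height int64
+	Prices map[string]string
+}
+
+func main() {
+	ve := voteExt{Height: 100, Prices: map[string]string{}}
+	for i := 0; i < 50; i++ {
+		ve.Prices[fmt.Sprintf("PAIR-%02d/USD", i)] = "12.345678901234567890"
+	}
+
+	plain, err := json.Marshal(ve)
+	if err != nil {
+		panic(err)
+	}
+
+	// Compress the JSON bytes before putting them in the vote extension.
+	var buf bytes.Buffer
+	zw := gzip.NewWriter(&buf)
+	if _, err := zw.Write(plain); err != nil {
+		panic(err)
+	}
+	if err := zw.Close(); err != nil {
+		panic(err)
+	}
+
+	fmt.Println(len(plain) > buf.Len()) // the compressed copy is smaller here
+}
+```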
+
+Then we’ll create a `VoteExtHandler` struct that contains everything we need to query for prices.
+
+```go
+type VoteExtHandler struct {
+ logger log.Logger
+ currentBlock int64 // current block height
+ lastPriceSyncTS time.Time // last time we synced prices
+ providerTimeout time.Duration // timeout for fetching prices from providers
+ providers map[string]Provider // mapping of provider name to provider (e.g. Binance -> BinanceProvider)
+ providerPairs map[string][]keeper.CurrencyPair // mapping of provider name to supported pairs (e.g. Binance -> [ATOM/USD])
+
+ Keeper keeper.Keeper // keeper of our oracle module
+}
+```
+
+Finally, a function that returns `sdk.ExtendVoteHandler` is needed too, and this is where our vote extension logic will live.
+
+```go
+func (h *VoteExtHandler) ExtendVoteHandler() sdk.ExtendVoteHandler {
+ return func(ctx sdk.Context, req *abci.RequestExtendVote) (*abci.ResponseExtendVote, error) {
+ // here we'd have a helper function that gets all the prices and does a weighted average using the volume of each market
+ prices := h.getAllVolumeWeightedPrices()
+
+ voteExt := OracleVoteExtension{
+ Height: req.Height,
+ Prices: prices,
+ }
+
+ bz, err := json.Marshal(voteExt)
+ if err != nil {
+ return nil, fmt.Errorf("failed to marshal vote extension: %w", err)
+ }
+
+ return &abci.ResponseExtendVote{VoteExtension: bz}, nil
+ }
+}
+```
+
+As you can see above, the creation of a vote extension is pretty simple and we just have to return bytes. CometBFT will handle the signing of these bytes for us. We ignored the process of getting the prices, but you can see a more complete example [here](https://github.com/cosmos/sdk-tutorials/blob/master/tutorials/oracle/base/x/oracle/abci/vote_extensions.go).
+
+## Implement VerifyVoteExtension
+
+Here we’ll do some simple checks like:
+
+* Is the vote extension unmarshaled correctly?
+* Is the vote extension for the right height?
+* Some other validation, for example, are the prices from this extension too deviated from my own prices? Or maybe checks that can detect malicious behavior.
+
+```go
+func (h *VoteExtHandler) VerifyVoteExtensionHandler() sdk.VerifyVoteExtensionHandler {
+ return func(ctx sdk.Context, req *abci.RequestVerifyVoteExtension) (*abci.ResponseVerifyVoteExtension, error) {
+ var voteExt OracleVoteExtension
+ err := json.Unmarshal(req.VoteExtension, &voteExt)
+ if err != nil {
+ return nil, fmt.Errorf("failed to unmarshal vote extension: %w", err)
+ }
+
+ if voteExt.Height != req.Height {
+ return nil, fmt.Errorf("vote extension height does not match request height; expected: %d, got: %d", req.Height, voteExt.Height)
+ }
+
+ // Verify incoming prices from a validator are valid. Note, verification during
+ // VerifyVoteExtensionHandler MUST be deterministic. For brevity and demo
+ // purposes, we omit implementation.
+ if err := h.verifyOraclePrices(ctx, voteExt.Prices); err != nil {
+ return nil, fmt.Errorf("failed to verify oracle prices from validator %X: %w", req.ValidatorAddress, err)
+ }
+
+ return &abci.ResponseVerifyVoteExtension{Status: abci.ResponseVerifyVoteExtension_ACCEPT}, nil
+ }
+}
+```
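+
+To illustrate the deterministic-validation point, here is a hypothetical deviation check; the helper name and the threshold are invented for this sketch and are not part of the tutorial code:
+
+```go
+package main
+
+import (
+	"fmt"
+	"math"
+)
+
+// maxDeviationBps is the maximum allowed deviation in basis points (1% here).
+const maxDeviationBps = 100.0
+
+// withinDeviation reports whether a received price is close enough to our
+// locally observed price. It is pure arithmetic, so every validator gets the
+// same result from the same inputs, which keeps verification deterministic.
+func withinDeviation(local, received float64) bool {
+	if local == 0 {
+		return false
+	}
+	deviationBps := math.Abs(received-local) / local * 10000
+	return deviationBps <= maxDeviationBps
+}
+
+func main() {
+	fmt.Println(withinDeviation(100.0, 100.5)) // 50 bps away: accepted
+	fmt.Println(withinDeviation(100.0, 103.0)) // 300 bps away: rejected
+}
+```
+
+Anything non-deterministic (wall-clock time, random sampling, live network calls) must stay out of this handler, or validators would disagree on which extensions are valid.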
+
+## Implement PrepareProposal
+
+First, we create the `ProposalHandler` struct, which holds a logger, our oracle keeper, and a `baseapp.ValidatorStore` used to validate vote extension signatures:
+
+```go
+type ProposalHandler struct {
+ logger log.Logger
+ keeper keeper.Keeper // our oracle module keeper
+ valStore baseapp.ValidatorStore // to get the current validators' pubkeys
+}
+```
+
+Next, we create the struct for our “special tx”, which contains the prices and the votes so validators can later re-check in `ProcessProposal` that they get the same result as the block’s proposer. With it we can also verify that all the votes were used, by comparing them against the votes received in `ProcessProposal`.
+
+```go
+type StakeWeightedPrices struct {
+ StakeWeightedPrices map[string]math.LegacyDec
+ ExtendedCommitInfo abci.ExtendedCommitInfo
+}
+```
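+
+Because the injected “tx” is plain bytes, `PrepareProposal` and `ProcessProposal` rely on a simple marshal/unmarshal round trip. The simplified sketch below uses a stripped-down stand-in struct (no `math.LegacyDec` or `ExtendedCommitInfo`) to show that symmetry:
+
+```go
+package main
+
+import (
+	"encoding/json"
+	"fmt"
+)
+
+// stakeWeightedPrices is a simplified stand-in for the tutorial's
+// StakeWeightedPrices struct (math.LegacyDec and ExtendedCommitInfo omitted).
+type stakeWeightedPrices struct {
+	Prices map[string]string `json:"prices"`
+}
+
+// roundTrip marshals the injected "tx" to bytes (as the proposer does in
+// PrepareProposal) and unmarshals them back (as validators do in
+// ProcessProposal), returning the decoded prices.
+func roundTrip(prices map[string]string) (map[string]string, error) {
+	bz, err := json.Marshal(stakeWeightedPrices{Prices: prices})
+	if err != nil {
+		return nil, err
+	}
+	var decoded stakeWeightedPrices
+	if err := json.Unmarshal(bz, &decoded); err != nil {
+		return nil, err
+	}
+	return decoded.Prices, nil
+}
+
+func main() {
+	decoded, err := roundTrip(map[string]string{"ATOM": "9.25"})
+	if err != nil {
+		panic(err)
+	}
+	fmt.Println(decoded["ATOM"])
+}
+```
+
+Production code would typically use a proper codec rather than `encoding/json`, but the symmetry requirement is the same: whatever the proposer encodes, every validator must be able to decode identically.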
+
+Now we create the `PrepareProposalHandler`. In this step we first check that the vote extensions’ signatures are correct, using the `ValidateVoteExtensions` helper function from the `baseapp` package.
+
+```go
+func (h *ProposalHandler) PrepareProposal() sdk.PrepareProposalHandler {
+ return func(ctx sdk.Context, req *abci.RequestPrepareProposal) (*abci.ResponsePrepareProposal, error) {
+ err := baseapp.ValidateVoteExtensions(ctx, h.valStore, req.Height, ctx.ChainID(), req.LocalLastCommit)
+ if err != nil {
+ return nil, err
+ }
+...
+```
+
+Then we proceed with the calculations only if the current height is higher than the height at which vote extensions were enabled. Remember that vote extensions are made available to the block proposer in the block after the one in which they were produced.
+
+```go
+...
+ proposalTxs := req.Txs
+
+ if req.Height > ctx.ConsensusParams().Abci.VoteExtensionsEnableHeight {
+ stakeWeightedPrices, err := h.computeStakeWeightedOraclePrices(ctx, req.LocalLastCommit)
+ if err != nil {
+ return nil, errors.New("failed to compute stake-weighted oracle prices")
+ }
+
+ injectedVoteExtTx := StakeWeightedPrices{
+ StakeWeightedPrices: stakeWeightedPrices,
+ ExtendedCommitInfo: req.LocalLastCommit,
+ }
+...
+```
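+
+`computeStakeWeightedOraclePrices` is left unimplemented in the tutorial. Conceptually, it averages each validator's submitted price weighted by its voting power; a self-contained sketch under that assumption (an invented `vote` type and `float64` in place of `math.LegacyDec`) could look like:
+
+```go
+package main
+
+import "fmt"
+
+// vote pairs a validator's voting power with the price it submitted.
+type vote struct {
+	Power int64
+	Price float64
+}
+
+// stakeWeightedPrice averages the submitted prices, weighting each one by the
+// submitting validator's voting power.
+func stakeWeightedPrice(votes []vote) float64 {
+	var weighted float64
+	var totalPower int64
+	for _, v := range votes {
+		weighted += v.Price * float64(v.Power)
+		totalPower += v.Power
+	}
+	if totalPower == 0 {
+		return 0
+	}
+	return weighted / float64(totalPower)
+}
+
+func main() {
+	votes := []vote{
+		{Power: 100, Price: 10.0}, // validator A
+		{Power: 300, Price: 10.5}, // validator B, with 3x the stake
+	}
+	// (10.0*100 + 10.5*300) / 400 = 10.375
+	fmt.Println(stakeWeightedPrice(votes))
+}
+```
+
+The real implementation would decode each validator's vote extension from `req.LocalLastCommit` and do this per asset, but the weighting itself is this simple.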
+
+Finally, we inject the result as a transaction at a specific location, usually at the beginning of the block.
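+
+This injection step can be sketched with plain byte slices (a hypothetical `injectTx` helper; in the real handler the injected bytes would be the marshalled `StakeWeightedPrices` prepended to `req.Txs`):
+
+```go
+package main
+
+import "fmt"
+
+// injectTx places the injected vote-extension "tx" at index 0 of the proposal,
+// in front of the regular transactions.
+func injectTx(injected []byte, proposalTxs [][]byte) [][]byte {
+	return append([][]byte{injected}, proposalTxs...)
+}
+
+func main() {
+	regularTxs := [][]byte{[]byte("tx1"), []byte("tx2")}
+	txs := injectTx([]byte("prices"), regularTxs)
+
+	// The injected payload now sits at index 0, where ProcessProposal and
+	// PreBlocker expect to find it.
+	fmt.Println(string(txs[0]), len(txs))
+}
+```
+
+Keeping the injected payload at a fixed, well-known index is what lets every validator find it again in `ProcessProposal` and `PreBlocker`.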
+
+## Implement ProcessProposal
+
+Now we can implement the method that all validators execute to verify that the proposer has done its work correctly.
+
+Here, if vote extensions are enabled, we check whether the transaction at index 0 is an injected vote extension:
+
+```go
+func (h *ProposalHandler) ProcessProposal() sdk.ProcessProposalHandler {
+ return func(ctx sdk.Context, req *abci.RequestProcessProposal) (*abci.ResponseProcessProposal, error) {
+ if req.Height > ctx.ConsensusParams().Abci.VoteExtensionsEnableHeight {
+ var injectedVoteExtTx StakeWeightedPrices
+ if err := json.Unmarshal(req.Txs[0], &injectedVoteExtTx); err != nil {
+ h.logger.Error("failed to decode injected vote extension tx", "err", err)
+ return &abci.ResponseProcessProposal{Status: abci.ResponseProcessProposal_REJECT}, nil
+ }
+...
+```
+
+Then we re-validate the vote extension signatures using `baseapp.ValidateVoteExtensions`, re-calculate the results (just as in `PrepareProposal`), and compare them with the results from the injected tx.
+
+```go
+ err := baseapp.ValidateVoteExtensions(ctx, h.valStore, req.Height, ctx.ChainID(), injectedVoteExtTx.ExtendedCommitInfo)
+ if err != nil {
+ return nil, err
+ }
+
+ // Verify the proposer's stake-weighted oracle prices by computing the same
+ // calculation and comparing the results. We omit verification for brevity
+ // and demo purposes.
+ stakeWeightedPrices, err := h.computeStakeWeightedOraclePrices(ctx, injectedVoteExtTx.ExtendedCommitInfo)
+ if err != nil {
+ return &abci.ResponseProcessProposal{Status: abci.ResponseProcessProposal_REJECT}, nil
+ }
+
+ if err := compareOraclePrices(injectedVoteExtTx.StakeWeightedPrices, stakeWeightedPrices); err != nil {
+ return &abci.ResponseProcessProposal{Status: abci.ResponseProcessProposal_REJECT}, nil
+ }
+ }
+
+ return &abci.ResponseProcessProposal{Status: abci.ResponseProcessProposal_ACCEPT}, nil
+ }
+}
+```
+
+Important: in this example we left out the mempool and other basics; please refer to the `DefaultProposalHandler` for a complete implementation: [https://github.com/cosmos/cosmos-sdk/blob/v0.50.1/baseapp/abci_utils.go](https://github.com/cosmos/cosmos-sdk/blob/v0.50.1/baseapp/abci_utils.go)
+
+## Implement PreBlocker
+
+Now validators are extending their votes, verifying other validators’ votes, and including the result in the block. But how do we actually make use of this result? This is done in the `PreBlocker`, which runs before any other code during `FinalizeBlock`, ensuring the information is available to the chain and its modules during the entire block execution (starting from `BeginBlock`).
+
+At this point we know that the injected tx is well-formed and has been verified by the validators participating in consensus, so making use of it is straightforward: check that vote extensions are enabled, pick up the first transaction, and use a method in your module’s keeper to store the result.
+
+```go
+func (h *ProposalHandler) PreBlocker(ctx sdk.Context, req *abci.RequestFinalizeBlock) (*sdk.ResponsePreBlock, error) {
+ res := &sdk.ResponsePreBlock{}
+ if len(req.Txs) == 0 {
+ return res, nil
+ }
+
+ if req.Height > ctx.ConsensusParams().Abci.VoteExtensionsEnableHeight {
+ var injectedVoteExtTx StakeWeightedPrices
+ if err := json.Unmarshal(req.Txs[0], &injectedVoteExtTx); err != nil {
+ h.logger.Error("failed to decode injected vote extension tx", "err", err)
+ return nil, err
+ }
+
+ // set oracle prices using the passed in context, which will make these prices available in the current block
+ if err := h.keeper.SetOraclePrices(ctx, injectedVoteExtTx.StakeWeightedPrices); err != nil {
+ return nil, err
+ }
+ }
+ return res, nil
+}
+```
+
+## Conclusion
+
+In this tutorial, we've created a simple price oracle module that incorporates vote extensions. We've seen how to implement `ExtendVote`, `VerifyVoteExtension`, `PrepareProposal`, `ProcessProposal`, and `PreBlocker` to handle the voting and verification process of vote extensions, as well as how to make use of the results during the block execution.
diff --git a/copy-of-sdk-docs/tutorials/vote-extensions/oracle/03-testing-oracle.md b/copy-of-sdk-docs/tutorials/vote-extensions/oracle/03-testing-oracle.md
new file mode 100644
index 00000000..905ca0d7
--- /dev/null
+++ b/copy-of-sdk-docs/tutorials/vote-extensions/oracle/03-testing-oracle.md
@@ -0,0 +1,57 @@
+# Testing the Oracle Module
+
+We will guide you through the process of testing the Oracle module in your application. The Oracle module uses vote extensions to provide current price data. If you would like to see the complete working oracle module, see [the sdk-tutorials repository](https://github.com/cosmos/sdk-tutorials/blob/master/tutorials/oracle/base/x/oracle).
+
+## Step 1: Compile and Install the Application
+
+First, we need to compile and install the application. Please ensure you are in the `tutorials/oracle/base` directory. Run the following command in your terminal:
+
+```shell
+make install
+```
+
+This command compiles the application and moves the resulting binary to a location in your system's PATH.
+
+## Step 2: Initialise the Application
+
+Next, we need to initialise the application. Run the following command in your terminal:
+
+```shell
+make init
+```
+
+This command runs the script `tutorials/oracle/base/scripts/init.sh`, which sets up the necessary configuration for your application to run. This includes creating the `app.toml` configuration file and initialising the blockchain with a genesis block.
+
+## Step 3: Start the Application
+
+Now, we can start the application. Run the following command in your terminal:
+
+```shell
+exampled start
+```
+
+This command starts your application, begins the blockchain node, and starts processing transactions.
+
+## Step 4: Query the Oracle Prices
+
+Finally, we can query the current prices from the Oracle module. Run the following command in your terminal:
+
+```shell
+exampled q oracle prices
+```
+
+This command queries the current prices from the Oracle module. The expected output shows that the vote extensions were successfully included in the block and the Oracle module was able to retrieve the price data.
+
+## Understanding Vote Extensions in Oracle
+
+In the Oracle module, the `ExtendVoteHandler` function is responsible for creating the vote extensions. This function fetches the current prices from the provider, creates an `OracleVoteExtension` struct with these prices, and then marshals this struct into bytes. These bytes are then set as the vote extension.
+
+In the context of testing, the Oracle module uses a mock provider to simulate the behavior of a real price provider. This mock provider is defined in the `mockprovider` package and returns predefined prices for specific currency pairs.
+
+## Conclusion
+
+In this tutorial, we've delved into the concept of oracles in blockchain technology, focusing on their role in providing external data to a blockchain network. We've explored vote extensions, a powerful feature of ABCI++, and integrated them into a Cosmos SDK application to create a price oracle module.
+
+Through hands-on exercises, you've implemented vote extensions, and tested their effectiveness in providing timely and accurate asset price information. You've gained practical insights by setting up a mock provider for testing and analysing the process of extending votes, verifying vote extensions, and preparing and processing proposals.
+
+Keep experimenting with these concepts, engage with the community, and stay updated on new advancements. The knowledge you've acquired here is crucial for developing robust and reliable blockchain applications that can interact with real-world data.
diff --git a/copy-of-sdk-docs/tutorials/vote-extensions/oracle/_category_.json b/copy-of-sdk-docs/tutorials/vote-extensions/oracle/_category_.json
new file mode 100644
index 00000000..b63ffe2f
--- /dev/null
+++ b/copy-of-sdk-docs/tutorials/vote-extensions/oracle/_category_.json
@@ -0,0 +1,5 @@
+{
+ "label": "Oracle Tutorial",
+ "position": 1,
+ "link": null
+}
\ No newline at end of file
diff --git a/copy-of-sdk-docs/user/run-node/00-keyring.md b/copy-of-sdk-docs/user/run-node/00-keyring.md
new file mode 100644
index 00000000..95f754d9
--- /dev/null
+++ b/copy-of-sdk-docs/user/run-node/00-keyring.md
@@ -0,0 +1,145 @@
+---
+sidebar_position: 1
+---
+
+# Setting up the keyring
+
+:::note Synopsis
+This document describes how to configure and use the keyring and its various backends for an [**application**](../../learn/beginner/00-app-anatomy.md).
+:::
+
+The keyring holds the private/public key pairs used to interact with a node. For instance, a validator key needs to be set up before running the blockchain node, so that blocks can be correctly signed. The private key can be stored in different locations, called "backends," such as a file or the operating system's own key storage.
+
+## Available backends for the keyring
+
+Starting with the v0.38.0 release, Cosmos SDK comes with a new keyring implementation
+that provides a set of commands to manage cryptographic keys in a secure fashion. The
+new keyring supports multiple storage backends, some of which may not be available on
+all operating systems.
+
+### The `os` backend
+
+The `os` backend relies on operating system-specific defaults to handle key storage
+securely. Typically, an operating system's credential subsystem handles password prompts,
+private keys storage, and user sessions according to the user's password policies. Here
+is a list of the most popular operating systems and their respective password managers:
+
+* macOS: [Keychain](https://support.apple.com/en-gb/guide/keychain-access/welcome/mac)
+* Windows: [Credentials Management API](https://docs.microsoft.com/en-us/windows/win32/secauthn/credentials-management)
+* GNU/Linux:
+ * [libsecret](https://gitlab.gnome.org/GNOME/libsecret)
+ * [kwallet](https://api.kde.org/frameworks/kwallet/html/index.html)
+ * [keyctl](https://www.kernel.org/doc/html/latest/security/keys/core.html)
+
+GNU/Linux distributions that use GNOME as the default desktop environment typically come with
+[Seahorse](https://wiki.gnome.org/Apps/Seahorse). Users of KDE based distributions are
+commonly provided with [KDE Wallet Manager](https://userbase.kde.org/KDE_Wallet_Manager).
+Whilst the former is in fact a convenient `libsecret` frontend, the latter is a `kwallet`
+client. `keyctl` is a secure backend that leverages the Linux kernel's security
+key-management system to store cryptographic keys securely in memory.
+
+`os` is the default option, since operating systems' default credential managers are
+designed to meet users' most common needs and provide them with a comfortable
+experience without compromising on security.
+
+The recommended backends for headless environments are `file` and `pass`.
+
+### The `file` backend
+
+The `file` backend more closely resembles the keybase implementation used prior to
+v0.38.1. It stores the keyring encrypted within the app's configuration directory. This
+keyring will request a password each time it is accessed, which may occur multiple
+times in a single command, resulting in repeated password prompts. If using bash scripts
+to execute commands with the `file` option, you may want to utilize the following format
+for multiple prompts:
+
+```shell
+# assuming that KEYPASSWD is set in the environment
+$ gaiacli config keyring-backend file # use file backend
+$ (echo $KEYPASSWD; echo $KEYPASSWD) | gaiacli keys add me # multiple prompts
+$ echo $KEYPASSWD | gaiacli keys show me # single prompt
+```
+
+:::tip
+The first time you add a key to an empty keyring, you will be prompted to type the password twice.
+:::
+
+### The `pass` backend
+
+The `pass` backend uses the [pass](https://www.passwordstore.org/) utility to manage on-disk
+encryption of keys' sensitive data and metadata. Keys are stored inside `gpg` encrypted files
+within app-specific directories. `pass` is available for the most popular UNIX
+operating systems as well as GNU/Linux distributions. Please refer to its manual page for
+information on how to download and install it.
+
+:::tip
+**pass** uses [GnuPG](https://gnupg.org/) for encryption. `gpg` automatically invokes the `gpg-agent`
+daemon upon execution, which handles the caching of GnuPG credentials. Please refer to `gpg-agent`
+man page for more information on how to configure cache parameters such as credentials TTL and
+passphrase expiration.
+:::
+
+The password store must be set up prior to first use:
+
+```shell
+pass init <GPG_KEY_ID>
+```
+
+Replace `<GPG_KEY_ID>` with your GPG key ID. You can use your personal GPG key or an alternative
+one you may want to use specifically to encrypt the password store.
+
+### The `kwallet` backend
+
+The `kwallet` backend uses `KDE Wallet Manager`, which comes installed by default on the
+GNU/Linux distributions that ship KDE as the default desktop environment. Please refer to
+[KWallet API documentation](https://api.kde.org/frameworks/kwallet/html/index.html) for more
+information.
+
+### The `keyctl` backend
+
+The *Kernel Key Retention Service* is a security facility that
+has been added to the Linux kernel relatively recently. It allows sensitive
+cryptographic data such as passwords, private keys, and authentication tokens
+to be stored securely in memory.
+
+The `keyctl` backend is available on Linux platforms only.
+
+### The `test` backend
+
+The `test` backend is a password-less variation of the `file` backend. Keys are stored
+unencrypted on disk.
+
+**Provided for testing purposes only. The `test` backend is not recommended for use in production environments**.
+
+### The `memory` backend
+
+The `memory` backend stores keys in memory. The keys are immediately deleted after the program has exited.
+
+**Provided for testing purposes only. The `memory` backend is not recommended for use in production environments**.
+
+### Setting backend using an env variable
+
+You can set the keyring backend using the `BINNAME_KEYRING_BACKEND` environment variable. For example, if your binary name is `gaia-v5`, then set: `export GAIA_V5_KEYRING_BACKEND=pass`
+
+## Adding keys to the keyring
+
+:::warning
+Make sure you can build your own binary, and replace `simd` with the name of your binary in the snippets.
+:::
+
+Applications developed using the Cosmos SDK come with the `keys` subcommand. For the purpose of this tutorial, we're running the `simd` CLI, which is an application built using the Cosmos SDK for testing and educational purposes. For more information, see [`simapp`](https://github.com/cosmos/cosmos-sdk/tree/main/simapp).
+
+You can use `simd keys` for help about the keys command and `simd keys [command] --help` for more information about a particular subcommand.
+
+To create a new key in the keyring, run the `add` subcommand with a `<name>` argument. For the purpose of this tutorial, we will solely use the `test` backend, and call our new key `my_validator`. This key will be used in the next section.
+
+```bash
+$ simd keys add my_validator --keyring-backend test
+
+# Put the generated address in a variable for later use.
+MY_VALIDATOR_ADDRESS=$(simd keys show my_validator -a --keyring-backend test)
+```
+
+This command generates a new 24-word mnemonic phrase, persists it to the relevant backend, and outputs information about the keypair. If this keypair will be used to hold value-bearing tokens, be sure to write down the mnemonic phrase somewhere safe!
+
+By default, the keyring generates a `secp256k1` keypair. The keyring also supports `ed25519` keys, which may be created by passing the `--algo ed25519` flag. A keyring can of course hold both types of keys simultaneously, and the Cosmos SDK's `x/auth` module natively supports these two public key algorithms.
diff --git a/copy-of-sdk-docs/user/run-node/01-run-node.md b/copy-of-sdk-docs/user/run-node/01-run-node.md
new file mode 100644
index 00000000..88aa38f2
--- /dev/null
+++ b/copy-of-sdk-docs/user/run-node/01-run-node.md
@@ -0,0 +1,218 @@
+---
+sidebar_position: 1
+---
+
+# Running a Node
+
+:::note Synopsis
+Now that the application is ready and the keyring populated, it's time to see how to run the blockchain node. In this section, the application we are running is called [`simapp`](https://github.com/cosmos/cosmos-sdk/tree/main/simapp), and its corresponding CLI binary `simd`.
+:::
+
+:::note Pre-requisite Readings
+
+* [Anatomy of a Cosmos SDK Application](../../learn/beginner/00-app-anatomy.md)
+* [Setting up the keyring](./00-keyring.md)
+
+:::
+
+## Initialize the Chain
+
+:::warning
+Make sure you can build your own binary, and replace `simd` with the name of your binary in the snippets.
+:::
+
+Before actually running the node, we need to initialize the chain, and most importantly, its genesis file. This is done with the `init` subcommand:
+
+```bash
+# The <moniker> argument is the custom username of your node; it should be human-readable.
+simd init <moniker> --chain-id my-test-chain
+```
+
+The command above creates all the configuration files needed for your node to run, as well as a default genesis file, which defines the initial state of the network.
+
+:::tip
+All these configuration files are in `~/.simapp` by default, but you can overwrite the location of this folder by passing the `--home` flag to each command,
+or set an `$APPD_HOME` environment variable (where `APPD` is the name of the binary).
+:::
+
+The `~/.simapp` folder has the following structure:
+
+```bash
+. # ~/.simapp
+ |- data # Contains the databases used by the node.
+ |- config/
+ |- app.toml # Application-related configuration file.
+ |- config.toml # CometBFT-related configuration file.
+ |- genesis.json # The genesis file.
+ |- node_key.json # Private key to use for node authentication in the p2p protocol.
+ |- priv_validator_key.json # Private key to use as a validator in the consensus protocol.
+```
+
+## Updating Some Default Settings
+
+If you want to change any field values in configuration files (e.g. `genesis.json`), you can use the `jq` ([installation](https://stedolan.github.io/jq/download/) & [docs](https://stedolan.github.io/jq/manual/#Assignment)) and `sed` commands to do so. A few examples are listed here.
+
+```bash
+# to change the chain-id
+jq '.chain_id = "testing"' genesis.json > temp.json && mv temp.json genesis.json
+
+# to enable the api server
+sed -i '/\[api\]/,+3 s/enable = false/enable = true/' app.toml
+
+# to change the voting_period
+jq '.app_state.gov.voting_params.voting_period = "600s"' genesis.json > temp.json && mv temp.json genesis.json
+
+# to change the inflation
+jq '.app_state.mint.minter.inflation = "0.300000000000000000"' genesis.json > temp.json && mv temp.json genesis.json
+```
+
+### Client Interaction
+
+When instantiating a node, gRPC and REST are bound to localhost by default to avoid unknowingly exposing your node to the public. It is recommended not to expose these endpoints without a proxy between your node and the public that can handle load balancing or authentication.
+
+:::tip
+A commonly used tool for this is [nginx](https://nginx.org).
+:::
+
+
+## Adding Genesis Accounts
+
+Before starting the chain, you need to populate the state with at least one account. To do so, first [create a new account in the keyring](./00-keyring.md#adding-keys-to-the-keyring) named `my_validator` under the `test` keyring backend (feel free to choose another name and another backend).
+
+Now that you have created a local account, go ahead and grant it some `stake` tokens in your chain's genesis file. Doing so will also make sure your chain is aware of this account's existence:
+
+```bash
+simd genesis add-genesis-account $MY_VALIDATOR_ADDRESS 100000000000stake
+```
+
+Recall that `$MY_VALIDATOR_ADDRESS` is a variable that holds the address of the `my_validator` key in the [keyring](./00-keyring.md#adding-keys-to-the-keyring). Also note that the tokens in the Cosmos SDK have the `{amount}{denom}` format: `amount` is an 18-digit-precision decimal number, and `denom` is the unique token identifier with its denomination key (e.g. `atom` or `uatom`). Here, we are granting `stake` tokens, as `stake` is the token identifier used for staking in [`simapp`](https://github.com/cosmos/cosmos-sdk/tree/main/simapp). For your own chain with its own staking denom, that token identifier should be used instead.
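+
+To make the `{amount}{denom}` format concrete, here is a toy splitter for such strings (purely illustrative; the SDK ships its own coin-parsing helpers, which you should use in real code):
+
+```go
+package main
+
+import (
+	"fmt"
+	"strings"
+	"unicode"
+)
+
+// splitCoin splits a token string like "100000000000stake" into its numeric
+// amount and its denom. Toy illustration only; it does no validation.
+func splitCoin(coin string) (amount, denom string) {
+	// The denom starts at the first non-digit character.
+	i := strings.IndexFunc(coin, func(r rune) bool { return !unicode.IsDigit(r) })
+	if i < 0 {
+		return coin, ""
+	}
+	return coin[:i], coin[i:]
+}
+
+func main() {
+	amount, denom := splitCoin("100000000000stake")
+	fmt.Println(amount, denom)
+}
+```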
+
+Now that your account has some tokens, you need to add a validator to your chain. Validators are special full-nodes that participate in the consensus process (implemented in the [underlying consensus engine](../../learn/intro/02-sdk-app-architecture.md#cometbft)) in order to add new blocks to the chain. Any account can declare its intention to become a validator operator, but only those with sufficient delegation get to enter the active set (for example, only the top 125 validator candidates with the most delegation get to be validators in the Cosmos Hub). For this guide, you will add your local node (created via the `init` command above) as a validator of your chain. Validators can be declared before a chain is first started via a special transaction included in the genesis file called a `gentx`:
+
+```bash
+# Create a gentx.
+simd genesis gentx my_validator 100000000stake --chain-id my-test-chain --keyring-backend test
+
+# Add the gentx to the genesis file.
+simd genesis collect-gentxs
+```
+
+A `gentx` does three things:
+
+1. Registers the `validator` account you created as a validator operator account (i.e., the account that controls the validator).
+2. Self-delegates the provided `amount` of staking tokens.
+3. Links the operator account with a CometBFT node pubkey that will be used for signing blocks. If no `--pubkey` flag is provided, it defaults to the local node pubkey created via the `simd init` command above.
+
+For more information on `gentx`, use the following command:
+
+```bash
+simd genesis gentx --help
+```
+
+## Configuring the Node Using `app.toml` and `config.toml`
+
+The Cosmos SDK automatically generates two configuration files inside `~/.simapp/config`:
+
+* `config.toml`: used to configure CometBFT; learn more in [CometBFT's documentation](https://docs.cometbft.com/v0.37/core/configuration),
+* `app.toml`: generated by the Cosmos SDK, and used to configure your app, such as state pruning strategies, telemetry, gRPC and REST server configuration, state sync...
+
+Both files are heavily commented, please refer to them directly to tweak your node.
+
+One example config to tweak is the `minimum-gas-prices` field inside `app.toml`, which defines the minimum gas prices the validator node is willing to accept for processing a transaction. Depending on the chain, it might be an empty string or not. If it's empty, make sure to edit the field with some value, for example `10token`, or else the node will halt on startup. For the purpose of this tutorial, let's set the minimum gas price to 0:
+
+```toml
+ # The minimum gas prices a validator is willing to accept for processing a
+ # transaction. A transaction's fees must meet the minimum of any denomination
+ # specified in this config (e.g. 0.25token1;0.0001token2).
+ minimum-gas-prices = "0stake"
+```
+
+:::tip
+When running a node (not a validator!) and not wanting to run the application mempool, set the `max-txs` field to `-1`.
+
+```toml
+[mempool]
+# Setting max-txs to 0 will allow for an unbounded amount of transactions in the mempool.
+# Setting max_txs to negative 1 (-1) will disable transactions from being inserted into the mempool.
+# Setting max_txs to a positive number (> 0) will limit the number of transactions in the mempool, by the specified amount.
+#
+# Note, this configuration only applies to SDK built-in app-side mempool
+# implementations.
+max-txs = "-1"
+```
+
+:::
+
+## Run a Localnet
+
+Now that everything is set up, you can finally start your node:
+
+```bash
+simd start
+```
+
+You should see blocks come in.
+
+The previous command allows you to run a single node. This is enough for the next section on interacting with this node, but you may wish to run multiple nodes at the same time, and see how consensus happens between them.
+
+The naive way would be to run the same commands again in separate terminal windows. While that is possible, in the Cosmos SDK we leverage the power of [Docker Compose](https://docs.docker.com/compose/) to run a localnet. If you need inspiration on how to set up your own localnet with Docker Compose, have a look at the Cosmos SDK's [`docker-compose.yml`](https://github.com/cosmos/cosmos-sdk/blob/v0.53.0/docker-compose.yml).
+
+### Standalone App/CometBFT
+
+By default, the Cosmos SDK runs CometBFT in-process with the application.
+If you want to run the application and CometBFT in separate processes,
+start the application with the `--with-comet=false` flag
+and set `rpc.laddr` in `config.toml` to the CometBFT node's RPC address.
+
+## Logging
+
+Logging provides a way to see what is going on with a node. The default logging level is `info`. This is a global level: all `info` logs are output to the terminal. To filter specific modules' logs to the terminal instead of all of them, set `module:log_level` pairs, as in the example below.
+
+Example:
+
+In config.toml:
+
+```toml
+log_level = "state:info,p2p:info,consensus:info,x/staking:info,x/ibc:info,*:error"
+```
+
+## State Sync
+
+State sync is the act in which a node syncs the latest or close to the latest state of a blockchain. This is useful for users who don't want to sync all the blocks in history. Read more in [CometBFT documentation](https://docs.cometbft.com/v0.37/core/state-sync).
+
+State sync works thanks to snapshots. Read how the SDK handles snapshots [here](https://github.com/cosmos/cosmos-sdk/blob/825245d/store/snapshots/README.md).
+
+### Local State Sync
+
+Local state sync works similarly to normal state sync, except that it works off a local snapshot of state instead of one provided via the p2p network. The steps to start local state sync are similar to normal state sync, with a few differences.
+
+1. As described in the [CometBFT documentation](https://docs.cometbft.com/v0.37/core/state-sync), set a trust height and hash in `config.toml`, along with a few RPC servers (the linked page has instructions on how to do this).
+2. Run `<appd> snapshots restore` to restore a local snapshot (note: first load it from a file with the *load* command).
+3. Bootstrap the CometBFT state so the node can start after the snapshot has been ingested. This can be done with the bootstrap command: `<appd> comet bootstrap-state`.
+
+### Snapshots Commands
+
+The Cosmos SDK provides commands for managing snapshots.
+These commands can be added to an app with the following snippet in `cmd/<appd>/root.go`:
+
+```go
+import (
+ "github.com/cosmos/cosmos-sdk/client/snapshot"
+)
+
+func initRootCmd(/* ... */) {
+ // ...
+ rootCmd.AddCommand(
+ snapshot.Cmd(appCreator),
+ )
+}
+```
+
+Then the following commands are available at `<appd> snapshots [command]`:
+
+* **list**: List local snapshots
+* **load**: Load a snapshot archive file into the snapshot store
+* **restore**: Restore app state from a local snapshot
+* **export**: Export app state to the snapshot store
+* **dump**: Dump a snapshot as a portable archive format
+* **delete**: Delete a local snapshot
diff --git a/copy-of-sdk-docs/user/run-node/02-interact-node.md b/copy-of-sdk-docs/user/run-node/02-interact-node.md
new file mode 100644
index 00000000..1a76f02f
--- /dev/null
+++ b/copy-of-sdk-docs/user/run-node/02-interact-node.md
@@ -0,0 +1,289 @@
+---
+sidebar_position: 1
+---
+
+# Interacting with the Node
+
+:::note Synopsis
+There are multiple ways to interact with a node: using the CLI, using gRPC or using the REST endpoints.
+:::
+
+:::note Pre-requisite Readings
+
+* [gRPC, REST and CometBFT Endpoints](../../learn/advanced/06-grpc_rest.md)
+* [Running a Node](./01-run-node.md)
+
+:::
+
+## Using the CLI
+
+Now that your chain is running, it is time to try sending tokens from the first account you created to a second account. In a new terminal window, start by running the following query command:
+
+```bash
+simd query bank balances $MY_VALIDATOR_ADDRESS
+```
+
+You should see the current balance of the account you created, equal to the original balance of `stake` you granted it minus the amount you delegated via the `gentx`. Now, create a second account:
+
+```bash
+simd keys add recipient --keyring-backend test
+
+# Put the generated address in a variable for later use.
+RECIPIENT=$(simd keys show recipient -a --keyring-backend test)
+```
+
+The command above creates a local key-pair that is not yet registered on the chain. An account is created the first time it receives tokens from another account. Now, run the following command to send tokens to the `recipient` account:
+
+```bash
+simd tx bank send $MY_VALIDATOR_ADDRESS $RECIPIENT 1000000stake --chain-id my-test-chain --keyring-backend test
+
+# Check that the recipient account did receive the tokens.
+simd query bank balances $RECIPIENT
+```
+
+Finally, delegate some of the stake tokens sent to the `recipient` account to the validator:
+
+```bash
+simd tx staking delegate $(simd keys show my_validator --bech val -a --keyring-backend test) 500stake --from recipient --chain-id my-test-chain --keyring-backend test
+
+# Query the total delegations to `validator`.
+simd query staking delegations-to $(simd keys show my_validator --bech val -a --keyring-backend test)
+```
+
+You should see two delegations, the first one made from the `gentx`, and the second one you just performed from the `recipient` account.
+
+## Using gRPC
+
+The Protobuf ecosystem has developed tools for many use cases, including code generation from `*.proto` files into various languages. These tools make it easy to build clients. Often, the client connection (i.e. the transport) can be swapped out easily. Let's explore one of the most popular transports: [gRPC](../../learn/advanced/06-grpc_rest.md).
+
+Since the code generation library largely depends on your own tech stack, we will only present three alternatives:
+
+* `grpcurl` for generic debugging and testing,
+* programmatically via Go,
+* CosmJS for JavaScript/TypeScript developers.
+
+### grpcurl
+
+[grpcurl](https://github.com/fullstorydev/grpcurl) is like `curl` but for gRPC. It is also available as a Go library, but we will use it only as a CLI command for debugging and testing purposes. Follow the instructions in the previous link to install it.
+
+Assuming you have a local node running (either a localnet, or connected to a live network), you should be able to run the following command to list the Protobuf services available (you can replace `localhost:9090` with the gRPC server endpoint of another node, which is configured under the `grpc.address` field inside [`app.toml`](../../user/run-node/01-run-node.md#configuring-the-node-using-apptoml-and-configtoml)):
+
+```bash
+grpcurl -plaintext localhost:9090 list
+```
+
+You should see a list of gRPC services, like `cosmos.bank.v1beta1.Query`. This is called reflection: a Protobuf endpoint that returns a description of all available endpoints. Each of these represents a different Protobuf service, and each service exposes multiple RPC methods you can query against.
+
+In order to get a description of the service you can run the following command:
+
+```bash
+grpcurl -plaintext \
+ localhost:9090 \
+ describe cosmos.bank.v1beta1.Query # Service we want to inspect
+```
+
+It's also possible to execute an RPC call to query the node for information:
+
+```bash
+grpcurl \
+ -plaintext \
+ -d "{\"address\":\"$MY_VALIDATOR_ADDRESS\"}" \
+ localhost:9090 \
+ cosmos.bank.v1beta1.Query/AllBalances
+```
+
+The list of all available gRPC query endpoints is [coming soon](https://github.com/cosmos/cosmos-sdk/issues/7786).
+
+#### Query for historical state using grpcurl
+
+You may also query for historical data by passing some [gRPC metadata](https://github.com/grpc/grpc-go/blob/master/Documentation/grpc-metadata.md) to the query: the `x-cosmos-block-height` metadata should contain the block to query. Using grpcurl as above, the command looks like:
+
+```bash
+grpcurl \
+ -plaintext \
+ -H "x-cosmos-block-height: 123" \
+ -d "{\"address\":\"$MY_VALIDATOR_ADDRESS\"}" \
+ localhost:9090 \
+ cosmos.bank.v1beta1.Query/AllBalances
+```
+
+Assuming the state at that block has not yet been pruned by the node, this query should return a non-empty response.
+
+### Programmatically via Go
+
+The following snippet shows how to query the state using gRPC inside a Go program. The idea is to create a gRPC connection, and use the Protobuf-generated client code to query the gRPC server.
+
+#### Install Cosmos SDK
+
+```bash
+go get github.com/cosmos/cosmos-sdk@main
+```
+
+```go
+package main
+
+import (
+ "context"
+ "fmt"
+
+ "google.golang.org/grpc"
+
+ "github.com/cosmos/cosmos-sdk/codec"
+ sdk "github.com/cosmos/cosmos-sdk/types"
+ banktypes "github.com/cosmos/cosmos-sdk/x/bank/types"
+)
+
+func queryState() error {
+ myAddress, err := sdk.AccAddressFromBech32("cosmos1...") // the my_validator or recipient address.
+ if err != nil {
+ return err
+ }
+
+ // Create a connection to the gRPC server.
+ grpcConn, err := grpc.Dial(
+ "127.0.0.1:9090", // your gRPC server address.
+ grpc.WithInsecure(), // The Cosmos SDK doesn't support any transport security mechanism.
+		// This instantiates a general gRPC codec which handles proto bytes. We pass in a nil interface registry here;
+		// if the request/response types contain interfaces, you should pass the application-specific codec instead.
+ grpc.WithDefaultCallOptions(grpc.ForceCodec(codec.NewProtoCodec(nil).GRPCCodec())),
+ )
+ if err != nil {
+ return err
+ }
+ defer grpcConn.Close()
+
+ // This creates a gRPC client to query the x/bank service.
+ bankClient := banktypes.NewQueryClient(grpcConn)
+ bankRes, err := bankClient.Balance(
+ context.Background(),
+ &banktypes.QueryBalanceRequest{Address: myAddress.String(), Denom: "stake"},
+ )
+ if err != nil {
+ return err
+ }
+
+ fmt.Println(bankRes.GetBalance()) // Prints the account balance
+
+ return nil
+}
+
+func main() {
+ if err := queryState(); err != nil {
+ panic(err)
+ }
+}
+```
+
+You can replace the query client (here we are using `x/bank`'s) with one generated from any other Protobuf service. The list of all available gRPC query endpoints is [coming soon](https://github.com/cosmos/cosmos-sdk/issues/7786).
+
+#### Query for historical state using Go
+
+Querying for historical blocks is done by adding the block height metadata in the gRPC request.
+
+```go
+package main
+
+import (
+ "context"
+ "fmt"
+
+ "google.golang.org/grpc"
+ "google.golang.org/grpc/metadata"
+
+ "github.com/cosmos/cosmos-sdk/codec"
+ sdk "github.com/cosmos/cosmos-sdk/types"
+ grpctypes "github.com/cosmos/cosmos-sdk/types/grpc"
+ banktypes "github.com/cosmos/cosmos-sdk/x/bank/types"
+)
+
+func queryState() error {
+ myAddress, err := sdk.AccAddressFromBech32("cosmos1yerherx4d43gj5wa3zl5vflj9d4pln42n7kuzu") // the my_validator or recipient address.
+ if err != nil {
+ return err
+ }
+
+ // Create a connection to the gRPC server.
+ grpcConn, err := grpc.Dial(
+ "127.0.0.1:9090", // your gRPC server address.
+ grpc.WithInsecure(), // The Cosmos SDK doesn't support any transport security mechanism.
+		// This instantiates a general gRPC codec which handles proto bytes. We pass in a nil interface registry here;
+		// if the request/response types contain interfaces, you should pass the application-specific codec instead.
+ grpc.WithDefaultCallOptions(grpc.ForceCodec(codec.NewProtoCodec(nil).GRPCCodec())),
+ )
+ if err != nil {
+ return err
+ }
+ defer grpcConn.Close()
+
+ // This creates a gRPC client to query the x/bank service.
+ bankClient := banktypes.NewQueryClient(grpcConn)
+
+ var header metadata.MD
+ _, err = bankClient.Balance(
+ metadata.AppendToOutgoingContext(context.Background(), grpctypes.GRPCBlockHeightHeader, "12"), // Add metadata to request
+ &banktypes.QueryBalanceRequest{Address: myAddress.String(), Denom: "stake"},
+ grpc.Header(&header), // Retrieve header from response
+ )
+ if err != nil {
+ return err
+ }
+ blockHeight := header.Get(grpctypes.GRPCBlockHeightHeader)
+
+ fmt.Println(blockHeight) // Prints the block height (12)
+
+ return nil
+}
+
+func main() {
+ if err := queryState(); err != nil {
+ panic(err)
+ }
+}
+```
+
+### CosmJS
+
+CosmJS documentation can be found at [https://cosmos.github.io/cosmjs](https://cosmos.github.io/cosmjs). As of January 2021, CosmJS documentation is still a work in progress.
+
+## Using the REST Endpoints
+
+As described in the [gRPC guide](../../learn/advanced/06-grpc_rest.md), all gRPC services on the Cosmos SDK are made available for more convenient REST-based queries through gRPC-gateway. The format of the URL path is based on the Protobuf service method's full-qualified name, but may contain small customizations so that final URLs look more idiomatic. For example, the REST endpoint for the `cosmos.bank.v1beta1.Query/AllBalances` method is `GET /cosmos/bank/v1beta1/balances/{address}`. Request arguments are passed as query parameters.
+
+Note that the REST endpoints are not enabled by default. To enable them, edit the `api` section of your `~/.simapp/config/app.toml` file:
+
+```toml
+# Enable defines if the API server should be enabled.
+enable = true
+```
+
+As a concrete example, the `curl` command to make a balances request is:
+
+```bash
+curl \
+ -X GET \
+ -H "Content-Type: application/json" \
+ http://localhost:1317/cosmos/bank/v1beta1/balances/$MY_VALIDATOR_ADDRESS
+```
+
+Make sure to replace `localhost:1317` with the REST endpoint of your node, configured under the `api.address` field.
+
+The list of all available REST endpoints is available as a Swagger specification file, which can be viewed at `localhost:1317/swagger`. Make sure that the `api.swagger` field is set to `true` in your [`app.toml`](../../user/run-node/01-run-node.md#configuring-the-node-using-apptoml-and-configtoml) file.
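Putting those settings together, the relevant `api` section of `app.toml` looks roughly like this (field names as found in recent SDK config templates; exact comments and defaults may vary by SDK version):

```toml
[api]
# Enable defines if the API server should be enabled.
enable = true
# Swagger defines if swagger documentation should be registered.
swagger = true
# Address defines the API server address to listen on.
address = "tcp://localhost:1317"
```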
+
+### Query for historical state using REST
+
+Querying for historical state is done using the HTTP header `x-cosmos-block-height`. For example, a curl command would look like:
+
+```bash
+curl \
+ -X GET \
+ -H "Content-Type: application/json" \
+ -H "x-cosmos-block-height: 123" \
+ http://localhost:1317/cosmos/bank/v1beta1/balances/$MY_VALIDATOR_ADDRESS
+```
+
+Assuming the state at that block has not yet been pruned by the node, this query should return a non-empty response.
+
+### Cross-Origin Resource Sharing (CORS)
+
+[CORS policies](https://developer.mozilla.org/en-US/docs/Web/HTTP/CORS) are not enabled by default, to help with security. If you would like to use the REST server in a public environment, we recommend placing it behind a reverse proxy such as [nginx](https://www.nginx.com/). For testing and development purposes, there is an `enabled-unsafe-cors` field inside [`app.toml`](../../user/run-node/01-run-node.md#configuring-the-node-using-apptoml-and-configtoml).
diff --git a/copy-of-sdk-docs/user/run-node/03-txs.md b/copy-of-sdk-docs/user/run-node/03-txs.md
new file mode 100644
index 00000000..93f81055
--- /dev/null
+++ b/copy-of-sdk-docs/user/run-node/03-txs.md
@@ -0,0 +1,429 @@
+---
+sidebar_position: 1
+---
+
+# Generating, Signing and Broadcasting Transactions
+
+:::note Synopsis
+This document describes how to generate an (unsigned) transaction, sign it (with one or multiple keys), and broadcast it to the network.
+:::
+
+## Using the CLI
+
+The easiest way to send transactions is using the CLI, as we have seen in the previous page when [interacting with a node](./02-interact-node.md#using-the-cli). For example, running the following command
+
+```bash
+simd tx bank send $MY_VALIDATOR_ADDRESS $RECIPIENT 1000stake --chain-id my-test-chain --keyring-backend test
+```
+
+will run the following steps:
+
+* generate a transaction with one `Msg` (`x/bank`'s `MsgSend`), and print the generated transaction to the console.
+* ask the user for confirmation to send the transaction from the `$MY_VALIDATOR_ADDRESS` account.
+* fetch `$MY_VALIDATOR_ADDRESS` from the keyring. This is possible because we have [set up the CLI's keyring](./00-keyring.md) in a previous step.
+* sign the generated transaction with the keyring's account.
+* broadcast the signed transaction to the network. This is possible because the CLI connects to the node's CometBFT RPC endpoint.
+
+The CLI bundles all the necessary steps into a simple-to-use user experience. However, it's possible to run all the steps individually too.
+
+### Generating a Transaction
+
+Generating a transaction can simply be done by appending the `--generate-only` flag on any `tx` command, e.g.:
+
+```bash
+simd tx bank send $MY_VALIDATOR_ADDRESS $RECIPIENT 1000stake --chain-id my-test-chain --generate-only
+```
+
+This will output the unsigned transaction as JSON in the console. We can also save the unsigned transaction to a file (to be passed around between signers more easily) by appending `> unsigned_tx.json` to the above command.
+
+### Signing a Transaction
+
+Signing a transaction using the CLI requires the unsigned transaction to be saved in a file. Let's assume the unsigned transaction is in a file called `unsigned_tx.json` in the current directory (see previous paragraph on how to do that). Then, simply run the following command:
+
+```bash
+simd tx sign unsigned_tx.json --chain-id my-test-chain --keyring-backend test --from $MY_VALIDATOR_ADDRESS
+```
+
+This command will decode the unsigned transaction and sign it with `SIGN_MODE_DIRECT` with `$MY_VALIDATOR_ADDRESS`'s key, which we already set up in the keyring. The signed transaction will be output as JSON to the console, and, as above, we can save it to a file by appending `--output-document signed_tx.json`.
+
+Some useful flags to consider in the `tx sign` command:
+
+* `--sign-mode`: you may use `amino-json` to sign the transaction using `SIGN_MODE_LEGACY_AMINO_JSON`,
+* `--offline`: sign in offline mode. This means that the `tx sign` command doesn't connect to the node to retrieve the signer's account number and sequence, both needed for signing. In this case, you must manually supply the `--account-number` and `--sequence` flags. This is useful for offline signing, i.e. signing in a secure environment which doesn't have access to the internet.
+
+#### Signing with Multiple Signers
+
+:::warning
+Please note that signing a transaction with multiple signers or with a multisig account, where at least one signer uses `SIGN_MODE_DIRECT`, is not yet possible. You may follow [this Github issue](https://github.com/cosmos/cosmos-sdk/issues/8141) for more info.
+:::
+
+Signing with multiple signers is done with the `tx multisign` command. This command assumes that all signers use `SIGN_MODE_LEGACY_AMINO_JSON`. The flow is similar to the `tx sign` command flow, but instead of signing an unsigned transaction file, each signer signs the file signed by the previous signer(s). The `tx multisign` command will append signatures to the existing transaction. It is important that signers sign the transaction **in the same order** as given by the transaction, which is retrievable using the `GetSigners()` method.
+
+For example, starting with the `unsigned_tx.json`, and assuming the transaction has 4 signers, we would run:
+
+```bash
+# Let signer1 sign the unsigned tx.
+simd tx multisign unsigned_tx.json signer_key_1 --chain-id my-test-chain --keyring-backend test > partial_tx_1.json
+# Signer1 then sends partial_tx_1.json to signer2.
+# Signer2 appends their signature:
+simd tx multisign partial_tx_1.json signer_key_2 --chain-id my-test-chain --keyring-backend test > partial_tx_2.json
+# Signer2 sends the partial_tx_2.json file to signer3, and signer3 appends their signature:
+simd tx multisign partial_tx_2.json signer_key_3 --chain-id my-test-chain --keyring-backend test > partial_tx_3.json
+# ...and so on, until all 4 signers have signed.
+```
+
+### Broadcasting a Transaction
+
+Broadcasting a transaction is done using the following command:
+
+```bash
+simd tx broadcast tx_signed.json
+```
+
+You may optionally pass the `--broadcast-mode` flag to specify which response to receive from the node:
+
+* `sync`: the CLI waits for a CheckTx execution response only.
+* `async`: the CLI returns immediately (transaction might fail).
+
+### Encoding a Transaction
+
+In order to broadcast a transaction using the gRPC or REST endpoints, the transaction will need to be encoded first. This can be done using the CLI.
+
+Encoding a transaction is done using the following command:
+
+```bash
+simd tx encode tx_signed.json
+```
+
+This will read the transaction from the file, serialize it using Protobuf, and output the transaction bytes as base64 in the console.
+
+### Decoding a Transaction
+
+The CLI can also be used to decode transaction bytes.
+
+Decoding a transaction is done using the following command:
+
+```bash
+simd tx decode [protobuf-byte-string]
+```
+
+This will decode the transaction bytes and output the transaction as JSON in the console. You can also save the transaction to a file by appending `> tx.json` to the above command.
+
+## Programmatically with Go
+
+It is possible to manipulate transactions programmatically via Go using the Cosmos SDK's `TxBuilder` interface.
+
+### Generating a Transaction
+
+Before generating a transaction, a new instance of a `TxBuilder` needs to be created. Since the Cosmos SDK supports both Amino and Protobuf transactions, the first step would be to decide which encoding scheme to use. All the subsequent steps remain unchanged, whether you're using Amino or Protobuf, as `TxBuilder` abstracts the encoding mechanisms. In the following snippet, we will use Protobuf.
+
+```go
+import (
+ "github.com/cosmos/cosmos-sdk/simapp"
+)
+
+func sendTx() error {
+ // Choose your codec: Amino or Protobuf. Here, we use Protobuf, given by the following function.
+ app := simapp.NewSimApp(...)
+
+ // Create a new TxBuilder.
+ txBuilder := app.TxConfig().NewTxBuilder()
+
+ // --snip--
+}
+```
+
+We can also set up some keys and addresses that will send and receive the transactions. Here, for the purpose of the tutorial, we will be using some dummy data to create keys.
+
+```go
+import (
+ "github.com/cosmos/cosmos-sdk/testutil/testdata"
+)
+
+priv1, _, addr1 := testdata.KeyTestPubAddr()
+priv2, _, addr2 := testdata.KeyTestPubAddr()
+priv3, _, addr3 := testdata.KeyTestPubAddr()
+```
+
+Populating the `TxBuilder` can be done via its methods:
+
+```go reference
+https://github.com/cosmos/cosmos-sdk/blob/v0.53.0/client/tx_config.go#L39-L57
+```
+
+```go
+import (
+	"github.com/cosmos/cosmos-sdk/types"
+
+	banktypes "github.com/cosmos/cosmos-sdk/x/bank/types"
+)
+
+func sendTx() error {
+ // --snip--
+
+ // Define two x/bank MsgSend messages:
+ // - from addr1 to addr3,
+ // - from addr2 to addr3.
+ // This means that the transaction needs two signers: addr1 and addr2.
+ msg1 := banktypes.NewMsgSend(addr1, addr3, types.NewCoins(types.NewInt64Coin("atom", 12)))
+ msg2 := banktypes.NewMsgSend(addr2, addr3, types.NewCoins(types.NewInt64Coin("atom", 34)))
+
+ err := txBuilder.SetMsgs(msg1, msg2)
+ if err != nil {
+ return err
+ }
+
+ txBuilder.SetGasLimit(...)
+ txBuilder.SetFeeAmount(...)
+ txBuilder.SetMemo(...)
+ txBuilder.SetTimeoutHeight(...)
+}
+```
+
+At this point, `TxBuilder`'s underlying transaction is ready to be signed.
+
+#### Generating an Unordered Transaction
+
+Starting with Cosmos SDK v0.53.0, users may send unordered transactions to chains that have the feature enabled.
+
+:::warning
+
+Unordered transactions MUST leave sequence values unset. When a transaction is both unordered and contains a non-zero sequence value,
+the transaction will be rejected. External services that operate on prior assumptions about transaction sequence values should be updated to handle unordered transactions.
+Services should be aware that when the transaction is unordered, the transaction sequence will always be zero.
+
+:::
+
+Using the example above, we can set the required fields to mark a transaction as unordered.
+By default, unordered transactions charge an extra 2240 units of gas to offset the additional storage overhead that supports their functionality.
+The extra units of gas are customizable and therefore vary by chain, so be sure to check the chain's ante handler for the gas value set, if any.
+
+```go
+func sendTx() error {
+ // --snip--
+ expiration := 5 * time.Minute
+ txBuilder.SetUnordered(true)
+ txBuilder.SetTimeoutTimestamp(time.Now().Add(expiration + (1 * time.Nanosecond)))
+}
+```
+
+Unordered transactions from the same account must use a unique timeout timestamp value. The difference between each timeout timestamp value may be as small as a nanosecond, however.
+
+```go
+import (
+ "github.com/cosmos/cosmos-sdk/client"
+)
+
+func sendMessages(txBuilders []client.TxBuilder) error {
+ // --snip--
+ expiration := 5 * time.Minute
+	for i, txb := range txBuilders {
+		txb.SetUnordered(true)
+		// Offset each timeout by a different number of nanoseconds so that
+		// every transaction from this account gets a unique timeout timestamp.
+		txb.SetTimeoutTimestamp(time.Now().Add(expiration + time.Duration(i+1)*time.Nanosecond))
+	}
+}
+```
+
+### Signing a Transaction
+
+We set the encoding config to use Protobuf, which uses `SIGN_MODE_DIRECT` by default. As per [ADR-020](https://github.com/cosmos/cosmos-sdk/blob/main/docs/architecture/adr-020-protobuf-transaction-encoding.md), each signer needs to sign the `SignerInfo`s of all other signers. This means that we need to perform two steps sequentially:
+
+* for each signer, populate the signer's `SignerInfo` inside `TxBuilder`,
+* once all `SignerInfo`s are populated, for each signer, sign the `SignDoc` (the payload to be signed).
+
+In the current `TxBuilder`'s API, both steps are done using the same method: `SetSignatures()`. The current API requires us to first perform a round of `SetSignatures()` _with empty signatures_, only to populate `SignerInfo`s, and a second round of `SetSignatures()` to actually sign the correct payload.
+
+```go
+import (
+	"github.com/cosmos/cosmos-sdk/client/tx"
+	cryptotypes "github.com/cosmos/cosmos-sdk/crypto/types"
+	"github.com/cosmos/cosmos-sdk/types/tx/signing"
+	xauthsigning "github.com/cosmos/cosmos-sdk/x/auth/signing"
+)
+
+func sendTx() error {
+ // --snip--
+
+ privs := []cryptotypes.PrivKey{priv1, priv2}
+	accNums := []uint64{..., ...} // The accounts' account numbers
+	accSeqs := []uint64{..., ...} // The accounts' sequence numbers
+
+ // First round: we gather all the signer infos. We use the "set empty
+ // signature" hack to do that.
+ var sigsV2 []signing.SignatureV2
+ for i, priv := range privs {
+ sigV2 := signing.SignatureV2{
+ PubKey: priv.PubKey(),
+ Data: &signing.SingleSignatureData{
+ SignMode: encCfg.TxConfig.SignModeHandler().DefaultMode(),
+ Signature: nil,
+ },
+ Sequence: accSeqs[i],
+ }
+
+ sigsV2 = append(sigsV2, sigV2)
+ }
+ err := txBuilder.SetSignatures(sigsV2...)
+ if err != nil {
+ return err
+ }
+
+ // Second round: all signer infos are set, so each signer can sign.
+ sigsV2 = []signing.SignatureV2{}
+ for i, priv := range privs {
+ signerData := xauthsigning.SignerData{
+ ChainID: chainID,
+ AccountNumber: accNums[i],
+ Sequence: accSeqs[i],
+ }
+ sigV2, err := tx.SignWithPrivKey(
+ encCfg.TxConfig.SignModeHandler().DefaultMode(), signerData,
+ txBuilder, priv, encCfg.TxConfig, accSeqs[i])
+ if err != nil {
+			return err
+ }
+
+ sigsV2 = append(sigsV2, sigV2)
+ }
+ err = txBuilder.SetSignatures(sigsV2...)
+ if err != nil {
+ return err
+ }
+}
+```
+
+The `TxBuilder` is now correctly populated. To print it, you can use the `TxConfig` interface from the initial encoding config `encCfg`:
+
+```go
+func sendTx() error {
+ // --snip--
+
+	// Generate the Protobuf-encoded bytes.
+ txBytes, err := encCfg.TxConfig.TxEncoder()(txBuilder.GetTx())
+ if err != nil {
+ return err
+ }
+
+ // Generate a JSON string.
+ txJSONBytes, err := encCfg.TxConfig.TxJSONEncoder()(txBuilder.GetTx())
+ if err != nil {
+ return err
+ }
+ txJSON := string(txJSONBytes)
+}
+```
+
+### Broadcasting a Transaction
+
+The preferred way to broadcast a transaction is to use gRPC, though using REST (via `gRPC-gateway`) or the CometBFT RPC is also possible. An overview of the differences between these methods is exposed [here](../../learn/advanced/06-grpc_rest.md). For this tutorial, we will only describe the gRPC method.
+
+```go
+import (
+ "context"
+ "fmt"
+
+ "google.golang.org/grpc"
+
+ "github.com/cosmos/cosmos-sdk/types/tx"
+)
+
+func sendTx(ctx context.Context) error {
+ // --snip--
+
+ // Create a connection to the gRPC server.
+	grpcConn, err := grpc.Dial(
+		"127.0.0.1:9090",    // Or your gRPC server address.
+		grpc.WithInsecure(), // The Cosmos SDK doesn't support any transport security mechanism.
+	)
+	if err != nil {
+		return err
+	}
+	defer grpcConn.Close()
+
+ // Broadcast the tx via gRPC. We create a new client for the Protobuf Tx
+ // service.
+ txClient := tx.NewServiceClient(grpcConn)
+ // We then call the BroadcastTx method on this client.
+ grpcRes, err := txClient.BroadcastTx(
+ ctx,
+ &tx.BroadcastTxRequest{
+ Mode: tx.BroadcastMode_BROADCAST_MODE_SYNC,
+ TxBytes: txBytes, // Proto-binary of the signed transaction, see previous step.
+ },
+ )
+ if err != nil {
+ return err
+ }
+
+ fmt.Println(grpcRes.TxResponse.Code) // Should be `0` if the tx is successful
+
+ return nil
+}
+```
+
+#### Simulating a Transaction
+
+Before broadcasting a transaction, we may want to dry-run it first, to estimate gas and other information about the transaction without actually committing it. This is called simulating a transaction, and can be done as follows:
+
+```go
+import (
+ "context"
+ "fmt"
+ "testing"
+
+ "github.com/cosmos/cosmos-sdk/client"
+ "github.com/cosmos/cosmos-sdk/types/tx"
+ authtx "github.com/cosmos/cosmos-sdk/x/auth/tx"
+)
+
+func simulateTx() error {
+ // --snip--
+
+ // Simulate the tx via gRPC. We create a new client for the Protobuf Tx
+ // service.
+ txClient := tx.NewServiceClient(grpcConn)
+ txBytes := /* Fill in with your signed transaction bytes. */
+
+ // We then call the Simulate method on this client.
+ grpcRes, err := txClient.Simulate(
+ context.Background(),
+ &tx.SimulateRequest{
+ TxBytes: txBytes,
+ },
+ )
+ if err != nil {
+ return err
+ }
+
+ fmt.Println(grpcRes.GasInfo) // Prints estimated gas used.
+
+ return nil
+}
+```
+
+## Using gRPC
+
+It is not possible to generate or sign a transaction using gRPC, only to broadcast one. In order to broadcast a transaction using gRPC, you will need to generate, sign, and encode the transaction using either the CLI or programmatically with Go.
+
+### Broadcasting a Transaction
+
+Broadcasting a transaction using the gRPC endpoint can be done by sending a `BroadcastTx` request as follows, where the `txBytes` are the protobuf-encoded bytes of a signed transaction:
+
+```bash
+grpcurl -plaintext \
+ -d '{"tx_bytes":"{{txBytes}}","mode":"BROADCAST_MODE_SYNC"}' \
+ localhost:9090 \
+ cosmos.tx.v1beta1.Service/BroadcastTx
+```
+
+## Using REST
+
+It is not possible to generate or sign a transaction using REST, only to broadcast one. In order to broadcast a transaction using REST, you will need to generate, sign, and encode the transaction using either the CLI or programmatically with Go.
+
+### Broadcasting a Transaction
+
+Broadcasting a transaction using the REST endpoint (served by `gRPC-gateway`) can be done by sending a POST request as follows, where the `txBytes` are the protobuf-encoded bytes of a signed transaction:
+
+```bash
+curl -X POST \
+ -H "Content-Type: application/json" \
+    -d '{"tx_bytes":"{{txBytes}}","mode":"BROADCAST_MODE_SYNC"}' \
+ localhost:1317/cosmos/tx/v1beta1/txs
+```
+
+## Using CosmJS (JavaScript & TypeScript)
+
+CosmJS aims to build client libraries in JavaScript that can be embedded in web applications. Please see [https://cosmos.github.io/cosmjs](https://cosmos.github.io/cosmjs) for more information. As of January 2021, CosmJS documentation is still a work in progress.
diff --git a/copy-of-sdk-docs/user/run-node/04-rosetta.md b/copy-of-sdk-docs/user/run-node/04-rosetta.md
new file mode 100644
index 00000000..e4527abb
--- /dev/null
+++ b/copy-of-sdk-docs/user/run-node/04-rosetta.md
@@ -0,0 +1,144 @@
+# Rosetta
+
+The `rosetta` project implements Coinbase's [Rosetta API](https://www.rosetta-api.org). This document provides instructions on how to use the Rosetta API integration. For information about the motivation and design choices, refer to [ADR 035](https://docs.cosmos.network/main/architecture/adr-035-rosetta-api-support).
+
+## Installing Rosetta
+
+The Rosetta API server is a stand-alone server that connects to a node of a chain developed with Cosmos SDK.
+
+Rosetta can be added to any Cosmos chain node, either as a standalone service or natively as a node command.
+
+### Standalone
+
+Rosetta can be executed as a standalone service; it connects to the node's endpoints and exposes the required Rosetta API endpoints.
+
+Install Rosetta standalone server with the following command:
+
+```bash
+go install github.com/cosmos/rosetta
+```
+
+Alternatively, to build from source, simply run `make build`. The binary will be located in the root folder.
+
+### Native - As a node command
+
+To enable native Rosetta API support, add the `RosettaCommand` to your application's root command file (e.g. `simd/cmd/root.go`).
+
+Import the `rosettaCmd` package:
+
+```go
+import rosettaCmd "github.com/cosmos/rosetta/cmd"
+```
+
+Find the following line:
+
+```go
+initRootCmd(rootCmd, encodingConfig)
+```
+
+After that line, add the following:
+
+```go
+rootCmd.AddCommand(
+	rosettaCmd.RosettaCommand(encodingConfig.InterfaceRegistry, encodingConfig.Codec),
+)
+```
+
+The `RosettaCommand` function builds the `rosetta` root command and is defined in the `rosettaCmd` package (`github.com/cosmos/rosetta/cmd`).
+
+Since we’ve updated the Cosmos SDK to work with the Rosetta API, updating the application's root command file is all you need to do.
+
+An implementation example can be found in the `simapp` package.
+
+## Use Rosetta Command
+
+To run Rosetta in your application CLI, use the following command:
+
+> **Note:** if using the native approach, prefix any `rosetta` command with your node binary name (e.g. `simd rosetta`).
+
+```shell
+rosetta --help
+```
+
+To test and run Rosetta API endpoints for applications that are running and exposed, use the following command:
+
+```shell
+rosetta \
+  --blockchain "your application name (ex: gaia)" \
+  --network "your chain identifier (ex: testnet-1)" \
+  --tendermint "tendermint endpoint (ex: localhost:26657)" \
+  --grpc "gRPC endpoint (ex: localhost:9090)" \
+  --addr "rosetta binding address (ex: :8080)" \
+  --grpc-types-server "gRPC endpoint for message descriptor types (optional)"
+```
+
+## Plugins - Multi chain connections
+
+Rosetta tries to discover the node's types through reflection over the node's gRPC endpoints, but there may be cases where this approach is not enough. It is possible to extend or implement the required types easily through plugins.
+
+To use Rosetta with any chain, you must set up prefixes and register zone-specific interfaces through plugins.
+
+Each plugin is a minimal implementation of `InitZone` and `RegisterInterfaces`, which allow Rosetta to parse chain-specific data. There is an example for the Cosmos Hub chain under the `plugins/cosmos-hub/` folder.
+
+* **InitZone**: executed first; it defines prefixes, parameters and other settings.
+* **RegisterInterfaces**: receives an interface registry, which is where the zone-specific types and interfaces are loaded.
+
+In order to add a new plugin:
+
+1. Create a folder under the `plugins` directory with the name of the desired zone.
+2. Add a `main.go` file with the methods mentioned above.
+3. Build the plugin with `go build -buildmode=plugin -o main.so main.go`.
+
+The plugin folder is selected with the `--plugin` CLI flag and loaded into the Rosetta server.
+
+## Extensions
+
+There are two ways in which you can customize and extend the implementation with your custom settings.
+
+### Message extension
+
+In order to make an `sdk.Msg` understandable by Rosetta, the only thing required is to add the methods to your messages that satisfy the `rosetta.Msg` interface. Examples of how to do so can be found in the staking types such as `MsgDelegate`, or in bank types such as `MsgSend`.
+
+### Client interface override
+
+In case more customization is required, it's possible to embed the Client type and override the methods which require customizations.
+
+Example:
+
+```go
+package custom_client
+
+import (
+	"context"
+
+	"github.com/coinbase/rosetta-sdk-go/types"
+	"github.com/cosmos/rosetta/lib"
+)
+
+// CustomClient embeds the standard cosmos client
+// which means that it implements the cosmos-rosetta-gateway Client
+// interface while at the same time allowing to customize certain methods
+type CustomClient struct {
+ *rosetta.Client
+}
+
+func (c *CustomClient) ConstructionPayload(_ context.Context, request *types.ConstructionPayloadsRequest) (resp *types.ConstructionPayloadsResponse, err error) {
+ // provide custom signature bytes
+ panic("implement me")
+}
+```
+
+NOTE: when using a customized client, the command cannot be used as-is, since the required constructors **may** differ, so it's necessary to create a new one. We intend to provide a way to initialize a customized client without writing extra code in the future.
+
+### Error extension
+
+Rosetta requires 'returned' errors to be provided to the network options. To declare a new Rosetta error, use the `errors` package in cosmos-rosetta-gateway.
+
+Example:
+
+```go
+package custom_errors
+
+import crgerrs "github.com/cosmos/rosetta/lib/errors"
+
+var customErrRetriable = true
+var CustomError = crgerrs.RegisterError(100, "custom message", customErrRetriable, "description")
+```
+
+Note: errors must be registered before cosmos-rosetta-gateway's `Server.Start` method is called, otherwise the registration will be ignored. Errors with the same code are also ignored.
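The two rules above (register before start; duplicate codes ignored) can be modeled with a small sketch. This is an illustration of the described behavior only, not the actual `crgerrs` implementation:

```go
package main

import "fmt"

// Error mirrors the idea of a registered rosetta error (illustrative only).
type Error struct {
	Code      int
	Message   string
	Retriable bool
}

// registry models the registration rules described above: duplicate codes
// are ignored, and registrations after the server starts are ignored too.
type registry struct {
	started bool
	errors  map[int]Error
}

// RegisterError reports whether the registration was actually recorded.
func (r *registry) RegisterError(code int, msg string, retriable bool) bool {
	if r.started {
		return false // registrations after Start are ignored
	}
	if _, exists := r.errors[code]; exists {
		return false // errors with the same code are ignored
	}
	r.errors[code] = Error{Code: code, Message: msg, Retriable: retriable}
	return true
}

func (r *registry) Start() { r.started = true }

func main() {
	r := &registry{errors: map[int]Error{}}
	fmt.Println(r.RegisterError(100, "custom message", true)) // true
	fmt.Println(r.RegisterError(100, "duplicate", false))     // false: same code
	r.Start()
	fmt.Println(r.RegisterError(101, "too late", false)) // false: after Start
}
```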
diff --git a/copy-of-sdk-docs/user/run-node/05-run-testnet.md b/copy-of-sdk-docs/user/run-node/05-run-testnet.md
new file mode 100644
index 00000000..9200042e
--- /dev/null
+++ b/copy-of-sdk-docs/user/run-node/05-run-testnet.md
@@ -0,0 +1,101 @@
+---
+sidebar_position: 1
+---
+
+# Running a Testnet
+
+:::note Synopsis
+The `simd testnet` subcommand makes it easy to initialize and start a simulated test network for testing purposes.
+:::
+
+In addition to the commands for [running a node](./01-run-node.md), the `simd` binary also includes a `testnet` command that allows you to start a simulated test network in-process or to initialize files for a simulated test network that runs in a separate process.
+
+## Initialize Files
+
+First, let's take a look at the `init-files` subcommand.
+
+This is similar to the `init` command when initializing a single node, but in this case we are initializing multiple nodes, generating the genesis transactions for each node, and then collecting those transactions.
+
+The `init-files` subcommand initializes the necessary files to run a test network in a separate process (i.e. using a Docker container). Running this command is not a prerequisite for the `start` subcommand ([see below](#start-testnet)).
+
+In order to initialize the files for a test network, run the following command:
+
+```bash
+simd testnet init-files
+```
+
+You should see the following output in your terminal:
+
+```bash
+Successfully initialized 4 node directories
+```
+
+The default output directory is a relative `.testnets` directory. Let's take a look at the files created within the `.testnets` directory.
+
+### gentxs
+
+The `gentxs` directory includes a genesis transaction for each validator node. Each file includes a JSON encoded genesis transaction used to register a validator node at the time of genesis. The genesis transactions are added to the `genesis.json` file within each node directory during the initialization process.
+
+### nodes
+
+A node directory is created for each validator node. Within each node directory is a `simd` directory. The `simd` directory is the home directory for each node, which includes the configuration and data files for that node (i.e. the same files included in the default `~/.simapp` directory when running a single node).
+
+## Start Testnet
+
+Now, let's take a look at the `start` subcommand.
+
+The `start` subcommand both initializes and starts an in-process test network. This is the fastest way to spin up a local test network for testing purposes.
+
+You can start the local test network by running the following command:
+
+```bash
+simd testnet start
+```
+
+You should see something similar to the following:
+
+```bash
+acquiring test network lock
+preparing test network with chain-id "chain-mtoD9v"
+
+
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
+++ THIS MNEMONIC IS FOR TESTING PURPOSES ONLY ++
+++ DO NOT USE IN PRODUCTION ++
+++ ++
+++ sustain know debris minute gate hybrid stereo custom ++
+++ divorce cross spoon machine latin vibrant term oblige ++
+++ moment beauty laundry repeat grab game bronze truly ++
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
+
+
+starting test network...
+started test network
+press the Enter Key to terminate
+```
+
+The first validator node is now running in-process, which means the test network will terminate once you either close the terminal window or press the Enter key. In the output, the mnemonic phrase for the first validator node is provided for testing purposes. The validator node uses the same default addresses used when initializing and starting a single node (no need to provide a `--node` flag).
+
+Check the status of the first validator node:
+
+```shell
+simd status
+```
+
+Import the key from the provided mnemonic:
+
+```shell
+simd keys add test --recover --keyring-backend test
+```
+
+Check the balance of the account address:
+
+```shell
+simd q bank balances [address]
+```
+
+Use this test account to manually test against the test network.
+
+## Testnet Options
+
+You can customize the configuration of the test network with flags. In order to see all flag options, append the `--help` flag to each command.
diff --git a/copy-of-sdk-docs/user/run-node/06-run-production.md b/copy-of-sdk-docs/user/run-node/06-run-production.md
new file mode 100644
index 00000000..6eee4808
--- /dev/null
+++ b/copy-of-sdk-docs/user/run-node/06-run-production.md
@@ -0,0 +1,269 @@
+---
+sidebar_position: 1
+---
+
+# Running in Production
+
+:::note Synopsis
+This section describes how to securely run a node in a public setting and/or on a mainnet on one of the many Cosmos SDK public blockchains.
+:::
+
+When operating a node, full node or validator, in production it is important to set your server up securely.
+
+:::note
+There are many different ways to secure a server and your node; the steps described here are one way. To see another way of setting up a server, see the [run in production tutorial](https://tutorials.cosmos.network/hands-on-exercise/4-run-in-prod).
+:::
+
+:::note
+This walkthrough assumes the underlying operating system is Ubuntu.
+:::
+
+## Server Setup
+
+### User
+
+When a server is created, most of the time it is created with the user `root`. This user has heightened privileges on the server. When operating a node, it is recommended not to run your node as the root user.
+
+1. Create a new user
+
+```bash
+sudo adduser change_me
+```
+
+2. We want to allow this user to perform sudo tasks
+
+```bash
+sudo usermod -aG sudo change_me
+```
+
+Now when logging into the server, the non `root` user can be used.
+
+### Go
+
+1. Install the [Go](https://go.dev/doc/install) version recommended by the application.
+
+:::warning
+In the past, validators [have had issues](https://github.com/cosmos/cosmos-sdk/issues/13976) when using different versions of Go. It is recommended that the whole validator set use the version of Go recommended by the application.
+:::
+
+### Firewall
+
+Nodes should not have all ports open to the public; that is an easy way to get DDoS'd. Additionally, [CometBFT](https://github.com/cometbft/cometbft) recommends never exposing ports that are not required to operate a node.
+
+When setting up a firewall, there are a few ports that may be open when operating a Cosmos SDK node: the CometBFT JSON-RPC, Prometheus, p2p, remote signer, and the Cosmos SDK gRPC and REST endpoints. If the node does not offer endpoints for transaction submission or querying, then at most three ports need to be open.
+
+Most, if not all, servers come equipped with [ufw](https://help.ubuntu.com/community/UFW), which will be used in this tutorial.
+
+1. Reset UFW to disallow all incoming connections and allow outgoing
+
+```bash
+sudo ufw default deny incoming
+sudo ufw default allow outgoing
+```
+
+2. Let's make sure that port 22 (SSH) stays open.
+
+```bash
+sudo ufw allow ssh
+```
+
+or
+
+```bash
+sudo ufw allow 22
+```
+
+Both of the above commands are the same.
+
+3. Allow port 26656 (the CometBFT p2p port). If the node uses a modified p2p port, that port must be used here.
+
+```bash
+sudo ufw allow 26656/tcp
+```
+
+4. Allow port 26660 (the CometBFT [Prometheus](https://prometheus.io) port). This also acts as the application's monitoring port.
+
+```bash
+sudo ufw allow 26660/tcp
+```
+
+5. (Optional) If the node being set up should expose CometBFT's JSON-RPC and the Cosmos SDK gRPC and REST endpoints, follow this step.
+
+##### CometBFT JSON-RPC
+
+```bash
+sudo ufw allow 26657/tcp
+```
+
+##### Cosmos SDK gRPC
+
+```bash
+sudo ufw allow 9090/tcp
+```
+
+##### Cosmos SDK REST
+
+```bash
+sudo ufw allow 1317/tcp
+```
+
+6. Lastly, enable ufw
+
+```bash
+sudo ufw enable
+```
+
+### Signing
+
+If the node being started is a validator, there are multiple ways a validator can sign blocks.
+
+#### File
+
+File-based signing is the simplest and default approach. This approach works by using the consensus key, generated on initialization, to sign blocks. It is only as safe as your server setup: if the server is compromised, so is your key. The key is located in the `config/priv_validator_key.json` file generated on initialization.
+
+There is a second file the user must be aware of, located in the data directory: `data/priv_validator_state.json`. This file protects your node from double signing. It keeps track of the consensus key's last signed height, round, and latest signature. If the node crashes and needs to be recovered, this file must be kept in order to ensure that the consensus key is not used to sign a block that was previously signed.
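The double-sign protection described above boils down to a monotonicity check on the recorded (height, round, step). A simplified, stdlib-only sketch of the idea (illustrative only — not CometBFT's actual implementation; the real file encodes some fields as strings):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// SignState mirrors the fields tracked in priv_validator_state.json
// (field names and types here are simplified for illustration).
type SignState struct {
	Height int64 `json:"height"`
	Round  int32 `json:"round"`
	Step   int8  `json:"step"`
}

// safeToSign reports whether signing at (h, r, s) strictly advances the
// recorded state; signing at or before it would risk double signing.
func safeToSign(prev SignState, h int64, r int32, s int8) bool {
	if h != prev.Height {
		return h > prev.Height
	}
	if r != prev.Round {
		return r > prev.Round
	}
	return s > prev.Step
}

func main() {
	var st SignState
	json.Unmarshal([]byte(`{"height": 100, "round": 0, "step": 3}`), &st)
	fmt.Println(safeToSign(st, 101, 0, 1)) // true: higher height
	fmt.Println(safeToSign(st, 100, 0, 3)) // false: already signed here
	fmt.Println(safeToSign(st, 99, 5, 3))  // false: lower height
}
```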
+
+#### Remote Signer
+
+A remote signer is a secondary server that is separate from the running node that signs blocks with the consensus key. This means that the consensus key does not live on the node itself. This increases security because your full node which is connected to the remote signer can be swapped without missing blocks.
+
+The two most used remote signers are [tmkms](https://github.com/iqlusioninc/tmkms) from [Iqlusion](https://www.iqlusion.io) and [horcrux](https://github.com/strangelove-ventures/horcrux) from [Strangelove](https://strange.love).
+
+##### TMKMS
+
+###### Dependencies
+
+1. Update server dependencies and install extras needed.
+
+```sh
+sudo apt update -y && sudo apt install build-essential curl jq -y
+```
+
+2. Install Rust:
+
+```sh
+curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh
+```
+
+3. Install Libusb:
+
+```sh
+sudo apt install libusb-1.0-0-dev
+```
+
+###### Setup
+
+There are two ways to install tmkms: from source or via `cargo install`. The examples below cover both approaches, using softsign. Softsign stands for software signing, but you could use a [yubihsm](https://www.yubico.com/products/hardware-security-module/) as your signing key if you wish.
+
+1. Build:
+
+From source:
+
+```bash
+cd $HOME
+git clone https://github.com/iqlusioninc/tmkms.git
+cd $HOME/tmkms
+cargo install --path . --features=softsign
+tmkms init config
+tmkms softsign keygen ./config/secrets/secret_connection_key
+```
+
+or
+
+Cargo install:
+
+```bash
+cargo install tmkms --features=softsign
+tmkms init config
+tmkms softsign keygen ./config/secrets/secret_connection_key
+```
+
+:::note
+To use tmkms with a yubikey install the binary with `--features=yubihsm`.
+:::
+
+2. Migrate the validator key from the full node to the new tmkms instance.
+
+```bash
+scp user@123.456.32.123:~/.simd/config/priv_validator_key.json ~/tmkms/config/secrets
+```
+
+3. Import the validator key into tmkms.
+
+```bash
+tmkms softsign import $HOME/tmkms/config/secrets/priv_validator_key.json $HOME/tmkms/config/secrets/priv_validator_key
+```
+
+At this point, it is necessary to delete the `priv_validator_key.json` from the validator node and the tmkms node. Since the key has been imported into tmkms (above) it is no longer necessary on the nodes. The key can be safely stored offline.
+
+4. Modify the `tmkms.toml`.
+
+```bash
+vim $HOME/tmkms/config/tmkms.toml
+```
+
+This example shows a configuration that could be used for soft signing. It uses an IP of `123.456.12.345`, a port of `26659`, and a chain ID of `test-chain-waSDSe`. These items must be modified for your tmkms use case and network.
+
+```toml
+# CometBFT KMS configuration file
+
+## Chain Configuration
+
+[[chain]]
+id = "test-chain-waSDSe"
+key_format = { type = "bech32", account_key_prefix = "cosmospub", consensus_key_prefix = "cosmosvalconspub" }
+state_file = "/root/tmkms/config/state/priv_validator_state.json"
+
+## Signing Provider Configuration
+
+### Software-based Signer Configuration
+
+[[providers.softsign]]
+chain_ids = ["test-chain-waSDSe"]
+key_type = "consensus"
+path = "/root/tmkms/config/secrets/priv_validator_key"
+
+## Validator Configuration
+
+[[validator]]
+chain_id = "test-chain-waSDSe"
+addr = "tcp://123.456.12.345:26659"
+secret_key = "/root/tmkms/config/secrets/secret_connection_key"
+protocol_version = "v0.34"
+reconnect = true
+```
+
+5. Set the address of the tmkms instance.
+
+```bash
+vim $HOME/.simd/config/config.toml
+
+priv_validator_laddr = "tcp://0.0.0.0:26659"
+```
+
+:::tip
+The above address is set to `0.0.0.0`, but it is recommended to restrict it to the tmkms server's address to secure the setup.
+:::
+
+:::tip
+It is recommended to comment out or delete the lines that specify the paths of the validator key and validator state:
+
+```toml
+# Path to the JSON file containing the private key to use as a validator in the consensus protocol
+# priv_validator_key_file = "config/priv_validator_key.json"
+
+# Path to the JSON file containing the last sign state of a validator
+# priv_validator_state_file = "data/priv_validator_state.json"
+```
+
+:::
+
+6. Start the two processes.
+
+```bash
+tmkms start -c $HOME/tmkms/config/tmkms.toml
+```
+
+```bash
+simd start
+```
diff --git a/copy-of-sdk-docs/user/run-node/_category_.json b/copy-of-sdk-docs/user/run-node/_category_.json
new file mode 100644
index 00000000..65e64b94
--- /dev/null
+++ b/copy-of-sdk-docs/user/run-node/_category_.json
@@ -0,0 +1,5 @@
+{
+ "label": "Running a Node, API and CLI",
+ "position": 0,
+ "link": null
+}
\ No newline at end of file
diff --git a/copy-of-sdk-docs/user/user.md b/copy-of-sdk-docs/user/user.md
new file mode 100644
index 00000000..5429e8ad
--- /dev/null
+++ b/copy-of-sdk-docs/user/user.md
@@ -0,0 +1,10 @@
+---
+sidebar_position: 0
+---
+# User Guides
+
+This section is designed for developers who are using the Cosmos SDK to build applications. It provides essential guides and references to effectively use the SDK's features.
+
+* [Setting up keys](./run-node/00-keyring.md) - Learn how to set up secure key management using the Cosmos SDK's keyring feature. This guide provides a streamlined approach to cryptographic key handling, which is crucial for securing your application.
+* [Running a node](./run-node/01-run-node.md) - This guide provides step-by-step instructions to deploy and manage a node in the Cosmos network. It ensures a smooth and reliable operation of your blockchain application by covering all the necessary setup and maintenance steps.
+* [CLI](./run-node/02-interact-node.md) - Discover how to navigate and interact with the Cosmos SDK using the Command Line Interface (CLI). This section covers efficient and powerful command-based operations that can help you manage your application effectively.
diff --git a/copy-of-sdk-versioned_docs/version-0.47/build/_category_.json b/copy-of-sdk-versioned_docs/version-0.47/build/_category_.json
new file mode 100644
index 00000000..9f308823
--- /dev/null
+++ b/copy-of-sdk-versioned_docs/version-0.47/build/_category_.json
@@ -0,0 +1,5 @@
+{
+ "label": "Build",
+ "position": 0,
+ "link": null
+}
\ No newline at end of file
diff --git a/copy-of-sdk-versioned_docs/version-0.47/build/architecture/PROCESS.md b/copy-of-sdk-versioned_docs/version-0.47/build/architecture/PROCESS.md
new file mode 100644
index 00000000..e30a7406
--- /dev/null
+++ b/copy-of-sdk-versioned_docs/version-0.47/build/architecture/PROCESS.md
@@ -0,0 +1,58 @@
+# ADR Creation Process
+
+1. Copy the `adr-template.md` file. Use the following filename pattern: `adr-next_number-title.md`
+2. Create a draft Pull Request if you want to get early feedback.
+3. Make sure the context and the solution are clear and well documented.
+4. Add an entry to the list in the [README](README.md) file.
+5. Create a Pull Request to propose a new ADR.
+
+## What is an ADR?
+
+An ADR is a document that records an implementation and design decision, which may or may not have been discussed in an RFC. While an RFC is meant to replace synchronous communication in a distributed environment, an ADR is meant to document an already-made decision. An ADR won't come with much communication overhead because the discussion was recorded in an RFC or a synchronous discussion. If the consensus came from a synchronous discussion, then a short excerpt should be added to the ADR to explain the goals.
+
+## ADR life cycle
+
+ADR creation is an **iterative** process. Instead of having a high amount of communication overhead, an ADR is used when there is already a decision made and implementation details need to be added. The ADR should document what the collective consensus for the specific issue is and how to solve it.
+
+1. Every ADR should start with either an RFC or discussion where consensus has been met.
+
+2. Once consensus is met, a GitHub Pull Request (PR) is created with a new document based on the `adr-template.md`.
+
+3. If a _proposed_ ADR is merged, then it should clearly document outstanding issues either in ADR document notes or in a GitHub Issue.
+
+4. The PR SHOULD always be merged. In the case of a faulty ADR, we still prefer to merge it with a _rejected_ status. The only time the ADR SHOULD NOT be merged is if the author abandons it.
+
+5. Merged ADRs SHOULD NOT be pruned.
+
+### ADR status
+
+Status has two components:
+
+```text
+{CONSENSUS STATUS} {IMPLEMENTATION STATUS}
+```
+
+IMPLEMENTATION STATUS is either `Implemented` or `Not Implemented`.
+
+#### Consensus Status
+
+```text
+DRAFT -> PROPOSED -> LAST CALL yyyy-mm-dd -> ACCEPTED | REJECTED -> SUPERSEDED by ADR-xxx
+ \ |
+ \ |
+ v v
+ ABANDONED
+```
+
+* `DRAFT`: [optional] an ADR which is a work in progress, not yet ready for general review. This is to present early work and get early feedback in a Draft Pull Request form.
+* `PROPOSED`: an ADR covering a full solution architecture which is still under review - project stakeholders haven't reached an agreement yet.
+* `LAST CALL yyyy-mm-dd`: [optional] a clear notice that we are close to accepting the ADR. Changing a status to `LAST CALL` means that social consensus (of Cosmos SDK maintainers) has been reached and we still want to give the community time to react or analyze.
+* `ACCEPTED`: an ADR which represents a currently implemented or to-be-implemented architecture design.
+* `REJECTED`: an ADR can go from PROPOSED or ACCEPTED to REJECTED if the project stakeholders reach consensus to do so.
+* `SUPERSEDED by ADR-xxx`: an ADR which has been superseded by a new ADR.
+* `ABANDONED`: the ADR is no longer pursued by the original authors.
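The consensus-status life cycle above can be modeled as a small transition table (an illustrative sketch of the diagram, not part of any tooling):

```go
package main

import "fmt"

// transitions encodes the consensus-status diagram above: LAST CALL is
// optional, PROPOSED and ACCEPTED may be rejected, and work-in-progress
// statuses may be abandoned.
var transitions = map[string][]string{
	"DRAFT":      {"PROPOSED", "ABANDONED"},
	"PROPOSED":   {"LAST CALL", "ACCEPTED", "REJECTED", "ABANDONED"},
	"LAST CALL":  {"ACCEPTED", "REJECTED", "ABANDONED"},
	"ACCEPTED":   {"REJECTED", "SUPERSEDED"},
	"REJECTED":   {},
	"SUPERSEDED": {},
	"ABANDONED":  {},
}

// canTransition reports whether the diagram allows moving from one
// consensus status to another.
func canTransition(from, to string) bool {
	for _, next := range transitions[from] {
		if next == to {
			return true
		}
	}
	return false
}

func main() {
	fmt.Println(canTransition("PROPOSED", "ACCEPTED")) // true: LAST CALL is optional
	fmt.Println(canTransition("ACCEPTED", "DRAFT"))    // false: no going back
}
```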
+
+## Language used in ADR
+
+* The context/background should be written in the present tense.
+* Avoid using the first person.
diff --git a/copy-of-sdk-versioned_docs/version-0.47/build/architecture/README.md b/copy-of-sdk-versioned_docs/version-0.47/build/architecture/README.md
new file mode 100644
index 00000000..ce1ee432
--- /dev/null
+++ b/copy-of-sdk-versioned_docs/version-0.47/build/architecture/README.md
@@ -0,0 +1,94 @@
+---
+sidebar_position: 1
+---
+
+# Architecture Decision Records (ADR)
+
+This is a location to record all high-level architecture decisions in the Cosmos-SDK.
+
+An Architectural Decision (**AD**) is a software design choice that addresses a functional or non-functional requirement that is architecturally significant.
+An Architecturally Significant Requirement (**ASR**) is a requirement that has a measurable effect on a software system’s architecture and quality.
+An Architectural Decision Record (**ADR**) captures a single AD, such as often done when writing personal notes or meeting minutes; the collection of ADRs created and maintained in a project constitute its decision log. All these are within the topic of Architectural Knowledge Management (AKM).
+
+You can read more about the ADR concept in this [blog post](https://product.reverb.com/documenting-architecture-decisions-the-reverb-way-a3563bb24bd0#.78xhdix6t).
+
+## Rationale
+
+ADRs are intended to be the primary mechanism for proposing new feature designs and new processes, for collecting community input on an issue, and for documenting the design decisions.
+An ADR should provide:
+
+* Context on the relevant goals and the current state
+* Proposed changes to achieve the goals
+* Summary of pros and cons
+* References
+* Changelog
+
+Note the distinction between an ADR and a spec. The ADR provides the context, intuition, reasoning, and
+justification for a change in architecture, or for the architecture of something
+new. The spec is a much more compressed and streamlined summary of everything as
+it stands today.
+
+If recorded decisions turn out to be lacking, convene a discussion, record the new decisions here, and then modify the code to match.
+
+## Creating a new ADR
+
+Read about the [PROCESS](PROCESS.md).
+
+### Use RFC 2119 Keywords
+
+When writing ADRs, follow the same best practices as for writing RFCs. In RFCs, key words are used to signify the requirements in the specification. These words are often capitalized: "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL". They are to be interpreted as described in [RFC 2119](https://datatracker.ietf.org/doc/html/rfc2119).
+
+## ADR Table of Contents
+
+### Accepted
+
+* [ADR 002: SDK Documentation Structure](adr-002-docs-structure.md)
+* [ADR 004: Split Denomination Keys](adr-004-split-denomination-keys.md)
+* [ADR 006: Secret Store Replacement](adr-006-secret-store-replacement.md)
+* [ADR 009: Evidence Module](adr-009-evidence-module.md)
+* [ADR 010: Modular AnteHandler](adr-010-modular-antehandler.md)
+* [ADR 019: Protocol Buffer State Encoding](adr-019-protobuf-state-encoding.md)
+* [ADR 020: Protocol Buffer Transaction Encoding](adr-020-protobuf-transaction-encoding.md)
+* [ADR 021: Protocol Buffer Query Encoding](adr-021-protobuf-query-encoding.md)
+* [ADR 023: Protocol Buffer Naming and Versioning](adr-023-protobuf-naming.md)
+* [ADR 029: Fee Grant Module](adr-029-fee-grant-module.md)
+* [ADR 030: Message Authorization Module](adr-030-authz-module.md)
+* [ADR 031: Protobuf Msg Services](adr-031-msg-service.md)
+* [ADR 055: ORM](adr-055-orm.md)
+* [ADR 058: Auto-Generated CLI](adr-058-auto-generated-cli.md)
+* [ADR 060: ABCI 1.0 (Phase I)](adr-060-abci-1.0.md)
+* [ADR 061: Liquid Staking](adr-061-liquid-staking.md)
+
+### Proposed
+
+* [ADR 003: Dynamic Capability Store](adr-003-dynamic-capability-store.md)
+* [ADR 011: Generalize Genesis Accounts](adr-011-generalize-genesis-accounts.md)
+* [ADR 012: State Accessors](adr-012-state-accessors.md)
+* [ADR 013: Metrics](adr-013-metrics.md)
+* [ADR 016: Validator Consensus Key Rotation](adr-016-validator-consensus-key-rotation.md)
+* [ADR 017: Historical Header Module](adr-017-historical-header-module.md)
+* [ADR 018: Extendable Voting Periods](adr-018-extendable-voting-period.md)
+* [ADR 022: Custom baseapp panic handling](adr-022-custom-panic-handling.md)
+* [ADR 024: Coin Metadata](adr-024-coin-metadata.md)
+* [ADR 027: Deterministic Protobuf Serialization](adr-027-deterministic-protobuf-serialization.md)
+* [ADR 028: Public Key Addresses](adr-028-public-key-addresses.md)
+* [ADR 032: Typed Events](adr-032-typed-events.md)
+* [ADR 033: Inter-module RPC](adr-033-protobuf-inter-module-comm.md)
+* [ADR 035: Rosetta API Support](adr-035-rosetta-api-support.md)
+* [ADR 037: Governance Split Votes](adr-037-gov-split-vote.md)
+* [ADR 038: State Listening](adr-038-state-listening.md)
+* [ADR 039: Epoched Staking](adr-039-epoched-staking.md)
+* [ADR 040: Storage and SMT State Commitments](adr-040-storage-and-smt-state-commitments.md)
+* [ADR 046: Module Params](adr-046-module-params.md)
+* [ADR 054: Semver Compatible SDK Modules](adr-054-semver-compatible-modules.md)
+* [ADR 057: App Wiring](adr-057-app-wiring.md)
+* [ADR 059: Test Scopes](adr-059-test-scopes.md)
+* [ADR 062: Collections State Layer](adr-062-collections-state-layer.md)
+* [ADR 063: Core Module API](adr-063-core-module-api.md)
+* [ADR 065: Store V2](adr-065-store-v2.md)
+
+### Draft
+
+* [ADR 044: Guidelines for Updating Protobuf Definitions](adr-044-protobuf-updates-guidelines.md)
+* [ADR 047: Extend Upgrade Plan](adr-047-extend-upgrade-plan.md)
+* [ADR 053: Go Module Refactoring](adr-053-go-module-refactoring.md)
diff --git a/copy-of-sdk-versioned_docs/version-0.47/build/architecture/_category_.json b/copy-of-sdk-versioned_docs/version-0.47/build/architecture/_category_.json
new file mode 100644
index 00000000..87ceb937
--- /dev/null
+++ b/copy-of-sdk-versioned_docs/version-0.47/build/architecture/_category_.json
@@ -0,0 +1,5 @@
+{
+ "label": "ADRs",
+ "position": 11,
+ "link": null
+}
\ No newline at end of file
diff --git a/copy-of-sdk-versioned_docs/version-0.47/build/architecture/adr-002-docs-structure.md b/copy-of-sdk-versioned_docs/version-0.47/build/architecture/adr-002-docs-structure.md
new file mode 100644
index 00000000..5819151f
--- /dev/null
+++ b/copy-of-sdk-versioned_docs/version-0.47/build/architecture/adr-002-docs-structure.md
@@ -0,0 +1,86 @@
+# ADR 002: SDK Documentation Structure
+
+## Context
+
+There is a need for a scalable structure of the Cosmos SDK documentation. Current documentation includes a lot of non-related Cosmos SDK material, is difficult to maintain and hard to follow as a user.
+
+Ideally, we would have:
+
+* All docs related to dev frameworks or tools live in their respective github repos (sdk repo would contain sdk docs, hub repo would contain hub docs, lotion repo would contain lotion docs, etc.)
+* All other docs (faqs, whitepaper, high-level material about Cosmos) would live on the website.
+
+## Decision
+
+Re-structure the `/docs` folder of the Cosmos SDK github repo as follows:
+
+```text
+docs/
+├── README
+├── intro/
+├── concepts/
+│ ├── baseapp
+│ ├── types
+│ ├── store
+│ ├── server
+│ ├── modules/
+│ │ ├── keeper
+│ │ ├── handler
+│ │ ├── cli
+│ ├── gas
+│ └── commands
+├── clients/
+│ ├── lite/
+│ ├── service-providers
+├── modules/
+├── spec/
+├── translations/
+└── architecture/
+```
+
+The files in each sub-folder do not matter and will likely change. What matters is the sectioning:
+
+* `README`: Landing page of the docs.
+* `intro`: Introductory material. Goal is to have a short explainer of the Cosmos SDK and then channel people to the resource they need. The [Cosmos SDK tutorial](https://github.com/cosmos/sdk-application-tutorial/) will be highlighted, as well as the `godocs`.
+* `concepts`: Contains high-level explanations of the abstractions of the Cosmos SDK. It does not contain specific code implementation and does not need to be updated often. **It is not an API specification of the interfaces**. API spec is the `godoc`.
+* `clients`: Contains specs and info about the various Cosmos SDK clients.
+* `spec`: Contains specs of modules, and others.
+* `modules`: Contains links to `godocs` and the spec of the modules.
+* `architecture`: Contains architecture-related docs like the present one.
+* `translations`: Contains different translations of the documentation.
+
+Website docs sidebar will only include the following sections:
+
+* `README`
+* `intro`
+* `concepts`
+* `clients`
+
+`architecture` need not be displayed on the website.
+
+## Status
+
+Accepted
+
+## Consequences
+
+### Positive
+
+* Much clearer organisation of the Cosmos SDK docs.
+* The `/docs` folder now only contains Cosmos SDK and gaia related material. Later, it will only contain Cosmos SDK related material.
+* Developers only have to update `/docs` folder when they open a PR (and not `/examples` for example).
+* Easier for developers to find what they need to update in the docs thanks to reworked architecture.
+* Cleaner vuepress build for website docs.
+* Will help build an executable doc (cf https://github.com/cosmos/cosmos-sdk/issues/2611)
+
+### Neutral
+
+* We need to move a bunch of deprecated stuff to `/_attic` folder.
+* We need to integrate content in `docs/sdk/docs/core` in `concepts`.
+* We need to move all the content that currently lives in `docs` and does not fit in new structure (like `lotion`, intro material, whitepaper) to the website repository.
+* Update `DOCS_README.md`
+
+## References
+
+* https://github.com/cosmos/cosmos-sdk/issues/1460
+* https://github.com/cosmos/cosmos-sdk/pull/2695
+* https://github.com/cosmos/cosmos-sdk/issues/2611
diff --git a/copy-of-sdk-versioned_docs/version-0.47/build/architecture/adr-003-dynamic-capability-store.md b/copy-of-sdk-versioned_docs/version-0.47/build/architecture/adr-003-dynamic-capability-store.md
new file mode 100644
index 00000000..f9ddd364
--- /dev/null
+++ b/copy-of-sdk-versioned_docs/version-0.47/build/architecture/adr-003-dynamic-capability-store.md
@@ -0,0 +1,344 @@
+# ADR 3: Dynamic Capability Store
+
+## Changelog
+
+* 12 December 2019: Initial version
+* 02 April 2020: Memory Store Revisions
+
+## Context
+
+Full implementation of the [IBC specification](https://github.com/cosmos/ibc) requires the ability to create and authenticate object-capability keys at runtime (i.e., during transaction execution),
+as described in [ICS 5](https://github.com/cosmos/ibc/tree/master/spec/core/ics-005-port-allocation#technical-specification). In the IBC specification, capability keys are created for each newly initialised
+port & channel, and are used to authenticate future usage of the port or channel. Since channels and potentially ports can be initialised during transaction execution, the state machine must be able to create
+object-capability keys at this time.
+
+At present, the Cosmos SDK does not have the ability to do this. Object-capability keys are currently pointers (memory addresses) of `StoreKey` structs created at application initialisation in `app.go` ([example](https://github.com/cosmos/gaia/blob/dcbddd9f04b3086c0ad07ee65de16e7adedc7da4/app/app.go#L132))
+and passed to Keepers as fixed arguments ([example](https://github.com/cosmos/gaia/blob/dcbddd9f04b3086c0ad07ee65de16e7adedc7da4/app/app.go#L160)). Keepers cannot create or store capability keys during transaction execution — although they could call `NewKVStoreKey` and take the memory address
+of the returned struct, storing this in the Merklised store would result in a consensus fault, since the memory address will be different on each machine (this is intentional — were this not the case, the keys would be predictable and couldn't serve as object capabilities).
+
+Keepers need a way to keep a private map of store keys which can be altered during transaction execution, along with a suitable mechanism for regenerating the unique memory addresses (capability keys) in this map whenever the application is started or restarted, along with a mechanism to revert capability creation on tx failure.
+This ADR proposes such an interface & mechanism.
+
+## Decision
+
+The Cosmos SDK will include a new `CapabilityKeeper` abstraction, which is responsible for provisioning,
+tracking, and authenticating capabilities at runtime. During application initialisation in `app.go`,
+the `CapabilityKeeper` will be hooked up to modules through unique function references
+(by calling `ScopeToModule`, defined below) so that it can identify the calling module when later
+invoked.
+
+When the initial state is loaded from disk, the `CapabilityKeeper`'s `Initialise` function will create
+new capability keys for all previously allocated capability identifiers (allocated during execution of
+past transactions and assigned to particular modules), and keep them in a memory-only store while the
+chain is running.
+
+The `CapabilityKeeper` will include a persistent `KVStore`, a `MemoryStore`, and an in-memory map.
+The persistent `KVStore` tracks which capability is owned by which modules.
+The `MemoryStore` stores a forward mapping from (module name, capability) tuples to capability names, and
+a reverse mapping from (module name, capability name) to the capability index.
+Since we cannot marshal a capability into a `KVStore` and unmarshal it without changing the memory location of the capability,
+the reverse mapping in the memory store will simply map to an index. This index can then be used as a key in the ephemeral
+go-map to retrieve the capability at the original memory location.
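+The reverse-mapping indirection described here can be sketched in plain Go. The names below (`revStore`, `capMap`, `newCapability`) are illustrative stand-ins, not the SDK's actual types:
+
+```go
+package main
+
+import "fmt"
+
+type Capability struct{ index uint64 }
+
+// revStore stands in for the MemoryStore reverse mapping: name -> index.
+var revStore = map[string]uint64{}
+
+// capMap is the ephemeral go-map: index -> capability pointer.
+var capMap = map[uint64]*Capability{}
+
+func newCapability(name string, index uint64) *Capability {
+    c := &Capability{index: index}
+    revStore[name] = index
+    capMap[index] = c
+    return c
+}
+
+func getCapability(name string) *Capability {
+    index, ok := revStore[name]
+    if !ok {
+        return nil
+    }
+    // the go-map lookup returns the pointer allocated at creation time,
+    // so callers always observe the same memory address
+    return capMap[index]
+}
+
+func main() {
+    created := newCapability("transfer/channel-0", 1)
+    fetched := getCapability("transfer/channel-0")
+    fmt.Println(created == fetched) // true: identical pointer
+}
+```
+
+Because only the index is persisted, restarting the application can rebuild the go-map with fresh addresses without invalidating the persisted state.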
+
+The `CapabilityKeeper` will define the following types & functions:
+
+A `Capability` is similar to a `StoreKey`, but has a globally unique `Index()` instead of
+a name. A `String()` method is provided for debugging.
+
+A `Capability` is simply a struct, the address of which is taken for the actual capability.
+
+```go
+type Capability struct {
+ index uint64
+}
+```
+
+A `CapabilityKeeper` contains a persistent store key, a memory store key, an in-memory capability map, and a set of allocated module names.
+
+```go
+type CapabilityKeeper struct {
+ persistentKey StoreKey
+ memKey StoreKey
+ capMap map[uint64]*Capability
+ moduleNames map[string]interface{}
+ sealed bool
+}
+```
+
+The `CapabilityKeeper` provides the ability to create *scoped* sub-keepers which are tied to a
+particular module name. These `ScopedCapabilityKeeper`s must be created at application initialisation
+and passed to modules, which can then use them to claim capabilities they receive and retrieve
+capabilities which they own by name, in addition to creating new capabilities & authenticating capabilities
+passed by other modules.
+
+```go
+type ScopedCapabilityKeeper struct {
+ persistentKey StoreKey
+ memKey StoreKey
+ capMap map[uint64]*Capability
+ moduleName string
+}
+```
+
+`ScopeToModule` is used to create a scoped sub-keeper with a particular name, which must be unique.
+It MUST be called before `InitialiseAndSeal`.
+
+```go
+func (ck *CapabilityKeeper) ScopeToModule(moduleName string) ScopedCapabilityKeeper {
+    if ck.sealed {
+        panic("cannot scope to module via a sealed capability keeper")
+    }
+
+    if _, ok := ck.moduleNames[moduleName]; ok {
+        panic(fmt.Sprintf("cannot create multiple scoped keepers for the same module name: %s", moduleName))
+    }
+
+    ck.moduleNames[moduleName] = struct{}{}
+
+    return ScopedCapabilityKeeper{
+        persistentKey: ck.persistentKey,
+        memKey:        ck.memKey,
+        capMap:        ck.capMap,
+        moduleName:    moduleName,
+    }
+}
+```
+
+`InitialiseAndSeal` MUST be called exactly once, after loading the initial state and creating all
+necessary `ScopedCapabilityKeeper`s, in order to populate the memory store with newly-created
+capability keys in accordance with the keys previously claimed by particular modules and prevent the
+creation of any new `ScopedCapabilityKeeper`s.
+
+```go
+func (ck *CapabilityKeeper) InitialiseAndSeal(ctx Context) {
+    if ck.sealed {
+        panic("capability keeper is sealed")
+    }
+
+    persistentStore := ctx.KVStore(ck.persistentKey)
+    memStore := ctx.KVStore(ck.memKey)
+
+    // initialise memory store for all names in the persistent store
+    for index, value := range persistentStore.Iter() {
+        capability := &Capability{index: index}
+
+        for moduleAndCapability := range value {
+            moduleName, capabilityName := moduleAndCapability.Split("/")
+            memStore.Set(moduleName + "/fwd/" + capability, capabilityName)
+            memStore.Set(moduleName + "/rev/" + capabilityName, index)
+
+            ck.capMap[index] = capability
+        }
+    }
+
+    ck.sealed = true
+}
+```
+
+`NewCapability` can be called by any module to create a new unique, unforgeable object-capability
+reference. The newly created capability is automatically persisted; the calling module need not
+call `ClaimCapability`.
+
+```go
+func (sck ScopedCapabilityKeeper) NewCapability(ctx Context, name string) (Capability, error) {
+    persistentStore := ctx.KVStore(sck.persistentKey)
+    memStore := ctx.KVStore(sck.memKey)
+
+    // check that the name is not already taken in the memory store
+    if memStore.Get(sck.moduleName + "/rev/" + name) != nil {
+        return nil, errors.New("name already taken")
+    }
+
+    // fetch the current index
+    index := persistentStore.Get("index")
+
+    // create a new capability
+    capability := &Capability{index: index}
+
+    // record the owner set for this index in the persistent store
+    persistentStore.Set(index, Set.singleton(sck.moduleName + "/" + name))
+
+    // increment & store the next free index
+    persistentStore.Set("index", index + 1)
+
+    // set forward mapping in memory store from capability to name
+    memStore.Set(sck.moduleName + "/fwd/" + capability, name)
+
+    // set reverse mapping in memory store from name to index
+    memStore.Set(sck.moduleName + "/rev/" + name, index)
+
+    // set the in-memory mapping from index to capability pointer
+    sck.capMap[index] = capability
+
+    // return the newly created capability
+    return capability, nil
+}
+```
+
+`AuthenticateCapability` can be called by any module to check that a capability
+does in fact correspond to a particular name (the name can be untrusted user input)
+with which the calling module previously associated it.
+
+```go
+func (sck ScopedCapabilityKeeper) AuthenticateCapability(name string, capability Capability) bool {
+    // return whether the forward mapping in the memory store matches the name
+    return memStore.Get(sck.moduleName + "/fwd/" + capability) == name
+}
+```
+
+`ClaimCapability` allows a module to claim a capability key which it has received from another module
+so that future `GetCapability` calls will succeed.
+
+`ClaimCapability` MUST be called if a module which receives a capability wishes to access it by name
+in the future. Capabilities are multi-owner, so if multiple modules have a single `Capability` reference,
+they will all own it.
+
+```go
+func (sck ScopedCapabilityKeeper) ClaimCapability(ctx Context, capability Capability, name string) error {
+    persistentStore := ctx.KVStore(sck.persistentKey)
+    memStore := ctx.KVStore(sck.memKey)
+
+    // set forward mapping in memory store from capability to name
+    memStore.Set(sck.moduleName + "/fwd/" + capability, name)
+
+    // set reverse mapping in memory store from name to index
+    memStore.Set(sck.moduleName + "/rev/" + name, capability.Index())
+
+    // update owner set in persistent store
+    owners := persistentStore.Get(capability.Index())
+    owners.add(sck.moduleName + "/" + name)
+    persistentStore.Set(capability.Index(), owners)
+
+    return nil
+}
+```
+
+`GetCapability` allows a module to fetch a capability which it has previously claimed by name.
+The module is not allowed to retrieve capabilities which it does not own.
+
+```go
+func (sck ScopedCapabilityKeeper) GetCapability(ctx Context, name string) (Capability, error) {
+    memStore := ctx.KVStore(sck.memKey)
+
+    // fetch the index of the capability using the reverse mapping in the memory store
+    index := memStore.Get(sck.moduleName + "/rev/" + name)
+    if index == nil {
+        return nil, errors.New("capability not owned by module")
+    }
+
+    // fetch the capability from the go-map using the index
+    capability := sck.capMap[index]
+
+    // return the capability
+    return capability, nil
+}
+```
+
+`ReleaseCapability` allows a module to release a capability which it had previously claimed. If no
+more owners exist, the capability will be deleted globally.
+
+```go
+func (sck ScopedCapabilityKeeper) ReleaseCapability(ctx Context, capability Capability) error {
+    persistentStore := ctx.KVStore(sck.persistentKey)
+    memStore := ctx.KVStore(sck.memKey)
+
+    name := memStore.Get(sck.moduleName + "/fwd/" + capability)
+    if name == nil {
+        return errors.New("capability not owned by module")
+    }
+
+    // delete forward mapping in memory store
+    memStore.Delete(sck.moduleName + "/fwd/" + capability)
+
+    // delete reverse mapping in memory store
+    memStore.Delete(sck.moduleName + "/rev/" + name)
+
+    // update owner set in persistent store
+    owners := persistentStore.Get(capability.Index())
+    owners.remove(sck.moduleName + "/" + name)
+    if owners.size() > 0 {
+        // there are still other owners, keep the capability around
+        persistentStore.Set(capability.Index(), owners)
+    } else {
+        // no more owners, delete the capability
+        persistentStore.Delete(capability.Index())
+        delete(sck.capMap, capability.Index())
+    }
+
+    return nil
+}
+```
+
+### Usage patterns
+
+#### Initialisation
+
+Any modules which use dynamic capabilities must be provided a `ScopedCapabilityKeeper` in `app.go`:
+
+```go
+ck := NewCapabilityKeeper(persistentKey, memoryKey)
+mod1Keeper := NewMod1Keeper(ck.ScopeToModule("mod1"), ....)
+mod2Keeper := NewMod2Keeper(ck.ScopeToModule("mod2"), ....)
+
+// other initialisation logic ...
+
+// load initial state...
+
+ck.InitialiseAndSeal(initialContext)
+```
+
+#### Creating, passing, claiming and using capabilities
+
+Consider the case where `mod1` wants to create a capability, associate it with a resource (e.g. an IBC channel) by name, then pass it to `mod2` which will use it later:
+
+Module 1 would have the following code:
+
+```go
+capability, _ := scopedCapabilityKeeper.NewCapability(ctx, "resourceABC")
+mod2Keeper.SomeFunction(ctx, capability, args...)
+```
+
+`SomeFunction`, running in module 2, could then claim the capability:
+
+```go
+func (k Mod2Keeper) SomeFunction(ctx Context, capability Capability) {
+ k.sck.ClaimCapability(ctx, capability, "resourceABC")
+ // other logic...
+}
+```
+
+Later on, module 2 can retrieve that capability by name and pass it to module 1, which will authenticate it against the resource:
+
+```go
+func (k Mod2Keeper) SomeOtherFunction(ctx Context, name string) {
+    capability, _ := k.sck.GetCapability(ctx, name)
+ mod1.UseResource(ctx, capability, "resourceABC")
+}
+```
+
+Module 1 will then check that this capability key is authenticated to use the resource before allowing module 2 to use it:
+
+```go
+func (k Mod1Keeper) UseResource(ctx Context, capability Capability, resource string) error {
+    if !k.sck.AuthenticateCapability(resource, capability) {
+        return errors.New("unauthenticated")
+    }
+    // do something with the resource
+    return nil
+}
+```
+
+If module 2 passed the capability key to module 3, module 3 could then claim it and call module 1 just like module 2 did
+(in which case module 1, module 2, and module 3 would all be able to use this capability).
+
+## Status
+
+Proposed.
+
+## Consequences
+
+### Positive
+
+* Dynamic capability support.
+* Allows the `CapabilityKeeper` to return the same capability pointer from the go-map while reverting any writes to the persistent `KVStore` and in-memory `MemoryStore` on tx failure.
+
+### Negative
+
+* Requires an additional keeper.
+* Some overlap with existing `StoreKey` system (in the future they could be combined, since this is a superset functionality-wise).
+* Requires an extra level of indirection in the reverse mapping, since the `MemoryStore` must map to an index, which must then be used as a key in a go-map to retrieve the actual capability.
+
+### Neutral
+
+(none known)
+
+## References
+
+* [Original discussion](https://github.com/cosmos/cosmos-sdk/pull/5230#discussion_r343978513)
diff --git a/copy-of-sdk-versioned_docs/version-0.47/build/architecture/adr-004-split-denomination-keys.md b/copy-of-sdk-versioned_docs/version-0.47/build/architecture/adr-004-split-denomination-keys.md
new file mode 100644
index 00000000..8abf25fd
--- /dev/null
+++ b/copy-of-sdk-versioned_docs/version-0.47/build/architecture/adr-004-split-denomination-keys.md
@@ -0,0 +1,120 @@
+# ADR 004: Split Denomination Keys
+
+## Changelog
+
+* 2020-01-08: Initial version
+* 2020-01-09: Alterations to handle vesting accounts
+* 2020-01-14: Updates from review feedback
+* 2020-01-30: Updates from implementation
+
+### Glossary
+
+* denom / denomination key -- unique token identifier.
+
+## Context
+
+With permissionless IBC, anyone will be able to send arbitrary denominations to any other account. Currently, all non-zero balances are stored along with the account in an `sdk.Coins` struct, which creates a potential denial-of-service concern, as too many denominations will become expensive to load & store each time the account is modified. See issues [5467](https://github.com/cosmos/cosmos-sdk/issues/5467) and [4982](https://github.com/cosmos/cosmos-sdk/issues/4982) for additional context.
+
+Simply rejecting incoming deposits once a denomination count limit is reached doesn't work, since it opens up a griefing vector: someone could send a user lots of nonsensical coins over IBC, and then prevent the user from receiving real denominations (such as staking rewards).
+
+## Decision
+
+Balances shall be stored per-account & per-denomination under a denomination- and account-unique key, thus enabling O(1) read & write access to the balance of a particular account in a particular denomination.
+
+### Account interface (x/auth)
+
+`GetCoins()` and `SetCoins()` will be removed from the account interface, since coin balances will
+now be stored in & managed by the bank module.
+
+The vesting account interface will replace `SpendableCoins` with `LockedCoins`, which does
+not require the account balance anymore. In addition, `TrackDelegation()` will now accept the
+account balance of all tokens denominated in the vesting balance instead of loading the entire
+account balance.
+
+Vesting accounts will continue to store original vesting, delegated free, and delegated
+vesting coins (which is safe since these cannot contain arbitrary denominations).
+
+### Bank keeper (x/bank)
+
+The following APIs will be added to the `x/bank` keeper:
+
+* `GetAllBalances(ctx Context, addr AccAddress) Coins`
+* `GetBalance(ctx Context, addr AccAddress, denom string) Coin`
+* `SetBalance(ctx Context, addr AccAddress, coin Coin)`
+* `LockedCoins(ctx Context, addr AccAddress) Coins`
+* `SpendableCoins(ctx Context, addr AccAddress) Coins`
+
+Additional APIs may be added to facilitate iteration and auxiliary functionality not essential to
+core functionality or persistence.
+
+Balances will be stored first by the address, then by the denomination (the reverse is also possible,
+but retrieval of all balances for a single account is presumed to be more frequent):
+
+```go
+var BalancesPrefix = []byte("balances")
+
+func (k Keeper) SetBalance(ctx Context, addr AccAddress, balance Coin) error {
+ if !balance.IsValid() {
+        return fmt.Errorf("invalid balance: %s", balance)
+ }
+
+ store := ctx.KVStore(k.storeKey)
+ balancesStore := prefix.NewStore(store, BalancesPrefix)
+ accountStore := prefix.NewStore(balancesStore, addr.Bytes())
+
+ bz := Marshal(balance)
+ accountStore.Set([]byte(balance.Denom), bz)
+
+ return nil
+}
+```
+
+This will result in the balances being indexed by the byte representation of
+`balances/{address}/{denom}`.
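+As a rough sketch of how such a key is composed (illustrative only; nested prefix stores concatenate raw bytes, so the "/" separators in the prose are notation rather than literal bytes):
+
+```go
+package main
+
+import "fmt"
+
+var balancesPrefix = []byte("balances")
+
+// balanceKey composes the nested prefix-store key: prefix || address || denom.
+func balanceKey(addr []byte, denom string) []byte {
+    key := append([]byte{}, balancesPrefix...)
+    key = append(key, addr...)
+    key = append(key, denom...)
+    return key
+}
+
+func main() {
+    fmt.Printf("%s\n", balanceKey([]byte("addr1"), "uatom")) // balancesaddr1uatom
+}
+```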
+
+`DelegateCoins()` and `UndelegateCoins()` will be altered to load only the individual
+account balances by denomination found in the (un)delegation amount. As a result,
+any mutations to the account balance will be made per denomination.
+
+`SubtractCoins()` and `AddCoins()` will be altered to read & write the balances
+directly instead of calling `GetCoins()` / `SetCoins()` (which no longer exist).
+
+`trackDelegation()` and `trackUndelegation()` will be altered to no longer update
+account balances.
+
+External APIs will need to scan all balances under an account to retain backwards-compatibility. It
+is advised that these APIs use `GetBalance` and `SetBalance` instead of `GetAllBalances` when
+possible so as not to load the entire account balance.
+
+### Supply module
+
+The supply module, in order to implement the total supply invariant, will now need
+to scan all accounts & call `GetAllBalances` using the `x/bank` Keeper, then sum
+the balances and check that they match the expected total supply.
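+A minimal sketch of that invariant check, using stand-in types rather than the supply module's actual API:
+
+```go
+package main
+
+import "fmt"
+
+type coin struct {
+    denom  string
+    amount int64
+}
+
+// sumBalances aggregates per-account balances by denomination, as calling
+// GetAllBalances over every account would.
+func sumBalances(accounts map[string][]coin) map[string]int64 {
+    total := map[string]int64{}
+    for _, coins := range accounts {
+        for _, c := range coins {
+            total[c.denom] += c.amount
+        }
+    }
+    return total
+}
+
+// supplyInvariantHolds compares the summed balances against the recorded supply.
+func supplyInvariantHolds(accounts map[string][]coin, supply map[string]int64) bool {
+    total := sumBalances(accounts)
+    if len(total) != len(supply) {
+        return false
+    }
+    for denom, amount := range supply {
+        if total[denom] != amount {
+            return false
+        }
+    }
+    return true
+}
+
+func main() {
+    accounts := map[string][]coin{
+        "alice": {{"uatom", 70}},
+        "bob":   {{"uatom", 30}},
+    }
+    fmt.Println(supplyInvariantHolds(accounts, map[string]int64{"uatom": 100})) // true
+}
+```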
+
+## Status
+
+Accepted.
+
+## Consequences
+
+### Positive
+
+* O(1) reads & writes of balances (with respect to the number of denominations for
+which an account has non-zero balances). Note, this does not relate to the actual
+I/O cost, rather the total number of direct reads needed.
+
+### Negative
+
+* Slightly less efficient reads/writes when reading & writing all balances of a
+single account in a transaction.
+
+### Neutral
+
+None in particular.
+
+## References
+
+* Ref: https://github.com/cosmos/cosmos-sdk/issues/4982
+* Ref: https://github.com/cosmos/cosmos-sdk/issues/5467
+* Ref: https://github.com/cosmos/cosmos-sdk/issues/5492
diff --git a/copy-of-sdk-versioned_docs/version-0.47/build/architecture/adr-006-secret-store-replacement.md b/copy-of-sdk-versioned_docs/version-0.47/build/architecture/adr-006-secret-store-replacement.md
new file mode 100644
index 00000000..fe2e2546
--- /dev/null
+++ b/copy-of-sdk-versioned_docs/version-0.47/build/architecture/adr-006-secret-store-replacement.md
@@ -0,0 +1,54 @@
+# ADR 006: Secret Store Replacement
+
+## Changelog
+
+* July 29th, 2019: Initial draft
+* September 11th, 2019: Work has started
+* November 4th: Cosmos SDK changes merged in
+* November 18th: Gaia changes merged in
+
+## Context
+
+Currently, a Cosmos SDK application's CLI directory stores key material and metadata in a plain-text database in the user’s home directory. Key material is encrypted with a passphrase that is protected by the bcrypt hashing algorithm. Metadata (e.g. addresses, public keys, key storage details) is available in plain text.
+
+This is not desirable for a number of reasons. Perhaps the biggest reason is insufficient security protection of key material and metadata. Leaking the plain text allows an attacker to surveil what keys a given computer controls via a number of techniques, such as compromised dependencies, without any privilege escalation. This could be followed by a more targeted attack on a particular user/computer.
+
+All modern desktop operating systems (Ubuntu, Debian, macOS, Windows) provide a built-in secret store that is designed to allow applications to store information that is isolated from all other applications and requires passphrase entry to access the data.
+
+We are seeking a solution that provides a common abstraction layer over the many different backends and a reasonable fallback for minimal platforms that don’t provide a native secret store.
+
+## Decision
+
+We recommend replacing the current LevelDB-based Keybase backend with [Keyring](https://github.com/99designs/keyring) by 99designs. This library is designed to provide a common abstraction and uniform interface over many secret stores, and is used by 99designs' AWS Vault application.
+
+This appears to fulfill the requirement of protecting both key material and metadata from rogue software on a user’s machine.
+
+## Status
+
+Accepted
+
+## Consequences
+
+### Positive
+
+Increased safety for users.
+
+### Negative
+
+Users must manually migrate.
+
+Testing against all supported backends is difficult.
+
+Running tests locally on a Mac require numerous repetitive password entries.
+
+### Neutral
+
+(none known)
+
+## References
+
+* #4754 Switch secret store to the keyring secret store (original PR by @poldsam) [__CLOSED__]
+* #5029 Add support for github.com/99designs/keyring-backed keybases [__MERGED__]
+* #5097 Add keys migrate command [__MERGED__]
+* #5180 Drop on-disk keybase in favor of keyring [_PENDING_REVIEW_]
+* cosmos/gaia#164 Drop on-disk keybase in favor of keyring (gaia's changes) [_PENDING_REVIEW_]
diff --git a/copy-of-sdk-versioned_docs/version-0.47/build/architecture/adr-007-specialization-groups.md b/copy-of-sdk-versioned_docs/version-0.47/build/architecture/adr-007-specialization-groups.md
new file mode 100644
index 00000000..9a351dd1
--- /dev/null
+++ b/copy-of-sdk-versioned_docs/version-0.47/build/architecture/adr-007-specialization-groups.md
@@ -0,0 +1,177 @@
+# ADR 007: Specialization Groups
+
+## Changelog
+
+* 2019 Jul 31: Initial Draft
+
+## Context
+
+This idea was first conceived of in order to fulfill the use case of the
+creation of a decentralized Computer Emergency Response Team (dCERT), whose
+members would be elected by a governing community and would fulfill the role of
+coordinating the community under emergency situations. This thinking
+can be further abstracted into the conception of "blockchain specialization
+groups".
+
+The creation of these groups is the beginning of specialization capabilities
+within a wider blockchain community which could be used to enable a certain
+level of delegated responsibilities. Examples of specialization which could be
+beneficial to a blockchain community include: code auditing, emergency response,
+code development etc. This type of community organization paves the way for
+individual stakeholders to delegate votes by issue type, if in the future
+governance proposals include a field for issue type.
+
+## Decision
+
+A specialization group can be broadly broken down into the following functions
+(herein containing examples):
+
+* Membership Admittance
+* Membership Acceptance
+* Membership Revocation
+ * (probably) Without Penalty
+ * member steps down (self-Revocation)
+ * replaced by new member from governance
+ * (probably) With Penalty
+ * due to breach of soft-agreement (determined through governance)
+ * due to breach of hard-agreement (determined by code)
+* Execution of Duties
+ * Special transactions which only execute for members of a specialization
+ group (for example, dCERT members voting to turn off transaction routes in
+ an emergency scenario)
+* Compensation
+ * Group compensation (further distribution decided by the specialization group)
+ * Individual compensation for all constituents of a group from the
+ greater community
+
+Membership admittance to a specialization group could take place over a wide
+variety of mechanisms. The most obvious example is through a general vote among
+the entire community. However, in certain systems a community may want to allow
+the members already in a specialization group to internally elect new members,
+or the community may assign a permission to a particular specialization
+group to appoint members to other 3rd-party groups. The sky is really the limit
+as to how membership admittance can be structured. We attempt to capture
+some of these possibilities in a common interface dubbed the `Electionator`. For
+its initial implementation as a part of this ADR we recommend that the general
+election abstraction (`Electionator`) is provided as well as a basic
+implementation of that abstraction which allows for a continuous election of
+members of a specialization group.
+
+``` golang
+// The Electionator abstraction covers the concept space for
+// a wide variety of election kinds.
+type Electionator interface {
+
+ // is the election object accepting votes.
+ Active() bool
+
+ // functionality to execute for when a vote is cast in this election, here
+ // the vote field is anticipated to be marshalled into a vote type used
+ // by an election.
+ //
+ // NOTE There are no explicit ids here. Just votes which pertain specifically
+ // to one electionator. Anyone can create and send a vote to the electionator item
+ // which will presumably attempt to marshal those bytes into a particular struct
+ // and apply the vote information in some arbitrary way. There can be multiple
+ // Electionators within the Cosmos-Hub for multiple specialization groups, votes
+ // would need to be routed to the Electionator upstream of here.
+ Vote(addr sdk.AccAddress, vote []byte)
+
+ // here lies all functionality to authenticate and execute changes for
+ // when a member accepts being elected
+ AcceptElection(sdk.AccAddress)
+
+ // Register a revoker object
+ RegisterRevoker(Revoker)
+
+ // No more revokers may be registered after this function is called
+ SealRevokers()
+
+    // register hooks to call when election actions occur
+ RegisterHooks(ElectionatorHooks)
+
+ // query for the current winner(s) of this election based on arbitrary
+ // election ruleset
+ QueryElected() []sdk.AccAddress
+
+ // query metadata for an address in the election this
+ // could include for example position that an address
+ // is being elected for within a group
+ //
+ // this metadata may be directly related to
+ // voting information and/or privileges enabled
+ // to members within a group.
+ QueryMetadata(sdk.AccAddress) []byte
+}
+
+// ElectionatorHooks, once registered with an Electionator,
+// trigger execution of relevant interface functions when
+// Electionator events occur.
+type ElectionatorHooks interface {
+ AfterVoteCast(addr sdk.AccAddress, vote []byte)
+ AfterMemberAccepted(addr sdk.AccAddress)
+ AfterMemberRevoked(addr sdk.AccAddress, cause []byte)
+}
+
+// Revoker defines the function required for a membership revocation rule-set
+// used by a specialization group. This could be used to create self revoking,
+// and evidence based revoking, etc. Revokers types may be created and
+// reused for different election types.
+//
+// When revoking the "cause" bytes may be arbitrarily marshalled into evidence,
+// memos, etc.
+type Revoker interface {
+ RevokeName() string // identifier for this revoker type
+ RevokeMember(addr sdk.AccAddress, cause []byte) error
+}
+```
+
+A certain level of commonality likely exists between the existing code within
+`x/governance` and the required functionality of elections. This common
+functionality should be abstracted during implementation. Similarly, for each
+vote implementation, client CLI/REST functionality should be abstracted
+so it can be reused for multiple elections.
+
+The specialization group abstraction firstly extends the `Electionator`
+but also further defines traits of the group.
+
+``` golang
+type SpecializationGroup interface {
+ Electionator
+ GetName() string
+ GetDescription() string
+
+ // general soft contract the group is expected
+ // to fulfill with the greater community
+ GetContract() string
+
+ // messages which can be executed by the members of the group
+ Handler(ctx sdk.Context, msg sdk.Msg) sdk.Result
+
+ // logic to be executed at endblock, this may for instance
+ // include payment of a stipend to the group members
+ // for participation in the security group.
+ EndBlocker(ctx sdk.Context)
+}
+```
+
+## Status
+
+> Proposed
+
+## Consequences
+
+### Positive
+
+* increases specialization capabilities of a blockchain
+* improve abstractions in `x/gov/` such that they can be used with specialization groups
+
+### Negative
+
+* could be used to increase centralization within a community
+
+### Neutral
+
+## References
+
+* [dCERT ADR](adr-008-dCERT-group.md)
diff --git a/copy-of-sdk-versioned_docs/version-0.47/build/architecture/adr-008-dCERT-group.md b/copy-of-sdk-versioned_docs/version-0.47/build/architecture/adr-008-dCERT-group.md
new file mode 100644
index 00000000..2097bf1b
--- /dev/null
+++ b/copy-of-sdk-versioned_docs/version-0.47/build/architecture/adr-008-dCERT-group.md
@@ -0,0 +1,171 @@
+# ADR 008: Decentralized Computer Emergency Response Team (dCERT) Group
+
+## Changelog
+
+* 2019 Jul 31: Initial Draft
+
+## Context
+
+In order to reduce the number of parties involved with handling sensitive
+information in an emergency scenario, we propose the creation of a
+specialization group named The Decentralized Computer Emergency Response Team
+(dCERT). Initially this group's role is intended to serve as coordinators
+between various actors within a blockchain community such as validators,
+bug-hunters, and developers. During a time of crisis, the dCERT group would
+aggregate and relay input from a variety of stakeholders to the developers who
+are actively devising a patch to the software. This way, sensitive information
+does not need to be publicly disclosed while some input from the community can
+still be gained.
+
+Additionally, a special privilege is proposed for the dCERT group: the capacity
+to "circuit-break" (aka. temporarily disable) a particular message path. Note
+that this privilege should be enabled/disabled globally with a governance
+parameter such that this privilege could start disabled and later be enabled
+through a parameter change proposal, once a dCERT group has been established.
+
+In the future it is foreseeable that the community may wish to expand the roles
+of dCERT with further responsibilities such as the capacity to "pre-approve" a
+security update on behalf of the community prior to a full community
+wide vote whereby the sensitive information would be revealed prior to a
+vulnerability being patched on the live network.
+
+## Decision
+
+The dCERT group is proposed to include an implementation of a `SpecializationGroup`
+as defined in [ADR 007](adr-007-specialization-groups.md). This will include the
+implementation of:
+
+* continuous voting
+* slashing due to breach of soft contract
+* revoking a member due to breach of soft contract
+* emergency disband of the entire dCERT group (ex. for colluding maliciously)
+* compensation stipend from the community pool or other means decided by
+ governance
+
+This system necessitates the following new parameters:
+
+* per-block stipend allowance for each dCERT member
+* maximum number of dCERT members
+* required staked slashable tokens for each dCERT member
+* quorum for suspending a particular member
+* proposal wager for disbanding the dCERT group
+* stabilization period for dCERT member transition
+* circuit break dCERT privileges enabled
+
+These parameters are expected to be implemented through the param keeper such
+that governance may change them at any given point.
+
+### Continuous Voting Electionator
+
+An `Electionator` object is to be implemented as continuous voting and with the
+following specifications:
+
+* All delegation addresses may submit votes at any point which updates their
+ preferred representation on the dCERT group.
+* Preferred representation may be arbitrarily split between addresses (ex. 50%
+ to John, 25% to Sally, 25% to Carol)
+* In order for a new member to be added to the dCERT group they must
+  send a transaction accepting their admission, at which point the validity of
+ their admission is to be confirmed.
+ * A sequence number is assigned when a member is added to dCERT group.
+ If a member leaves the dCERT group and then enters back, a new sequence number
+ is assigned.
+* Addresses which control the greatest amount of preferred-representation are
+  eligible to join the dCERT group (up to the _maximum number of dCERT members_).
+  If the dCERT group is already full and a new member is admitted, the existing
+ dCERT member with the lowest amount of votes is kicked from the dCERT group.
+ * In the split situation where the dCERT group is full but a vying candidate
+    has the same amount of votes as an existing dCERT member, the existing
+ member should maintain its position.
+ * In the split situation where somebody must be kicked out but the two
+ addresses with the smallest number of votes have the same number of votes,
+ the address with the smallest sequence number maintains its position.
+* A stabilization period can be optionally included to reduce the
+ "flip-flopping" of the dCERT membership tail members. If a stabilization
+ period is provided which is greater than 0, when members are kicked due to
+ insufficient support, a queue entry is created which documents which member is
+ to replace which other member. While this entry is in the queue, no new entries
+ to kick that same dCERT member can be made. When the entry matures at the
+  duration of the stabilization period, the new member is instantiated and the old
+  member is kicked.
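+The admission and tie-breaking rules above can be sketched as follows (assumed types, not a proposed implementation):
+
+```go
+package main
+
+import (
+    "fmt"
+    "sort"
+)
+
+type member struct {
+    addr  string
+    votes int64
+    seq   int // assigned at admission; lower means admitted earlier
+}
+
+// elect keeps the top maxMembers by votes, breaking ties in favor of the
+// lower sequence number so that existing members maintain their position.
+func elect(candidates []member, maxMembers int) []member {
+    sort.Slice(candidates, func(i, j int) bool {
+        if candidates[i].votes != candidates[j].votes {
+            return candidates[i].votes > candidates[j].votes
+        }
+        return candidates[i].seq < candidates[j].seq
+    })
+    if len(candidates) > maxMembers {
+        candidates = candidates[:maxMembers]
+    }
+    return candidates
+}
+
+func main() {
+    group := elect([]member{
+        {"carol", 40, 3},
+        {"john", 50, 1},
+        {"sally", 40, 2},
+    }, 2)
+    for _, m := range group {
+        fmt.Println(m.addr) // john, then sally (carol loses the tie on sequence number)
+    }
+}
+```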
+
+### Staking/Slashing
+
+All members of the dCERT group must stake tokens _specifically_ to maintain
+eligibility as a dCERT member. These tokens can be staked directly by the vying
+dCERT member or out of the good will of a 3rd party (who shall gain no on-chain
+benefits for doing so). This staking mechanism should use the existing global
+unbonding time of tokens staked for network validator security. A dCERT member
+can _only be_ a member if it has the required tokens staked under this
+mechanism. If those tokens are unbonded then the dCERT member must be
+automatically kicked from the group.
+
+Slashing of a particular dCERT member due to soft-contract breach should be
+performed by governance on a per member basis based on the magnitude of the
+breach. The process flow is anticipated to be that a dCERT member is suspended
+by the dCERT group prior to being slashed by governance.
+
+Membership suspension by the dCERT group takes place through a voting procedure
+by the dCERT group members. After this suspension has taken place, a governance
+proposal to slash the dCERT member must be submitted. If the proposal is not
+approved by the time the suspended member has completed unbonding their
+tokens, then the tokens are no longer staked and cannot be slashed.
+
+Additionally, in the case of an emergency situation involving a colluding and
+malicious dCERT group, the community needs the capability to disband the
+entire dCERT group and likely fully slash them. This could be achieved through
+a special new proposal type (implemented as a general governance proposal)
+which would halt the functionality of the dCERT group until the proposal was
+concluded. This special proposal type would likely also need a fairly large
+wager which could be slashed if the proposal creator was malicious. A large
+wager should be required because, as soon as the proposal is made, the
+capability of the dCERT group to halt message routes is temporarily
+suspended, meaning that a malicious actor who created such a proposal could
+then potentially exploit a bug during this period of time, with no dCERT group
+capable of shutting down the exploitable message routes.
+
+### dCERT membership transactions
+
+Active dCERT members can:
+
+* change the description of the dCERT group
+* circuit break a message route
+* vote to suspend a dCERT member
+
+Here circuit-breaking refers to the capability to disable a group of messages.
+This could, for instance, mean "disable all staking-delegation messages" or
+"disable all distribution messages". This could be accomplished by verifying
+that the message route has not been "circuit-broken" at CheckTx time (in
+`baseapp/baseapp.go`).
+
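The CheckTx-time verification could look roughly like the following. This is a minimal sketch, not the actual `baseapp` code; the route strings and function names are illustrative assumptions:

```go
package main

import "fmt"

// brokenRoutes tracks message routes disabled by the dCERT group.
var brokenRoutes = map[string]bool{}

// CircuitBreak disables a message route.
func CircuitBreak(route string) { brokenRoutes[route] = true }

// checkTxRoutes rejects a tx if any of its message routes is circuit-broken,
// mirroring the CheckTx-time verification described above.
func checkTxRoutes(msgRoutes []string) error {
	for _, r := range msgRoutes {
		if brokenRoutes[r] {
			return fmt.Errorf("message route %q is circuit-broken", r)
		}
	}
	return nil
}

func main() {
	CircuitBreak("staking/delegate")
	fmt.Println(checkTxRoutes([]string{"bank/send"}))        // <nil>
	fmt.Println(checkTxRoutes([]string{"staking/delegate"})) // rejected with an error
}
```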
+"Unbreaking" a circuit is anticipated to occur only during a hard-fork
+upgrade, meaning that no capability to unbreak a message route on a live chain
+is required.
+
+Note also that if there were a problem with governance voting (for instance, a
+capability to vote many times), then governance would be broken and should be
+halted with this mechanism. It would then be up to the validator set to
+coordinate a hard-fork upgrade to a patched version of the software where
+governance is re-enabled (and fixed). If the dCERT group abuses this privilege,
+they should all be severely slashed.
+
+## Status
+
+> Proposed
+
+## Consequences
+
+### Positive
+
+* Potential to reduce the number of parties to coordinate with during an emergency
+* Reduction in possibility of disclosing sensitive information to malicious parties
+
+### Negative
+
+* Centralization risks
+
+### Neutral
+
+## References
+
+* [Specialization Groups ADR](adr-007-specialization-groups.md)
diff --git a/copy-of-sdk-versioned_docs/version-0.47/build/architecture/adr-009-evidence-module.md b/copy-of-sdk-versioned_docs/version-0.47/build/architecture/adr-009-evidence-module.md
new file mode 100644
index 00000000..ded04a14
--- /dev/null
+++ b/copy-of-sdk-versioned_docs/version-0.47/build/architecture/adr-009-evidence-module.md
@@ -0,0 +1,182 @@
+# ADR 009: Evidence Module
+
+## Changelog
+
+* 2019 July 31: Initial draft
+* 2019 October 24: Initial implementation
+
+## Status
+
+Accepted
+
+## Context
+
+In order to support building highly secure, robust and interoperable blockchain
+applications, it is vital for the Cosmos SDK to expose a mechanism in which arbitrary
+evidence can be submitted, evaluated and verified resulting in some agreed upon
+penalty for any misbehavior committed by a validator, such as equivocation (double-voting),
+signing when unbonded, signing an incorrect state transition (in the future), etc.
+Furthermore, such a mechanism is paramount for any
+[IBC](https://github.com/cosmos/ics/blob/master/ibc/2_IBC_ARCHITECTURE.md) or
+cross-chain validation protocol implementation in order to support the ability
+for any misbehavior to be relayed back from a collateralized chain to a primary
+chain so that the equivocating validator(s) can be slashed.
+
+## Decision
+
+We will implement an evidence module in the Cosmos SDK supporting the following
+functionality:
+
+* Provide developers with the abstractions and interfaces necessary to define
+ custom evidence messages, message handlers, and methods to slash and penalize
+ accordingly for misbehavior.
+* Support the ability to route evidence messages to handlers in any module to
+ determine the validity of submitted misbehavior.
+* Support the ability, through governance, to modify slashing penalties of any
+ evidence type.
+* Querier implementation to support querying params, evidence types, and all
+  submitted valid misbehavior.
+
+### Types
+
+First, we define the `Evidence` interface type. The `x/evidence` module may implement
+its own types that can be used by many chains (e.g. `CounterFactualEvidence`).
+In addition, other modules may implement their own `Evidence` types in a similar
+manner in which governance is extensible. It is important to note any concrete
+type implementing the `Evidence` interface may include arbitrary fields such as
+an infraction time. We want the `Evidence` type to remain as flexible as possible.
+
+When submitting evidence to the `x/evidence` module, the concrete type must provide
+the validator's consensus address, which should be known by the `x/slashing`
+module (assuming the infraction is valid), the height at which the infraction
+occurred, and the validator's power at that height.
+
+```go
+type Evidence interface {
+ Route() string
+ Type() string
+ String() string
+ Hash() HexBytes
+ ValidateBasic() error
+
+ // The consensus address of the malicious validator at time of infraction
+ GetConsensusAddress() ConsAddress
+
+ // Height at which the infraction occurred
+ GetHeight() int64
+
+ // The total power of the malicious validator at time of infraction
+ GetValidatorPower() int64
+
+ // The total validator set power at time of infraction
+ GetTotalPower() int64
+}
+```
+
+### Routing & Handling
+
+Each `Evidence` type must map to a specific unique route and be registered with
+the `x/evidence` module. It accomplishes this through the `Router` implementation.
+
+```go
+type Router interface {
+ AddRoute(r string, h Handler) Router
+ HasRoute(r string) bool
+ GetRoute(path string) Handler
+ Seal()
+}
+```
+
+Upon successful routing through the `x/evidence` module, the `Evidence` type
+is passed through a `Handler`. This `Handler` is responsible for executing all
+corresponding business logic necessary for verifying the evidence as valid. In
+addition, the `Handler` may execute any necessary slashing and potential jailing.
+Since slashing fractions will typically result from some form of static function,
+allowing the `Handler` to do this provides the greatest flexibility. An example could
+be `k * evidence.GetValidatorPower()` where `k` is an on-chain parameter controlled
+by governance. The `Evidence` type should provide all the external information
+necessary in order for the `Handler` to make the necessary state transitions.
+If no error is returned, the `Evidence` is considered valid.
+
+```go
+type Handler func(Context, Evidence) error
+```
+
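A minimal sketch of such a static slash function follows. To avoid floating-point issues it expresses the governance-controlled parameter `k` in basis points; this representation is an assumption for illustration, not the module's actual API:

```go
package main

import "fmt"

// slashAmount computes k * validatorPower, where k is expressed in basis
// points (1/10,000ths) as a stand-in for an on-chain governance parameter.
func slashAmount(slashBps, validatorPower int64) int64 {
	return validatorPower * slashBps / 10_000
}

func main() {
	// 500 bps = 5%: slashing a validator with power 1000 yields 50.
	fmt.Println(slashAmount(500, 1000)) // 50
}
```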
+### Submission
+
+`Evidence` is submitted through a `MsgSubmitEvidence` message type which is internally
+handled by the `x/evidence` module's `SubmitEvidence`.
+
+```go
+type MsgSubmitEvidence struct {
+ Evidence
+}
+
+func handleMsgSubmitEvidence(ctx Context, keeper Keeper, msg MsgSubmitEvidence) Result {
+ if err := keeper.SubmitEvidence(ctx, msg.Evidence); err != nil {
+ return err.Result()
+ }
+
+ // emit events...
+
+ return Result{
+ // ...
+ }
+}
+```
+
+The `x/evidence` module's keeper is responsible for matching the `Evidence` against
+the module's router and invoking the corresponding `Handler` which may include
+slashing and jailing the validator. Upon success, the submitted evidence is persisted.
+
+```go
+func (k Keeper) SubmitEvidence(ctx Context, evidence Evidence) error {
+    handler := k.router.GetRoute(evidence.Route())
+    if err := handler(ctx, evidence); err != nil {
+        return ErrInvalidEvidence(k.codespace, err)
+    }
+
+    k.setEvidence(ctx, evidence)
+    return nil
+}
+```
+
+### Genesis
+
+Finally, we need to represent the genesis state of the `x/evidence` module. The
+module only needs a list of all submitted valid infractions and any params the
+module needs in order to handle submitted evidence. The `x/evidence` module
+will naturally define and route native evidence types, for which it will most
+likely need slashing penalty constants.
+
+```go
+type GenesisState struct {
+ Params Params
+ Infractions []Evidence
+}
+```
+
+## Consequences
+
+### Positive
+
+* Allows the state machine to process misbehavior submitted on-chain and penalize
+ validators based on agreed upon slashing parameters.
+* Allows evidence types to be defined and handled by any module. This further allows
+ slashing and jailing to be defined by more complex mechanisms.
+* Does not solely rely on Tendermint to submit evidence.
+
+### Negative
+
+* No easy way to introduce new evidence types through governance on a live chain
+ due to the inability to introduce the new evidence type's corresponding handler
+
+### Neutral
+
+* Should we persist infractions indefinitely? Or should we rather rely on events?
+
+## References
+
+* [ICS](https://github.com/cosmos/ics)
+* [IBC Architecture](https://github.com/cosmos/ics/blob/master/ibc/1_IBC_ARCHITECTURE.md)
+* [Tendermint Fork Accountability](https://github.com/tendermint/spec/blob/7b3138e69490f410768d9b1ffc7a17abc23ea397/spec/consensus/fork-accountability.md)
diff --git a/copy-of-sdk-versioned_docs/version-0.47/build/architecture/adr-010-modular-antehandler.md b/copy-of-sdk-versioned_docs/version-0.47/build/architecture/adr-010-modular-antehandler.md
new file mode 100644
index 00000000..386af1a7
--- /dev/null
+++ b/copy-of-sdk-versioned_docs/version-0.47/build/architecture/adr-010-modular-antehandler.md
@@ -0,0 +1,290 @@
+# ADR 010: Modular AnteHandler
+
+## Changelog
+
+* 2019 Aug 31: Initial draft
+* 2021 Sep 14: Superseded by ADR-045
+
+## Status
+
+SUPERSEDED by ADR-045
+
+## Context
+
+The current AnteHandler design allows users to either use the default AnteHandler provided in `x/auth` or to build their own AnteHandler from scratch. Ideally AnteHandler functionality is split into multiple, modular functions that can be chained together along with custom ante-functions so that users do not have to rewrite common antehandler logic when they want to implement custom behavior.
+
+For example, let's say a user wants to implement some custom signature verification logic. In the current codebase, the user would have to write their own Antehandler from scratch largely reimplementing much of the same code and then set their own custom, monolithic antehandler in the baseapp. Instead, we would like to allow users to specify custom behavior when necessary and combine them with default ante-handler functionality in a way that is as modular and flexible as possible.
+
+## Proposals
+
+### Per-Module AnteHandler
+
+One approach is to use the [ModuleManager](https://pkg.go.dev/github.com/cosmos/cosmos-sdk/types/module) and have each module implement its own antehandler if it requires custom antehandler logic. The ModuleManager can then be passed in an AnteHandler order in the same way it has an order for BeginBlockers and EndBlockers. The ModuleManager returns a single AnteHandler function that will take in a tx and run each module's `AnteHandle` in the specified order. The module manager's AnteHandler is set as the baseapp's AnteHandler.
+
+Pros:
+
+1. Simple to implement
+2. Utilizes the existing ModuleManager architecture
+
+Cons:
+
+1. Improves granularity but still cannot get more granular than a per-module basis. e.g. If auth's `AnteHandle` function is in charge of validating memo and signatures, users cannot swap the signature-checking functionality while keeping the rest of auth's `AnteHandle` functionality.
+2. Module AnteHandlers are run one after the other. There is no way for one AnteHandler to wrap or "decorate" another.
+
+### Decorator Pattern
+
+The [weave project](https://github.com/iov-one/weave) achieves AnteHandler modularity through the use of a decorator pattern. The interface is designed as follows:
+
+```go
+// Decorator wraps a Handler to provide common functionality
+// like authentication, or fee-handling, to many Handlers
+type Decorator interface {
+ Check(ctx Context, store KVStore, tx Tx, next Checker) (*CheckResult, error)
+ Deliver(ctx Context, store KVStore, tx Tx, next Deliverer) (*DeliverResult, error)
+}
+```
+
+Each decorator works like a modularized Cosmos SDK antehandler function, but it can take in a `next` argument that may be another decorator or a Handler (which does not take in a next argument). These decorators can be chained together, one decorator being passed in as the `next` argument of the previous decorator in the chain. The chain ends in a Router which can take a tx and route to the appropriate msg handler.
+
+A key benefit of this approach is that one Decorator can wrap its internal logic around the next Checker/Deliverer. A weave Decorator may do the following:
+
+```go
+// Example Decorator's Deliver function
+func (example Decorator) Deliver(ctx Context, store KVStore, tx Tx, next Deliverer) {
+ // Do some pre-processing logic
+
+ res, err := next.Deliver(ctx, store, tx)
+
+ // Do some post-processing logic given the result and error
+}
+```
+
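The chaining itself, each decorator's `next` being the remainder of the chain, terminating in a handler such as the message router, can be sketched in simplified form. The types below are deliberately pared down (strings stand in for weave's ctx/store/tx) and are assumptions for illustration only:

```go
package main

import "fmt"

// Handler is a pared-down stand-in for weave's terminal handler.
type Handler func(tx string) (string, error)

// Decorator wraps the next Handler, pre- and post-processing around it.
type Decorator func(tx string, next Handler) (string, error)

// chain links decorators so that each one's `next` argument is the rest of
// the chain, ending in the terminal handler (e.g. the message router).
func chain(terminal Handler, decs ...Decorator) Handler {
	h := terminal
	for i := len(decs) - 1; i >= 0; i-- {
		d, inner := decs[i], h
		h = func(tx string) (string, error) { return d(tx, inner) }
	}
	return h
}

func main() {
	logging := func(tx string, next Handler) (string, error) {
		res, err := next(tx) // pre-processing would go before, post-processing after
		return "logged:" + res, err
	}
	router := func(tx string) (string, error) { return "handled:" + tx, nil }

	h := chain(router, logging)
	res, _ := h("tx1")
	fmt.Println(res) // logged:handled:tx1
}
```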
+Pros:
+
+1. Weave Decorators can wrap over the next decorator/handler in the chain. The ability to both pre-process and post-process may be useful in certain settings.
+2. Provides a nested modular structure that isn't possible in the solution above, while also allowing for a linear one-after-the-other structure like the solution above.
+
+Cons:
+
+1. It is hard to understand at first glance the state updates that would occur after a Decorator runs given the `ctx`, `store`, and `tx`. A Decorator can have an arbitrary number of nested Decorators being called within its function body, each possibly doing some pre- and post-processing before calling the next decorator on the chain. Thus to understand what a Decorator is doing, one must also understand what every other decorator further along the chain is also doing. This can get quite complicated to understand. A linear, one-after-the-other approach while less powerful, may be much easier to reason about.
+
+### Chained Micro-Functions
+
+The benefit of Weave's approach is that the Decorators can be very concise, which when chained together allows for maximum customizability. However, the nested structure can get quite complex and thus hard to reason about.
+
+Another approach is to split the AnteHandler functionality into tightly scoped "micro-functions", while preserving the one-after-the-other ordering that would come from the ModuleManager approach.
+
+We can then have a way to chain these micro-functions so that they run one after the other. Modules may define multiple ante micro-functions and then also provide a default per-module AnteHandler that implements a default, suggested order for these micro-functions.
+
+Users can order the AnteHandlers easily by simply using the ModuleManager. The ModuleManager will take in a list of AnteHandlers and return a single AnteHandler that runs each AnteHandler in the order of the list provided. If the user is comfortable with the default ordering of each module, this is as simple as providing a list with each module's antehandler (exactly the same as BeginBlocker and EndBlocker).
+
+If, however, users wish to change the order or add, modify, or delete ante micro-functions in any way, they can always define their own ante micro-functions and add them explicitly to the list that gets passed into the module manager.
+
+#### Default Workflow
+
+This is an example of a user's AnteHandler if they choose not to make any custom micro-functions.
+
+##### Cosmos SDK code
+
+```go
+// Chains together a list of AnteHandler micro-functions that get run one after the other.
+// Returned AnteHandler will abort on first error.
+func Chainer(order []AnteHandler) AnteHandler {
+    return func(ctx Context, tx Tx, simulate bool) (newCtx Context, err error) {
+        for _, ante := range order {
+            // assign with `=` (not `:=`) so the updated ctx carries into the next iteration
+            ctx, err = ante(ctx, tx, simulate)
+            if err != nil {
+                return ctx, err
+            }
+        }
+        return ctx, nil
+    }
+}
+```
+
+```go
+// AnteHandler micro-function to verify signatures
+func VerifySignatures(ctx Context, tx Tx, simulate bool) (newCtx Context, err error) {
+ // verify signatures
+ // Returns InvalidSignature Result and abort=true if sigs invalid
+ // Return OK result and abort=false if sigs are valid
+}
+
+// AnteHandler micro-function to validate memo
+func ValidateMemo(ctx Context, tx Tx, simulate bool) (newCtx Context, err error) {
+ // validate memo
+}
+
+// Auth defines its own default ante-handler by chaining its micro-functions in a recommended order
+AuthModuleAnteHandler := Chainer([]AnteHandler{VerifySignatures, ValidateMemo})
+```
+
+```go
+// Distribution micro-function to deduct fees from tx
+func DeductFees(ctx Context, tx Tx, simulate bool) (newCtx Context, err error) {
+ // Deduct fees from tx
+ // Abort if insufficient funds in account to pay for fees
+}
+
+// Distribution micro-function to check if fees > mempool parameter
+func CheckMempoolFees(ctx Context, tx Tx, simulate bool) (newCtx Context, err error) {
+ // If CheckTx: Abort if the fees are less than the mempool's minFee parameter
+}
+
+// Distribution defines its own default ante-handler by chaining its micro-functions in a recommended order
+DistrModuleAnteHandler := Chainer([]AnteHandler{CheckMempoolFees, DeductFees})
+```
+
+```go
+type ModuleManager struct {
+ // other fields
+ AnteHandlerOrder []AnteHandler
+}
+
+func (mm ModuleManager) GetAnteHandler() AnteHandler {
+    return Chainer(mm.AnteHandlerOrder)
+}
+```
+
+##### User Code
+
+```go
+// Note: Since user is not making any custom modifications, we can just SetAnteHandlerOrder with the default AnteHandlers provided by each module in our preferred order
+moduleManager.SetAnteHandlerOrder([]AnteHandler{AuthModuleAnteHandler, DistrModuleAnteHandler})
+
+app.SetAnteHandler(mm.GetAnteHandler())
+```
+
+#### Custom Workflow
+
+This is an example workflow for a user that wants to implement custom antehandler logic. In this example, the user wants to implement custom signature verification and change the order of antehandler so that validate memo runs before signature verification.
+
+##### User Code
+
+```go
+// User can implement their own custom signature verification antehandler micro-function
+func CustomSigVerify(ctx Context, tx Tx, simulate bool) (newCtx Context, err error) {
+ // do some custom signature verification logic
+}
+```
+
+```go
+// Micro-functions allow users to change order of when they get executed, and swap out default ante-functionality with their own custom logic.
+// Note that users can still chain the default distribution module handler, and auth micro-function along with their custom ante function
+moduleManager.SetAnteHandlerOrder([]AnteHandler{ValidateMemo, CustomSigVerify, DistrModuleAnteHandler})
+```
+
+Pros:
+
+1. Allows for ante functionality to be as modular as possible.
+2. For users that do not need custom ante-functionality, there is little difference between how antehandlers work and how BeginBlock and EndBlock work in ModuleManager.
+3. Still easy to understand
+
+Cons:
+
+1. Cannot wrap antehandlers with decorators like you can with Weave.
+
+### Simple Decorators
+
+This approach takes inspiration from Weave's decorator design while trying to minimize the number of breaking changes to the Cosmos SDK and maximizing simplicity. Like Weave decorators, this approach allows one `AnteDecorator` to wrap the next AnteHandler to do pre- and post-processing on the result. This is useful since decorators can do defer/cleanups after an AnteHandler returns as well as perform some setup beforehand. Unlike Weave decorators, these `AnteDecorator` functions can only wrap over the AnteHandler rather than the entire handler execution path. This is deliberate as we want decorators from different modules to perform authentication/validation on a `tx`. However, we do not want decorators being capable of wrapping and modifying the results of a `MsgHandler`.
+
+In addition, this approach will not break any core Cosmos SDK API's. Since we preserve the notion of an AnteHandler and still set a single AnteHandler in baseapp, the decorator is simply an additional approach available for users that desire more customization. The API of modules (namely `x/auth`) may break with this approach, but the core API remains untouched.
+
+Allow Decorator interface that can be chained together to create a Cosmos SDK AnteHandler.
+
+This allows users to choose between implementing an AnteHandler by themselves and setting it in the baseapp, or use the decorator pattern to chain their custom decorators with the Cosmos SDK provided decorators in the order they wish.
+
+```go
+// An AnteDecorator wraps an AnteHandler, and can do pre- and post-processing on the next AnteHandler
+type AnteDecorator interface {
+ AnteHandle(ctx Context, tx Tx, simulate bool, next AnteHandler) (newCtx Context, err error)
+}
+```
+
+```go
+// ChainAnteDecorators will recursively link all of the AnteDecorators in the chain and return a final AnteHandler function
+// This is done to preserve the ability to set a single AnteHandler function in the baseapp.
+func ChainAnteDecorators(chain ...AnteDecorator) AnteHandler {
+    if len(chain) == 1 {
+        return func(ctx Context, tx Tx, simulate bool) (Context, error) {
+            return chain[0].AnteHandle(ctx, tx, simulate, nil)
+        }
+    }
+    return func(ctx Context, tx Tx, simulate bool) (Context, error) {
+        return chain[0].AnteHandle(ctx, tx, simulate, ChainAnteDecorators(chain[1:]...))
+    }
+}
+```
+
+#### Example Code
+
+Define AnteDecorator functions
+
+```go
+// Setup GasMeter, catch OutOfGasPanic and handle appropriately
+type SetUpContextDecorator struct{}
+
+func (sud SetUpContextDecorator) AnteHandle(ctx Context, tx Tx, simulate bool, next AnteHandler) (newCtx Context, err error) {
+    ctx.GasMeter = NewGasMeter(tx.Gas)
+
+    defer func() {
+        // recover from OutOfGas panic and handle appropriately
+    }()
+
+    return next(ctx, tx, simulate)
+}
+
+// Signature Verification decorator. Verify Signatures and move on
+type SigVerifyDecorator struct{}
+
+func (svd SigVerifyDecorator) AnteHandle(ctx Context, tx Tx, simulate bool, next AnteHandler) (newCtx Context, err error) {
+ // verify sigs. Return error if invalid
+
+ // call next antehandler if sigs ok
+ return next(ctx, tx, simulate)
+}
+
+// User-defined Decorator. Can choose to pre- and post-process on AnteHandler
+type UserDefinedDecorator struct{
+ // custom fields
+}
+
+func (udd UserDefinedDecorator) AnteHandle(ctx Context, tx Tx, simulate bool, next AnteHandler) (newCtx Context, err error) {
+    // pre-processing logic
+
+    newCtx, err = next(ctx, tx, simulate)
+
+    // post-processing logic
+
+    return newCtx, err
+}
+```
+
+Link AnteDecorators to create a final AnteHandler. Set this AnteHandler in baseapp.
+
+```go
+// Create final antehandler by chaining the decorators together
+antehandler := ChainAnteDecorators(NewSetUpContextDecorator(), NewSigVerifyDecorator(), NewUserDefinedDecorator())
+
+// Set chained Antehandler in the baseapp
+bapp.SetAnteHandler(antehandler)
+```
+
+Pros:
+
+1. Allows one decorator to pre- and post-process the next AnteHandler, similar to the Weave design.
+2. Do not need to break baseapp API. Users can still set a single AnteHandler if they choose.
+
+Cons:
+
+1. The decorator pattern may produce a deeply nested structure that is hard to understand; this is mitigated by having the decorator order explicitly listed in the `ChainAnteDecorators` function.
+2. Does not make use of the ModuleManager design. Since this is already being used for BeginBlocker/EndBlocker, this proposal seems unaligned with that design pattern.
+
+## Consequences
+
+Since pros and cons are written for each approach, they are omitted from this section.
+
+## References
+
+* [#4572](https://github.com/cosmos/cosmos-sdk/issues/4572): Modular AnteHandler Issue
+* [#4582](https://github.com/cosmos/cosmos-sdk/pull/4583): Initial Implementation of Per-Module AnteHandler Approach
+* [Weave Decorator Code](https://github.com/iov-one/weave/blob/master/handler.go#L35)
+* [Weave Design Videos](https://vimeo.com/showcase/6189877)
diff --git a/copy-of-sdk-versioned_docs/version-0.47/build/architecture/adr-011-generalize-genesis-accounts.md b/copy-of-sdk-versioned_docs/version-0.47/build/architecture/adr-011-generalize-genesis-accounts.md
new file mode 100644
index 00000000..92a704ba
--- /dev/null
+++ b/copy-of-sdk-versioned_docs/version-0.47/build/architecture/adr-011-generalize-genesis-accounts.md
@@ -0,0 +1,170 @@
+# ADR 011: Generalize Genesis Accounts
+
+## Changelog
+
+* 2019-08-30: initial draft
+
+## Context
+
+Currently, the Cosmos SDK allows for custom account types; the `auth` keeper stores any type fulfilling its `Account` interface. However, `auth` does not handle exporting or loading accounts to/from a genesis file; this is done by `genaccounts`, which only handles four concrete account types (`BaseAccount`, `ContinuousVestingAccount`, `DelayedVestingAccount` and `ModuleAccount`).
+
+Projects desiring to use custom accounts (say custom vesting accounts) need to fork and modify `genaccounts`.
+
+## Decision
+
+In summary, we will (un)marshal all accounts (interface types) directly using amino, rather than converting to `genaccounts`’s `GenesisAccount` type. Since doing this removes the majority of `genaccounts`'s code, we will merge `genaccounts` into `auth`. Marshalled accounts will be stored in `auth`'s genesis state.
+
+Detailed changes:
+
+### 1) (Un)Marshal accounts directly using amino
+
+The `auth` module's `GenesisState` gains a new field `Accounts`. Note these aren't of type `exported.Account` for reasons outlined in section 3.
+
+```go
+// GenesisState - all auth state that must be provided at genesis
+type GenesisState struct {
+ Params Params `json:"params" yaml:"params"`
+ Accounts []GenesisAccount `json:"accounts" yaml:"accounts"`
+}
+```
+
+Now `auth`'s `InitGenesis` and `ExportGenesis` (un)marshal accounts as well as the defined params.
+
+```go
+// InitGenesis - Init store state from genesis data
+func InitGenesis(ctx sdk.Context, ak AccountKeeper, data GenesisState) {
+ ak.SetParams(ctx, data.Params)
+ // load the accounts
+ for _, a := range data.Accounts {
+ acc := ak.NewAccount(ctx, a) // set account number
+ ak.SetAccount(ctx, acc)
+ }
+}
+
+// ExportGenesis returns a GenesisState for a given context and keeper
+func ExportGenesis(ctx sdk.Context, ak AccountKeeper) GenesisState {
+ params := ak.GetParams(ctx)
+
+ var genAccounts []exported.GenesisAccount
+ ak.IterateAccounts(ctx, func(account exported.Account) bool {
+ genAccount := account.(exported.GenesisAccount)
+ genAccounts = append(genAccounts, genAccount)
+ return false
+ })
+
+ return NewGenesisState(params, genAccounts)
+}
+```
+
+### 2) Register custom account types on the `auth` codec
+
+The `auth` codec must have all custom account types registered to marshal them. We will follow the pattern established in `gov` for proposals.
+
+An example custom account definition:
+
+```go
+import authtypes "github.com/cosmos/cosmos-sdk/x/auth/types"
+
+// Register the module account type with the auth module codec so it can decode module accounts stored in a genesis file
+func init() {
+ authtypes.RegisterAccountTypeCodec(ModuleAccount{}, "cosmos-sdk/ModuleAccount")
+}
+
+type ModuleAccount struct {
+ ...
+```
+
+The `auth` codec definition:
+
+```go
+var ModuleCdc *codec.LegacyAmino
+
+func init() {
+ ModuleCdc = codec.NewLegacyAmino()
+ // register module msg's and Account interface
+ ...
+ // leave the codec unsealed
+}
+
+// RegisterAccountTypeCodec registers an external account type defined in another module for the internal ModuleCdc.
+func RegisterAccountTypeCodec(o interface{}, name string) {
+ ModuleCdc.RegisterConcrete(o, name, nil)
+}
+```
+
+### 3) Genesis validation for custom account types
+
+Modules implement a `ValidateGenesis` method. As `auth` does not know of account implementations, accounts will need to validate themselves.
+
+We will unmarshal accounts into a `GenesisAccount` interface that includes a `Validate` method.
+
+```go
+type GenesisAccount interface {
+ exported.Account
+ Validate() error
+}
+```
+
+Then the `auth` `ValidateGenesis` function becomes:
+
+```go
+// ValidateGenesis performs basic validation of auth genesis data returning an
+// error for any failed validation criteria.
+func ValidateGenesis(data GenesisState) error {
+ // Validate params
+ ...
+
+ // Validate accounts
+ addrMap := make(map[string]bool, len(data.Accounts))
+ for _, acc := range data.Accounts {
+
+ // check for duplicated accounts
+ addrStr := acc.GetAddress().String()
+ if _, ok := addrMap[addrStr]; ok {
+ return fmt.Errorf("duplicate account found in genesis state; address: %s", addrStr)
+ }
+ addrMap[addrStr] = true
+
+ // check account specific validation
+ if err := acc.Validate(); err != nil {
+ return fmt.Errorf("invalid account found in genesis state; address: %s, error: %s", addrStr, err.Error())
+ }
+
+ }
+ return nil
+}
+```
+
+### 4) Move add-genesis-account cli to `auth`
+
+The `genaccounts` module contains a cli command to add base or vesting accounts to a genesis file.
+
+This will be moved to `auth`. We will leave it to projects to write their own commands to add custom accounts. An extensible cli handler, similar to `gov`, could be created but it is not worth the complexity for this minor use case.
+
+### 5) Update module and vesting accounts
+
+Under the new scheme, module and vesting account types need some minor updates:
+
+* Type registration on `auth`'s codec (shown above)
+* A `Validate` method for each `Account` concrete type
+
+## Status
+
+Proposed
+
+## Consequences
+
+### Positive
+
+* custom accounts can be used without needing to fork `genaccounts`
+* reduction in lines of code
+
+### Negative
+
+### Neutral
+
+* `genaccounts` module no longer exists
+* accounts in genesis files are stored under `accounts` in `auth` rather than in the `genaccounts` module.
+* `add-genesis-account` cli command is now in `auth`
+
+## References
diff --git a/copy-of-sdk-versioned_docs/version-0.47/build/architecture/adr-012-state-accessors.md b/copy-of-sdk-versioned_docs/version-0.47/build/architecture/adr-012-state-accessors.md
new file mode 100644
index 00000000..93600000
--- /dev/null
+++ b/copy-of-sdk-versioned_docs/version-0.47/build/architecture/adr-012-state-accessors.md
@@ -0,0 +1,155 @@
+# ADR 012: State Accessors
+
+## Changelog
+
+* 2019 Sep 04: Initial draft
+
+## Context
+
+Cosmos SDK modules currently use the `KVStore` interface and `Codec` to access their respective state. While
+this provides a large degree of freedom to module developers, it is hard to modularize and the UX is
+mediocre.
+
+First, each time a module tries to access the state, it has to marshal the value and set or get the
+value and finally unmarshal. Usually this is done by declaring `Keeper.GetXXX` and `Keeper.SetXXX` functions,
+which are repetitive and hard to maintain.
+
+Second, this makes it harder to align with the object capability theorem: the right to access the
+state is defined as a `StoreKey`, which gives full access on the entire Merkle tree, so a module cannot
+send the access right to a specific key-value pair (or a set of key-value pairs) to another module safely.
+
+Finally, because the getter/setter functions are defined as methods of a module's `Keeper`, reviewers
+have to consider the whole Merkle tree space when reviewing a function that accesses any part of the state.
+There is no static way to know which part of the state the function is accessing (and which it is not).
+
+## Decision
+
+We will define a type named `Value`:
+
+```go
+type Value struct {
+ m Mapping
+ key []byte
+}
+```
+
+The `Value` works as a reference for a key-value pair in the state, where `Value.m` defines the key-value
+space it will access and `Value.key` defines the exact key for the reference.
+
+We will define a type named `Mapping`:
+
+```go
+type Mapping struct {
+ storeKey sdk.StoreKey
+ cdc *codec.LegacyAmino
+ prefix []byte
+}
+```
+
+The `Mapping` works as a reference for a key-value space in the state, where `Mapping.storeKey` defines
+the IAVL (sub-)tree and `Mapping.prefix` defines the optional subspace prefix.
+
+We will define the following core methods for the `Value` type:
+
+```go
+// Get and unmarshal stored data, noop if not exists, panic if cannot unmarshal
+func (Value) Get(ctx Context, ptr interface{}) {}
+
+// Get and unmarshal stored data, return error if not exists or cannot unmarshal
+func (Value) GetSafe(ctx Context, ptr interface{}) {}
+
+// Get stored data as raw byte slice
+func (Value) GetRaw(ctx Context) []byte {}
+
+// Marshal and set a raw value
+func (Value) Set(ctx Context, o interface{}) {}
+
+// Check if a raw value exists
+func (Value) Exists(ctx Context) bool {}
+
+// Delete a raw value
+func (Value) Delete(ctx Context) {}
+```
+
+We will define the following core methods for the `Mapping` type:
+
+```go
+// Constructs key-value pair reference corresponding to the key argument in the Mapping space
+func (Mapping) Value(key []byte) Value {}
+
+// Get and unmarshal stored data, noop if not exists, panic if cannot unmarshal
+func (Mapping) Get(ctx Context, key []byte, ptr interface{}) {}
+
+// Get and unmarshal stored data, return error if not exists or cannot unmarshal
+func (Mapping) GetSafe(ctx Context, key []byte, ptr interface{}) {}
+
+// Get stored data as raw byte slice
+func (Mapping) GetRaw(ctx Context, key []byte) []byte {}
+
+// Marshal and set a raw value
+func (Mapping) Set(ctx Context, key []byte, o interface{}) {}
+
+// Check if a raw value exists
+func (Mapping) Has(ctx Context, key []byte) bool {}
+
+// Delete a raw value
+func (Mapping) Delete(ctx Context, key []byte) {}
+```
+
+Each method of the `Mapping` type that is passed the arguments `ctx`, `key`, and `args...` will proxy
+the call to `Mapping.Value(key)` with arguments `ctx` and `args...`.
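+A minimal sketch of this proxying, using a toy in-memory store and `encoding/json` as a stand-in codec (the `Context`, store, and codec here are illustrative, not the SDK types):
+
+```go
+package main
+
+import (
+	"encoding/json"
+	"fmt"
+)
+
+// Context is a toy stand-in for sdk.Context, holding a raw byte store.
+type Context struct{ store map[string][]byte }
+
+// Mapping references a key-value subspace via a prefix.
+type Mapping struct{ prefix []byte }
+
+// Value references a single key-value pair inside a Mapping.
+type Value struct {
+	m   Mapping
+	key []byte
+}
+
+func (m Mapping) Value(key []byte) Value { return Value{m: m, key: key} }
+
+func (v Value) rawKey() string { return string(v.m.prefix) + string(v.key) }
+
+// Set marshals o and stores it under the referenced key.
+func (v Value) Set(ctx Context, o interface{}) {
+	bz, err := json.Marshal(o)
+	if err != nil {
+		panic(err)
+	}
+	ctx.store[v.rawKey()] = bz
+}
+
+// Get unmarshals the stored data into ptr; no-op if the key does not exist.
+func (v Value) Get(ctx Context, ptr interface{}) {
+	bz, ok := ctx.store[v.rawKey()]
+	if !ok {
+		return
+	}
+	if err := json.Unmarshal(bz, ptr); err != nil {
+		panic(err)
+	}
+}
+
+// The Mapping methods simply proxy to the Value built from the key.
+func (m Mapping) Set(ctx Context, key []byte, o interface{})   { m.Value(key).Set(ctx, o) }
+func (m Mapping) Get(ctx Context, key []byte, ptr interface{}) { m.Value(key).Get(ctx, ptr) }
+
+func main() {
+	ctx := Context{store: map[string][]byte{}}
+	balances := Mapping{prefix: []byte("balances/")}
+
+	balances.Set(ctx, []byte("alice"), 100)
+
+	var amt int
+	balances.Get(ctx, []byte("alice"), &amt)
+	fmt.Println(amt) // 100
+}
+```
+
+Because a `Value` can be handed to another module on its own, access can be granted per key-value pair rather than per store, which is the object-capability motivation above.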
+
+In addition, we will define and provide a common set of types derived from the `Value` type:
+
+```go
+type Boolean struct { Value }
+type Enum struct { Value }
+type Integer struct { Value; enc IntEncoding }
+type String struct { Value }
+// ...
+```
+
+Where the encoding schemes can differ, the `o` arguments in the core methods are typed, and the `ptr`
+arguments in the core methods are replaced by explicit return types.
+
+Finally, we will define a family of types derived from the `Mapping` type:
+
+```go
+type Indexer struct {
+ m Mapping
+ enc IntEncoding
+}
+```
+
+Where the `key` argument in the core methods is typed.
+
+Some of the properties of the accessor types are:
+
+* State access happens only when a function that takes a `Context` as an argument is invoked
+* Accessor type structs grant access only to the part of the state that the struct refers to, and nothing else
+* Marshalling/unmarshalling happens implicitly within the core methods
+
+## Status
+
+Proposed
+
+## Consequences
+
+### Positive
+
+* Serialization will be done automatically
+* Shorter code size, less boilerplate, better UX
+* References to the state can be transferred safely
+* Explicit scope of accessing
+
+### Negative
+
+* Serialization format will be hidden
+* Different architecture from the current, but the use of accessor types can be opt-in
+* Type-specific types (e.g. `Boolean` and `Integer`) have to be defined manually
+
+### Neutral
+
+## References
+
+* [#4554](https://github.com/cosmos/cosmos-sdk/issues/4554)
diff --git a/copy-of-sdk-versioned_docs/version-0.47/build/architecture/adr-013-metrics.md b/copy-of-sdk-versioned_docs/version-0.47/build/architecture/adr-013-metrics.md
new file mode 100644
index 00000000..33849b56
--- /dev/null
+++ b/copy-of-sdk-versioned_docs/version-0.47/build/architecture/adr-013-metrics.md
@@ -0,0 +1,157 @@
+# ADR 013: Observability
+
+## Changelog
+
+* 20-01-2020: Initial Draft
+
+## Status
+
+Proposed
+
+## Context
+
+Telemetry is paramount to debugging and understanding what the application is doing and how it is
+performing. We aim to expose metrics from modules and other core parts of the Cosmos SDK.
+
+In addition, we should aim to support multiple configurable sinks that an operator may choose from.
+By default, when telemetry is enabled, the application should track and expose metrics that are
+stored in-memory. The operator may choose to enable additional sinks, where we support only
+[Prometheus](https://prometheus.io/) for now, as it's battle-tested, simple to set up, open source,
+and is rich with ecosystem tooling.
+
+We must also aim to integrate metrics into the Cosmos SDK in the most seamless way possible such that
+metrics may be added or removed at will and without much friction. To do this, we will use the
+[go-metrics](https://github.com/armon/go-metrics) library.
+
+Finally, operators may enable telemetry along with specific configuration options. If enabled, metrics
+will be exposed via `/metrics?format={text|prometheus}` via the API server.
+
+## Decision
+
+We will add an additional configuration block to `app.toml` that defines telemetry settings:
+
+```toml
+###############################################################################
+### Telemetry Configuration ###
+###############################################################################
+
+[telemetry]
+
+# Prefixed with keys to separate services
+service-name = {{ .Telemetry.ServiceName }}
+
+# Enabled enables the application telemetry functionality. When enabled,
+# an in-memory sink is also enabled by default. Operators may also enable
+# other sinks such as Prometheus.
+enabled = {{ .Telemetry.Enabled }}
+
+# Enable prefixing gauge values with hostname
+enable-hostname = {{ .Telemetry.EnableHostname }}
+
+# Enable adding hostname to labels
+enable-hostname-label = {{ .Telemetry.EnableHostnameLabel }}
+
+# Enable adding service to labels
+enable-service-label = {{ .Telemetry.EnableServiceLabel }}
+
+# PrometheusRetentionTime, when positive, enables a Prometheus metrics sink.
+prometheus-retention-time = {{ .Telemetry.PrometheusRetentionTime }}
+```
+
+The given configuration allows for two sinks -- in-memory and Prometheus. We create a `Metrics`
+type that performs all the bootstrapping for the operator, so capturing metrics becomes seamless.
+
+```go
+// Metrics defines a wrapper around application telemetry functionality. It allows
+// metrics to be gathered at any point in time. When creating a Metrics object,
+// internally, a global metrics is registered with a set of sinks as configured
+// by the operator. In addition to the sinks, when a process gets a SIGUSR1, a
+// dump of formatted recent metrics will be sent to STDERR.
+type Metrics struct {
+ memSink *metrics.InmemSink
+ prometheusEnabled bool
+}
+
+// Gather collects all registered metrics and returns a GatherResponse where the
+// metrics are encoded depending on the type. Metrics are either encoded via
+// Prometheus or JSON if in-memory.
+func (m *Metrics) Gather(format string) (GatherResponse, error) {
+ switch format {
+ case FormatPrometheus:
+ return m.gatherPrometheus()
+
+ case FormatText:
+ return m.gatherGeneric()
+
+ case FormatDefault:
+ return m.gatherGeneric()
+
+ default:
+ return GatherResponse{}, fmt.Errorf("unsupported metrics format: %s", format)
+ }
+}
+```
+
+In addition, `Metrics` allows us to gather the current set of metrics at any given point in time. An
+operator may also choose to send a signal, SIGUSR1, to dump and print formatted metrics to STDERR.
+
+During an application's bootstrapping and construction phase, if `Telemetry.Enabled` is `true`, the
+API server will create a `Metrics` instance and register a metrics handler accordingly.
+
+```go
+func (s *Server) Start(cfg config.Config) error {
+ // ...
+
+ if cfg.Telemetry.Enabled {
+ m, err := telemetry.New(cfg.Telemetry)
+ if err != nil {
+ return err
+ }
+
+ s.metrics = m
+ s.registerMetrics()
+ }
+
+ // ...
+}
+
+func (s *Server) registerMetrics() {
+ metricsHandler := func(w http.ResponseWriter, r *http.Request) {
+ format := strings.TrimSpace(r.FormValue("format"))
+
+ gr, err := s.metrics.Gather(format)
+ if err != nil {
+ rest.WriteErrorResponse(w, http.StatusBadRequest, fmt.Sprintf("failed to gather metrics: %s", err))
+ return
+ }
+
+ w.Header().Set("Content-Type", gr.ContentType)
+ _, _ = w.Write(gr.Metrics)
+ }
+
+ s.Router.HandleFunc("/metrics", metricsHandler).Methods("GET")
+}
+```
+
+Application developers may track counters, gauges, summaries, and key/value metrics. There is no
+additional lifting required by modules to leverage profiling metrics. To do so, it's as simple as:
+
+```go
+func (k BaseKeeper) MintCoins(ctx sdk.Context, moduleName string, amt sdk.Coins) error {
+ defer metrics.MeasureSince(time.Now(), "MintCoins")
+ // ...
+}
+```
+
+## Consequences
+
+### Positive
+
+* Exposure into the performance and behavior of an application
+
+### Negative
+
+### Neutral
+
+## References
diff --git a/copy-of-sdk-versioned_docs/version-0.47/build/architecture/adr-014-proportional-slashing.md b/copy-of-sdk-versioned_docs/version-0.47/build/architecture/adr-014-proportional-slashing.md
new file mode 100644
index 00000000..63cd04de
--- /dev/null
+++ b/copy-of-sdk-versioned_docs/version-0.47/build/architecture/adr-014-proportional-slashing.md
@@ -0,0 +1,85 @@
+# ADR 14: Proportional Slashing
+
+## Changelog
+
+* 2019-10-15: Initial draft
+* 2020-05-25: Removed correlation root slashing
+* 2020-07-01: Updated to include S-curve function instead of linear
+
+## Context
+
+In Proof of Stake-based chains, centralization of consensus power amongst a small set of validators can cause harm to the network due to increased risk of censorship, liveness failure, fork attacks, etc. However, while this centralization causes a negative externality to the network, it is not directly felt by the delegators who contribute to already-large validators. We would like a way to pass the negative externality cost of centralization on to those large validators and their delegators.
+
+## Decision
+
+### Design
+
+To solve this problem, we will implement a procedure called Proportional Slashing. The desire is that the larger a validator is, the more they should be slashed. The first naive attempt is to make a validator's slash percent proportional to their share of consensus voting power.
+
+```text
+slash_amount = k * power // power is the faulting validator's voting power and k is some on-chain constant
+```
+
+However, this will incentivize validators with large amounts of stake to split up their voting power amongst accounts (sybil attack), so that if they fault, they all get slashed at a lower percent. The solution to this is to take into account not just a validator's own voting percentage, but also the voting percentage of all the other validators who get slashed in a specified time frame.
+
+```text
+slash_amount = k * (power_1 + power_2 + ... + power_n) // where power_i is the voting power of the ith validator faulting in the specified time frame and k is some on-chain constant
+```
+
+Now, if someone splits a validator of 10% into two validators of 5% each and both fault in the same time frame, they both get slashed at the summed 10% rate.
+
+However, in practice we likely don't want a linear relation between the amount of stake at fault and the percentage of stake to slash. In particular, 5% of stake double signing does relatively little to threaten security, whereas 30% of stake being at fault clearly merits a large slashing factor, due to being very close to the point at which Tendermint security is threatened. A linear relation would require a factor of 6 gap between these two, whereas the difference in risk posed to the network is much larger. We propose using S-curves (formally, [logistic functions](https://en.wikipedia.org/wiki/Logistic_function)) to solve this. S-curves capture the desired criterion quite well. They allow the slashing factor to be minimal for small values, and then grow very rapidly near some threshold point where the risk posed becomes notable.
+
+#### Parameterization
+
+This requires parameterizing a logistic function. It is very well understood how to parameterize this. It has four parameters:
+
+1) A minimum slashing factor
+2) A maximum slashing factor
+3) The inflection point of the S-curve (essentially where do you want to center the S)
+4) The rate of growth of the S-curve (How elongated is the S)
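+As a hedged sketch (parameter names and values here are illustrative, not from the SDK), such a logistic parameterization could look like:
+
+```go
+package main
+
+import (
+	"fmt"
+	"math"
+)
+
+// slashFactor maps the total faulting voting power to a slashing factor
+// using a logistic (S-curve) function with the four parameters above.
+func slashFactor(faultyPower, minSlash, maxSlash, inflection, growthRate float64) float64 {
+	return minSlash + (maxSlash-minSlash)/(1+math.Exp(-growthRate*(faultyPower-inflection)))
+}
+
+func main() {
+	// Small faults are slashed near the minimum...
+	fmt.Printf("%.4f\n", slashFactor(0.05, 0.01, 0.50, 0.25, 40))
+	// ...while faults approaching 1/3 of the power approach the maximum.
+	fmt.Printf("%.4f\n", slashFactor(0.30, 0.01, 0.50, 0.25, 40))
+}
+```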
+
+#### Correlation across non-sybil validators
+
+One will note, that this model doesn't differentiate between multiple validators run by the same operators vs validators run by different operators. This can be seen as an additional benefit in fact. It incentivizes validators to differentiate their setups from other validators, to avoid having correlated faults with them or else they risk a higher slash. So for example, operators should avoid using the same popular cloud hosting platforms or using the same Staking as a Service providers. This will lead to a more resilient and decentralized network.
+
+#### Griefing
+
+Griefing, the act of intentionally getting oneself slashed in order to make another's slash worse, could be a concern here. However, using the protocol described here, the attacker also gets equally impacted by the grief as the victim, so it would not provide much benefit to the griefer.
+
+### Implementation
+
+In the slashing module, we will add two queues that will track all of the recent slash events. For double sign faults, we will define "recent slashes" as ones that have occurred within the last `unbonding period`. For liveness faults, we will define "recent slashes" as ones that have occurred within the last `jail period`.
+
+```go
+type SlashEvent struct {
+ Address sdk.ValAddress
+ ValidatorVotingPercent sdk.Dec
+ SlashedSoFar sdk.Dec
+}
+```
+
+These slash events will be pruned from the queue once they are older than their respective "recent slash period".
+
+Whenever a new slash occurs, a `SlashEvent` struct is created with the faulting validator's voting percent and a `SlashedSoFar` of 0. Because recent slash events are pruned before the unbonding period and unjail period expires, it should not be possible for the same validator to have multiple SlashEvents in the same Queue at the same time.
+
+We will then iterate over all the `SlashEvent`s in the queue, adding up their `ValidatorVotingPercent` to calculate the new percent at which to slash all the validators in the queue, using the slashing-amount formula introduced above.
+
+Once we have the `NewSlashPercent`, we then iterate over all the `SlashEvent`s in the queue once again, and if `NewSlashPercent > SlashedSoFar` for that `SlashEvent`, we call `staking.Slash(slashEvent.Address, slashEvent.Power, Math.Min(Math.Max(minSlashPercent, NewSlashPercent - SlashedSoFar), maxSlashPercent))` (we pass in the power of the validator before any slashes occurred, so that we slash the right amount of tokens). We then set the `SlashEvent.SlashedSoFar` amount to `NewSlashPercent`.
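+A hedged sketch of this queue walk (a simple linear `k * sum` stands in for whichever slashing curve is chosen; the names and values are illustrative):
+
+```go
+package main
+
+import "fmt"
+
+// SlashEvent mirrors the struct above, with float64 standing in for sdk.Dec.
+type SlashEvent struct {
+	ValidatorVotingPercent float64
+	SlashedSoFar           float64
+}
+
+// applySlashes sums the voting percentages of all recent slash events,
+// derives the new slash percent, and slashes each validator only for the
+// portion not yet applied.
+func applySlashes(queue []SlashEvent, k float64) {
+	var sum float64
+	for _, e := range queue {
+		sum += e.ValidatorVotingPercent
+	}
+	newSlashPercent := k * sum
+
+	for i := range queue {
+		if newSlashPercent > queue[i].SlashedSoFar {
+			delta := newSlashPercent - queue[i].SlashedSoFar
+			fmt.Printf("slash validator %d by an additional %.0f%%\n", i, delta*100)
+			queue[i].SlashedSoFar = newSlashPercent
+		}
+	}
+}
+
+func main() {
+	// Two 5% validators faulting in the same window are each slashed at
+	// the combined 10% rate, defeating the sybil split.
+	queue := []SlashEvent{
+		{ValidatorVotingPercent: 0.05},
+		{ValidatorVotingPercent: 0.05},
+	}
+	applySlashes(queue, 1.0)
+}
+```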
+
+## Status
+
+Proposed
+
+## Consequences
+
+### Positive
+
+* Increases decentralization by disincentivizing delegating to large validators
+* Incentivizes Decorrelation of Validators
+* More severely punishes attacks than accidental faults
+* More flexibility in slashing rates parameterization
+
+### Negative
+
+* More computationally expensive than current implementation. Will require more data about "recent slashing events" to be stored on chain.
diff --git a/copy-of-sdk-versioned_docs/version-0.47/build/architecture/adr-016-validator-consensus-key-rotation.md b/copy-of-sdk-versioned_docs/version-0.47/build/architecture/adr-016-validator-consensus-key-rotation.md
new file mode 100644
index 00000000..1d91a8de
--- /dev/null
+++ b/copy-of-sdk-versioned_docs/version-0.47/build/architecture/adr-016-validator-consensus-key-rotation.md
@@ -0,0 +1,125 @@
+# ADR 016: Validator Consensus Key Rotation
+
+## Changelog
+
+* 2019 Oct 23: Initial draft
+* 2019 Nov 28: Add key rotation fee
+
+## Context
+
+Validator consensus key rotation has been discussed and requested for a long time, for the sake of a safer validator key management policy (e.g. https://github.com/tendermint/tendermint/issues/1136). So, we suggest one of the simplest forms of validator consensus key rotation, implemented mostly in the Cosmos SDK.
+
+We don't need to make any updates to the consensus logic in Tendermint, because Tendermint does not keep any mapping between consensus keys and validator operator keys. From Tendermint's point of view, a consensus key rotation of a validator is simply the replacement of one consensus key with another.
+
+Also, it should be noted that this ADR includes only the simplest form of consensus key rotation, without considering a multiple-consensus-keys concept. Such a concept shall remain a long-term goal of Tendermint and the Cosmos SDK.
+
+## Decision
+
+### Pseudo procedure for consensus key rotation
+
+* create new random consensus key.
+* create and broadcast a transaction with a `MsgRotateConsPubKey` that states the new consensus key is now coupled with the validator operator, with a signature from the validator's operator key.
+* old consensus key becomes unable to participate on consensus immediately after the update of key mapping state on-chain.
+* start validating with new consensus key.
+* validators using an HSM and KMS should update the consensus key in the HSM to use the new rotated key after the height `h` at which the `MsgRotateConsPubKey` was committed to the blockchain.
+
+### Considerations
+
+* consensus key mapping information management strategy
+ * store history of each key mapping changes in the kvstore.
+  * the state machine can look up the consensus key paired with a given validator operator for any arbitrary height within the recent unbonding period.
+  * the state machine does not need any historical mapping information older than the unbonding period.
+* key rotation costs related to LCD and IBC
+  * LCD and IBC will bear a traffic/computation burden when there are frequent power changes
+  * In the current Tendermint design, consensus key rotations are seen as power changes from the LCD or IBC perspective
+  * Therefore, to minimize unnecessarily frequent key rotations, we limit the maximum number of rotations in the recent unbonding period and also apply an exponentially increasing rotation fee
+* limits
+  * a validator cannot rotate its consensus key more than `MaxConsPubKeyRotations` times within any unbonding period, to prevent spam.
+ * parameters can be decided by governance and stored in genesis file.
+* key rotation fee
+ * a validator should pay `KeyRotationFee` to rotate the consensus key which is calculated as below
+  * `KeyRotationFee` = (max(`VotingPowerPercentage` * 100, 1) * `InitialKeyRotationFee`) * 2^(number of rotations in `ConsPubKeyRotationHistory` in the recent unbonding period)
+* evidence module
+ * evidence module can search corresponding consensus key for any height from slashing keeper so that it can decide which consensus key is supposed to be used for given height.
+* abci.ValidatorUpdate
+  * Tendermint already has the ability to change a consensus key via ABCI communication (`ValidatorUpdate`).
+  * a validator consensus key update can be done by creating the new validator entry and deleting the old one by setting its power to zero.
+  * therefore, we expect we will not need to change the Tendermint codebase at all to implement this feature.
+* new genesis parameters in `staking` module
+  * `MaxConsPubKeyRotations` : the maximum number of rotations a validator can execute within the recent unbonding period. A default value of 10 is suggested (the 11th key rotation will be rejected)
+  * `InitialKeyRotationFee` : the initial key rotation fee when no key rotation has happened within the recent unbonding period. A default value of 1atom is suggested (a 1atom fee for the first key rotation in the recent unbonding period)
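+The fee formula above can be sketched as follows (a hypothetical helper; the units and parameter values are illustrative):
+
+```go
+package main
+
+import (
+	"fmt"
+	"math"
+)
+
+// keyRotationFee implements the formula above: the fee scales with voting
+// power (floored at 1) and doubles with each rotation already made in the
+// recent unbonding period.
+func keyRotationFee(votingPowerPercentage, initialFee float64, recentRotations int) float64 {
+	return math.Max(votingPowerPercentage*100, 1) * initialFee * math.Pow(2, float64(recentRotations))
+}
+
+func main() {
+	// A validator with 2% voting power and a 1atom initial fee, performing
+	// its second rotation in the period: max(2, 1) * 1 * 2^1 = 4atom.
+	fmt.Println(keyRotationFee(0.02, 1, 1))
+}
+```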
+
+### Workflow
+
+1. The validator generates a new consensus keypair.
+2. The validator generates and signs a `MsgRotateConsPubKey` tx with their operator key and new ConsPubKey
+
+ ```go
+ type MsgRotateConsPubKey struct {
+ ValidatorAddress sdk.ValAddress
+ NewPubKey crypto.PubKey
+ }
+ ```
+
+3. `handleMsgRotateConsPubKey` receives the `MsgRotateConsPubKey`, calls `RotateConsPubKey`, and emits an event
+4. `RotateConsPubKey`
+ * checks if `NewPubKey` is not duplicated on `ValidatorsByConsAddr`
+    * checks that the validator does not exceed the parameter `MaxConsPubKeyRotations` by iterating `ConsPubKeyRotationHistory`
+ * checks if the signing account has enough balance to pay `KeyRotationFee`
+ * pays `KeyRotationFee` to community fund
+ * overwrites `NewPubKey` in `validator.ConsPubKey`
+ * deletes old `ValidatorByConsAddr`
+ * `SetValidatorByConsAddr` for `NewPubKey`
+ * Add `ConsPubKeyRotationHistory` for tracking rotation
+
+ ```go
+ type ConsPubKeyRotationHistory struct {
+ OperatorAddress sdk.ValAddress
+ OldConsPubKey crypto.PubKey
+ NewConsPubKey crypto.PubKey
+ RotatedHeight int64
+ }
+ ```
+
+5. `ApplyAndReturnValidatorSetUpdates` checks if there is a `ConsPubKeyRotationHistory` with `ConsPubKeyRotationHistory.RotatedHeight == ctx.BlockHeight()` and, if so, generates 2 `ValidatorUpdate`s, one to remove the old validator and one to create the new one
+
+ ```go
+ abci.ValidatorUpdate{
+ PubKey: cmttypes.TM2PB.PubKey(OldConsPubKey),
+ Power: 0,
+ }
+
+ abci.ValidatorUpdate{
+ PubKey: cmttypes.TM2PB.PubKey(NewConsPubKey),
+ Power: v.ConsensusPower(),
+ }
+ ```
+
+6. In the `previousVotes` iteration logic of `AllocateTokens`, a `previousVote` using `OldConsPubKey` is matched against `ConsPubKeyRotationHistory`, and the validator is replaced for token allocation
+7. Migrate `ValidatorSigningInfo` and `ValidatorMissedBlockBitArray` from `OldConsPubKey` to `NewConsPubKey`
+
+* Note: all the features above shall be implemented in the `staking` module.
+
+## Status
+
+Proposed
+
+## Consequences
+
+### Positive
+
+* Validators can immediately or periodically rotate their consensus key for a better security policy
+* Improved security against long-range attacks (https://nearprotocol.com/blog/long-range-attacks-and-a-new-fork-choice-rule), given a validator throws away the old consensus key(s)
+
+### Negative
+
+* The slashing module needs more computation because it needs to look up the corresponding consensus key of validators for each height
+* Frequent key rotations will make light client bisection less efficient
+
+### Neutral
+
+## References
+
+* on tendermint repo : https://github.com/tendermint/tendermint/issues/1136
+* on cosmos-sdk repo : https://github.com/cosmos/cosmos-sdk/issues/5231
+* about multiple consensus keys : https://github.com/tendermint/tendermint/issues/1758#issuecomment-545291698
diff --git a/copy-of-sdk-versioned_docs/version-0.47/build/architecture/adr-017-historical-header-module.md b/copy-of-sdk-versioned_docs/version-0.47/build/architecture/adr-017-historical-header-module.md
new file mode 100644
index 00000000..573c632c
--- /dev/null
+++ b/copy-of-sdk-versioned_docs/version-0.47/build/architecture/adr-017-historical-header-module.md
@@ -0,0 +1,61 @@
+# ADR 17: Historical Header Module
+
+## Changelog
+
+* 26 November 2019: Start of first version
+* 2 December 2019: Final draft of first version
+
+## Context
+
+In order for the Cosmos SDK to implement the [IBC specification](https://github.com/cosmos/ics), modules within the Cosmos SDK must have the ability to introspect recent consensus states (validator sets & commitment roots) as proofs of these values on other chains must be checked during the handshakes.
+
+## Decision
+
+The application MUST store the most recent `n` headers in a persistent store. At first, this store MAY be the current Merklised store. A non-Merklised store MAY be used later as no proofs are necessary.
+
+The application MUST store this information by storing new headers immediately when handling `abci.RequestBeginBlock`:
+
+```go
+func BeginBlock(ctx sdk.Context, keeper HistoricalHeaderKeeper, req abci.RequestBeginBlock) abci.ResponseBeginBlock {
+ info := HistoricalInfo{
+ Header: ctx.BlockHeader(),
+ ValSet: keeper.StakingKeeper.GetAllValidators(ctx), // note that this must be stored in a canonical order
+ }
+ keeper.SetHistoricalInfo(ctx, ctx.BlockHeight(), info)
+ n := keeper.GetParamRecentHeadersToStore()
+ keeper.PruneHistoricalInfo(ctx, ctx.BlockHeight() - n)
+ // continue handling request
+}
+```
+
+Alternatively, the application MAY store only the hash of the validator set.
+
+The application MUST make these past `n` committed headers available for querying by Cosmos SDK modules through the `Keeper`'s `GetHistoricalInfo` function. This MAY be implemented in a new module, or it MAY also be integrated into an existing one (likely `x/staking` or `x/ibc`).
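+A toy sketch of this keeper behavior (an in-memory map stands in for the store; the names are illustrative):
+
+```go
+package main
+
+import "fmt"
+
+// HistoricalInfo is a trimmed stand-in for the header + validator set
+// entry described above.
+type HistoricalInfo struct {
+	Height int64
+}
+
+// Keeper retains the most recent n entries, pruning older heights each
+// block, mirroring the BeginBlock logic above.
+type Keeper struct {
+	n     int64
+	infos map[int64]HistoricalInfo
+}
+
+func (k Keeper) SetHistoricalInfo(h int64, info HistoricalInfo) { k.infos[h] = info }
+func (k Keeper) PruneHistoricalInfo(h int64)                    { delete(k.infos, h) }
+func (k Keeper) GetHistoricalInfo(h int64) (HistoricalInfo, bool) {
+	info, ok := k.infos[h]
+	return info, ok
+}
+
+func (k Keeper) BeginBlock(height int64) {
+	k.SetHistoricalInfo(height, HistoricalInfo{Height: height})
+	k.PruneHistoricalInfo(height - k.n)
+}
+
+func main() {
+	k := Keeper{n: 2, infos: map[int64]HistoricalInfo{}}
+	for h := int64(1); h <= 5; h++ {
+		k.BeginBlock(h)
+	}
+	_, ok := k.GetHistoricalInfo(3)
+	fmt.Println(ok) // false: pruned, older than the last n = 2 heights
+	_, ok = k.GetHistoricalInfo(5)
+	fmt.Println(ok) // true
+}
+```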
+
+`n` MAY be configured as a parameter store parameter, in which case it could be changed by `ParameterChangeProposal`s, although it will take some blocks for the stored information to catch up if `n` is increased.
+
+## Status
+
+Proposed.
+
+## Consequences
+
+Implementation of this ADR will require changes to the Cosmos SDK. It will not require changes to Tendermint.
+
+### Positive
+
+* Easy retrieval of headers & state roots for recent past heights by modules anywhere in the Cosmos SDK.
+* No RPC calls to Tendermint required.
+* No ABCI alterations required.
+
+### Negative
+
+* Duplicates `n` headers data in Tendermint & the application (additional disk usage) - in the long term, an approach such as [this](https://github.com/tendermint/tendermint/issues/4210) might be preferable.
+
+### Neutral
+
+(none known)
+
+## References
+
+* [ICS 2: "Consensus state introspection"](https://github.com/cosmos/ibc/tree/master/spec/core/ics-002-client-semantics#consensus-state-introspection)
diff --git a/copy-of-sdk-versioned_docs/version-0.47/build/architecture/adr-018-extendable-voting-period.md b/copy-of-sdk-versioned_docs/version-0.47/build/architecture/adr-018-extendable-voting-period.md
new file mode 100644
index 00000000..ee238fc3
--- /dev/null
+++ b/copy-of-sdk-versioned_docs/version-0.47/build/architecture/adr-018-extendable-voting-period.md
@@ -0,0 +1,66 @@
+# ADR 18: Extendable Voting Periods
+
+## Changelog
+
+* 1 January 2020: Start of first version
+
+## Context
+
+Currently the voting period for all governance proposals is the same. However, this is suboptimal, as not all governance proposals require the same time period. Non-contentious proposals can be dealt with more efficiently via a shorter period, while more contentious or complex proposals may need a longer period for extended discussion/consideration.
+
+## Decision
+
+We would like to design a mechanism for making the voting period of a governance proposal variable based on the demand of voters. We would like it to be based on the view of the governance participants, rather than just the proposer of a governance proposal (thus, allowing the proposer to select the voting period length is not sufficient).
+
+However, we would like to avoid the creation of an entire second voting process to determine the length of the voting period, as that would just push the problem to determining the length of that first voting period.
+
+Thus, we propose the following mechanism:
+
+### Params
+
+* The current gov param `VotingPeriod` is to be replaced by a `MinVotingPeriod` param. This is the default voting period that all governance proposal voting periods start with.
+* There is a new gov param called `MaxVotingPeriodExtension`.
+
+### Mechanism
+
+There is a new `Msg` type called `MsgExtendVotingPeriod`, which can be sent by any staked account during a proposal's voting period. It allows the sender to unilaterally extend the length of the voting period by `MaxVotingPeriodExtension * sender's share of voting power`. Every address can only call `MsgExtendVotingPeriod` once per proposal.
+
+So for example, if the `MaxVotingPeriodExtension` is set to 100 days, then anyone with 1% of voting power can extend the voting period by 1 day. If 33% of voting power has sent the message, the voting period will be extended by 33 days. Thus, if absolutely everyone chooses to extend the voting period, the absolute maximum voting period will be `MinVotingPeriod + MaxVotingPeriodExtension`.
+
+This system acts as a sort of distributed coordination, where individual stakers choosing to extend or not allows the system to gauge the contentiousness/complexity of the proposal. Since it is extremely unlikely that many stakers will choose to extend at the exact same time, stakers can view how long others have already extended thus far to decide whether or not to extend further.
+
+### Dealing with Unbonding/Redelegation
+
+There is one thing that needs to be addressed. How to deal with redelegation/unbonding during the voting period. If a staker of 5% calls `MsgExtendVotingPeriod` and then unbonds, does the voting period then decrease by 5 days again? This is not good as it can give people a false sense of how long they have to make their decision. For this reason, we want to design it such that the voting period length can only be extended, not shortened. To do this, the current extension amount is based on the highest percent that voted extension at any time. This is best explained by example:
+
+1. Let's say 2 stakers of voting power 4% and 3% respectively vote to extend. The voting period will be extended by 7 days.
+2. Now the staker of 3% decides to unbond before the end of the voting period. The voting period extension remains 7 days.
+3. Now, let's say another staker of 2% voting power decides to extend the voting period. There is now 6% of active voting power choosing to extend. The voting period extension remains 7 days.
+4. If a fourth staker of 10% chooses to extend now, there is a total of 16% of active voting power wishing to extend. The voting period will be extended to 16 days.
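+The high-water-mark ratchet in the example above can be sketched as follows (integer percentage points, with `MaxVotingPeriodExtension` assumed to be 100 days so that 1% maps to 1 day; the names are illustrative):
+
+```go
+package main
+
+import "fmt"
+
+// Proposal tracks the active extending power and the highest value it has
+// ever reached; the extension is based on the high-water mark, so
+// unbonding can never shorten the period.
+type Proposal struct {
+	extendingPct int // active voting power (in %) currently voting to extend
+	highWater    int // highest extendingPct ever observed
+}
+
+func (p *Proposal) ratchet() {
+	if p.extendingPct > p.highWater {
+		p.highWater = p.extendingPct
+	}
+}
+
+// Extend records a MsgExtendVotingPeriod from a staker with pct percent
+// of voting power.
+func (p *Proposal) Extend(pct int) { p.extendingPct += pct; p.ratchet() }
+
+// Unbond removes an extender's power; the high-water mark is untouched.
+func (p *Proposal) Unbond(pct int) { p.extendingPct -= pct }
+
+// ExtensionDays is the current extension beyond MinVotingPeriod.
+func (p *Proposal) ExtensionDays() int { return p.highWater }
+
+func main() {
+	p := &Proposal{}
+	p.Extend(4)
+	p.Extend(3)
+	fmt.Println(p.ExtensionDays()) // 7
+	p.Unbond(3)
+	p.Extend(2)
+	fmt.Println(p.ExtensionDays()) // still 7: 6% is below the 7% high-water mark
+	p.Extend(10)
+	fmt.Println(p.ExtensionDays()) // 16
+}
+```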
+
+### Delegators
+
+Just like votes in the actual voting period, delegators automatically inherit the extension of their validators. If their validator chooses to extend, their voting power will be used in the validator's extension. However, the delegator is unable to override their validator and "unextend", as that would contradict the "voting period length can only be ratcheted up" principle described in the previous section. However, a delegator may choose to extend using their personal voting power, if their validator has not done so.
+
+## Status
+
+Proposed
+
+## Consequences
+
+### Positive
+
+* More complex/contentious governance proposals will have more time to be properly digested and deliberated on
+
+### Negative
+
+* Governance process becomes more complex and requires more understanding to interact with effectively
+* Can no longer predict when a governance proposal will end. Can't assume order in which governance proposals will end.
+
+### Neutral
+
+* The minimum voting period can be made shorter
+
+## References
+
+* [Cosmos Forum post where idea first originated](https://forum.cosmos.network/t/proposal-draft-reduce-governance-voting-period-to-7-days/3032/9)
diff --git a/copy-of-sdk-versioned_docs/version-0.47/build/architecture/adr-019-protobuf-state-encoding.md b/copy-of-sdk-versioned_docs/version-0.47/build/architecture/adr-019-protobuf-state-encoding.md
new file mode 100644
index 00000000..5ad1b953
--- /dev/null
+++ b/copy-of-sdk-versioned_docs/version-0.47/build/architecture/adr-019-protobuf-state-encoding.md
@@ -0,0 +1,379 @@
+# ADR 019: Protocol Buffer State Encoding
+
+## Changelog
+
+* 2020 Feb 15: Initial Draft
+* 2020 Feb 24: Updates to handle messages with interface fields
+* 2020 Apr 27: Convert usages of `oneof` for interfaces to `Any`
+* 2020 May 15: Describe `cosmos_proto` extensions and amino compatibility
+* 2020 Dec 4: Move and rename `MarshalAny` and `UnmarshalAny` into the `codec.Codec` interface.
+* 2021 Feb 24: Remove mentions of `HybridCodec`, which has been abandoned in [#6843](https://github.com/cosmos/cosmos-sdk/pull/6843).
+
+## Status
+
+Accepted
+
+## Context
+
+Currently, the Cosmos SDK utilizes [go-amino](https://github.com/tendermint/go-amino/) for binary
+and JSON object encoding over the wire, bringing parity between logical objects and persistence objects.
+
+From the Amino docs:
+
+> Amino is an object encoding specification. It is a subset of Proto3 with an extension for interface
+> support. See the [Proto3 spec](https://developers.google.com/protocol-buffers/docs/proto3) for more
+> information on Proto3, which Amino is largely compatible with (but not with Proto2).
+>
+> The goal of the Amino encoding protocol is to bring parity into logic objects and persistence objects.
+
+Amino also aims to have the following goals (not a complete list):
+
+* Binary bytes must be decode-able with a schema.
+* Schema must be upgradeable.
+* The encoder and decoder logic must be reasonably simple.
+
+However, we believe that Amino does not fulfill these goals completely and does not fully meet the
+needs of a truly flexible cross-language and multi-client compatible encoding protocol in the Cosmos SDK.
+Namely, Amino has proven to be a big pain point in supporting object serialization across
+clients written in various languages, while providing virtually nothing in the way of true backwards
+compatibility and upgradeability. Furthermore, through profiling and various benchmarks, Amino has
+been shown to be an extremely large performance bottleneck in the Cosmos SDK [1]. This is
+largely reflected in the performance of simulations and application transaction throughput.
+
+Thus, we need to adopt an encoding protocol that meets the following criteria for state serialization:
+
+* Language agnostic
+* Platform agnostic
+* Rich client support and thriving ecosystem
+* High performance
+* Minimal encoded message size
+* Codegen-based over reflection-based
+* Supports backward and forward compatibility
+
+Note, migrating away from Amino should be viewed as a two-pronged approach, state and client encoding.
+This ADR focuses on state serialization in the Cosmos SDK state machine. A corresponding ADR will be
+made to address client-side encoding.
+
+## Decision
+
+We will adopt [Protocol Buffers](https://developers.google.com/protocol-buffers) for serializing
+persisted structured data in the Cosmos SDK while providing a clean mechanism and developer UX for
+applications wishing to continue to use Amino. We will provide this mechanism by updating modules to
+accept a codec interface, `Marshaler`, instead of a concrete Amino codec. Furthermore, the Cosmos SDK
+will provide two concrete implementations of the `Marshaler` interface: `AminoCodec` and `ProtoCodec`.
+
+* `AminoCodec`: Uses Amino for both binary and JSON encoding.
+* `ProtoCodec`: Uses Protobuf for both binary and JSON encoding.
+
+Modules will use whichever codec that is instantiated in the app. By default, the Cosmos SDK's `simapp`
+instantiates a `ProtoCodec` as the concrete implementation of `Marshaler`, inside the `MakeTestEncodingConfig`
+function. This can be easily overwritten by app developers if they so desire.
+
+The ultimate goal will be to replace Amino JSON encoding with Protobuf encoding and thus have
+modules accept and/or extend `ProtoCodec`. Until then, Amino JSON is still provided for legacy use-cases.
+A handful of places in the Cosmos SDK still have Amino JSON hardcoded, such as the Legacy API REST endpoints
+and the `x/params` store. They are planned to be converted to Protobuf in a gradual manner.
+
+### Module Codecs
+
+For modules that do not require the ability to work with and serialize interfaces, the path to Protobuf
+migration is straightforward. These modules simply migrate any existing types that
+are encoded and persisted via their concrete Amino codec to Protobuf and have their keeper accept a
+`Marshaler` that will be a `ProtoCodec`. This migration is simple as things will just work as-is.
+
+Note, any business logic that needs to encode primitive types like `bool` or `int64` should use
+[gogoprotobuf](https://github.com/cosmos/gogoproto) Value types.
+
+Example:
+
+```go
+ ts, err := gogotypes.TimestampProto(completionTime)
+ if err != nil {
+ // ...
+ }
+
+ bz := cdc.MustMarshal(ts)
+```
+
+However, modules can vary greatly in purpose and design, so we must also support the ability for
+modules to encode and work with interfaces (e.g. `Account` or `Content`). Such modules
+must define their own codec interface that extends `Marshaler`. These specific interfaces are unique
+to the module and will contain method contracts that know how to serialize the needed interfaces.
+
+Example:
+
+```go
+// x/auth/types/codec.go
+
+type Codec interface {
+ codec.Codec
+
+ MarshalAccount(acc exported.Account) ([]byte, error)
+ UnmarshalAccount(bz []byte) (exported.Account, error)
+
+ MarshalAccountJSON(acc exported.Account) ([]byte, error)
+ UnmarshalAccountJSON(bz []byte) (exported.Account, error)
+}
+```
+
+### Usage of `Any` to encode interfaces
+
+In general, module-level .proto files should define messages which encode interfaces
+using [`google.protobuf.Any`](https://github.com/protocolbuffers/protobuf/blob/master/src/google/protobuf/any.proto).
+After [extension discussion](https://github.com/cosmos/cosmos-sdk/issues/6030),
+this was chosen as the preferred alternative to application-level `oneof`s
+as in our original protobuf design. The arguments in favor of `Any` can be
+summarized as follows:
+
+* `Any` provides a simpler, more consistent client UX for dealing with
+interfaces than app-level `oneof`s that will need to be coordinated more
+carefully across applications. Creating a generic transaction
+signing library using `oneof`s may be cumbersome and critical logic may need
+to be reimplemented for each chain
+* `Any` provides more resistance against human error than `oneof`
+* `Any` is generally simpler to implement for both modules and apps
+
+The main counter-argument to using `Any` centers around its additional space
+and possibly performance overhead. The space overhead could be dealt with using
+compression at the persistence layer in the future and the performance impact
+is likely to be small. Thus, not using `Any` is seen as a premature optimization,
+with user experience as the higher-order concern.
+
+Note, that given the Cosmos SDK's decision to adopt the `Codec` interfaces described
+above, apps can still choose to use `oneof` to encode state and transactions
+but it is not the recommended approach. If apps do choose to use `oneof`s
+instead of `Any` they will likely lose compatibility with client apps that
+support multiple chains. Thus developers should think carefully about whether
+they care more about what is possibly a premature optimization or end-user
+and client developer UX.
+
+### Safe usage of `Any`
+
+By default, the [gogo protobuf implementation of `Any`](https://pkg.go.dev/github.com/cosmos/gogoproto/types)
+uses [global type registration](https://github.com/cosmos/gogoproto/blob/master/proto/properties.go#L540)
+to decode values packed in `Any` into concrete
+go types. This introduces a vulnerability where any malicious module
+in the dependency tree could register a type with the global protobuf registry
+and cause it to be loaded and unmarshaled by a transaction that referenced
+it in the `type_url` field.
+
+To prevent this, we introduce a type registration mechanism for decoding `Any`
+values into concrete types through the `InterfaceRegistry` interface which
+bears some similarity to type registration with Amino:
+
+```go
+type InterfaceRegistry interface {
+ // RegisterInterface associates protoName as the public name for the
+ // interface passed in as iface
+ // Ex:
+ // registry.RegisterInterface("cosmos_sdk.Msg", (*sdk.Msg)(nil))
+ RegisterInterface(protoName string, iface interface{})
+
+ // RegisterImplementations registers impls as a concrete implementations of
+ // the interface iface
+ // Ex:
+ // registry.RegisterImplementations((*sdk.Msg)(nil), &MsgSend{}, &MsgMultiSend{})
+ RegisterImplementations(iface interface{}, impls ...proto.Message)
+
+}
+```
+
+In addition to serving as a whitelist, `InterfaceRegistry` can also serve
+to communicate the list of concrete types that satisfy an interface to clients.
+
+In .proto files:
+
+* fields which accept interfaces should be annotated with `cosmos_proto.accepts_interface`
+using the same fully-qualified name passed as `protoName` to `InterfaceRegistry.RegisterInterface`
+* interface implementations should be annotated with `cosmos_proto.implements_interface`
+using the same fully-qualified name passed as `protoName` to `InterfaceRegistry.RegisterInterface`
+
+In the future, `protoName`, `cosmos_proto.accepts_interface`, `cosmos_proto.implements_interface`
+may be used via code generation, reflection &/or static linting.
+
+The same struct that implements `InterfaceRegistry` will also implement an
+interface `InterfaceUnpacker` to be used for unpacking `Any`s:
+
+```go
+type InterfaceUnpacker interface {
+ // UnpackAny unpacks the value in any to the interface pointer passed in as
+ // iface. Note that the type in any must have been registered with
+ // RegisterImplementations as a concrete type for that interface
+ // Ex:
+ // var msg sdk.Msg
+ // err := ctx.UnpackAny(any, &msg)
+ // ...
+ UnpackAny(any *Any, iface interface{}) error
+}
+```
+
+Note that `InterfaceRegistry` usage does not deviate from standard protobuf
+usage of `Any`; it just introduces a security and introspection layer for
+Go usage.
+
+`InterfaceRegistry` will be a member of `ProtoCodec`
+described above. In order for modules to register interface types, app modules
+can optionally implement the following interface:
+
+```go
+type InterfaceModule interface {
+ RegisterInterfaceTypes(InterfaceRegistry)
+}
+```
+
+The module manager will include a method to call `RegisterInterfaceTypes` on
+every module that implements it in order to populate the `InterfaceRegistry`.
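As a sketch of the whitelist behavior this registry provides, the toy implementation below only resolves type URLs that were explicitly registered; all names (`toyRegistry`, `RegisterImplementation`, `Resolve`) are illustrative stand-ins, not the SDK's actual API.

```go
package main

import "fmt"

// toyRegistry illustrates the whitelist idea behind InterfaceRegistry:
// only type URLs explicitly registered for an interface may be decoded,
// unlike gogoproto's global registry where any dependency can register types.
type toyRegistry struct {
	impls map[string]func() interface{} // type URL -> constructor
}

func newToyRegistry() *toyRegistry {
	return &toyRegistry{impls: map[string]func() interface{}{}}
}

func (r *toyRegistry) RegisterImplementation(typeURL string, ctor func() interface{}) {
	r.impls[typeURL] = ctor
}

// Resolve fails for any type URL that was not whitelisted, which is the
// security property the ADR wants when unpacking Any values.
func (r *toyRegistry) Resolve(typeURL string) (interface{}, error) {
	ctor, ok := r.impls[typeURL]
	if !ok {
		return nil, fmt.Errorf("unregistered type URL: %s", typeURL)
	}
	return ctor(), nil
}

func main() {
	reg := newToyRegistry()
	reg.RegisterImplementation("/cosmos.bank.MsgSend", func() interface{} { return "MsgSend" })

	if v, err := reg.Resolve("/cosmos.bank.MsgSend"); err == nil {
		fmt.Println(v)
	}
	if _, err := reg.Resolve("/evil.Payload"); err != nil {
		fmt.Println("rejected:", err)
	}
}
```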
+
+### Using `Any` to encode state
+
+The Cosmos SDK will provide support methods `MarshalInterface` and `UnmarshalInterface` to hide the complexity of wrapping interface types into `Any` and allow easy serialization.
+
+```go
+import "github.com/cosmos/cosmos-sdk/codec"
+
+// note: eviexported.Evidence is an interface type
+func MarshalEvidence(cdc codec.BinaryCodec, e eviexported.Evidence) ([]byte, error) {
+ return cdc.MarshalInterface(e)
+}
+
+func UnmarshalEvidence(cdc codec.BinaryCodec, bz []byte) (eviexported.Evidence, error) {
+ var evi eviexported.Evidence
+ err := cdc.UnmarshalInterface(&evi, bz)
+	return evi, err
+}
+```
+
+### Using `Any` in `sdk.Msg`s
+
+A similar concept is to be applied for messages that contain interface fields.
+For example, we can define `MsgSubmitEvidence` as follows where `Evidence` is
+an interface:
+
+```protobuf
+// x/evidence/types/types.proto
+
+message MsgSubmitEvidence {
+ bytes submitter = 1
+ [
+ (gogoproto.casttype) = "github.com/cosmos/cosmos-sdk/types.AccAddress"
+ ];
+ google.protobuf.Any evidence = 2;
+}
+```
+
+Note that in order to unpack the evidence from `Any` we do need a reference to
+`InterfaceRegistry`. In order to reference evidence in methods like
+`ValidateBasic` which shouldn't have to know about the `InterfaceRegistry`, we
+introduce an `UnpackInterfaces` phase to deserialization which unpacks
+interfaces before they're needed.
+
+### Unpacking Interfaces
+
+To implement the `UnpackInterfaces` phase of deserialization which unpacks
+interfaces wrapped in `Any` before they're needed, we create an interface
+that `sdk.Msg`s and other types can implement:
+
+```go
+type UnpackInterfacesMessage interface {
+ UnpackInterfaces(InterfaceUnpacker) error
+}
+```
+
+We also introduce a private `cachedValue interface{}` field onto the `Any`
+struct itself with a public getter `GetCachedValue() interface{}`.
+
+The `UnpackInterfaces` method is to be invoked during message deserialization right
+after `Unmarshal` and any interface values packed in `Any`s will be decoded
+and stored in `cachedValue` for reference later.
+
+Then unpacked interface values can safely be used in any code afterwards
+without knowledge of the `InterfaceRegistry`
+and messages can introduce a simple getter to cast the cached value to the
+correct interface type.
+
+This has the added benefit that unmarshaling of `Any` values only happens once
+during initial deserialization rather than every time the value is read. Also,
+when `Any` values are first packed (for instance in a call to
+`NewMsgSubmitEvidence`), the original interface value is cached so that
+unmarshaling isn't needed to read it again.
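The caching behavior described above can be sketched with a toy stand-in for `Any` (field and function names here are illustrative only, not the SDK's actual implementation):

```go
package main

import "fmt"

// anyValue is a toy stand-in for the SDK's Any type, illustrating the
// cachedValue pattern: the interface value decoded during the
// UnpackInterfaces phase is stored so later reads need neither the
// registry nor a second unmarshal.
type anyValue struct {
	TypeUrl     string
	Value       []byte
	cachedValue interface{}
}

func (a *anyValue) GetCachedValue() interface{} { return a.cachedValue }

// unpack simulates the UnpackInterfaces phase: decode once, cache the result.
func unpack(a *anyValue, decode func([]byte) interface{}) {
	a.cachedValue = decode(a.Value)
}

func main() {
	a := &anyValue{TypeUrl: "/toy.Evidence", Value: []byte("equivocation")}
	// Decoding here is a trivial byte-to-string stand-in for protobuf unmarshaling.
	unpack(a, func(bz []byte) interface{} { return string(bz) })

	// All later reads use the cache; no registry access is needed.
	fmt.Println(a.GetCachedValue()) // equivocation
}
```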
+
+`MsgSubmitEvidence` could implement `UnpackInterfaces`, plus a convenience getter
+`GetEvidence` as follows:
+
+```go
+func (msg MsgSubmitEvidence) UnpackInterfaces(ctx sdk.InterfaceRegistry) error {
+ var evi eviexported.Evidence
+	return ctx.UnpackAny(msg.Evidence, &evi)
+}
+
+func (msg MsgSubmitEvidence) GetEvidence() eviexported.Evidence {
+ return msg.Evidence.GetCachedValue().(eviexported.Evidence)
+}
+```
+
+### Amino Compatibility
+
+Our custom implementation of `Any` can be used transparently with Amino if used
+with the proper codec instance. What this means is that interfaces packed within
+`Any`s will be amino marshaled like regular Amino interfaces (assuming they
+have been registered properly with Amino).
+
+In order for this functionality to work:
+
+* **all legacy code must use `*codec.LegacyAmino` instead of `*amino.Codec` which is
+ now a wrapper which properly handles `Any`**
+* **all new code should use `Marshaler` which is compatible with both amino and
+ protobuf**
+* Also, before v0.39, `codec.Codec` will be renamed to `codec.LegacyAmino`.
+
+### Why Wasn't X Chosen Instead
+
+For a more complete comparison to alternative protocols, see [here](https://codeburst.io/json-vs-protocol-buffers-vs-flatbuffers-a4247f8bda6f).
+
+### Cap'n Proto
+
+While [Cap’n Proto](https://capnproto.org/) does seem like an advantageous alternative to Protobuf
+due to its native support for interfaces/generics and built-in canonicalization, it lacks
+Protobuf's rich client ecosystem and is a bit less mature.
+
+### FlatBuffers
+
+[FlatBuffers](https://google.github.io/flatbuffers/) is also a potentially viable alternative, with the
+primary difference being that FlatBuffers does not need a parsing/unpacking step to a secondary
+representation before you can access data, often coupled with per-object memory allocation.
+
+However, it would require significant research to fully understand the scope of the migration
+and the path forward, which isn't immediately clear. In addition, FlatBuffers isn't designed for
+untrusted inputs.
+
+## Future Improvements & Roadmap
+
+In the future we may consider a compression layer right above the persistence
+layer which doesn't change tx or merkle tree hashes, but reduces the storage
+overhead of `Any`. In addition, we may adopt protobuf naming conventions which
+make type URLs a bit more concise while remaining descriptive.
+
+Additional code generation support around the usage of `Any` is something that
+could also be explored in the future to make the UX for go developers more
+seamless.
+
+## Consequences
+
+### Positive
+
+* Significant performance gains.
+* Supports backward and forward type compatibility.
+* Better support for cross-language clients.
+
+### Negative
+
+* Learning curve required to understand and implement Protobuf messages.
+* Slightly larger message size due to use of `Any`, although this could be offset
+  by a compression layer in the future.
+
+### Neutral
+
+## References
+
+1. https://github.com/cosmos/cosmos-sdk/issues/4977
+2. https://github.com/cosmos/cosmos-sdk/issues/5444
diff --git a/copy-of-sdk-versioned_docs/version-0.47/build/architecture/adr-020-protobuf-transaction-encoding.md b/copy-of-sdk-versioned_docs/version-0.47/build/architecture/adr-020-protobuf-transaction-encoding.md
new file mode 100644
index 00000000..344a7fef
--- /dev/null
+++ b/copy-of-sdk-versioned_docs/version-0.47/build/architecture/adr-020-protobuf-transaction-encoding.md
@@ -0,0 +1,464 @@
+# ADR 020: Protocol Buffer Transaction Encoding
+
+## Changelog
+
+* 2020 March 06: Initial Draft
+* 2020 March 12: API Updates
+* 2020 April 13: Added details on interface `oneof` handling
+* 2020 April 30: Switch to `Any`
+* 2020 May 14: Describe public key encoding
+* 2020 June 08: Store `TxBody` and `AuthInfo` as bytes in `SignDoc`; Document `TxRaw` as broadcast and storage type.
+* 2020 August 07: Use ADR 027 for serializing `SignDoc`.
+* 2020 August 19: Move sequence field from `SignDoc` to `SignerInfo`, as discussed in [#6966](https://github.com/cosmos/cosmos-sdk/issues/6966).
+* 2020 September 25: Remove `PublicKey` type in favor of `secp256k1.PubKey`, `ed25519.PubKey` and `multisig.LegacyAminoPubKey`.
+* 2020 October 15: Add `GetAccount` and `GetAccountWithHeight` methods to the `AccountRetriever` interface.
+* 2021 Feb 24: The Cosmos SDK does not use Tendermint's `PubKey` interface anymore, but its own `cryptotypes.PubKey`. Updates to reflect this.
+* 2021 May 3: Rename `clientCtx.JSONMarshaler` to `clientCtx.JSONCodec`.
+* 2021 June 10: Add `clientCtx.Codec: codec.Codec`.
+
+## Status
+
+Accepted
+
+## Context
+
+This ADR is a continuation of the motivation, design, and context established in
+[ADR 019](adr-019-protobuf-state-encoding.md), namely, we aim to design the
+Protocol Buffer migration path for the client-side of the Cosmos SDK.
+
+Specifically, the client-side migration path primarily includes tx generation and
+signing, message construction and routing, in addition to CLI & REST handlers and
+business logic (i.e. queriers).
+
+With this in mind, we will tackle the migration path via two main areas, txs and
+querying. However, this ADR solely focuses on transactions. Querying should be
+addressed in a future ADR, but it should build off of these proposals.
+
+Based on detailed discussions ([\#6030](https://github.com/cosmos/cosmos-sdk/issues/6030)
+and [\#6078](https://github.com/cosmos/cosmos-sdk/issues/6078)), the original
+design for transactions was changed substantially from a `oneof`/JSON-signing
+approach to the approach described below.
+
+## Decision
+
+### Transactions
+
+Since interface values are encoded with `google.protobuf.Any` in state (see [ADR 019](adr-019-protobuf-state-encoding.md)),
+`sdk.Msg`s are encoded with `Any` in transactions.
+
+One of the main goals of using `Any` to encode interface values is to have a
+core set of types which is reused by apps so that
+clients can safely be compatible with as many chains as possible.
+
+It is one of the goals of this specification to provide a flexible cross-chain transaction
+format that can serve a wide variety of use cases without breaking client
+compatibility.
+
+In order to facilitate signing, transactions are separated into `TxBody`,
+which will be re-used by `SignDoc` below, and `signatures`:
+
+```protobuf
+// types/types.proto
+package cosmos_sdk.v1;
+
+message Tx {
+ TxBody body = 1;
+ AuthInfo auth_info = 2;
+ // A list of signatures that matches the length and order of AuthInfo's signer_infos to
+ // allow connecting signature meta information like public key and signing mode by position.
+ repeated bytes signatures = 3;
+}
+
+// A variant of Tx that pins the signer's exact binary representation of body and
+// auth_info. This is used for signing, broadcasting and verification. The binary
+// `serialize(tx: TxRaw)` is stored in Tendermint and the hash `sha256(serialize(tx: TxRaw))`
+// becomes the "txhash", commonly used as the transaction ID.
+message TxRaw {
+ // A protobuf serialization of a TxBody that matches the representation in SignDoc.
+ bytes body = 1;
+ // A protobuf serialization of an AuthInfo that matches the representation in SignDoc.
+ bytes auth_info = 2;
+ // A list of signatures that matches the length and order of AuthInfo's signer_infos to
+ // allow connecting signature meta information like public key and signing mode by position.
+ repeated bytes signatures = 3;
+}
+
+message TxBody {
+ // A list of messages to be executed. The required signers of those messages define
+ // the number and order of elements in AuthInfo's signer_infos and Tx's signatures.
+ // Each required signer address is added to the list only the first time it occurs.
+ //
+ // By convention, the first required signer (usually from the first message) is referred
+ // to as the primary signer and pays the fee for the whole transaction.
+ repeated google.protobuf.Any messages = 1;
+ string memo = 2;
+ int64 timeout_height = 3;
+ repeated google.protobuf.Any extension_options = 1023;
+}
+
+message AuthInfo {
+ // This list defines the signing modes for the required signers. The number
+ // and order of elements must match the required signers from TxBody's messages.
+ // The first element is the primary signer and the one which pays the fee.
+ repeated SignerInfo signer_infos = 1;
+ // The fee can be calculated based on the cost of evaluating the body and doing signature verification of the signers. This can be estimated via simulation.
+ Fee fee = 2;
+}
+
+message SignerInfo {
+ // The public key is optional for accounts that already exist in state. If unset, the
+ // verifier can use the required signer address for this position and lookup the public key.
+ google.protobuf.Any public_key = 1;
+ // ModeInfo describes the signing mode of the signer and is a nested
+ // structure to support nested multisig pubkey's
+ ModeInfo mode_info = 2;
+ // sequence is the sequence of the account, which describes the
+ // number of committed transactions signed by a given address. It is used to prevent
+ // replay attacks.
+ uint64 sequence = 3;
+}
+
+message ModeInfo {
+ oneof sum {
+ Single single = 1;
+ Multi multi = 2;
+ }
+
+ // Single is the mode info for a single signer. It is structured as a message
+ // to allow for additional fields such as locale for SIGN_MODE_TEXTUAL in the future
+ message Single {
+ SignMode mode = 1;
+ }
+
+ // Multi is the mode info for a multisig public key
+ message Multi {
+ // bitarray specifies which keys within the multisig are signing
+ CompactBitArray bitarray = 1;
+ // mode_infos is the corresponding modes of the signers of the multisig
+ // which could include nested multisig public keys
+ repeated ModeInfo mode_infos = 2;
+ }
+}
+
+enum SignMode {
+ SIGN_MODE_UNSPECIFIED = 0;
+
+ SIGN_MODE_DIRECT = 1;
+
+ SIGN_MODE_TEXTUAL = 2;
+
+ SIGN_MODE_LEGACY_AMINO_JSON = 127;
+}
+```
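As the comment on `TxRaw` above notes, the transaction ID is the SHA-256 of the serialized `TxRaw` bytes. The helper below is a hedged sketch of that derivation; the input bytes in `main` are placeholders, as a real client would serialize an actual `TxRaw` message.

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"strings"
)

// txHash computes the "txhash" described above: sha256(serialize(tx: TxRaw)),
// conventionally rendered as upper-case hex by explorers and RPC endpoints.
func txHash(txRawBytes []byte) string {
	h := sha256.Sum256(txRawBytes)
	return strings.ToUpper(hex.EncodeToString(h[:]))
}

func main() {
	// Placeholder bytes standing in for a protobuf-serialized TxRaw.
	raw := []byte{0x0a, 0x02, 0x08, 0x01}
	fmt.Println(txHash(raw))
}
```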
+
+As will be discussed below, in order to include as much of the `Tx` as possible
+in the `SignDoc`, `SignerInfo` is separated from signatures so that only the
+raw signatures themselves live outside of what is signed over.
+
+Because we are aiming for a flexible, extensible cross-chain transaction
+format, new transaction processing options should be added to `TxBody` as soon as
+those use cases are discovered, even if they can't be implemented yet.
+
+Because there is coordination overhead in this, `TxBody` includes an
+`extension_options` field which can be used for any transaction processing
+options that are not already covered. App developers should, nevertheless,
+attempt to upstream important improvements to `Tx`.
+
+### Signing
+
+All of the signing modes below aim to provide the following guarantees:
+
+* **No Malleability**: `TxBody` and `AuthInfo` cannot change once the transaction
+ is signed
+* **Predictable Gas**: if I am signing a transaction where I am paying a fee,
+ the final gas is fully dependent on what I am signing
+
+These guarantees give the maximum amount of confidence to message signers that
+manipulation of `Tx`s by intermediaries can't result in any meaningful changes.
+
+#### `SIGN_MODE_DIRECT`
+
+The "direct" signing behavior is to sign the raw `TxBody` bytes as broadcast over
+the wire. This has the advantages of:
+
+* requiring the minimum additional client capabilities beyond a standard protocol
+ buffers implementation
+* leaving effectively zero holes for transaction malleability (i.e. there are no
+ subtle differences between the signing and encoding formats which could
+ potentially be exploited by an attacker)
+
+Signatures are structured using the `SignDoc` below which reuses the serialization of
+`TxBody` and `AuthInfo` and only adds the fields which are needed for signatures:
+
+```protobuf
+// types/types.proto
+message SignDoc {
+ // A protobuf serialization of a TxBody that matches the representation in TxRaw.
+ bytes body = 1;
+ // A protobuf serialization of an AuthInfo that matches the representation in TxRaw.
+ bytes auth_info = 2;
+ string chain_id = 3;
+ uint64 account_number = 4;
+}
+```
+
+In order to sign in the default mode, clients take the following steps:
+
+1. Serialize `TxBody` and `AuthInfo` using any valid protobuf implementation.
+2. Create a `SignDoc` and serialize it using [ADR 027](adr-027-deterministic-protobuf-serialization.md).
+3. Sign the encoded `SignDoc` bytes.
+4. Build a `TxRaw` and serialize it for broadcasting.
+
+Signature verification is based on comparing the raw `TxBody` and `AuthInfo`
+bytes encoded in `TxRaw` not based on any ["canonicalization"](https://github.com/regen-network/canonical-proto3)
+algorithm which creates added complexity for clients in addition to preventing
+some forms of upgradeability (to be addressed later in this document).
+
+Signature verifiers do:
+
+1. Deserialize a `TxRaw` and pull out `body` and `auth_info`.
+2. Create a list of required signer addresses from the messages.
+3. For each required signer:
+ * Pull account number and sequence from the state.
+ * Obtain the public key either from state or `AuthInfo`'s `signer_infos`.
+ * Create a `SignDoc` and serialize it using [ADR 027](adr-027-deterministic-protobuf-serialization.md).
+ * Verify the signature at the same list position against the serialized `SignDoc`.
+
+#### `SIGN_MODE_LEGACY_AMINO_JSON`
+
+In order to support legacy wallets and exchanges, Amino JSON will be temporarily
+supported for transaction signing. Once wallets and exchanges have had a
+chance to upgrade to protobuf based signing, this option will be disabled. In
+the meantime, it is foreseen that disabling the current Amino signing would cause
+too much breakage to be feasible. Note that this is mainly a requirement of the
+Cosmos Hub and other chains may choose to disable Amino signing immediately.
+
+Legacy clients will be able to sign a transaction using the current Amino
+JSON format and have it encoded to protobuf using the REST `/tx/encode`
+endpoint before broadcasting.
+
+#### `SIGN_MODE_TEXTUAL`
+
+As was discussed extensively in [\#6078](https://github.com/cosmos/cosmos-sdk/issues/6078),
+there is a desire for a human-readable signing encoding, especially for hardware
+wallets like the [Ledger](https://www.ledger.com) which display
+transaction contents to users before signing. JSON was an attempt at this but
+falls short of the ideal.
+
+`SIGN_MODE_TEXTUAL` is intended as a placeholder for a human-readable
+encoding which will replace Amino JSON. This new encoding should be even more
+focused on readability than JSON, possibly based on formatting strings like
+[MessageFormat](http://userguide.icu-project.org/formatparse/messages).
+
+In order to ensure that the new human-readable format does not suffer from
+transaction malleability issues, `SIGN_MODE_TEXTUAL`
+requires that the _human-readable bytes are concatenated with the raw `SignDoc`_
+to generate sign bytes.
+
+Multiple human-readable formats (maybe even localized messages) may be supported
+by `SIGN_MODE_TEXTUAL` when it is implemented.
+
+### Unknown Field Filtering
+
+Unknown fields in protobuf messages should generally be rejected by transaction
+processors because:
+
+* important data may be present in the unknown fields, that if ignored, will
+ cause unexpected behavior for clients
+* they present a malleability vulnerability where attackers can bloat tx size
+ by adding random uninterpreted data to unsigned content (i.e. the master `Tx`,
+ not `TxBody`)
+
+There are also scenarios where we may choose to safely ignore unknown fields
+(https://github.com/cosmos/cosmos-sdk/issues/6078#issuecomment-624400188) to
+provide graceful forwards compatibility with newer clients.
+
+We propose that field numbers with bit 11 set (for most use cases this is
+the range of 1024-2047) be considered non-critical fields that can safely be
+ignored if unknown.
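The bit-11 rule above can be expressed as a one-line check; `nonCritical` is a hypothetical helper for illustration, not SDK API.

```go
package main

import "fmt"

// nonCritical reports whether a protobuf field number has bit 11 set
// (value 1024), i.e. falls in a range such as 1024-2047 that the ADR
// proposes to treat as safely ignorable when unknown.
func nonCritical(fieldNum uint64) bool {
	return fieldNum&(1<<10) != 0
}

func main() {
	fmt.Println(nonCritical(1))    // false: critical
	fmt.Println(nonCritical(1024)) // true: non-critical
	fmt.Println(nonCritical(2047)) // true: non-critical
	fmt.Println(nonCritical(2048)) // false: bit 11 is not set
}
```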
+
+To handle this we will need an unknown field filter that:
+
+* always rejects unknown fields in unsigned content (i.e. top-level `Tx` and
+ unsigned parts of `AuthInfo` if present based on the signing mode)
+* rejects unknown fields in all messages (including nested `Any`s) other than
+ fields with bit 11 set
+
+This will likely need to be a custom protobuf parser pass that takes message bytes
+and `FileDescriptor`s and returns a boolean result.
+
+### Public Key Encoding
+
+Public keys in the Cosmos SDK implement the `cryptotypes.PubKey` interface.
+We propose to use `Any` for protobuf encoding as we are doing with other interfaces (for example, in `BaseAccount.PubKey` and `SignerInfo.PublicKey`).
+The following public keys are implemented: secp256k1, secp256r1, ed25519 and legacy-multisignature.
+
+Ex:
+
+```protobuf
+message PubKey {
+ bytes key = 1;
+}
+```
+
+`multisig.LegacyAminoPubKey` has an array of `Any`s as a member to support any
+protobuf public key type.
+
+Apps should only attempt to handle a registered set of public keys that they
+have tested. The provided signature verification ante handler decorators will
+enforce this.
+
+### CLI & REST
+
+Currently, the REST and CLI handlers encode and decode types and txs via Amino
+JSON encoding using a concrete Amino codec. Since some of the types dealt with
+in the client can be interfaces, similar to what we described in [ADR 019](adr-019-protobuf-state-encoding.md),
+the client logic will now need to take a codec interface that knows not only how
+to handle all the types, but also knows how to generate transactions, signatures,
+and messages.
+
+```go
+type AccountRetriever interface {
+ GetAccount(clientCtx Context, addr sdk.AccAddress) (client.Account, error)
+ GetAccountWithHeight(clientCtx Context, addr sdk.AccAddress) (client.Account, int64, error)
+ EnsureExists(clientCtx client.Context, addr sdk.AccAddress) error
+ GetAccountNumberSequence(clientCtx client.Context, addr sdk.AccAddress) (uint64, uint64, error)
+}
+
+type Generator interface {
+ NewTx() TxBuilder
+ NewFee() ClientFee
+ NewSignature() ClientSignature
+ MarshalTx(tx types.Tx) ([]byte, error)
+}
+
+type TxBuilder interface {
+ GetTx() sdk.Tx
+
+ SetMsgs(...sdk.Msg) error
+ GetSignatures() []sdk.Signature
+ SetSignatures(...sdk.Signature)
+ GetFee() sdk.Fee
+ SetFee(sdk.Fee)
+ GetMemo() string
+ SetMemo(string)
+}
+```
+
+We then update `Context` to have new fields: `Codec`, `TxGenerator`,
+and `AccountRetriever`, and we update `AppModuleBasic.GetTxCmd` to take
+a `Context` which should have all of these fields pre-populated.
+
+Each client method should then use one of the `Init` methods to re-initialize
+the pre-populated `Context`. `tx.GenerateOrBroadcastTx` can be used to
+generate or broadcast a transaction. For example:
+
+```go
+import "github.com/spf13/cobra"
+import "github.com/cosmos/cosmos-sdk/client"
+import "github.com/cosmos/cosmos-sdk/client/tx"
+
+func NewCmdDoSomething(clientCtx client.Context) *cobra.Command {
+ return &cobra.Command{
+		RunE: func(cmd *cobra.Command, args []string) error {
+			clientCtx := clientCtx.InitWithInput(cmd.InOrStdin())
+			msg := NewSomeMsg{...}
+			return tx.GenerateOrBroadcastTx(clientCtx, msg)
+		},
+ }
+}
+```
+
+## Future Improvements
+
+### `SIGN_MODE_TEXTUAL` specification
+
+A concrete specification and implementation of `SIGN_MODE_TEXTUAL` is intended
+as a near-term future improvement so that the ledger app and other wallets
+can gracefully transition away from Amino JSON.
+
+### `SIGN_MODE_DIRECT_AUX`
+
+(_Documented as option (3) in https://github.com/cosmos/cosmos-sdk/issues/6078#issuecomment-628026933_)
+
+We could add a mode `SIGN_MODE_DIRECT_AUX`
+to support scenarios where multiple signatures
+are being gathered into a single transaction but the message composer does not
+yet know which signatures will be included in the final transaction. For instance,
+I may have a 3/5 multisig wallet and want to send a `TxBody` to all 5
+signers to see who signs first. As soon as I have 3 signatures then I will go
+ahead and build the full transaction.
+
+With `SIGN_MODE_DIRECT`, each signer needs
+to sign the full `AuthInfo` which includes the full list of all signers and
+their signing modes, making the above scenario very hard.
+
+`SIGN_MODE_DIRECT_AUX` would allow "auxiliary" signers to create their signature
+using only `TxBody` and their own `PublicKey`. This allows the full list of
+signers in `AuthInfo` to be delayed until signatures have been collected.
+
+An "auxiliary" signer is any signer besides the primary signer who is paying
+the fee. For the primary signer, the full `AuthInfo` is actually needed to calculate gas and fees
+because that is dependent on how many signers and which key types and signing
+modes they are using. Auxiliary signers, however, do not need to worry about
+fees or gas and thus can just sign `TxBody`.
+
+To generate a signature in `SIGN_MODE_DIRECT_AUX` these steps would be followed:
+
+1. Encode `SignDocAux` (with the same requirement that fields must be serialized
+ in order):
+
+ ```protobuf
+ // types/types.proto
+ message SignDocAux {
+ bytes body_bytes = 1;
+ // PublicKey is included in SignDocAux :
+ // 1. as a special case for multisig public keys. For multisig public keys,
+ // the signer should use the top-level multisig public key they are signing
+ // against, not their own public key. This is to prevent against a form
+ // of malleability where a signature could be taken out of context of the
+ // multisig key that was intended to be signed for
+ // 2. to guard against scenario where configuration information is encoded
+ // in public keys (it has been proposed) such that two keys can generate
+ // the same signature but have different security properties
+ //
+ // By including it here, the composer of AuthInfo cannot reference a
+ // public key variant the signer did not intend to use
+ PublicKey public_key = 2;
+ string chain_id = 3;
+ uint64 account_number = 4;
+ }
+ ```
+
+2. Sign the encoded `SignDocAux` bytes
+3. Send their signature and `SignerInfo` to the primary signer, who will then
+ sign and broadcast the final transaction (with `SIGN_MODE_DIRECT` and `AuthInfo`
+ added) once enough signatures have been collected
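+
+The collection flow above can be sketched in Go. Everything here is illustrative:
+the `auxSignature` type and the threshold logic are placeholders standing in for
+the real protobuf types, not part of the SDK.
+
+```go
+package main
+
+import "fmt"
+
+// auxSignature is a placeholder for an auxiliary signer's
+// (SignerInfo, signature) pair.
+type auxSignature struct {
+	signer string
+	sig    []byte
+}
+
+// collectThreshold gathers incoming aux signatures until `threshold` of them
+// have arrived (the 3-of-5 scenario above); remaining signers are ignored.
+func collectThreshold(threshold int, incoming []auxSignature) ([]auxSignature, bool) {
+	var collected []auxSignature
+	for _, s := range incoming {
+		collected = append(collected, s)
+		if len(collected) == threshold {
+			// Enough signatures: the primary signer can now build the full
+			// AuthInfo and sign with SIGN_MODE_DIRECT.
+			return collected, true
+		}
+	}
+	return collected, false
+}
+
+func main() {
+	sigs := []auxSignature{
+		{"signer1", []byte{0x01}},
+		{"signer2", []byte{0x02}},
+		{"signer3", []byte{0x03}},
+		{"signer4", []byte{0x04}},
+	}
+	got, ok := collectThreshold(3, sigs)
+	fmt.Println(len(got), ok) // 3 true
+}
+```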
+
+### `SIGN_MODE_DIRECT_RELAXED`
+
+(_Documented as option (1)(a) in https://github.com/cosmos/cosmos-sdk/issues/6078#issuecomment-628026933_)
+
+This is a variation of `SIGN_MODE_DIRECT` where multiple signers wouldn't need to
+coordinate public keys and signing modes in advance. It would involve an alternate
+`SignDoc` similar to `SignDocAux` above, but including the fee. This could be added in the future
+if client developers found collecting public keys and modes in advance
+too burdensome.
+
+## Consequences
+
+### Positive
+
+* Significant performance gains.
+* Supports backward and forward type compatibility.
+* Better support for cross-language clients.
+* Multiple signing modes allow for greater protocol evolution.
+
+### Negative
+
+* `google.protobuf.Any` type URLs increase transaction size although the effect
+ may be negligible or compression may be able to mitigate it.
+
+### Neutral
+
+## References
diff --git a/copy-of-sdk-versioned_docs/version-0.47/build/architecture/adr-021-protobuf-query-encoding.md b/copy-of-sdk-versioned_docs/version-0.47/build/architecture/adr-021-protobuf-query-encoding.md
new file mode 100644
index 00000000..76fd40fe
--- /dev/null
+++ b/copy-of-sdk-versioned_docs/version-0.47/build/architecture/adr-021-protobuf-query-encoding.md
@@ -0,0 +1,256 @@
+# ADR 021: Protocol Buffer Query Encoding
+
+## Changelog
+
+* 2020 March 27: Initial Draft
+
+## Status
+
+Accepted
+
+## Context
+
+This ADR is a continuation of the motivation, design, and context established in
+[ADR 019](adr-019-protobuf-state-encoding.md) and
+[ADR 020](adr-020-protobuf-transaction-encoding.md), namely, we aim to design the
+Protocol Buffer migration path for the client-side of the Cosmos SDK.
+
+This ADR continues from [ADR 020](adr-020-protobuf-transaction-encoding.md)
+to specify the encoding of queries.
+
+## Decision
+
+### Custom Query Definition
+
+Modules define custom queries through a protocol buffers `service` definition.
+These `service` definitions are generally associated with and used by the
+GRPC protocol. However, the protocol buffers specification indicates that
+they can be used more generically by any request/response protocol that uses
+protocol buffer encoding. Thus, we can use `service` definitions for specifying
+custom ABCI queries and even reuse a substantial amount of the GRPC infrastructure.
+
+Each module with custom queries should define a service canonically named `Query`:
+
+```protobuf
+// x/bank/types/types.proto
+
+service Query {
+ rpc QueryBalance(QueryBalanceParams) returns (cosmos_sdk.v1.Coin) { }
+ rpc QueryAllBalances(QueryAllBalancesParams) returns (QueryAllBalancesResponse) { }
+}
+```
+
+#### Handling of Interface Types
+
+Modules that use interface types and need true polymorphism generally force a
+`oneof` up to the app-level that provides the set of concrete implementations of
+that interface that the app supports. While apps are welcome to do the same for
+queries and implement an app-level query service, it is recommended that modules
+provide query methods that expose these interfaces via `google.protobuf.Any`.
+There is a concern on the transaction level that the overhead of `Any` is too
+high to justify its usage. However for queries this is not a concern, and
+providing generic module-level queries that use `Any` does not preclude apps
+from also providing app-level queries that use the app-level `oneof`s.
+
+A hypothetical example for the `gov` module would look something like:
+
+```protobuf
+// x/gov/types/types.proto
+
+import "google/protobuf/any.proto";
+
+service Query {
+ rpc GetProposal(GetProposalParams) returns (AnyProposal) { }
+}
+
+message AnyProposal {
+ ProposalBase base = 1;
+ google.protobuf.Any content = 2;
+}
+```
+
+### Custom Query Implementation
+
+In order to implement the query service, we can reuse the existing [gogo protobuf](https://github.com/cosmos/gogoproto)
+grpc plugin, which for a service named `Query` generates an interface named
+`QueryServer` as below:
+
+```go
+type QueryServer interface {
+ QueryBalance(context.Context, *QueryBalanceParams) (*types.Coin, error)
+ QueryAllBalances(context.Context, *QueryAllBalancesParams) (*QueryAllBalancesResponse, error)
+}
+```
+
+The custom queries for our module are implemented by implementing this interface.
+
+The first parameter in this generated interface is a generic `context.Context`,
+whereas querier methods generally need an instance of `sdk.Context` to read
+from the store. Since arbitrary values can be attached to `context.Context`
+using the `WithValue` and `Value` methods, the Cosmos SDK should provide a function
+`sdk.UnwrapSDKContext` to retrieve the `sdk.Context` from the provided
+`context.Context`.
+
+An example implementation of `QueryBalance` for the bank module as above would
+look something like:
+
+```go
+type Querier struct {
+ Keeper
+}
+
+func (q Querier) QueryBalance(ctx context.Context, params *types.QueryBalanceParams) (*sdk.Coin, error) {
+ balance := q.GetBalance(sdk.UnwrapSDKContext(ctx), params.Address, params.Denom)
+ return &balance, nil
+}
+```
+
+### Custom Query Registration and Routing
+
+Query server implementations as above would be registered with `AppModule`s using
+a new method `RegisterQueryService(grpc.Server)` which could be implemented simply
+as below:
+
+```go
+// x/bank/module.go
+func (am AppModule) RegisterQueryService(server grpc.Server) {
+ types.RegisterQueryServer(server, keeper.Querier{am.keeper})
+}
+```
+
+Under the hood, a new method `RegisterService(sd *grpc.ServiceDesc, handler interface{})`
+will be added to the existing `baseapp.QueryRouter` to add the queries to the custom
+query routing table (with the routing method being described below).
+The signature for this method matches the existing
+`RegisterService` method on the GRPC `Server` type, where `handler` is the custom
+query server implementation described above.
+
+GRPC-like requests are routed by the service name (ex. `cosmos_sdk.x.bank.v1.Query`)
+and method name (ex. `QueryBalance`) combined with `/`s to form a full
+method name (ex. `/cosmos_sdk.x.bank.v1.Query/QueryBalance`). This gets translated
+into an ABCI query as `custom/cosmos_sdk.x.bank.v1.Query/QueryBalance`. Service handlers
+registered with `QueryRouter.RegisterService` will be routed this way.
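+
+Concretely, the route construction described above is simple string concatenation.
+This is a sketch of the mapping, not actual SDK code:
+
+```go
+package main
+
+import "fmt"
+
+func main() {
+	serviceName := "cosmos_sdk.x.bank.v1.Query"
+	methodName := "QueryBalance"
+
+	// GRPC-like full method name: /<service>/<method>
+	fullMethod := fmt.Sprintf("/%s/%s", serviceName, methodName)
+
+	// The translated ABCI query path routed by QueryRouter.
+	abciPath := "custom" + fullMethod
+
+	fmt.Println(fullMethod) // /cosmos_sdk.x.bank.v1.Query/QueryBalance
+	fmt.Println(abciPath)   // custom/cosmos_sdk.x.bank.v1.Query/QueryBalance
+}
+```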
+
+Beyond the method name, GRPC requests carry a protobuf encoded payload, which maps naturally
+to `RequestQuery.Data`, and receive a protobuf encoded response or error. Thus
+there is a quite natural mapping of GRPC-like rpc methods to the existing
+`sdk.Query` and `QueryRouter` infrastructure.
+
+This basic specification allows us to reuse protocol buffer `service` definitions
+for ABCI custom queries, substantially reducing the need for manual decoding and
+encoding in query methods.
+
+### GRPC Protocol Support
+
+In addition to providing an ABCI query pathway, we can easily provide a GRPC
+proxy server that routes requests in the GRPC protocol to ABCI query requests
+under the hood. In this way, clients could use their host languages' existing
+GRPC implementations to make direct queries against Cosmos SDK apps using
+these `service` definitions. In order for this server to work, the `QueryRouter`
+on `BaseApp` will need to expose the service handlers registered with
+`QueryRouter.RegisterService` to the proxy server implementation. Nodes could
+launch the proxy server on a separate port in the same process as the ABCI app
+with a command-line flag.
+
+### REST Queries and Swagger Generation
+
+[grpc-gateway](https://github.com/grpc-ecosystem/grpc-gateway) is a project that
+translates REST calls into GRPC calls using special annotations on service
+methods. Modules that want to expose REST queries should add `google.api.http`
+annotations to their `rpc` methods as in this example below.
+
+```protobuf
+// x/bank/types/types.proto
+
+service Query {
+ rpc QueryBalance(QueryBalanceParams) returns (cosmos_sdk.v1.Coin) {
+ option (google.api.http) = {
+ get: "/x/bank/v1/balance/{address}/{denom}"
+ };
+ }
+ rpc QueryAllBalances(QueryAllBalancesParams) returns (QueryAllBalancesResponse) {
+ option (google.api.http) = {
+ get: "/x/bank/v1/balances/{address}"
+ };
+ }
+}
+```
+
+grpc-gateway will work directly against the GRPC proxy described above, which will
+translate requests to ABCI queries under the hood. grpc-gateway can also
+generate Swagger definitions automatically.
+
+In the current implementation of REST queries, each module needs to implement
+REST queries manually in addition to ABCI querier methods. With the grpc-gateway
+approach, there will be no need to write separate REST query handlers, just the
+query servers described above, since grpc-gateway handles the translation of protobuf
+to REST as well as Swagger definitions.
+
+The Cosmos SDK should provide CLI commands for apps to start GRPC gateway either in
+a separate process or the same process as the ABCI app, as well as provide a
+command for generating grpc-gateway proxy `.proto` files and the `swagger.json`
+file.
+
+### Client Usage
+
+The gogo protobuf grpc plugin generates client interfaces in addition to server
+interfaces. For the `Query` service defined above we would get a `QueryClient`
+interface like:
+
+```go
+type QueryClient interface {
+ QueryBalance(ctx context.Context, in *QueryBalanceParams, opts ...grpc.CallOption) (*types.Coin, error)
+ QueryAllBalances(ctx context.Context, in *QueryAllBalancesParams, opts ...grpc.CallOption) (*QueryAllBalancesResponse, error)
+}
+```
+
+Via a small patch to gogo protobuf ([gogo/protobuf#675](https://github.com/gogo/protobuf/pull/675))
+we have tweaked the grpc codegen to use an interface rather than concrete type
+for the generated client struct. This allows us to also reuse the GRPC infrastructure
+for ABCI client queries.
+
+`Context` will receive a new method `QueryConn` that returns a `ClientConn`
+that routes calls to ABCI queries.
+
+Clients (such as CLI methods) will then be able to call query methods like this:
+
+```go
+clientCtx := client.NewContext()
+queryClient := types.NewQueryClient(clientCtx.QueryConn())
+params := &types.QueryBalanceParams{addr, denom}
+result, err := queryClient.QueryBalance(gocontext.Background(), params)
+```
+
+### Testing
+
+Tests would be able to create a query client directly from keeper and `sdk.Context`
+references using a `QueryServerTestHelper` as below:
+
+```go
+queryHelper := baseapp.NewQueryServerTestHelper(ctx)
+types.RegisterQueryServer(queryHelper, keeper.Querier{app.BankKeeper})
+queryClient := types.NewQueryClient(queryHelper)
+```
+
+## Future Improvements
+
+## Consequences
+
+### Positive
+
+* greatly simplified querier implementation (no manual encoding/decoding)
+* easy query client generation (can use existing grpc and swagger tools)
+* no need for REST query implementations
+* type safe query methods (generated via grpc plugin)
+* going forward, there will be less breakage of query methods because of the
+backwards compatibility guarantees provided by buf
+
+### Negative
+
+* all clients using the existing ABCI/REST queries will need to be refactored
+for both the new GRPC/REST query paths as well as protobuf/proto-json encoded
+data, but this is more or less unavoidable in the protobuf refactoring
+
+### Neutral
+
+## References
diff --git a/copy-of-sdk-versioned_docs/version-0.47/build/architecture/adr-022-custom-panic-handling.md b/copy-of-sdk-versioned_docs/version-0.47/build/architecture/adr-022-custom-panic-handling.md
new file mode 100644
index 00000000..2cdce59f
--- /dev/null
+++ b/copy-of-sdk-versioned_docs/version-0.47/build/architecture/adr-022-custom-panic-handling.md
@@ -0,0 +1,218 @@
+# ADR 022: Custom BaseApp panic handling
+
+## Changelog
+
+* 2020 Apr 24: Initial Draft
+* 2021 Sep 14: Superseded by ADR-045
+
+## Status
+
+SUPERSEDED by ADR-045
+
+## Context
+
+The current implementation of BaseApp does not allow developers to write custom error handlers during panic recovery in the
+[runTx()](https://github.com/cosmos/cosmos-sdk/blob/bad4ca75f58b182f600396ca350ad844c18fc80b/baseapp/baseapp.go#L539)
+method. We think this method can be made more flexible, giving Cosmos SDK users more options for customization without
+the need to rewrite the whole BaseApp. There is also one special case, `sdk.ErrorOutOfGas` error handling, which
+could be handled in the same "standard" way (middleware) as the others.
+
+We propose a middleware solution, which could help developers implement the following cases:
+
+* add external logging (let's say sending reports to external services like [Sentry](https://sentry.io));
+* call panic for specific error cases;
+
+It will also turn the `OutOfGas` and `default` cases into middlewares.
+The `default` middleware wraps the recovery object into an error and logs it ([example middleware implementation](#recovery-middleware)).
+
+Our project has a sidecar service running alongside the blockchain node (a smart-contract virtual machine). It is
+essential that node <-> sidecar connectivity stays stable for TX processing. So when the communication breaks we need
+to crash the node and reboot it once the problem is solved. That behaviour makes the node's state machine execution
+deterministic. As all keeper panics are caught by runTx's `defer()` handler, we have to adjust the BaseApp code
+in order to customize it.
+
+## Decision
+
+### Design
+
+#### Overview
+
+Instead of hardcoding custom error handling into BaseApp we suggest using a set of middlewares which can be customized
+externally and will allow developers to use as many custom error handlers as they want. An implementation with tests
+can be found [here](https://github.com/cosmos/cosmos-sdk/pull/6053).
+
+#### Implementation details
+
+##### Recovery handler
+
+A new `RecoveryHandler` type is added. The `recoveryObj` input argument is the object returned by the standard Go
+`recover()` function from the `builtin` package.
+
+```go
+type RecoveryHandler func(recoveryObj interface{}) error
+```
+
+The handler should use type assertion (or other methods) to determine whether it should handle the object.
+`nil` should be returned if the input object can't be handled by that `RecoveryHandler` (it is not the handler's target type).
+A non-`nil` error should be returned if the input object was handled and middleware chain execution should be stopped.
+
+An example:
+
+```go
+func exampleErrHandler(recoveryObj interface{}) error {
+ err, ok := recoveryObj.(error)
+ if !ok { return nil }
+
+ if someSpecificError.Is(err) {
+ panic(customPanicMsg)
+ } else {
+ return nil
+ }
+}
+```
+
+This example breaks the application execution, but a handler might instead enrich the error's context, as the `OutOfGas` handler does.
+
+##### Recovery middleware
+
+We also add a middleware type (decorator). That function type wraps a `RecoveryHandler` and returns the next middleware in the
+execution chain along with the handler's `error`. The type is used to separate actual `recover()` object handling from middleware
+chain processing.
+
+```go
+type recoveryMiddleware func(recoveryObj interface{}) (recoveryMiddleware, error)
+
+func newRecoveryMiddleware(handler RecoveryHandler, next recoveryMiddleware) recoveryMiddleware {
+ return func(recoveryObj interface{}) (recoveryMiddleware, error) {
+ if err := handler(recoveryObj); err != nil {
+ return nil, err
+ }
+ return next, nil
+ }
+}
+```
+
+The function receives a `recoveryObj` object and returns:
+
+* (next `recoveryMiddleware`, `nil`) if the object wasn't handled (not a target type) by the `RecoveryHandler`;
+* (`nil`, non-nil `error`) if the input object was handled and other middlewares in the chain should not be executed;
+* (`nil`, `nil`) in case of invalid behavior: panic recovery might not have been properly handled;
+this can be avoided by always using a `default` middleware as the rightmost in the chain (it always returns an `error`).
+
+`OutOfGas` middleware example:
+
+```go
+func newOutOfGasRecoveryMiddleware(gasWanted uint64, ctx sdk.Context, next recoveryMiddleware) recoveryMiddleware {
+ handler := func(recoveryObj interface{}) error {
+ err, ok := recoveryObj.(sdk.ErrorOutOfGas)
+ if !ok { return nil }
+
+ return errorsmod.Wrap(
+ sdkerrors.ErrOutOfGas, fmt.Sprintf(
+ "out of gas in location: %v; gasWanted: %d, gasUsed: %d", err.Descriptor, gasWanted, ctx.GasMeter().GasConsumed(),
+ ),
+ )
+ }
+
+ return newRecoveryMiddleware(handler, next)
+}
+```
+
+`Default` middleware example:
+
+```go
+func newDefaultRecoveryMiddleware() recoveryMiddleware {
+ handler := func(recoveryObj interface{}) error {
+ return errorsmod.Wrap(
+ sdkerrors.ErrPanic, fmt.Sprintf("recovered: %v\nstack:\n%v", recoveryObj, string(debug.Stack())),
+ )
+ }
+
+ return newRecoveryMiddleware(handler, nil)
+}
+```
+
+##### Recovery processing
+
+Basic chain of middlewares processing would look like:
+
+```go
+func processRecovery(recoveryObj interface{}, middleware recoveryMiddleware) error {
+ if middleware == nil { return nil }
+
+ next, err := middleware(recoveryObj)
+ if err != nil { return err }
+ if next == nil { return nil }
+
+ return processRecovery(recoveryObj, next)
+}
+```
+
+That way we can create a middleware chain which is executed from left to right; the rightmost middleware is a
+`default` handler which must return an `error`.
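+
+The left-to-right execution order can be demonstrated with a self-contained
+sketch that reuses the types above, substituting `fmt.Errorf` for the SDK's
+error wrapping:
+
+```go
+package main
+
+import "fmt"
+
+type RecoveryHandler func(recoveryObj interface{}) error
+
+type recoveryMiddleware func(recoveryObj interface{}) (recoveryMiddleware, error)
+
+func newRecoveryMiddleware(handler RecoveryHandler, next recoveryMiddleware) recoveryMiddleware {
+	return func(recoveryObj interface{}) (recoveryMiddleware, error) {
+		if err := handler(recoveryObj); err != nil {
+			return nil, err
+		}
+		return next, nil
+	}
+}
+
+func processRecovery(recoveryObj interface{}, middleware recoveryMiddleware) error {
+	if middleware == nil {
+		return nil
+	}
+	next, err := middleware(recoveryObj)
+	if err != nil {
+		return err
+	}
+	if next == nil {
+		return nil
+	}
+	return processRecovery(recoveryObj, next)
+}
+
+func main() {
+	// Rightmost `default` middleware: handles any recovery object.
+	defaultMW := newRecoveryMiddleware(func(obj interface{}) error {
+		return fmt.Errorf("recovered: %v", obj)
+	}, nil)
+
+	// Custom middleware prepended to the chain: only handles string panics.
+	chain := newRecoveryMiddleware(func(obj interface{}) error {
+		if s, ok := obj.(string); ok {
+			return fmt.Errorf("string panic: %s", s)
+		}
+		return nil // not our target type; fall through to the next middleware
+	}, defaultMW)
+
+	fmt.Println(processRecovery("boom", chain)) // string panic: boom
+	fmt.Println(processRecovery(42, chain))     // recovered: 42
+}
+```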
+
+##### BaseApp changes
+
+The `default` middleware chain must exist in a `BaseApp` object. `BaseApp` modifications:
+
+```go
+type BaseApp struct {
+ // ...
+ runTxRecoveryMiddleware recoveryMiddleware
+}
+
+func NewBaseApp(...) {
+ // ...
+ app.runTxRecoveryMiddleware = newDefaultRecoveryMiddleware()
+}
+
+func (app *BaseApp) runTx(...) {
+ // ...
+ defer func() {
+ if r := recover(); r != nil {
+ recoveryMW := newOutOfGasRecoveryMiddleware(gasWanted, ctx, app.runTxRecoveryMiddleware)
+ err, result = processRecovery(r, recoveryMW), nil
+ }
+
+ gInfo = sdk.GasInfo{GasWanted: gasWanted, GasUsed: ctx.GasMeter().GasConsumed()}
+ }()
+ // ...
+}
+```
+
+Developers can add their custom `RecoveryHandler`s by providing `AddRunTxRecoveryHandler` as a BaseApp option parameter to the `NewBaseApp` constructor:
+
+```go
+func (app *BaseApp) AddRunTxRecoveryHandler(handlers ...RecoveryHandler) {
+ for _, h := range handlers {
+ app.runTxRecoveryMiddleware = newRecoveryMiddleware(h, app.runTxRecoveryMiddleware)
+ }
+}
+```
+
+This method would prepend handlers to an existing chain.
+
+## Consequences
+
+### Positive
+
+* Developers of Cosmos SDK based projects can add custom panic handlers to:
+ * add error context for custom panic sources (panics inside custom keepers);
+ * emit `panic()`: pass the recovery object through to the Tendermint core;
+ * other necessary handling;
+* Developers can use the standard Cosmos SDK `BaseApp` implementation, rather than rewriting it in their projects;
+* The proposed solution doesn't break the current "standard" `runTx()` flow;
+
+### Negative
+
+* Introduces changes to the execution model design.
+
+### Neutral
+
+* `OutOfGas` error handler becomes one of the middlewares;
+* Default panic handler becomes one of the middlewares;
+
+## References
+
+* [PR-6053 with proposed solution](https://github.com/cosmos/cosmos-sdk/pull/6053)
+* [Similar solution. ADR-010 Modular AnteHandler](https://github.com/cosmos/cosmos-sdk/blob/main/docs/architecture/adr-010-modular-antehandler.md)
diff --git a/copy-of-sdk-versioned_docs/version-0.47/build/architecture/adr-023-protobuf-naming.md b/copy-of-sdk-versioned_docs/version-0.47/build/architecture/adr-023-protobuf-naming.md
new file mode 100644
index 00000000..4360befd
--- /dev/null
+++ b/copy-of-sdk-versioned_docs/version-0.47/build/architecture/adr-023-protobuf-naming.md
@@ -0,0 +1,263 @@
+# ADR 023: Protocol Buffer Naming and Versioning Conventions
+
+## Changelog
+
+* 2020 April 27: Initial Draft
+* 2020 August 5: Update guidelines
+
+## Status
+
+Accepted
+
+## Context
+
+Protocol Buffers provide a basic [style guide](https://developers.google.com/protocol-buffers/docs/style)
+and [Buf](https://buf.build/docs/style-guide) builds upon that. To the
+extent possible, we want to follow industry accepted guidelines and wisdom for
+the effective usage of protobuf, deviating from those only when there is clear
+rationale for our use case.
+
+### Adoption of `Any`
+
+The adoption of `google.protobuf.Any` as the recommended approach for encoding
+interface types (as opposed to `oneof`) makes package naming a central part
+of the encoding as fully-qualified message names now appear in encoded
+messages.
+
+### Current Directory Organization
+
+Thus far we have mostly followed [Buf's](https://buf.build) [DEFAULT](https://buf.build/docs/lint-checkers#default)
+recommendations, with the minor deviation of disabling [`PACKAGE_DIRECTORY_MATCH`](https://buf.build/docs/lint-checkers#file_layout)
+which although being convenient for developing code comes with the warning
+from Buf that:
+
+> you will have a very bad time with many Protobuf plugins across various languages if you do not do this
+
+### Adoption of gRPC Queries
+
+In [ADR 021](adr-021-protobuf-query-encoding.md), gRPC was adopted for Protobuf
+native queries. The full gRPC service path thus becomes a key part of the ABCI query
+path. In the future, gRPC queries may be allowed from within persistent scripts
+by technologies such as CosmWasm and these query routes would be stored within
+script binaries.
+
+## Decision
+
+The goal of this ADR is to provide thoughtful naming conventions that:
+
+* encourage a good user experience for when users interact directly with
+.proto files and fully-qualified protobuf names
+* balance conciseness against the possibility of either over-optimizing (making
+names too short and cryptic) or under-optimizing (just accepting bloated names
+with lots of redundant information)
+
+These guidelines are meant to act as a style guide for both the Cosmos SDK and
+third-party modules.
+
+As a starting point, we should adopt all of [Buf's](https://buf.build) [DEFAULT](https://buf.build/docs/lint-checkers#default)
+checkers, including [`PACKAGE_DIRECTORY_MATCH`](https://buf.build/docs/lint-checkers#file_layout),
+except:
+
+* [PACKAGE_VERSION_SUFFIX](https://buf.build/docs/lint-checkers#package_version_suffix)
+* [SERVICE_SUFFIX](https://buf.build/docs/lint-checkers#service_suffix)
+
+Further guidelines are described below.
+
+### Principles
+
+#### Concise and Descriptive Names
+
+Names should be descriptive enough to convey their meaning and distinguish
+them from other names.
+
+Given that we are using fully-qualified names within
+`google.protobuf.Any` as well as within gRPC query routes, we should aim to
+keep names concise, without going overboard. The general rule of thumb should
+be: if a shorter name conveys more or less the same information, pick the shorter
+name.
+
+For instance, `cosmos.bank.MsgSend` (19 bytes) conveys roughly the same information
+as `cosmos_sdk.x.bank.v1.MsgSend` (28 bytes) but is more concise.
+
+Such conciseness makes names more pleasant to work with and reduces the space
+they take up within transactions and on the wire.
+
+We should also resist the temptation to over-optimize, by making names
+cryptically short with abbreviations. For instance, we shouldn't try to
+reduce `cosmos.bank.MsgSend` to `csm.bk.MSnd` just to save a few bytes.
+
+The goal is to make names **_concise but not cryptic_**.
+
+#### Names are for Clients First
+
+Package and type names should be chosen for the benefit of users, not
+necessarily because of legacy concerns related to the go code-base.
+
+#### Plan for Longevity
+
+In the interests of long-term support, we should plan on the names we do
+choose to be in usage for a long time, so now is the opportunity to make
+the best choices for the future.
+
+### Versioning
+
+#### Guidelines on Stable Package Versions
+
+In general, schema evolution is the way to update protobuf schemas. That means that new fields,
+messages, and RPC methods are _added_ to existing schemas and old fields, messages and RPC methods
+are maintained as long as possible.
+
+Breaking things is often unacceptable in a blockchain scenario. For instance, immutable smart contracts
+may depend on certain data schemas on the host chain. If the host chain breaks those schemas, the smart
+contract may be irreparably broken. Even when things can be fixed (for instance in client software),
+this often comes at a high cost.
+
+Instead, we should make every effort to evolve schemas rather than break them.
+[Buf](https://buf.build) breaking change detection should be used on all stable (non-alpha or beta) packages
+to prevent such breakage.
+
+With that in mind, different stable versions (i.e. `v1` or `v2`) of a package should more or less be considered
+different packages, and creating one should be a last-resort approach for upgrading protobuf schemas. Scenarios where creating
+a `v2` may make sense are:
+
+* we want to create a new module with similar functionality to an existing module and adding `v2` is the most natural
+way to do this. In that case, there are really just two different, but similar modules with different APIs.
+* we want to add a new revamped API for an existing module and it's just too cumbersome to add it to the existing package,
+so putting it in `v2` is cleaner for users. In this case, care should be taken not to deprecate support for
+`v1` if it is actively used in immutable smart contracts.
+
+#### Guidelines on unstable (alpha and beta) package versions
+
+The following guidelines are recommended for marking packages as alpha or beta:
+
+* marking something as `alpha` or `beta` should be a last resort and just putting something in the
+stable package (i.e. `v1` or `v2`) should be preferred
+* a package _should_ be marked as `alpha` _if and only if_ there are active discussions to remove
+or significantly alter the package in the near future
+* a package _should_ be marked as `beta` _if and only if_ there is an active discussion to
+significantly refactor/rework the functionality in the near future but not remove it
+* modules _can and should_ have types in both stable (i.e. `v1` or `v2`) and unstable (`alpha` or `beta`) packages.
+
+_`alpha` and `beta` should not be used to avoid responsibility for maintaining compatibility._
+Whenever code is released into the wild, especially on a blockchain, there is a high cost to changing things. In some
+cases, for instance with immutable smart contracts, a breaking change may be impossible to fix.
+
+When marking something as `alpha` or `beta`, maintainers should ask the questions:
+
+* what is the cost of asking others to change their code vs the benefit of us maintaining the optionality to change it?
+* what is the plan for moving this to `v1` and how will that affect users?
+
+`alpha` or `beta` should really be used to communicate "changes are planned".
+
+As a case study, gRPC reflection is in the package `grpc.reflection.v1alpha`. It hasn't been changed since
+2017 and it is now used in other widely used software like gRPCurl. Some folks probably use it in production services
+and so if they actually went and changed the package to `grpc.reflection.v1`, some software would break and
+they probably don't want to do that... So now the `v1alpha` package is more or less the de-facto `v1`. Let's not do that.
+
+The following are guidelines for working with non-stable packages:
+
+* [Buf's recommended version suffix](https://buf.build/docs/lint-checkers#package_version_suffix)
+(ex. `v1alpha1`) _should_ be used for non-stable packages
+* non-stable packages should generally be excluded from breaking change detection
+* immutable smart contract modules (i.e. CosmWasm) _should_ block smart contracts/persistent
+scripts from interacting with `alpha`/`beta` packages
+
+#### Omit v1 suffix
+
+Instead of using [Buf's recommended version suffix](https://buf.build/docs/lint-checkers#package_version_suffix),
+we can omit `v1` for packages that don't actually have a second version. This
+allows for more concise names for common use cases like `cosmos.bank.Send`.
+Packages that do have a second or third version can indicate that with `.v2`
+or `.v3`.
+
+### Package Naming
+
+#### Adopt a short, unique top-level package name
+
+Top-level packages should adopt a short name that is known to not collide with
+other names in common usage within the Cosmos ecosystem. In the near future, a
+registry should be created to reserve and index top-level package names used
+within the Cosmos ecosystem. Because the Cosmos SDK is intended to provide
+the top-level types for the Cosmos project, the top-level package name `cosmos`
+is recommended for usage within the Cosmos SDK instead of the longer `cosmos_sdk`.
+[ICS](https://github.com/cosmos/ics) specifications could consider a
+short top-level package like `ics23` based upon the standard number.
+
+#### Limit sub-package depth
+
+Sub-package depth should be increased with caution. Generally a single
+sub-package is needed for a module or a library. Even though `x` or `modules`
+is used in source code to denote modules, this is often unnecessary for .proto
+files as modules are the primary thing sub-packages are used for. Only items which
+are known to be used infrequently should have deep sub-package depths.
+
+For the Cosmos SDK, it is recommended that we simply write `cosmos.bank`,
+`cosmos.gov`, etc. rather than `cosmos.x.bank`. In practice, most non-module
+types can go straight in the `cosmos` package or we can introduce a
+`cosmos.base` package if needed. Note that this naming _will not_ change
+go package names, i.e. the `cosmos.bank` protobuf package will still live in
+`x/bank`.
+
+### Message Naming
+
+Message type names should be as concise as possible without losing clarity. `sdk.Msg`
+types which are used in transactions will retain the `Msg` prefix as that provides
+helpful context.
+
+### Service and RPC Naming
+
+[ADR 021](adr-021-protobuf-query-encoding.md) specifies that modules should
+implement a gRPC query service. We should consider the principle of conciseness
+for query service and RPC names as these may be called from persistent script
+modules such as CosmWasm. Also, users may use these query paths from tools like
+[gRPCurl](https://github.com/fullstorydev/grpcurl). As an example, we can shorten
+`/cosmos_sdk.x.bank.v1.QueryService/QueryBalance` to
+`/cosmos.bank.Query/Balance` without losing much useful information.
+
+RPC request and response types _should_ follow the `ServiceNameMethodNameRequest`/
+`ServiceNameMethodNameResponse` naming convention, i.e. for an RPC method named `Balance`
+on the `Query` service, the request and response types would be `QueryBalanceRequest`
+and `QueryBalanceResponse`. This will be more self-explanatory than `BalanceRequest`
+and `BalanceResponse`.
+
+#### Use just `Query` for the query service
+
+Instead of [Buf's default service suffix recommendation](https://github.com/cosmos/cosmos-sdk/pull/6033),
+we should simply use the shorter `Query` for query services.
+
+For other types of gRPC services, we should consider sticking with Buf's
+default recommendation.
+
+#### Omit `Get` and `Query` from query service RPC names
+
+`Get` and `Query` should be omitted from `Query` service names because they are
+redundant in the fully-qualified name. For instance, `/cosmos.bank.Query/QueryBalance`
+just says `Query` twice without any new information.
+
+## Future Improvements
+
+A registry of top-level package names should be created to coordinate naming
+across the ecosystem, prevent collisions, and also help developers discover
+useful schemas. A simple starting point would be a git repository with
+community-based governance.
+
+## Consequences
+
+### Positive
+
+* names will be more concise and easier to read and type
+* all transactions using `Any` will be shorter (`_sdk.x` and `.v1` will be removed)
+* `.proto` file imports will be more standard (without `"third_party/proto"` in
+the path)
+* code generation will be easier for clients because .proto files will be
+in a single `proto/` directory which can be copied rather than scattered
+throughout the Cosmos SDK
+
+### Negative
+
+### Neutral
+
+* `.proto` files will need to be reorganized and refactored
+* some modules may need to be marked as alpha or beta
+
+## References
diff --git a/copy-of-sdk-versioned_docs/version-0.47/build/architecture/adr-024-coin-metadata.md b/copy-of-sdk-versioned_docs/version-0.47/build/architecture/adr-024-coin-metadata.md
new file mode 100644
index 00000000..71bedac5
--- /dev/null
+++ b/copy-of-sdk-versioned_docs/version-0.47/build/architecture/adr-024-coin-metadata.md
@@ -0,0 +1,140 @@
+# ADR 024: Coin Metadata
+
+## Changelog
+
+* 05/19/2020: Initial draft
+
+## Status
+
+Proposed
+
+## Context
+
+Assets in the Cosmos SDK are represented via a `Coins` type that consists of an `amount` and a `denom`,
+where the `amount` can be any arbitrarily large or small value. In addition, the Cosmos SDK uses an
+account-based model where there are two types of primary accounts -- basic accounts and module accounts.
+All account types have a set of balances that are composed of `Coins`. The `x/bank` module keeps
+track of all balances for all accounts and also keeps track of the total supply of balances in an
+application.
+
+With regards to a balance `amount`, the Cosmos SDK assumes a static and fixed unit of denomination,
+regardless of the denomination itself. In other words, clients and apps built atop a Cosmos-SDK-based
+chain may choose to define and use arbitrary units of denomination to provide a richer UX, however, by
+the time a tx or operation reaches the Cosmos SDK state machine, the `amount` is treated as a single
+unit. For example, for the Cosmos Hub (Gaia), clients assume 1 ATOM = 10^6 uatom, and so all txs and
+operations in the Cosmos SDK work off of units of 10^6.
+
+This clearly provides a poor and limited UX especially as interoperability of networks increases and
+as a result the total amount of asset types increases. We propose to have `x/bank` additionally keep
+track of metadata per `denom` in order to help clients, wallet providers, and explorers improve their
+UX and remove the requirement for making any assumptions on the unit of denomination.
+
+## Decision
+
+The `x/bank` module will be updated to store and index metadata by `denom`, specifically the "base" or
+smallest unit -- the unit the Cosmos SDK state-machine works with.
+
+Metadata may also include a non-zero length list of denominations. Each entry contains the name of
+the denomination `denom`, the exponent to the base and a list of aliases. An entry is to be
+interpreted as `1 denom = 10^exponent base_denom` (e.g. `1 ETH = 10^18 wei` and `1 uatom = 10^0 uatom`).
+
+There are two denominations that are of high importance for clients: the `base`, which is the smallest
+possible unit and the `display`, which is the unit that is commonly referred to in human communication
+and on exchanges. The values in those fields link to an entry in the list of denominations.
+
+The list in `denom_units` and the `display` entry may be changed via governance.
+
+As a result, we can define the type as follows:
+
+```protobuf
+message DenomUnit {
+ string denom = 1;
+ uint32 exponent = 2;
+ repeated string aliases = 3;
+}
+
+message Metadata {
+ string description = 1;
+ repeated DenomUnit denom_units = 2;
+ string base = 3;
+ string display = 4;
+}
+```
+
+As an example, the ATOM's metadata can be defined as follows:
+
+```json
+{
+ "name": "atom",
+ "description": "The native staking token of the Cosmos Hub.",
+ "denom_units": [
+ {
+ "denom": "uatom",
+ "exponent": 0,
+ "aliases": [
+ "microatom"
+   ]
+ },
+ {
+ "denom": "matom",
+ "exponent": 3,
+ "aliases": [
+ "milliatom"
+ ]
+ },
+ {
+ "denom": "atom",
+   "exponent": 6
+ }
+ ],
+ "base": "uatom",
+ "display": "atom"
+}
+```
+
+Given the above metadata, a client may infer the following things:
+
+* 4.3atom = 4.3 * (10^6) = 4,300,000uatom
+* The string "atom" can be used as a display name in a list of tokens.
+* The balance 4300000 can be displayed as 4,300,000uatom or 4,300matom or 4.3atom.
+ The `display` denomination 4.3atom is a good default if the authors of the client don't make
+ an explicit decision to choose a different representation.
+
+A client should be able to query for metadata by denom both via the CLI and REST interfaces. In
+addition, we will add handlers to these interfaces to convert from any unit to another given unit,
+as the base framework for this already exists in the Cosmos SDK.
+
+Finally, we need to ensure metadata exists in the `GenesisState` of the `x/bank` module which is also
+indexed by the base `denom`.
+
+```go
+type GenesisState struct {
+ SendEnabled bool `json:"send_enabled" yaml:"send_enabled"`
+ Balances []Balance `json:"balances" yaml:"balances"`
+ Supply sdk.Coins `json:"supply" yaml:"supply"`
+ DenomMetadata []Metadata `json:"denom_metadata" yaml:"denom_metadata"`
+}
+```
+
+## Future Work
+
+In order for clients to avoid having to convert assets to the base denomination -- either manually or
+via an endpoint, we may consider supporting automatic conversion of a given unit input.
+
+## Consequences
+
+### Positive
+
+* Provides clients, wallet providers and block explorers with additional data on
+ asset denomination to improve UX and remove any need to make assumptions on
+ denomination units.
+
+### Negative
+
+* A small amount of required additional storage in the `x/bank` module. The amount
+ of additional storage should be minimal as the amount of total assets should not
+ be large.
+
+### Neutral
+
+## References
diff --git a/copy-of-sdk-versioned_docs/version-0.47/build/architecture/adr-027-deterministic-protobuf-serialization.md b/copy-of-sdk-versioned_docs/version-0.47/build/architecture/adr-027-deterministic-protobuf-serialization.md
new file mode 100644
index 00000000..e19a45a7
--- /dev/null
+++ b/copy-of-sdk-versioned_docs/version-0.47/build/architecture/adr-027-deterministic-protobuf-serialization.md
@@ -0,0 +1,314 @@
+# ADR 027: Deterministic Protobuf Serialization
+
+## Changelog
+
+* 2020-08-07: Initial Draft
+* 2020-09-01: Further clarify rules
+
+## Status
+
+Proposed
+
+## Abstract
+
+Fully deterministic structure serialization, which works across many languages and clients,
+is needed when signing messages. We need to be sure that whenever we serialize
+a data structure, no matter in which supported language, the raw bytes
+will stay the same.
+[Protobuf](https://developers.google.com/protocol-buffers/docs/proto3)
+serialization is not bijective (i.e. there exists a practically unlimited number of
+valid binary representations for a given protobuf document)<sup>1</sup>.
+
+This document describes a deterministic serialization scheme for
+a subset of protobuf documents, that covers this use case but can be reused in
+other cases as well.
+
+### Context
+
+For signature verification in Cosmos SDK, the signer and verifier need to agree on
+the same serialization of a `SignDoc` as defined in
+[ADR-020](adr-020-protobuf-transaction-encoding.md) without transmitting the
+serialization.
+
+Currently, for block signatures we are using a workaround: we create a new [TxRaw](https://github.com/cosmos/cosmos-sdk/blob/9e85e81e0e8140067dd893421290c191529c148c/proto/cosmos/tx/v1beta1/tx.proto#L30)
+instance (as defined in [adr-020-protobuf-transaction-encoding](https://github.com/cosmos/cosmos-sdk/blob/main/docs/architecture/adr-020-protobuf-transaction-encoding.md#transactions))
+by converting all [Tx](https://github.com/cosmos/cosmos-sdk/blob/9e85e81e0e8140067dd893421290c191529c148c/proto/cosmos/tx/v1beta1/tx.proto#L13)
+fields to bytes on the client side. This adds an additional manual
+step when sending and signing transactions.
+
+### Decision
+
+The following encoding scheme is to be used by other ADRs,
+and in particular for `SignDoc` serialization.
+
+## Specification
+
+### Scope
+
+This ADR defines a protobuf3 serializer. The output is a valid protobuf
+serialization, such that every protobuf parser can parse it.
+
+No maps are supported in version 1 due to the complexity of defining a
+deterministic serialization. This might change in the future. Implementations must
+reject documents containing maps as invalid input.
+
+### Background - Protobuf3 Encoding
+
+Most numeric types in protobuf3 are encoded as
+[varints](https://developers.google.com/protocol-buffers/docs/encoding#varints).
+Varints are at most 10 bytes, and since each varint byte has 7 bits of data,
+varints are a representation of `uint70` (70-bit unsigned integer). When
+encoding, numeric values are cast from their base type to `uint70`, and when
+decoding, the parsed `uint70` is cast to the appropriate numeric type.
+
+The maximum valid value for a varint that complies with protobuf3 is
+`FF FF FF FF FF FF FF FF FF 7F` (i.e. `2**70 - 1`). If the field type is
+`{,u,s}int64`, the highest 6 bits of the 70 are dropped during decoding,
+introducing 6 bits of malleability. If the field type is `{,u,s}int32`, the
+highest 38 bits of the 70 are dropped during decoding, introducing 38 bits of
+malleability.
+
+Among other sources of non-determinism, this ADR eliminates the possibility of
+encoding malleability.
+
+### Serialization rules
+
+The serialization is based on the
+[protobuf3 encoding](https://developers.google.com/protocol-buffers/docs/encoding)
+with the following additions:
+
+1. Fields must be serialized only once in ascending order
+2. Extra fields or any extra data must not be added
+3. [Default values](https://developers.google.com/protocol-buffers/docs/proto3#default)
+ must be omitted
+4. `repeated` fields of scalar numeric types must use
+ [packed encoding](https://developers.google.com/protocol-buffers/docs/encoding#packed)
+5. Varint encoding must not be longer than needed:
+ * No trailing zero bytes (in little endian, i.e. no leading zeroes in big
+ endian). Per rule 3 above, the default value of `0` must be omitted, so
+ this rule does not apply in such cases.
+ * The maximum value for a varint must be `FF FF FF FF FF FF FF FF FF 01`.
+ In other words, when decoded, the highest 6 bits of the 70-bit unsigned
+ integer must be `0`. (10-byte varints are 10 groups of 7 bits, i.e.
+ 70 bits, of which only the lowest 70-6=64 are useful.)
+ * The maximum value for 32-bit values in varint encoding must be `FF FF FF FF 0F`
+ with one exception (below). In other words, when decoded, the highest 38
+ bits of the 70-bit unsigned integer must be `0`.
+ * The one exception to the above is _negative_ `int32`, which must be
+    encoded using the full 10 bytes for sign extension<sup>2</sup>.
+ * The maximum value for Boolean values in varint encoding must be `01` (i.e.
+ it must be `0` or `1`). Per rule 3 above, the default value of `0` must
+ be omitted, so if a Boolean is included it must have a value of `1`.
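Rule 5 can be enforced mechanically when validating input. The following is a minimal sketch (the `checkCanonicalVarint` helper is ours, not part of any protobuf library) that accepts only shortest-form varints whose decoded value fits in 64 bits; omission of zero values (rule 3) is handled separately at the field level:

```go
package main

import (
	"errors"
	"fmt"
)

// checkCanonicalVarint enforces rule 5 for 64-bit fields: the encoding must
// be the shortest possible and must decode into a value that fits in 64 bits.
func checkCanonicalVarint(enc []byte) (uint64, error) {
	if len(enc) == 0 || len(enc) > 10 {
		return 0, errors.New("varint must be 1..10 bytes")
	}
	var v uint64
	for i, b := range enc {
		last := i == len(enc)-1
		if last && b&0x80 != 0 {
			return 0, errors.New("unterminated varint")
		}
		if !last && b&0x80 == 0 {
			return 0, errors.New("continuation bit cleared before final byte")
		}
		if last && i > 0 && b&0x7f == 0 {
			return 0, errors.New("non-minimal: trailing zero group")
		}
		if i == 9 && b > 0x01 {
			// highest 6 of the 70 bits must be zero (max FF..FF 01)
			return 0, errors.New("value exceeds 64 bits")
		}
		v |= uint64(b&0x7f) << (7 * uint(i))
	}
	return v, nil
}

func main() {
	v, err := checkCanonicalVarint([]byte{0xAC, 0x02}) // minimal encoding of 300
	fmt.Println(v, err)                                // 300 <nil>
	_, err = checkCanonicalVarint([]byte{0xAC, 0x82, 0x00}) // 300 padded with a zero group
	fmt.Println(err)
}
```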
+
+While rules 1 and 2 should be fairly straightforward and describe the default
+behavior of all protobuf encoders the author is aware of, the third rule is
+more interesting. After protobuf3 deserialization you cannot differentiate
+between unset fields and fields set to their default value<sup>3</sup>. At the
+serialization level, however, it is possible to set a field to an empty value
+or to omit it entirely. This is a significant difference from e.g. JSON, where
+a property can be empty (`""`, `0`), `null`, or undefined, leading to three
+different documents.
+
+Omitting fields set to default values is valid because the parser must assign
+the default value to fields missing in the serialization<sup>4</sup>. For scalar
+types, omitting defaults is required by the spec<sup>5</sup>. For `repeated`
+fields, not serializing them is the only way to express empty lists. Enums must
+have a first element of numeric value 0, which is the default<sup>6</sup>. And
+message fields default to unset<sup>7</sup>.
+
+Omitting defaults allows for some amount of forward compatibility: users of
+newer versions of a protobuf schema produce the same serialization as users of
+older versions as long as newly added fields are not used (i.e. set to their
+default value).
+
+### Implementation
+
+There are three main implementation strategies, ordered from the least to the
+most custom development:
+
+* **Use a protobuf serializer that follows the above rules by default.** E.g.
+ [gogoproto](https://pkg.go.dev/github.com/cosmos/gogoproto/gogoproto) is known to
+  be compliant in most cases, but not when certain annotations such as
+ `nullable = false` are used. It might also be an option to configure an
+ existing serializer accordingly.
+* **Normalize default values before encoding them.** If your serializer follows
+  rules 1 and 2 and allows you to explicitly unset fields for serialization,
+ you can normalize default values to unset. This can be done when working with
+ [protobuf.js](https://www.npmjs.com/package/protobufjs):
+
+ ```js
+ const bytes = SignDoc.encode({
+ bodyBytes: body.length > 0 ? body : null, // normalize empty bytes to unset
+ authInfoBytes: authInfo.length > 0 ? authInfo : null, // normalize empty bytes to unset
+ chainId: chainId || null, // normalize "" to unset
+ accountNumber: accountNumber || null, // normalize 0 to unset
+ accountSequence: accountSequence || null, // normalize 0 to unset
+ }).finish();
+ ```
+
+* **Use a hand-written serializer for the types you need.** If none of the above
+ ways works for you, you can write a serializer yourself. For SignDoc this
+ would look something like this in Go, building on existing protobuf utilities:
+
+ ```go
+ if !signDoc.body_bytes.empty() {
+ buf.WriteUVarInt64(0xA) // wire type and field number for body_bytes
+ buf.WriteUVarInt64(signDoc.body_bytes.length())
+ buf.WriteBytes(signDoc.body_bytes)
+ }
+
+ if !signDoc.auth_info.empty() {
+ buf.WriteUVarInt64(0x12) // wire type and field number for auth_info
+ buf.WriteUVarInt64(signDoc.auth_info.length())
+ buf.WriteBytes(signDoc.auth_info)
+ }
+
+ if !signDoc.chain_id.empty() {
+ buf.WriteUVarInt64(0x1a) // wire type and field number for chain_id
+ buf.WriteUVarInt64(signDoc.chain_id.length())
+ buf.WriteBytes(signDoc.chain_id)
+ }
+
+ if signDoc.account_number != 0 {
+ buf.WriteUVarInt64(0x20) // wire type and field number for account_number
+ buf.WriteUVarInt(signDoc.account_number)
+ }
+
+ if signDoc.account_sequence != 0 {
+ buf.WriteUVarInt64(0x28) // wire type and field number for account_sequence
+ buf.WriteUVarInt(signDoc.account_sequence)
+ }
+ ```
+
+### Test vectors
+
+Given the protobuf definition `Article.proto`
+
+```protobuf
+syntax = "proto3";
+package blog;
+
+enum Type {
+ UNSPECIFIED = 0;
+ IMAGES = 1;
+ NEWS = 2;
+};
+
+enum Review {
+ UNSPECIFIED = 0;
+ ACCEPTED = 1;
+ REJECTED = 2;
+};
+
+message Article {
+ string title = 1;
+ string description = 2;
+ uint64 created = 3;
+ uint64 updated = 4;
+ bool public = 5;
+ bool promoted = 6;
+ Type type = 7;
+ Review review = 8;
+ repeated string comments = 9;
+ repeated string backlinks = 10;
+};
+```
+
+serializing the values
+
+```yaml
+title: "The world needs change 🌳"
+description: ""
+created: 1596806111080
+updated: 0
+public: true
+promoted: false
+type: Type.NEWS
+review: Review.UNSPECIFIED
+comments: ["Nice one", "Thank you"]
+backlinks: []
+```
+
+must result in the serialization
+
+```text
+0a1b54686520776f726c64206e65656473206368616e676520f09f8cb318e8bebec8bc2e280138024a084e696365206f6e654a095468616e6b20796f75
+```
+
+When inspecting the serialized document, you see that every second field is
+omitted:
+
+```shell
+$ echo 0a1b54686520776f726c64206e65656473206368616e676520f09f8cb318e8bebec8bc2e280138024a084e696365206f6e654a095468616e6b20796f75 | xxd -r -p | protoc --decode_raw
+1: "The world needs change \360\237\214\263"
+3: 1596806111080
+5: 1
+7: 2
+9: "Nice one"
+9: "Thank you"
+```
+
+## Consequences
+
+Having such an encoding available allows us to get deterministic serialization
+for all protobuf documents we need in the context of Cosmos SDK signing.
+
+### Positive
+
+* Well defined rules that can be verified independent of a reference
+ implementation
+* Simple enough to keep the barrier to implement transaction signing low
+* It allows us to continue to use 0 and other empty values in SignDoc, avoiding
+  the need to work around 0 sequences. This does not imply the change from
+  https://github.com/cosmos/cosmos-sdk/pull/6949 should not be merged, but it is
+  no longer as important.
+
+### Negative
+
+* When implementing transaction signing, the encoding rules above must be
+ understood and implemented.
+* The need for rule 3 adds some complexity to implementations.
+* Some data structures may require custom code for serialization. Thus
+ the code is not very portable - it will require additional work for each
+ client implementing serialization to properly handle custom data structures.
+
+### Neutral
+
+### Usage in Cosmos SDK
+
+For the reasons mentioned above ("Negative" section) we prefer to keep workarounds
+for shared data structures. For example, the aforementioned `TxRaw` uses raw bytes
+as a workaround. This allows clients to use any valid protobuf library without
+the need to implement a custom serializer that adheres to this standard (and the related risk of bugs).
+
+## References
+
+* <sup>1</sup> _When a message is serialized, there is no guaranteed order for
+ how its known or unknown fields should be written. Serialization order is an
+ implementation detail and the details of any particular implementation may
+ change in the future. Therefore, protocol buffer parsers must be able to parse
+ fields in any order._ from
+ https://developers.google.com/protocol-buffers/docs/encoding#order
+* <sup>2</sup> https://developers.google.com/protocol-buffers/docs/encoding#signed_integers
+* <sup>3</sup> _Note that for scalar message fields, once a message is parsed
+ there's no way of telling whether a field was explicitly set to the default
+ value (for example whether a boolean was set to false) or just not set at all:
+ you should bear this in mind when defining your message types. For example,
+ don't have a boolean that switches on some behavior when set to false if you
+ don't want that behavior to also happen by default._ from
+ https://developers.google.com/protocol-buffers/docs/proto3#default
+* <sup>4</sup> _When a message is parsed, if the encoded message does not
+ contain a particular singular element, the corresponding field in the parsed
+ object is set to the default value for that field._ from
+ https://developers.google.com/protocol-buffers/docs/proto3#default
+* <sup>5</sup> _Also note that if a scalar message field is set to its default,
+ the value will not be serialized on the wire._ from
+ https://developers.google.com/protocol-buffers/docs/proto3#default
+* <sup>6</sup> _For enums, the default value is the first defined enum value,
+ which must be 0._ from
+ https://developers.google.com/protocol-buffers/docs/proto3#default
+* <sup>7</sup> _For message fields, the field is not set. Its exact value is
+ language-dependent._ from
+ https://developers.google.com/protocol-buffers/docs/proto3#default
+* Encoding rules and parts of the reasoning taken from
+  [canonical-proto3](https://github.com/regen-network/canonical-proto3) by Aaron Craelius
diff --git a/copy-of-sdk-versioned_docs/version-0.47/build/architecture/adr-028-public-key-addresses.md b/copy-of-sdk-versioned_docs/version-0.47/build/architecture/adr-028-public-key-addresses.md
new file mode 100644
index 00000000..9f394f7a
--- /dev/null
+++ b/copy-of-sdk-versioned_docs/version-0.47/build/architecture/adr-028-public-key-addresses.md
@@ -0,0 +1,342 @@
+# ADR 028: Public Key Addresses
+
+## Changelog
+
+* 2020/08/18: Initial version
+* 2021/01/15: Analysis and algorithm update
+
+## Status
+
+Proposed
+
+## Abstract
+
+This ADR defines an address format for all addressable Cosmos SDK accounts. That includes: new public key algorithms, multisig public keys, and module accounts.
+
+## Context
+
+Issue [\#3685](https://github.com/cosmos/cosmos-sdk/issues/3685) identified that public key
+address spaces are currently overlapping. We confirmed that this significantly decreases the security of the Cosmos SDK.
+
+### Problem
+
+An attacker can control an input for an address generation function. This leads to a birthday attack, which significantly decreases the security space.
+To overcome this, we need to separate the inputs for different kinds of account types:
+a security break of one account type shouldn't impact the security of other account types.
+
+### Initial proposals
+
+One initial proposal was extending the address length and
+adding prefixes for different types of addresses.
+
+@ethanfrey explained an alternate approach originally used in https://github.com/iov-one/weave:
+
+> I spent quite a bit of time thinking about this issue while building weave... The other cosmos Sdk.
+> Basically I define a condition to be a type and format as human readable string with some binary data appended. This condition is hashed into an Address (again at 20 bytes). The use of this prefix makes it impossible to find a preimage for a given address with a different condition (eg ed25519 vs secp256k1).
+> This is explained in depth here https://weave.readthedocs.io/en/latest/design/permissions.html
+> And the code is here, look mainly at the top where we process conditions. https://github.com/iov-one/weave/blob/master/conditions.go
+
+And explained how this approach should be sufficiently collision resistant:
+
+> Yeah, AFAIK, 20 bytes should be collision resistance when the preimages are unique and not malleable. A space of 2^160 would expect some collision to be likely around 2^80 elements (birthday paradox). And if you want to find a collision for some existing element in the database, it is still 2^160. 2^80 only is if all these elements are written to state.
+> The good example you brought up was eg. a public key bytes being a valid public key on two algorithms supported by the codec. Meaning if either was broken, you would break accounts even if they were secured with the safer variant. This is only as the issue when no differentiating type info is present in the preimage (before hashing into an address).
+> I would like to hear an argument if the 20 bytes space is an actual issue for security, as I would be happy to increase my address sizes in weave. I just figured cosmos and ethereum and bitcoin all use 20 bytes, it should be good enough. And the arguments above which made me feel it was secure. But I have not done a deeper analysis.
+
+This led to the first proposal (which we proved to be not good enough):
+we concatenate a key type with a public key, hash it and take the first 20 bytes of that hash, summarized as `sha256(keyTypePrefix || keybytes)[:20]`.
+
+### Review and Discussions
+
+In [\#5694](https://github.com/cosmos/cosmos-sdk/issues/5694) we discussed various solutions.
+We agreed that 20 bytes is not future proof: extending the address length is the only way to allow addresses of different types, various signature types, etc.
+This disqualifies the initial proposal.
+
+In the issue we discussed various modifications:
+
+* Choice of the hash function.
+* Move the prefix out of the hash function: `keyTypePrefix + sha256(keybytes)[:20]` [post-hash-prefix-proposal].
+* Use double hashing: `sha256(keyTypePrefix + sha256(keybytes)[:20])`.
+* Increase the keybytes hash slice from 20 bytes to 32 or 40 bytes. We concluded that 32 bytes, produced by a good hash function, is future secure.
+
+### Requirements
+
+* Support currently used tools - we don't want to break an ecosystem, or add a long adaptation period. Ref: https://github.com/cosmos/cosmos-sdk/issues/8041
+* Try to keep the address length small - addresses are widely used in state, both as part of a key and object value.
+
+### Scope
+
+This ADR only defines a process for the generation of address bytes. For end-user interactions with addresses (through the API, or CLI, etc.), we still use bech32 to format these addresses as strings. This ADR doesn't change that.
+Using Bech32 for string encoding gives us checksum-based error detection and handling of user typos.
+
+## Decision
+
+We define the following account types, for which we define the address function:
+
+1. simple accounts: represented by a regular public key (ie: secp256k1, sr25519)
+2. naive multisig: accounts composed of other addressable objects
+3. composed accounts with a native address key (ie: bls, group module accounts)
+4. module accounts: basically any accounts which cannot sign transactions and which are managed internally by modules
+
+### Legacy Public Key Addresses Don't Change
+
+Currently (Jan 2021), the only officially supported Cosmos SDK user accounts are `secp256k1` basic accounts and legacy amino multisig.
+They are used in existing Cosmos SDK zones. They use the following address formats:
+
+* secp256k1: `ripemd160(sha256(pk_bytes))[:20]`
+* legacy amino multisig: `sha256(aminoCdc.Marshal(pk))[:20]`
+
+We don't want to change existing addresses. So the addresses for these two key types will remain the same.
+
+The current multisig public keys use amino serialization to generate the address. We will retain
+those public keys and their address formatting, and call them "legacy amino" multisig public keys
+in protobuf. We will also create multisig public keys without amino addresses to be described below.
+
+### Hash Function Choice
+
+As in other parts of the Cosmos SDK, we will use `sha256`.
+
+### Basic Address
+
+We start with defining a base algorithm for generating addresses, which we call `Hash`. Notably, it's used for accounts represented by a single key pair. Each public key schema must have an associated `typ` string, explained below. `hash` is the cryptographic hash function defined in the previous section.
+
+```go
+const A_LEN = 32
+
+func Hash(typ string, key []byte) []byte {
+ return hash(hash(typ) + key)[:A_LEN]
+}
+```
+
+The `+` is bytes concatenation, which doesn't use any separator.
+
+This algorithm is the outcome of a consultation session with a professional cryptographer.
+Motivation: this algorithm keeps the address relatively small (length of the `typ` doesn't impact the length of the final address)
+and it's more secure than [post-hash-prefix-proposal] (which uses the first 20 bytes of a pubkey hash, significantly reducing the address space).
+Moreover the cryptographer motivated the choice of adding `typ` in the hash to protect against a switch table attack.
+
+`address.Hash` is a low level function to generate _base_ addresses for new key types. Example:
+
+* BLS: `address.Hash("bls", pubkey)`
+
+### Composed Addresses
+
+For simple composed accounts (like a new naive multisig) we generalize the `address.Hash`. The address is constructed by recursively creating addresses for the sub accounts, sorting the addresses and composing them into a single address. It ensures that the ordering of keys doesn't impact the resulting address.
+
+```go
+// We don't need a PubKey interface - we need anything which is addressable.
+type Addressable interface {
+ Address() []byte
+}
+
+func Composed(typ string, subaccounts []Addressable) []byte {
+ addresses = map(subaccounts, \a -> LengthPrefix(a.Address()))
+ addresses = sort(addresses)
+ return address.Hash(typ, addresses[0] + ... + addresses[n])
+}
+```
+
+The `typ` parameter should be a schema descriptor, containing all significant attributes with deterministic serialization (eg: utf8 string).
+`LengthPrefix` is a function which prepends 1 byte to the address. The value of that byte is the length of the address in bytes before prepending. The address must be at most 255 bytes long.
+We are using `LengthPrefix` to eliminate conflicts - it assures that for 2 lists of addresses `as = {a1, a2, ..., an}` and `bs = {b1, b2, ..., bm}`, such that every `ai` and `bi` is at most 255 bytes long, `concatenate(map(as, (a) => LengthPrefix(a))) = concatenate(map(bs, (b) => LengthPrefix(b)))` if and only if `as = bs`.
+
+Implementation Tip: account implementations should cache addresses.
+
+#### Multisig Addresses
+
+For new multisig public keys, we define the `typ` parameter not based on any encoding scheme (amino or protobuf). This avoids issues with non-determinism in the encoding scheme.
+
+Example:
+
+```protobuf
+package cosmos.crypto.multisig;
+
+message PubKey {
+ uint32 threshold = 1;
+ repeated google.protobuf.Any pubkeys = 2;
+}
+```
+
+```go
+func (multisig PubKey) Address() {
+ // first gather all nested pub keys
+ var keys []address.Addressable // cryptotypes.PubKey implements Addressable
+ for _, key := range multisig.Pubkeys {
+ keys = append(keys, key.GetCachedValue().(cryptotypes.PubKey))
+ }
+
+ // form the type from the message name (cosmos.crypto.multisig.PubKey) and the threshold joined together
+ prefix := fmt.Sprintf("%s/%d", proto.MessageName(multisig), multisig.Threshold)
+
+ // use the Composed function defined above
+ return address.Composed(prefix, keys)
+}
+```
+
+### Derived Addresses
+
+We must be able to cryptographically derive one address from another one. The derivation process must guarantee hash properties, hence we use the already defined `Hash` function:
+
+```go
+func Derive(address, derivationKey []byte) []byte {
+ return Hash(address, derivationKey)
+}
+```
+
+### Module Account Addresses
+
+A module account will have the `"module"` type. Module accounts can have sub accounts. A submodule account is created from the module name and a sequence of derivation keys. Typically, the first derivation key should be a class of the derived accounts. The derivation process has a defined order: module name, submodule key, sub-submodule key, and so on. An example module account is created using:
+
+```go
+address.Module(moduleName, key)
+```
+
+An example sub-module account is created using:
+
+```go
+groupPolicyAddresses := []byte{1}
+address.Module(moduleName, groupPolicyAddresses, policyID)
+```
+
+The `address.Module` function uses `address.Hash` with `"module"` as the type argument, and the byte representation of the module name concatenated with the submodule key. The last two components must be uniquely separated to avoid potential clashes (example: modulename="ab" & submodulekey="bc" would otherwise have the same derivation key as modulename="a" & submodulekey="bbc").
+We use a null byte (`'\x00'`) to separate the module name from the submodule key. This works because the null byte is not part of a valid module name. Finally, sub-submodule accounts are created by applying the `Derive` function recursively.
+We could also use the `Derive` function in the first step (rather than concatenating the module name with a zero byte and the submodule key). We decided on concatenation to avoid one level of derivation and speed up computation.
+
+For backward compatibility with the existing `authtypes.NewModuleAddress`, we add a special case in the `Module` function: when no derivation key is provided, we fall back to the "legacy" implementation.
+
+```go
+func Module(moduleName string, derivationKeys ...[]byte) []byte {
+ if len(derivationKeys) == 0 {
+  return authtypes.NewModuleAddress(moduleName) // legacy case
+ }
+ submoduleAddress := Hash("module", []byte(moduleName) + 0 + derivationKeys[0])
+ return fold((a, k) => Derive(a, k), derivationKeys[1:], submoduleAddress)
+}
+```
+
+**Example 1** A lending BTC pool address would be:
+
+```go
+btcPool := address.Module("lending", btc.Address())
+```
+
+If we want to create an address for a module account depending on more than one key, we can concatenate them:
+
+```go
+btcAtomAMM := address.Module("amm", btc.Address() + atom.Address())
+```
+
+**Example 2** A smart-contract address could be constructed by:
+
+```go
+smartContractAddr = Module("mySmartContractVM", smartContractsNamespace, smartContractKey)
+
+// which is equal to:
+smartContractAddr = Derive(
+ Module("mySmartContractVM", smartContractsNamespace),
+ smartContractKey)
+```
+
+### Schema Types
+
+A `typ` parameter used in `Hash` function SHOULD be unique for each account type.
+Since all Cosmos SDK account types are serialized in the state, we propose to use the protobuf message name string.
+
+Example: all public key types have a unique protobuf message type similar to:
+
+```protobuf
+package cosmos.crypto.sr25519;
+
+message PubKey {
+ bytes key = 1;
+}
+```
+
+All protobuf messages have unique fully qualified names, in this example `cosmos.crypto.sr25519.PubKey`.
+These names are derived directly from .proto files in a standardized way and used
+in other places such as the type URL in `Any`s. We can easily obtain the name using
+`proto.MessageName(msg)`.
+
+## Consequences
+
+### Backwards Compatibility
+
+This ADR is compatible with what was committed and directly supported in the Cosmos SDK repository.
+
+### Positive
+
+* a simple algorithm for generating addresses for new public keys, complex accounts and modules
+* the algorithm generalizes _native composed keys_
+* increased security and collision resistance of addresses
+* the approach is extensible for future use-cases - one can use other address types, as long as they don't conflict with the address lengths specified here (20 or 32 bytes)
+* supports new account types
+
+### Negative
+
+* addresses do not communicate key type, a prefixed approach would have done this
+* addresses are 60% longer and will consume more storage space
+* requires a refactor of KVStore store keys to handle variable length addresses
+
+### Neutral
+
+* protobuf message names are used as key type prefixes
+
+## Further Discussions
+
+Some accounts can have a fixed name or may be constructed in other ways (eg: modules). We discussed the idea of an account with a predefined name (eg: `me.regen`), which could be used by institutions.
+Without going into details, these kinds of addresses are compatible with the hash based addresses described here as long as they don't have the same length.
+More specifically, any special account address must not have a length equal to 20 or 32 bytes.
+
+## Appendix: Consulting session
+
+At the end of December 2020, we had a session with [Alan Szepieniec](https://scholar.google.be/citations?user=4LyZn8oAAAAJ&hl=en) to consult on the approach presented above.
+
+Alan's general observations:
+
+* we don’t need 2-preimage resistance
+* we need a 32-byte address space for collision resistance
+* when an attacker can control an input for an object with an address, we have a problem with birthday attacks
+* there is an issue with smart contracts for hashing
+* sha2 mining hardware can be used to break address pre-images
+
+Hashing algorithm:
+
+* any attack breaking blake3 will break blake2
+* Alan is pretty confident about the current security analysis of the blake hash algorithm. It was a finalist, and the author is well known in security analysis.
+
+Algorithm:
+
+* Alan recommends hashing the prefix: `address(pub_key) = hash(hash(key_type) + pub_key)[:32]`, main benefits:
+ * we are free to use arbitrarily long prefix names
+ * we still don’t risk collisions
+ * switch tables
+* discussion about penalization -> about adding a prefix post-hash
+* Aaron asked about post-hash prefixes (`address(pub_key) = key_type + hash(pub_key)`) and the differences. Alan noted that this approach has a longer address space and is stronger.
+
+Algorithm for complex / composed keys:
+
+* merging tree-like addresses with the same algorithm is fine
+
+Module addresses: should module addresses have a different size to differentiate them?
+
+* we will need to set a pre-image prefix for module addresses to keep them in the 32-byte space: `hash(hash('module') + module_key)`
+* Aaron's observation: we already need to deal with variable length (to not break secp256k1 keys).
+
+Discussion about arithmetic hash functions for ZKP
+
+* Poseidon / Rescue
+* Problem: much bigger risk, because we don’t know many techniques or the history of cryptanalysis of arithmetic constructions. It’s still new ground and an area of active research.
+
+Post quantum signature size
+
+* Alan's suggestion: Falcon - the speed / size ratio is very good.
+* Aaron - should we think about it?
+  Alan: based on early extrapolation, this technology will be able to break EC cryptography by 2050. But that’s a lot of uncertainty. But there is magic happening with recursion / linking / simulation and that can speed up the progress.
+
+Other ideas
+
+* Let’s say we use the same key and two different address algorithms for 2 different use cases. Is it still safe? Alan: if we want to hide the public key (which is not our use case), then it’s less secure, but there are fixes.
+
+### References
+
+* [Notes](https://hackmd.io/_NGWI4xZSbKzj1BkCqyZMw)
diff --git a/copy-of-sdk-versioned_docs/version-0.47/build/architecture/adr-029-fee-grant-module.md b/copy-of-sdk-versioned_docs/version-0.47/build/architecture/adr-029-fee-grant-module.md
new file mode 100644
index 00000000..6b52556f
--- /dev/null
+++ b/copy-of-sdk-versioned_docs/version-0.47/build/architecture/adr-029-fee-grant-module.md
@@ -0,0 +1,153 @@
+# ADR 029: Fee Grant Module
+
+## Changelog
+
+* 2020/08/18: Initial Draft
+* 2021/05/05: Removed height based expiration support and simplified naming.
+
+## Status
+
+Accepted
+
+## Context
+
+In order to make blockchain transactions, the signing account must possess a sufficient balance of the right denomination
+to pay fees. There are classes of transactions where needing to maintain a wallet with sufficient fees is a
+barrier to adoption.
+
+For instance, when proper permissions are set up, someone may temporarily delegate the ability to vote on proposals to
+a "burner" account that is stored on a mobile phone with only minimal security.
+
+Other use cases include workers tracking items in a supply chain or farmers submitting field data for analytics
+or compliance purposes.
+
+For all of these use cases, UX would be significantly enhanced by obviating the need for these accounts to always
+maintain the appropriate fee balance. This is especially true if we wanted to achieve enterprise adoption for something
+like supply chain tracking.
+
+While one solution would be to have a service that fills up these accounts automatically with the appropriate fees, a better UX
+would be provided by allowing these accounts to pull from a common fee pool account with proper spending limits.
+A single pool would reduce the churn of making lots of small "fill up" transactions and also more effectively leverages
+the resources of the organization setting up the pool.
+
+## Decision
+
+As a solution we propose a module, `x/feegrant`, which allows one account, the "granter", to grant another account, the "grantee",
+an allowance to spend the granter's account balance for fees within certain well-defined limits.
+
+Fee allowances are defined by the extensible `FeeAllowanceI` interface:
+
+```go
+type FeeAllowanceI interface {
+ // Accept can use fee payment requested as well as timestamp of the current block
+ // to determine whether or not to process this. This is checked in
+ // Keeper.UseGrantedFees and the return values should match how it is handled there.
+ //
+ // If it returns an error, the fee payment is rejected, otherwise it is accepted.
+ // The FeeAllowance implementation is expected to update its internal state
+ // and will be saved again after an acceptance.
+ //
+ // If remove is true (regardless of the error), the FeeAllowance will be deleted from storage
+ // (eg. when it is used up). (See call to RevokeFeeAllowance in Keeper.UseGrantedFees)
+ Accept(ctx sdk.Context, fee sdk.Coins, msgs []sdk.Msg) (remove bool, err error)
+
+ // ValidateBasic should evaluate this FeeAllowance for internal consistency.
+ // Don't allow negative amounts, or negative periods for example.
+ ValidateBasic() error
+}
+```
+
+Two basic fee allowance types, `BasicAllowance` and `PeriodicAllowance` are defined to support known use cases:
+
+```protobuf
+// BasicAllowance implements FeeAllowanceI with a one-time grant of tokens
+// that optionally expires. The delegatee can use up to SpendLimit to cover fees.
+message BasicAllowance {
+ // spend_limit specifies the maximum amount of tokens that can be spent
+ // by this allowance and will be updated as tokens are spent. If it is
+ // empty, there is no spend limit and any amount of coins can be spent.
+ repeated cosmos_sdk.v1.Coin spend_limit = 1;
+
+ // expiration specifies an optional time when this allowance expires
+ google.protobuf.Timestamp expiration = 2;
+}
+
+// PeriodicAllowance extends FeeAllowanceI to allow for both a maximum cap,
+// as well as a limit per time period.
+message PeriodicAllowance {
+ BasicAllowance basic = 1;
+
+ // period specifies the time duration in which period_spend_limit coins can
+ // be spent before that allowance is reset
+ google.protobuf.Duration period = 2;
+
+ // period_spend_limit specifies the maximum number of coins that can be spent
+ // in the period
+ repeated cosmos_sdk.v1.Coin period_spend_limit = 3;
+
+ // period_can_spend is the number of coins left to be spent before the period_reset time
+ repeated cosmos_sdk.v1.Coin period_can_spend = 4;
+
+ // period_reset is the time at which this period resets and a new one begins,
+ // it is calculated from the start time of the first transaction after the
+ // last period ended
+ google.protobuf.Timestamp period_reset = 5;
+}
+```
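To illustrate the intended `Accept` semantics, here is a hedged, simplified sketch of a `BasicAllowance`-style allowance: coins are modeled as a single integer amount and expiration is omitted. This is a toy model of the behavior described above, not the actual `x/feegrant` code:

```go
package main

import (
	"errors"
	"fmt"
)

// basicAllowance is a simplified stand-in for BasicAllowance:
// a single-denomination spend limit (the "no limit" empty case is omitted).
type basicAllowance struct {
	spendLimit int64
}

// accept mirrors FeeAllowanceI.Accept: reject if the fee exceeds the
// remaining limit, and signal deletion once the limit is used up.
func (a *basicAllowance) accept(fee int64) (remove bool, err error) {
	if fee > a.spendLimit {
		return false, errors.New("fee exceeds spend limit")
	}
	a.spendLimit -= fee
	return a.spendLimit == 0, nil
}

func main() {
	a := &basicAllowance{spendLimit: 10}
	remove, err := a.accept(4)
	fmt.Println(remove, err, a.spendLimit) // false <nil> 6
	remove, _ = a.accept(6)
	fmt.Println(remove) // true: limit exhausted, grant should be deleted
}
```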
+
+Allowances can be granted and revoked using `MsgGrantAllowance` and `MsgRevokeAllowance`:
+
+```protobuf
+// MsgGrantAllowance adds permission for Grantee to spend up to Allowance
+// of fees from the account of Granter.
+message MsgGrantAllowance {
+ string granter = 1;
+ string grantee = 2;
+ google.protobuf.Any allowance = 3;
+}
+
+// MsgRevokeAllowance removes any existing FeeAllowance from Granter to Grantee.
+message MsgRevokeAllowance {
+ string granter = 1;
+ string grantee = 2;
+}
+```
+
+In order to use allowances in transactions, we add a new field `granter` to the transaction `Fee` type:
+
+```protobuf
+package cosmos.tx.v1beta1;
+
+message Fee {
+ repeated cosmos.base.v1beta1.Coin amount = 1;
+ uint64 gas_limit = 2;
+ string payer = 3;
+ string granter = 4;
+}
+```
+
+`granter` must either be left empty or must correspond to an account which has granted
+a fee allowance to the fee payer (either the first signer or the value of the `payer` field).
+
+A new `AnteDecorator` named `DeductGrantedFeeDecorator` will be created in order to process transactions with the `granter` field
+set and correctly deduct fees based on fee allowances.
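A hedged sketch of the decision this decorator makes — which account's balance ultimately pays the fee. The helper name `feePayerFor` and the grant lookup are hypothetical; this is a toy model of the logic, not the SDK decorator:

```go
package main

import (
	"errors"
	"fmt"
)

// feePayerFor decides which account's balance pays the fee:
// the granter if one is set (and a grant exists), otherwise the payer itself.
func feePayerFor(payer, granter string, hasGrant func(granter, grantee string) bool) (string, error) {
	if granter == "" {
		return payer, nil // no grant in play: payer covers its own fee
	}
	if !hasGrant(granter, payer) {
		return "", errors.New("no fee allowance from granter to payer")
	}
	return granter, nil
}

func main() {
	grants := map[[2]string]bool{{"alice", "bob"}: true}
	has := func(g, p string) bool { return grants[[2]string{g, p}] }
	who, err := feePayerFor("bob", "alice", has)
	fmt.Println(who, err) // alice <nil>
}
```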
+
+## Consequences
+
+### Positive
+
+* improved UX for use cases where it is cumbersome to maintain an account balance just for fees
+
+### Negative
+
+### Neutral
+
+* a new field must be added to the transaction `Fee` message and a new `AnteDecorator` must be
+created to use it
+
+## References
+
+* Blog article describing initial work: https://medium.com/regen-network/hacking-the-cosmos-cosmwasm-and-key-management-a08b9f561d1b
+* Initial public specification: https://gist.github.com/aaronc/b60628017352df5983791cad30babe56
+* Original subkeys proposal from B-harvest which influenced this design: https://github.com/cosmos/cosmos-sdk/issues/4480
diff --git a/copy-of-sdk-versioned_docs/version-0.47/build/architecture/adr-030-authz-module.md b/copy-of-sdk-versioned_docs/version-0.47/build/architecture/adr-030-authz-module.md
new file mode 100644
index 00000000..0454138d
--- /dev/null
+++ b/copy-of-sdk-versioned_docs/version-0.47/build/architecture/adr-030-authz-module.md
@@ -0,0 +1,258 @@
+# ADR 030: Authorization Module
+
+## Changelog
+
+* 2019-11-06: Initial Draft
+* 2020-10-12: Updated Draft
+* 2020-11-13: Accepted
+* 2021-05-06: proto API updates, use `sdk.Msg` instead of `sdk.ServiceMsg` (the latter concept was removed from Cosmos SDK)
+* 2022-04-20: Updated the `SendAuthorization` proto docs to clarify that `SpendLimit` is a required field. (A generic authorization can be used with the bank msg type url to create a limitless bank authorization.)
+
+## Status
+
+Accepted
+
+## Abstract
+
+This ADR defines the `x/authz` module, which allows accounts to grant other accounts authorizations
+to perform actions on their behalf.
+
+## Context
+
+The concrete use cases which motivated this module include:
+
+* the desire to delegate the ability to vote on proposals to accounts other than the account to which one has
+delegated stake
+* "sub-keys" functionality, as originally proposed in [\#4480](https://github.com/cosmos/cosmos-sdk/issues/4480) which
+is a term used to describe the functionality provided by this module together with
+the `fee_grant` module from [ADR 029](adr-029-fee-grant-module.md) and the [group module](https://github.com/cosmos/cosmos-sdk/tree/main/x/group).
+
+The "sub-keys" functionality roughly refers to the ability for one account to grant some subset of its capabilities to
+other accounts with possibly less robust, but easier to use security measures. For instance, a master account representing
+an organization could grant the ability to spend small amounts of the organization's funds to individual employee accounts.
+Or an individual (or group) with a multisig wallet could grant the ability to vote on proposals to any one of the member
+keys.
+
+The current implementation is based on work done by the [Gaian's team at Hackatom Berlin 2019](https://github.com/cosmos-gaians/cosmos-sdk/tree/hackatom/x/delegation).
+
+## Decision
+
+We will create a module named `authz` which provides functionality for
+granting arbitrary privileges from one account (the _granter_) to another account (the _grantee_). Authorizations
+must be granted for particular `Msg` service methods one by one using an implementation
+of the `Authorization` interface.
+
+### Types
+
+Authorizations determine exactly what privileges are granted. They are extensible
+and can be defined for any `Msg` service method even outside of the module where
+the `Msg` method is defined. `Authorization`s reference `Msg`s using their TypeURL.
+
+#### Authorization
+
+```go
+type Authorization interface {
+ proto.Message
+
+ // MsgTypeURL returns the fully-qualified Msg TypeURL (as described in ADR 020),
+ // which will process and accept or reject a request.
+ MsgTypeURL() string
+
+ // Accept determines whether this grant permits the provided sdk.Msg to be performed, and if
+ // so provides an upgraded authorization instance.
+ Accept(ctx sdk.Context, msg sdk.Msg) (AcceptResponse, error)
+
+ // ValidateBasic does a simple validation check that
+ // doesn't require access to any other information.
+ ValidateBasic() error
+}
+
+// AcceptResponse instruments the controller of an authz message if the request is accepted
+// and if it should be updated or deleted.
+type AcceptResponse struct {
+ // If Accept=true, the controller can accept the authorization and handle the update.
+ Accept bool
+ // If Delete=true, the controller must delete the authorization object and release
+ // storage resources.
+ Delete bool
+ // The controller calling Authorization.Accept must check if `Updated != nil`. If yes,
+ // it must use the updated version and handle the update on the storage level.
+ Updated Authorization
+}
+```
+
+For example, a `SendAuthorization` is defined for `MsgSend`: it takes
+a `SpendLimit` and updates it down to zero:
+
+```go
+type SendAuthorization struct {
+ // SpendLimit specifies the maximum amount of tokens that can be spent
+ // by this authorization and will be updated as tokens are spent. This field is required. (Generic authorization
+ // can be used with bank msg type url to create limit less bank authorization).
+ SpendLimit sdk.Coins
+}
+
+func (a SendAuthorization) MsgTypeURL() string {
+ return sdk.MsgTypeURL(&MsgSend{})
+}
+
+func (a SendAuthorization) Accept(ctx sdk.Context, msg sdk.Msg) (authz.AcceptResponse, error) {
+ mSend, ok := msg.(*MsgSend)
+ if !ok {
+ return authz.AcceptResponse{}, sdkerrors.ErrInvalidType.Wrap("type mismatch")
+ }
+ limitLeft, isNegative := a.SpendLimit.SafeSub(mSend.Amount)
+ if isNegative {
+ return authz.AcceptResponse{}, sdkerrors.ErrInsufficientFunds.Wrapf("requested amount is more than spend limit")
+ }
+ if limitLeft.IsZero() {
+ return authz.AcceptResponse{Accept: true, Delete: true}, nil
+ }
+
+ return authz.AcceptResponse{Accept: true, Delete: false, Updated: &SendAuthorization{SpendLimit: limitLeft}}, nil
+}
+```
+
+A different type of capability for `MsgSend` could be implemented
+using the `Authorization` interface with no need to change the underlying
+`bank` module.
+
+##### Small notes on `AcceptResponse`
+
+* The `AcceptResponse.Accept` field will be set to `true` if the authorization is accepted.
+However, if it is rejected, the function `Accept` will raise an error (without setting `AcceptResponse.Accept` to `false`).
+
+* The `AcceptResponse.Updated` field will be set to a non-nil value only if there is a real change to the authorization.
+If authorization remains the same (as is, for instance, always the case for a [`GenericAuthorization`](#genericauthorization)),
+the field will be `nil`.
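For contrast with `SendAuthorization`, a `GenericAuthorization`-style `Accept` is stateless: it never returns an updated authorization. A hedged sketch with types simplified from the SDK's (lower-case hypothetical names, no proto machinery):

```go
package main

import (
	"errors"
	"fmt"
)

// acceptResponse mirrors the ADR's AcceptResponse.
type acceptResponse struct {
	Accept  bool
	Delete  bool
	Updated interface{} // stays nil when the authorization never changes
}

// genericAuthorization grants unrestricted use of one Msg type URL.
type genericAuthorization struct {
	msgTypeURL string
}

// accept rejects on type mismatch and otherwise accepts as-is:
// nothing is consumed, so Updated remains nil and Delete stays false.
func (g genericAuthorization) accept(msgTypeURL string) (acceptResponse, error) {
	if msgTypeURL != g.msgTypeURL {
		return acceptResponse{}, errors.New("message type mismatch")
	}
	return acceptResponse{Accept: true}, nil
}

func main() {
	g := genericAuthorization{msgTypeURL: "/cosmos.gov.v1beta1.MsgVote"}
	resp, err := g.accept("/cosmos.gov.v1beta1.MsgVote")
	fmt.Println(resp.Accept, resp.Updated == nil, err) // true true <nil>
}
```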
+
+### `Msg` Service
+
+```protobuf
+service Msg {
+ // Grant grants the provided authorization to the grantee on the granter's
+ // account with the provided expiration time.
+ rpc Grant(MsgGrant) returns (MsgGrantResponse);
+
+ // Exec attempts to execute the provided messages using
+ // authorizations granted to the grantee. Each message should have only
+ // one signer corresponding to the granter of the authorization.
+ rpc Exec(MsgExec) returns (MsgExecResponse);
+
+ // Revoke revokes any authorization corresponding to the provided method name on the
+ // granter's account that has been granted to the grantee.
+ rpc Revoke(MsgRevoke) returns (MsgRevokeResponse);
+}
+
+// Grant gives permissions to execute
+// the provided method with expiration time.
+message Grant {
+ google.protobuf.Any authorization = 1 [(cosmos_proto.accepts_interface) = "cosmos.authz.v1beta1.Authorization"];
+ google.protobuf.Timestamp expiration = 2 [(gogoproto.stdtime) = true, (gogoproto.nullable) = false];
+}
+
+message MsgGrant {
+ string granter = 1;
+ string grantee = 2;
+
+ Grant grant = 3 [(gogoproto.nullable) = false];
+}
+
+message MsgExecResponse {
+ cosmos.base.abci.v1beta1.Result result = 1;
+}
+
+message MsgExec {
+ string grantee = 1;
+ // Authorization Msg requests to execute. Each msg must implement Authorization interface
+ repeated google.protobuf.Any msgs = 2 [(cosmos_proto.accepts_interface) = "cosmos.base.v1beta1.Msg"];
+}
+```
+
+### Router Middleware
+
+The `authz` `Keeper` will expose a `DispatchActions` method which allows other modules to send `Msg`s
+to the router based on `Authorization` grants:
+
+```go
+type Keeper interface {
+ // DispatchActions routes the provided msgs to their respective handlers if the grantee was granted an authorization
+ // to send those messages by the first (and only) signer of each msg.
+ DispatchActions(ctx sdk.Context, grantee sdk.AccAddress, msgs []sdk.Msg) sdk.Result
+}
+```
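The signer check implied by `DispatchActions` can be sketched as a toy model (hypothetical names; real keeper logic also executes the routed messages and consults the stored `Authorization` objects):

```go
package main

import (
	"errors"
	"fmt"
)

// msg is a toy stand-in for sdk.Msg: one signer and a type URL.
type msg struct {
	signer  string
	typeURL string
}

// grants maps (granter, grantee, msg type URL) -> granted?
type grants map[[3]string]bool

// dispatchActions checks, for each msg, that the msg's (only) signer
// granted the grantee an authorization for that msg type before routing it.
func dispatchActions(g grants, grantee string, msgs []msg) error {
	for _, m := range msgs {
		if m.signer != grantee && !g[[3]string{m.signer, grantee, m.typeURL}] {
			return errors.New("unauthorized")
		}
		// here the real keeper would route m to its Msg service handler
	}
	return nil
}

func main() {
	g := grants{{"alice", "bob", "/cosmos.bank.v1beta1.MsgSend"}: true}
	err := dispatchActions(g, "bob", []msg{{signer: "alice", typeURL: "/cosmos.bank.v1beta1.MsgSend"}})
	fmt.Println(err) // <nil>
}
```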
+
+### CLI
+
+#### `tx exec` Method
+
+When a CLI user wants to run a transaction on behalf of another account using `MsgExec`, they
+can use the `exec` method. For instance, `gaiacli tx gov vote 1 yes --from <granter> --generate-only | gaiacli tx authz exec --send-as <granter> --from <grantee>`
+would send a transaction like this:
+
+```go
+MsgExec {
+ Grantee: mykey,
+ Msgs: []sdk.Msg{
+ MsgVote {
+ ProposalID: 1,
+ Voter: cosmos3thsdgh983egh823,
+ Option: Yes,
+ }
+ }
+}
+```
+
+#### `tx grant <grantee> <authorization> --from <granter>`
+
+This CLI command will send a `MsgGrant` transaction. `authorization` should be encoded as
+JSON on the CLI.
+
+#### `tx revoke <grantee> <method-name> --from <granter>`
+
+This CLI command will send a `MsgRevoke` transaction.
+
+### Built-in Authorizations
+
+#### `SendAuthorization`
+
+```protobuf
+// SendAuthorization allows the grantee to spend up to spend_limit coins from
+// the granter's account.
+message SendAuthorization {
+ repeated cosmos.base.v1beta1.Coin spend_limit = 1;
+}
+```
+
+#### `GenericAuthorization`
+
+```protobuf
+// GenericAuthorization gives the grantee unrestricted permissions to execute
+// the provided method on behalf of the granter's account.
+message GenericAuthorization {
+ option (cosmos_proto.implements_interface) = "Authorization";
+
+ // Msg, identified by it's type URL, to grant unrestricted permissions to execute
+ string msg = 1;
+}
+```
+
+## Consequences
+
+### Positive
+
+* Users will be able to authorize arbitrary actions on behalf of their accounts to other
+users, improving key management for many use cases
+* The solution is more generic than previously considered approaches and the
+`Authorization` interface approach can be extended to cover other use cases by
+SDK users
+
+### Negative
+
+### Neutral
+
+## References
+
+* Initial Hackatom implementation: https://github.com/cosmos-gaians/cosmos-sdk/tree/hackatom/x/delegation
+* Post-Hackatom spec: https://gist.github.com/aaronc/b60628017352df5983791cad30babe56#delegation-module
+* B-Harvest subkeys spec: https://github.com/cosmos/cosmos-sdk/issues/4480
diff --git a/copy-of-sdk-versioned_docs/version-0.47/build/architecture/adr-031-msg-service.md b/copy-of-sdk-versioned_docs/version-0.47/build/architecture/adr-031-msg-service.md
new file mode 100644
index 00000000..b8e4005d
--- /dev/null
+++ b/copy-of-sdk-versioned_docs/version-0.47/build/architecture/adr-031-msg-service.md
@@ -0,0 +1,202 @@
+# ADR 031: Protobuf Msg Services
+
+## Changelog
+
+* 2020-10-05: Initial Draft
+* 2021-04-21: Remove `ServiceMsg`s to follow Protobuf `Any`'s spec, see [#9063](https://github.com/cosmos/cosmos-sdk/issues/9063).
+
+## Status
+
+Accepted
+
+## Abstract
+
+We want to leverage protobuf `service` definitions for defining `Msg`s which will give us significant developer UX
+improvements in terms of the code that is generated and the fact that return types will now be well defined.
+
+## Context
+
+Currently, `Msg` handlers in the Cosmos SDK have return values that are placed in the `data` field of the response.
+These return values, however, are not specified anywhere except in the golang handler code.
+
+In early conversations [it was proposed](https://docs.google.com/document/d/1eEgYgvgZqLE45vETjhwIw4VOqK-5hwQtZtjVbiXnIGc/edit)
+that `Msg` return types be captured using a protobuf extension field, ex:
+
+```protobuf
+package cosmos.gov;
+
+message MsgSubmitProposal {
+ option (cosmos_proto.msg_return) = "uint64";
+ string delegator_address = 1;
+ string validator_address = 2;
+ repeated sdk.Coin amount = 3;
+}
+```
+
+This was never adopted, however.
+
+Having a well-specified return value for `Msg`s would improve client UX. For instance,
+in `x/gov`, `MsgSubmitProposal` returns the proposal ID as a big-endian `uint64`.
+This isn’t really documented anywhere and clients would need to know the internals
+of the Cosmos SDK to parse that value and return it to users.
+
+Also, there may be cases where we want to use these return values programmatically.
+For instance, https://github.com/cosmos/cosmos-sdk/issues/7093 proposes a method for
+doing inter-module Ocaps using the `Msg` router. A well-defined return type would
+improve the developer UX for this approach.
+
+In addition, handler registration of `Msg` types tends to add a bit of
+boilerplate on top of keepers and is usually done through manual type switches.
+This isn't necessarily bad, but it does add overhead to creating modules.
+
+## Decision
+
+We decide to use protobuf `service` definitions for defining `Msg`s as well as
+the code generated by them as a replacement for `Msg` handlers.
+
+Below we define how this will look for the `SubmitProposal` message from `x/gov` module.
+We start with a `Msg` `service` definition:
+
+```protobuf
+package cosmos.gov;
+
+service Msg {
+ rpc SubmitProposal(MsgSubmitProposal) returns (MsgSubmitProposalResponse);
+}
+
+// Note that for backwards compatibility this uses MsgSubmitProposal as the request
+// type instead of the more canonical MsgSubmitProposalRequest
+message MsgSubmitProposal {
+ google.protobuf.Any content = 1;
+ string proposer = 2;
+}
+
+message MsgSubmitProposalResponse {
+ uint64 proposal_id;
+}
+```
+
+While this is most commonly used for gRPC, overloading protobuf `service` definitions like this does not violate
+the intent of the [protobuf spec](https://developers.google.com/protocol-buffers/docs/proto3#services) which says:
+> If you don’t want to use gRPC, it’s also possible to use protocol buffers with your own RPC implementation.
+
+In addition to clearly specifying return types, this has the benefit of generating client and server code. On the server
+side, this is almost like an automatically generated keeper method and could maybe be used instead of keepers eventually
+(see [\#7093](https://github.com/cosmos/cosmos-sdk/issues/7093)). With this approach, we would get an auto-generated `MsgServer` interface:
+
+```go
+package gov
+
+type MsgServer interface {
+ SubmitProposal(context.Context, *MsgSubmitProposal) (*MsgSubmitProposalResponse, error)
+}
+```
+
+On the client side, developers could take advantage of this by creating RPC implementations that encapsulate transaction
+logic. Protobuf libraries that use asynchronous callbacks, like [protobuf.js](https://github.com/protobufjs/protobuf.js#using-services)
+could use this to register callbacks for specific messages even for transactions that include multiple `Msg`s.
+
+Each `Msg` service method should have exactly one request parameter: its corresponding `Msg` type. For example, the `Msg` service method `/cosmos.gov.v1beta1.Msg/SubmitProposal` above has exactly one request parameter, namely the `Msg` type `/cosmos.gov.v1beta1.MsgSubmitProposal`. It is important that the reader clearly understands the nomenclature difference between a `Msg` service (a Protobuf service) and a `Msg` type (a Protobuf message), and the difference in their fully-qualified names.
+
+This convention has been decided over the more canonical `Msg...Request` names mainly for backwards compatibility, but also for better readability in `TxBody.messages` (see [Encoding section](#encoding) below): transactions containing `/cosmos.gov.MsgSubmitProposal` read better than those containing `/cosmos.gov.v1beta1.MsgSubmitProposalRequest`.
+
+One consequence of this convention is that each `Msg` type can be the request parameter of only one `Msg` service method. However, we consider this limitation a good practice in explicitness.
+
+### Encoding
+
+Encoding of transactions generated with `Msg` services does not differ from the current Protobuf transaction encoding as defined in [ADR-020](adr-020-protobuf-transaction-encoding.md). We are encoding `Msg` types (which are exactly `Msg` service methods' request parameters) as `Any` in `Tx`s, which involves packing the
+binary-encoded `Msg` with its type URL.
+
+### Decoding
+
+Since `Msg` types are packed into `Any`, decoding transaction messages is done by unpacking `Any`s into `Msg` types. For more information, please refer to [ADR-020](adr-020-protobuf-transaction-encoding.md#transactions).
+
+### Routing
+
+We propose to add a `msg_service_router` in BaseApp. This router is a key/value map which maps `Msg` types' `type_url`s to their corresponding `Msg` service method handler. Since there is a 1-to-1 mapping between `Msg` types and `Msg` service method, the `msg_service_router` has exactly one entry per `Msg` service method.
+
+When a transaction is processed by BaseApp (in CheckTx or in DeliverTx), its `TxBody.messages` are decoded as `Msg`s. Each `Msg`'s `type_url` is matched against an entry in the `msg_service_router`, and the respective `Msg` service method handler is called.
+
+For backward compatibility, the old handlers are not removed yet. If BaseApp receives a legacy `Msg` with no corresponding entry in the `msg_service_router`, it will be routed via its legacy `Route()` method into the legacy handler.
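The routing described in this section amounts to a lookup table keyed by type URL. A toy sketch (hypothetical names, string payloads standing in for decoded `Msg`s; not BaseApp's actual code):

```go
package main

import (
	"errors"
	"fmt"
)

// handler is a toy Msg service method handler.
type handler func(msg string) (string, error)

// msgServiceRouter maps a Msg type_url to its service method handler,
// with exactly one entry per Msg service method.
type msgServiceRouter map[string]handler

// route looks up the handler for a type URL and invokes it.
func (r msgServiceRouter) route(typeURL, msg string) (string, error) {
	h, ok := r[typeURL]
	if !ok {
		// in the ADR, BaseApp would fall back to the legacy Route() here
		return "", errors.New("no handler for " + typeURL)
	}
	return h(msg)
}

func main() {
	r := msgServiceRouter{
		"/cosmos.gov.v1beta1.MsgSubmitProposal": func(msg string) (string, error) {
			return "proposal_id=1", nil
		},
	}
	res, err := r.route("/cosmos.gov.v1beta1.MsgSubmitProposal", "{}")
	fmt.Println(res, err) // proposal_id=1 <nil>
}
```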
+
+### Module Configuration
+
+In [ADR 021](adr-021-protobuf-query-encoding.md), we introduced a method `RegisterQueryService`
+to `AppModule` which allows for modules to register gRPC queriers.
+
+To register `Msg` services, we attempt a more extensible approach by converting `RegisterQueryService`
+to a more generic `RegisterServices` method:
+
+```go
+type AppModule interface {
+ RegisterServices(Configurator)
+ ...
+}
+
+type Configurator interface {
+ QueryServer() grpc.Server
+ MsgServer() grpc.Server
+}
+
+// example module:
+func (am AppModule) RegisterServices(cfg Configurator) {
+ types.RegisterQueryServer(cfg.QueryServer(), keeper)
+ types.RegisterMsgServer(cfg.MsgServer(), keeper)
+}
+```
+
+The `RegisterServices` method and the `Configurator` interface are intended to
+evolve to satisfy the use cases discussed in [\#7093](https://github.com/cosmos/cosmos-sdk/issues/7093)
+and [\#7122](https://github.com/cosmos/cosmos-sdk/issues/7421).
+
+When `Msg` services are registered, the framework _should_ verify that all `Msg` types
+implement the `sdk.Msg` interface and throw an error during initialization rather
+than later when transactions are processed.
+
+### `Msg` Service Implementation
+
+Just like query services, `Msg` service methods can retrieve the `sdk.Context`
+from the `context.Context` parameter method using the `sdk.UnwrapSDKContext`
+method:
+
+```go
+package gov
+
+func (k Keeper) SubmitProposal(goCtx context.Context, params *types.MsgSubmitProposal) (*MsgSubmitProposalResponse, error) {
+ ctx := sdk.UnwrapSDKContext(goCtx)
+ ...
+}
+```
+
+The `sdk.Context` should have an `EventManager` already attached by BaseApp's `msg_service_router`.
+
+Separate handler definition is no longer needed with this approach.
+
+## Consequences
+
+This design changes how module functionality is exposed and accessed. It deprecates the existing `Handler` interface and `AppModule.Route` in favor of [Protocol Buffer Services](https://developers.google.com/protocol-buffers/docs/proto3#services) and the Service Routing described above. This dramatically simplifies the code. We no longer need to create handlers and keepers manually. Use of Protocol Buffer auto-generated clients clearly separates the communication interfaces between a module and its users. The control logic (i.e. handlers and keepers) is no longer exposed. A module interface can be seen as a black box accessible through a client API. It's worth noting that the client interfaces are also generated by Protocol Buffers.
+
+This also allows us to change how we perform functional tests. Instead of mocking AppModules and the Router, we will mock a client (the server will stay hidden). More specifically: we will never mock `moduleA.MsgServer` in `moduleB`, but rather `moduleA.MsgClient`. One can think of it as working with external services (e.g. DBs, or online servers). We assume that the transmission between clients and servers is correctly handled by the generated Protocol Buffers code.
+
+Finally, restricting a module to its client API opens up the desirable OCAP patterns discussed in ADR-033. Since the server implementation and interface are hidden, nobody can hold "keepers"/servers; everyone is forced to rely on the client interface, which drives developers toward correct encapsulation and software engineering patterns.
+
+### Pros
+
+* communicates return type clearly
+* manual handler registration and return type marshaling is no longer needed, just implement the interface and register it
+* communication interface is automatically generated, the developer can now focus only on the state transition methods - this would improve the UX of [\#7093](https://github.com/cosmos/cosmos-sdk/issues/7093) approach (1) if we chose to adopt that
+* generated client code could be useful for clients and tests
+* dramatically reduces and simplifies the code
+
+### Cons
+
+* using `service` definitions outside the context of gRPC could be confusing (but doesn’t violate the proto3 spec)
+
+## References
+
+* [Initial Github Issue \#7122](https://github.com/cosmos/cosmos-sdk/issues/7122)
+* [proto 3 Language Guide: Defining Services](https://developers.google.com/protocol-buffers/docs/proto3#services)
+* [Initial pre-`Any` `Msg` designs](https://docs.google.com/document/d/1eEgYgvgZqLE45vETjhwIw4VOqK-5hwQtZtjVbiXnIGc)
+* [ADR 020](adr-020-protobuf-transaction-encoding.md)
+* [ADR 021](adr-021-protobuf-query-encoding.md)
diff --git a/copy-of-sdk-versioned_docs/version-0.47/build/architecture/adr-032-typed-events.md b/copy-of-sdk-versioned_docs/version-0.47/build/architecture/adr-032-typed-events.md
new file mode 100644
index 00000000..c1dd0a73
--- /dev/null
+++ b/copy-of-sdk-versioned_docs/version-0.47/build/architecture/adr-032-typed-events.md
@@ -0,0 +1,319 @@
+# ADR 032: Typed Events
+
+## Changelog
+
+* 28-Sept-2020: Initial Draft
+
+## Authors
+
+* Anil Kumar (@anilcse)
+* Jack Zampolin (@jackzampolin)
+* Adam Bozanich (@boz)
+
+## Status
+
+Proposed
+
+## Abstract
+
+Currently in the Cosmos SDK, events are defined in the handlers for each message as well as in `BeginBlock` and `EndBlock`. Modules do not have types defined for each event; they are implemented as `map[string]string`. Above all else, this makes these events difficult to consume, as it requires a great deal of raw string matching and parsing. This proposal focuses on updating the events to use **typed events** defined in each module such that emitting and subscribing to events will be much easier. This workflow comes from the experience of the Akash Network team.
+
+## Context
+
+Currently in the Cosmos SDK, events are defined in the handlers for each message, meaning each module doesn't have a canonical set of types for each event. Above all else, this makes these events difficult to consume, as it requires a great deal of raw string matching and parsing. This proposal focuses on updating the events to use **typed events** defined in each module such that emitting and subscribing to events will be much easier. This workflow comes from the experience of the Akash Network team.
+
+[Our platform](http://github.com/ovrclk/akash) requires a number of programmatic on-chain interactions both on the provider (datacenter - to bid on new orders and listen for leases created) and user (application developer - to send the app manifest to the provider) side. In addition, the Akash team is now maintaining the IBC [`relayer`](https://github.com/ovrclk/relayer), another very event-driven process. In working on these core pieces of infrastructure, and integrating lessons learned from Kubernetes development, our team has developed a standard method for defining and consuming typed events in Cosmos SDK modules. We have found that it is extremely useful in building this type of event-driven application.
+
+As the Cosmos SDK gets used more extensively for apps like `peggy`, other peg zones, IBC, DeFi, etc... there will be an exploding demand for event driven applications to support new features desired by users. We propose upstreaming our findings into the Cosmos SDK to enable all Cosmos SDK applications to quickly and easily build event driven apps to aid their core application. Wallets, exchanges, explorers, and defi protocols all stand to benefit from this work.
+
+If this proposal is accepted, users will be able to build event-driven Cosmos SDK apps in Go by just writing `EventHandler`s for their specific event types and passing them to `EventEmitters` that are defined in the Cosmos SDK.
+
+The end of this proposal contains a detailed example of how to consume events after this refactor.
+
+This proposal is specifically about how to consume these events as a client of the blockchain, not for intermodule communication.
+
+## Decision
+
+**Step-1**: Implement additional functionality in the `types` package: `EmitTypedEvent` and `ParseTypedEvent` functions
+
+```go
+// types/events.go
+
+// EmitTypedEvent takes typed event and emits converting it into sdk.Event
+func (em *EventManager) EmitTypedEvent(event proto.Message) error {
+ evtType := proto.MessageName(event)
+ evtJSON, err := codec.ProtoMarshalJSON(event)
+ if err != nil {
+ return err
+ }
+
+ var attrMap map[string]json.RawMessage
+ err = json.Unmarshal(evtJSON, &attrMap)
+ if err != nil {
+ return err
+ }
+
+ var attrs []abci.EventAttribute
+ for k, v := range attrMap {
+ attrs = append(attrs, abci.EventAttribute{
+ Key: []byte(k),
+ Value: v,
+ })
+ }
+
+ em.EmitEvent(Event{
+ Type: evtType,
+ Attributes: attrs,
+ })
+
+ return nil
+}
+
+// ParseTypedEvent converts abci.Event back to typed event
+func ParseTypedEvent(event abci.Event) (proto.Message, error) {
+ concreteGoType := proto.MessageType(event.Type)
+ if concreteGoType == nil {
+ return nil, fmt.Errorf("failed to retrieve the message of type %q", event.Type)
+ }
+
+ var value reflect.Value
+ if concreteGoType.Kind() == reflect.Ptr {
+ value = reflect.New(concreteGoType.Elem())
+ } else {
+ value = reflect.Zero(concreteGoType)
+ }
+
+ protoMsg, ok := value.Interface().(proto.Message)
+ if !ok {
+ return nil, fmt.Errorf("%q does not implement proto.Message", event.Type)
+ }
+
+ attrMap := make(map[string]json.RawMessage)
+ for _, attr := range event.Attributes {
+ attrMap[string(attr.Key)] = attr.Value
+ }
+
+ attrBytes, err := json.Marshal(attrMap)
+ if err != nil {
+ return nil, err
+ }
+
+ err = jsonpb.Unmarshal(strings.NewReader(string(attrBytes)), protoMsg)
+ if err != nil {
+ return nil, err
+ }
+
+ return protoMsg, nil
+}
+```
+
+Here, `EmitTypedEvent` is a method on `EventManager` which takes a typed event as input and applies JSON serialization to it. It then maps the JSON key/value pairs to `event.Attributes` and emits it in the form of an `sdk.Event`. `Event.Type` will be the type URL of the proto message.
+
+When we subscribe to emitted events on the CometBFT websocket, they are emitted in the form of an `abci.Event`. `ParseTypedEvent` parses the event back into its original proto message.
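The attribute round trip that these two functions perform can be illustrated without protobuf at all. The sketch below uses plain `encoding/json` structs in place of proto messages and `abci.EventAttribute`; the real implementation keys the event by the proto message name and uses `codec.ProtoMarshalJSON`/`jsonpb`:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Attribute mirrors abci.EventAttribute in spirit: a key and a raw JSON value.
type Attribute struct {
	Key   string
	Value json.RawMessage
}

// flatten marshals an event struct to JSON and splits it into attributes,
// as EmitTypedEvent does before emitting an sdk.Event.
func flatten(event interface{}) ([]Attribute, error) {
	raw, err := json.Marshal(event)
	if err != nil {
		return nil, err
	}
	var m map[string]json.RawMessage
	if err := json.Unmarshal(raw, &m); err != nil {
		return nil, err
	}
	attrs := make([]Attribute, 0, len(m))
	for k, v := range m {
		attrs = append(attrs, Attribute{Key: k, Value: v})
	}
	return attrs, nil
}

// rebuild reassembles the attributes into a JSON object and decodes the
// event, as ParseTypedEvent does on the consumer side.
func rebuild(attrs []Attribute, out interface{}) error {
	m := make(map[string]json.RawMessage, len(attrs))
	for _, a := range attrs {
		m[a.Key] = a.Value
	}
	raw, err := json.Marshal(m)
	if err != nil {
		return err
	}
	return json.Unmarshal(raw, out)
}

type EventSubmitProposal struct {
	FromAddress string `json:"from_address"`
	ProposalId  uint64 `json:"proposal_id"`
}

func main() {
	in := EventSubmitProposal{FromAddress: "cosmos1abc", ProposalId: 7}
	attrs, _ := flatten(in)
	var out EventSubmitProposal
	_ = rebuild(attrs, &out)
	fmt.Println(out == in) // true
}
```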
+
+**Step-2**: Add proto definitions for typed events for msgs in each module:
+
+For example, let's take `MsgSubmitProposal` of `gov` module and implement this event's type.
+
+```protobuf
+// proto/cosmos/gov/v1beta1/gov.proto
+// Add typed event definition
+
+package cosmos.gov.v1beta1;
+
+message EventSubmitProposal {
+ string from_address = 1;
+ uint64 proposal_id = 2;
+ TextProposal proposal = 3;
+}
+```
+
+**Step-3**: Refactor event emission to use the typed event created and emit using `sdk.EmitTypedEvent`:
+
+```go
+// x/gov/handler.go
+func handleMsgSubmitProposal(ctx sdk.Context, keeper keeper.Keeper, msg types.MsgSubmitProposalI) (*sdk.Result, error) {
+ ...
+ ctx.EventManager().EmitTypedEvent(
+ &EventSubmitProposal{
+ FromAddress: fromAddress,
+ ProposalId: id,
+ Proposal: proposal,
+ },
+ )
+ ...
+}
+```
+
+### How to subscribe to these typed events in `Client`
+
+> NOTE: Full code example below
+
+Users will be able to subscribe using `client.Context.Client.Subscribe` and consume events which are emitted using `EventHandler`s.
+
+Akash Network has built a simple [`pubsub`](https://github.com/ovrclk/akash/blob/90d258caeb933b611d575355b8df281208a214f8/pubsub/bus.go#L20). This can be used to subscribe to `abci.Events` and [publish](https://github.com/ovrclk/akash/blob/90d258caeb933b611d575355b8df281208a214f8/events/publish.go#L21) them as typed events.
+
+Please see the code sample below for more detail on how this flow looks for clients.
+
+## Consequences
+
+### Positive
+
+* Improves consistency of implementation for the events currently in the Cosmos SDK
+* Provides a much more ergonomic way to handle events and facilitates writing event driven applications
+* This implementation will support a middleware ecosystem of `EventHandler`s
+
+### Negative
+
+## Detailed code example of publishing events
+
+This ADR also proposes adding affordances to emit and consume these events. This way developers will only need to write
+`EventHandler`s which define the actions they desire to take.
+
+```go
+// EventEmitter is a type that describes event emitter functions
+// This should be defined in `types/events.go`
+type EventEmitter func(context.Context, client.Context, ...EventHandler) error
+
+// EventHandler is a type of function that handles events coming out of the event bus
+// This should be defined in `types/events.go`
+type EventHandler func(proto.Message) error
+
+// Sample use of the functions below
+func main() {
+ ctx, cancel := context.WithCancel(context.Background())
+ defer cancel()
+
+ if err := TxEmitter(ctx, client.Context{}.WithNodeURI("tcp://localhost:26657"), SubmitProposalEventHandler); err != nil {
+ panic(err)
+ }
+}
+
+// SubmitProposalEventHandler is an example of an event handler that prints proposal details
+// when any EventSubmitProposal is emitted.
+func SubmitProposalEventHandler(ev proto.Message) error {
+ switch event := ev.(type) {
+ // Handle governance proposal creation events
+ case *govtypes.EventSubmitProposal:
+ // Users define business logic here, e.g.:
+ fmt.Println(event.FromAddress, event.ProposalId, event.Proposal)
+ return nil
+ default:
+ return nil
+ }
+}
+
+// TxEmitter is an example of an event emitter that emits just transaction events. This can and
+// should be implemented somewhere in the Cosmos SDK. The Cosmos SDK can include EventEmitters for tm.event='Tx'
+// and/or tm.event='NewBlock' (the new block events may contain typed events)
+func TxEmitter(ctx context.Context, cliCtx client.Context, ehs ...EventHandler) (err error) {
+ // Instantiate and start CometBFT RPC client
+ client, err := cliCtx.GetNode()
+ if err != nil {
+ return err
+ }
+
+ if err = client.Start(); err != nil {
+ return err
+ }
+
+ // Start the pubsub bus
+ bus := pubsub.NewBus()
+ defer bus.Close()
+
+ // Initialize a new error group
+ eg, ctx := errgroup.WithContext(ctx)
+
+ // Publish chain events to the pubsub bus
+ eg.Go(func() error {
+ return PublishChainTxEvents(ctx, client, bus, simapp.ModuleBasics)
+ })
+
+ // Subscribe to the bus events
+ subscriber, err := bus.Subscribe()
+ if err != nil {
+ return err
+ }
+
+ // Handle all the events coming out of the bus
+ eg.Go(func() error {
+ for {
+ select {
+ case <-ctx.Done():
+ return nil
+ case <-subscriber.Done():
+ return nil
+ case ev := <-subscriber.Events():
+ for _, eh := range ehs {
+ if err := eh(ev); err != nil {
+ return err
+ }
+ }
+ }
+ }
+ })
+
+ return eg.Wait()
+}
+
+// PublishChainTxEvents publishes chain transaction events using the given cmtclient. It waits on context shutdown signals to exit.
+func PublishChainTxEvents(ctx context.Context, client cmtclient.EventsClient, bus pubsub.Bus, mb module.BasicManager) (err error) {
+ // Subscribe to transaction events
+ txch, err := client.Subscribe(ctx, "txevents", "tm.event='Tx'", 100)
+ if err != nil {
+ return err
+ }
+
+ // Unsubscribe from transaction events on function exit
+ defer func() {
+ err = client.UnsubscribeAll(ctx, "txevents")
+ }()
+
+ // Use errgroup to manage concurrency
+ g, ctx := errgroup.WithContext(ctx)
+
+ // Publish transaction events in a goroutine
+ g.Go(func() error {
+ for {
+ select {
+ case <-ctx.Done():
+ return nil
+ case ed := <-txch:
+ switch evt := ed.Data.(type) {
+ case cmttypes.EventDataTx:
+ if !evt.Result.IsOK() {
+ continue
+ }
+ // range over events, parse them using the basic manager and
+ // send them to the pubsub bus
+ for _, abciEv := range evt.Result.Events {
+ typedEvent, err := sdk.ParseTypedEvent(abciEv)
+ if err != nil {
+ return err
+ }
+ if err := bus.Publish(typedEvent); err != nil {
+ bus.Close()
+ return err
+ }
+ }
+ }
+ }
+ }
+ })
+
+ // Exit on error or context cancelation
+ return g.Wait()
+}
+```
+
+## References
+
+* [Publish Custom Events via a bus](https://github.com/ovrclk/akash/blob/90d258caeb933b611d575355b8df281208a214f8/events/publish.go#L19-L58)
+* [Consuming the events in `Client`](https://github.com/ovrclk/deploy/blob/bf6c633ab6c68f3026df59efd9982d6ca1bf0561/cmd/event-handlers.go#L57)
diff --git a/copy-of-sdk-versioned_docs/version-0.47/build/architecture/adr-033-protobuf-inter-module-comm.md b/copy-of-sdk-versioned_docs/version-0.47/build/architecture/adr-033-protobuf-inter-module-comm.md
new file mode 100644
index 00000000..2ff59fbe
--- /dev/null
+++ b/copy-of-sdk-versioned_docs/version-0.47/build/architecture/adr-033-protobuf-inter-module-comm.md
@@ -0,0 +1,400 @@
+# ADR 033: Protobuf-based Inter-Module Communication
+
+## Changelog
+
+* 2020-10-05: Initial Draft
+
+## Status
+
+Proposed
+
+## Abstract
+
+This ADR introduces a system for permissioned inter-module communication leveraging the protobuf `Query` and `Msg`
+service definitions defined in [ADR 021](adr-021-protobuf-query-encoding.md) and
+[ADR 031](adr-031-msg-service.md) which provides:
+
+* stable protobuf based module interfaces to potentially later replace the keeper paradigm
+* stronger inter-module object capabilities (OCAPs) guarantees
+* module accounts and sub-account authorization
+
+## Context
+
+In the current Cosmos SDK documentation on the [Object-Capability Model](../../learn/advanced/10-ocap.md), it is stated that:
+
+> We assume that a thriving ecosystem of Cosmos SDK modules that are easy to compose into a blockchain application will contain faulty or malicious modules.
+
+There is currently not a thriving ecosystem of Cosmos SDK modules. We hypothesize that this is in part due to:
+
+1. lack of a stable v1.0 Cosmos SDK to build modules off of. Module interfaces are changing, sometimes dramatically, from
+point release to point release, often for good reasons, but this does not create a stable foundation to build on.
+2. lack of a properly implemented object capability or even object-oriented encapsulation system which makes refactors
+of module keeper interfaces inevitable because the current interfaces are poorly constrained.
+
+### `x/bank` Case Study
+
+Currently the `x/bank` keeper gives pretty much unrestricted access to any module which references it. For instance, the
+`SetBalance` method allows the caller to set the balance of any account to anything, bypassing even proper tracking of supply.
+
+There appears to have been some later attempts to implement some semblance of OCAPs using module-level minting, staking
+and burning permissions. These permissions allow a module to mint, burn or delegate tokens with reference to the module’s
+own account. These permissions are actually stored as a `[]string` array on the `ModuleAccount` type in state.
+
+However, these permissions don’t really do much. They control what modules can be referenced in the `MintCoins`,
+`BurnCoins` and `DelegateCoins***` methods, but for one there is no unique object capability token that controls access —
+just a simple string. So the `x/upgrade` module could mint tokens for the `x/staking` module simply by calling
+`MintCoins(“staking”)`. Furthermore, all modules which have access to these keeper methods, also have access to
+`SetBalance`, negating any other attempt at OCAPs and breaking even basic object-oriented encapsulation.
+
+## Decision
+
+Based on [ADR-021](adr-021-protobuf-query-encoding.md) and [ADR-031](adr-031-msg-service.md), we introduce the
+Inter-Module Communication framework for secure module authorization and OCAPs.
+When implemented, this could also serve as an alternative to the existing paradigm of passing keepers between
+modules. The approach outlined herein is intended to form the basis of a Cosmos SDK v1.0 that provides the necessary
+stability and encapsulation guarantees that allow a thriving module ecosystem to emerge.
+
+Of particular note — the decision is to _enable_ this functionality for modules to adopt at their own discretion.
+Proposals to migrate existing modules to this new paradigm will have to be a separate conversation, potentially
+addressed as amendments to this ADR.
+
+### New "Keeper" Paradigm
+
+In [ADR 021](adr-021-protobuf-query-encoding.md), a mechanism for using protobuf service definitions to define queriers
+was introduced and in [ADR 031](adr-031-msg-service.md), a mechanism for using protobuf services to define `Msg`s was added.
+Protobuf service definitions generate two golang interfaces representing the client and server sides of a service plus
+some helper code. Here is a minimal example for the bank `cosmos.bank.Msg/Send` message type:
+
+```go
+package bank
+
+type MsgClient interface {
+ Send(ctx context.Context, in *MsgSend, opts ...grpc.CallOption) (*MsgSendResponse, error)
+}
+
+type MsgServer interface {
+ Send(context.Context, *MsgSend) (*MsgSendResponse, error)
+}
+```
+
+[ADR 021](adr-021-protobuf-query-encoding.md) and [ADR 031](adr-031-msg-service.md) specify how modules can implement the generated `QueryServer`
+and `MsgServer` interfaces as replacements for the legacy queriers and `Msg` handlers respectively.
+
+In this ADR we explain how modules can make queries and send `Msg`s to other modules using the generated `QueryClient`
+and `MsgClient` interfaces and propose this mechanism as a replacement for the existing `Keeper` paradigm. To be clear,
+this ADR does not necessitate the creation of new protobuf definitions or services. Rather, it leverages the same proto
+based service interfaces already used by clients for inter-module communication.
+
+Using this `QueryClient`/`MsgClient` approach has the following key benefits over exposing keepers to external modules:
+
+1. Protobuf types are checked for breaking changes using [buf](https://buf.build/docs/breaking-overview) and because of
+the way protobuf is designed this will give us strong backwards compatibility guarantees while allowing for forward
+evolution.
+2. The separation between the client and server interfaces will allow us to insert permission checking code in between
+the two which checks if one module is authorized to send the specified `Msg` to the other module providing a proper
+object capability system (see below).
+3. The router for inter-module communication gives us a convenient place to handle rollback of transactions,
+enabling atomicity of operations ([currently a problem](https://github.com/cosmos/cosmos-sdk/issues/8030)). Any failure within a module-to-module call would result in a failure of the entire
+transaction.
+
+This mechanism has the added benefits of:
+
+* reducing boilerplate through code generation, and
+* allowing for modules written in other languages, either via a VM like CosmWasm or as sub-processes using gRPC
+
+### Inter-module Communication
+
+To use the `Client` generated by the protobuf compiler we need a `grpc.ClientConn` [interface](https://github.com/grpc/grpc-go/blob/v1.49.x/clientconn.go#L441-L450)
+implementation. For this we introduce
+a new type, `ModuleKey`, which implements the `grpc.ClientConn` interface. `ModuleKey` can be thought of as the "private
+key" corresponding to a module account, where authentication is provided through use of a special `Invoker()` function,
+described in more detail below.
+
+Blockchain users (external clients) use their account's private key to sign transactions containing `Msg`s where they are listed as signers (each
+message specifies required signers with `Msg.GetSigners`). These authentication checks are performed by the `AnteHandler`.
+
+Here, we extend this process by allowing modules to be identified in `Msg.GetSigners`. When a module wants to trigger the execution of a `Msg` in another module,
+its `ModuleKey` acts as the sender (through the `ClientConn` interface we describe below) and is set as the sole "signer". It's worth noting
+that we don't use any cryptographic signature in this case.
+For example, module `A` could use its `A.ModuleKey` to create a `MsgSend` object for a `/cosmos.bank.Msg/Send` transaction. `MsgSend` validation
+will assure that the `from` account (`A.ModuleKey` in this case) is the signer.
+
+Here's an example of a hypothetical module `foo` interacting with `x/bank`:
+
+```go
+package foo
+
+
+type FooMsgServer struct {
+ // ...
+
+ bankQuery bank.QueryClient
+ bankMsg bank.MsgClient
+}
+
+func NewFooMsgServer(moduleKey RootModuleKey, ...) FooMsgServer {
+ // ...
+
+ return FooMsgServer {
+ // ...
+ moduleKey: moduleKey,
+ bankQuery: bank.NewQueryClient(moduleKey),
+ bankMsg: bank.NewMsgClient(moduleKey),
+ }
+}
+
+func (foo *FooMsgServer) Bar(ctx context.Context, req *MsgBarRequest) (*MsgBarResponse, error) {
+ balance, err := foo.bankQuery.Balance(ctx, &bank.QueryBalanceRequest{Address: foo.moduleKey.Address(), Denom: "foo"})
+
+ ...
+
+ res, err := foo.bankMsg.Send(ctx, &bank.MsgSendRequest{FromAddress: foo.moduleKey.Address(), ...})
+
+ ...
+}
+```
+
+This design is also intended to be extensible to cover use cases of more fine grained permissioning like minting by
+denom prefix being restricted to certain modules (as discussed in
+[#7459](https://github.com/cosmos/cosmos-sdk/pull/7459#discussion_r529545528)).
+
+### `ModuleKey`s and `ModuleID`s
+
+A `ModuleKey` can be thought of as a "private key" for a module account and a `ModuleID` can be thought of as the
+corresponding "public key". As described in [ADR 028](adr-028-public-key-addresses.md), modules can have both a root module account and any number of sub-accounts
+or derived accounts that can be used for different pools (ex. staking pools) or managed accounts (ex. group
+accounts). We can also think of module sub-accounts as similar to derived keys - there is a root key and then some
+derivation path. `ModuleID` is a simple struct which contains the module name and optional "derivation" path,
+and forms its address based on the `AddressHash` method from [ADR 028](https://github.com/cosmos/cosmos-sdk/blob/main/docs/architecture/adr-028-public-key-addresses.md):
+
+```go
+type ModuleID struct {
+ ModuleName string
+ Path []byte
+}
+
+func (key ModuleID) Address() []byte {
+ return AddressHash(key.ModuleName, key.Path)
+}
+```
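For illustration, here is one plausible sketch of `AddressHash`, assuming an ADR-028 style construction that hashes the module name together with the derivation path and truncates to a 20-byte address (the exact scheme is defined in ADR 028, not here):

```go
package main

import (
	"crypto/sha256"
	"fmt"
)

// addressHash sketches ADR-028 style address derivation:
// sha256(sha256(moduleName) || path), truncated to 20 bytes.
// This is an assumption for illustration; see ADR 028 for the real scheme.
func addressHash(moduleName string, path []byte) []byte {
	th := sha256.Sum256([]byte(moduleName))
	h := sha256.Sum256(append(th[:], path...))
	return h[:20]
}

type ModuleID struct {
	ModuleName string
	Path       []byte
}

func (key ModuleID) Address() []byte {
	return addressHash(key.ModuleName, key.Path)
}

func main() {
	root := ModuleID{ModuleName: "staking"}
	sub := ModuleID{ModuleName: "staking", Path: []byte("bonded_pool")}
	fmt.Printf("%x\n", root.Address()) // root module account address
	fmt.Printf("%x\n", sub.Address())  // distinct sub-account address
}
```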
+
+In addition to being able to generate a `ModuleID` and address, a `ModuleKey` contains a special function called
+`Invoker` which is the key to safe inter-module access. The `Invoker` creates an `InvokeFn` closure which is used as an `Invoke` method in
+the `grpc.ClientConn` interface and under the hood is able to route messages to the appropriate `Msg` and `Query` handlers
+performing appropriate security checks on `Msg`s. This allows for even safer inter-module access than keepers, whose
+private member variables could be manipulated through reflection. Golang does not support reflection on a function
+closure's captured variables and direct manipulation of memory would be needed for a truly malicious module to bypass
+the `ModuleKey` security.
+
+The two `ModuleKey` types are `RootModuleKey` and `DerivedModuleKey`:
+
+```go
+type Invoker func(callInfo CallInfo) func(ctx context.Context, request, response interface{}, opts ...interface{}) error
+
+type CallInfo struct {
+ Method string
+ Caller ModuleID
+}
+
+type RootModuleKey struct {
+ moduleName string
+ invoker Invoker
+}
+
+func (rm RootModuleKey) Derive(path []byte) DerivedModuleKey { /* ... */}
+
+type DerivedModuleKey struct {
+ moduleName string
+ path []byte
+ invoker Invoker
+}
+```
+
+A module can get access to a `DerivedModuleKey`, using the `Derive(path []byte)` method on `RootModuleKey` and then
+would use this key to authenticate `Msg`s from a sub-account. Ex:
+
+```go
+package foo
+
+func (fooMsgServer *MsgServer) Bar(ctx context.Context, req *MsgBar) (*MsgBarResponse, error) {
+ derivedKey := fooMsgServer.moduleKey.Derive(req.SomePath)
+ bankMsgClient := bank.NewMsgClient(derivedKey)
+ res, err := bankMsgClient.Send(ctx, &bank.MsgSend{FromAddress: derivedKey.Address(), ...})
+ ...
+}
+```
+
+In this way, a module can gain permissioned access to a root account and any number of sub-accounts and send
+authenticated `Msg`s from these accounts. The `Invoker` `callInfo.Caller` parameter is used under the hood to
+distinguish between different module accounts, but either way the function returned by `Invoker` only allows `Msg`s
+from either the root or a derived module account to pass through.
+
+Note that `Invoker` itself returns a function closure based on the `CallInfo` passed in. This will allow future client implementations
+to cache the invoke function for each method type, avoiding the overhead of a hash table lookup.
+This would reduce the performance overhead of this inter-module communication method to the bare minimum required for
+checking permissions.
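Such caching could look roughly like the sketch below, where a hypothetical client connection memoizes the closure returned by `Invoker` per method. The `CallInfo` shape follows the sketch above; everything else is illustrative:

```go
package main

import "fmt"

type ModuleID struct{ ModuleName string }

type CallInfo struct {
	Method string
	Caller ModuleID
}

// InvokeFn is a simplified invoke closure (string request/response for brevity).
type InvokeFn func(request string) (string, error)

type Invoker func(CallInfo) InvokeFn

// cachingConn resolves each method's invoke closure once and memoizes it,
// so repeated calls skip the routing/permission lookup.
type cachingConn struct {
	caller  ModuleID
	invoker Invoker
	cache   map[string]InvokeFn
}

func (c *cachingConn) Invoke(method, request string) (string, error) {
	fn, ok := c.cache[method]
	if !ok {
		fn = c.invoker(CallInfo{Method: method, Caller: c.caller})
		c.cache[method] = fn
	}
	return fn(request)
}

func main() {
	resolutions := 0
	inv := Invoker(func(ci CallInfo) InvokeFn {
		resolutions++ // routing lookup happens once per method
		return func(req string) (string, error) {
			return ci.Method + ":" + req, nil
		}
	})
	conn := &cachingConn{caller: ModuleID{"foo"}, invoker: inv, cache: map[string]InvokeFn{}}
	conn.Invoke("/cosmos.bank.Msg/Send", "a")
	conn.Invoke("/cosmos.bank.Msg/Send", "b")
	fmt.Println(resolutions) // 1
}
```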
+
+To re-iterate, the closure only allows access to authorized calls. There is no access to anything else regardless of any
+name impersonation.
+
+Below is a rough sketch of the implementation of `grpc.ClientConn.Invoke` for `RootModuleKey`:
+
+```go
+func (key RootModuleKey) Invoke(ctx context.Context, method string, args, reply interface{}, opts ...grpc.CallOption) error {
+ f := key.invoker(CallInfo {Method: method, Caller: ModuleID {ModuleName: key.moduleName}})
+ return f(ctx, args, reply)
+}
+```
+
+### `AppModule` Wiring and Requirements
+
+In [ADR 031](adr-031-msg-service.md), the `AppModule.RegisterServices(Configurator)` method was introduced. To support
+inter-module communication, we extend the `Configurator` interface to pass in the `ModuleKey` and to allow modules to
+specify their dependencies on other modules using `RequireServer()`:
+
+```go
+type Configurator interface {
+ MsgServer() grpc.Server
+ QueryServer() grpc.Server
+
+ ModuleKey() ModuleKey
+ RequireServer(msgServer interface{})
+}
+```
+
+The `ModuleKey` is passed to modules in the `RegisterServices` method itself so that `RegisterServices` serves as a single
+entry point for configuring module services. This is intended to also have the side-effect of greatly reducing boilerplate in
+`app.go`. For now, `ModuleKey`s will be created based on `AppModuleBasic.Name()`, but a more flexible system may be
+introduced in the future. The `ModuleManager` will handle creation of module accounts behind the scenes.
+
+Because modules do not get direct access to each other anymore, modules may have unfulfilled dependencies. To make sure
+that module dependencies are resolved at startup, the `Configurator.RequireServer` method should be added. The `ModuleManager`
+will make sure that all dependencies declared with `RequireServer` can be resolved before the app starts. An example
+module `foo` could declare its dependency on `x/bank` like this:
+
+```go
+package foo
+
+func (am AppModule) RegisterServices(cfg Configurator) {
+ cfg.RequireServer((*bank.QueryServer)(nil))
+ cfg.RequireServer((*bank.MsgServer)(nil))
+}
+```
+
+### Security Considerations
+
+In addition to checking for `ModuleKey` permissions, a few additional security precautions will need to be taken by
+the underlying router infrastructure.
+
+#### Recursion and Re-entry
+
+Recursive or re-entrant method invocations pose a potential security threat. This can be a problem if Module A
+calls Module B and Module B calls module A again in the same call.
+
+One basic way for the router system to deal with this is to maintain a call stack which prevents a module from
+being referenced more than once in the call stack so that there is no re-entry. A `map[string]interface{}` table
+in the router could be used to perform this security check.
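One way to sketch such a call-stack guard (the `callStack` type and its wiring into the router are hypothetical):

```go
package main

import (
	"errors"
	"fmt"
)

// callStack tracks which modules are currently on the inter-module call
// path and rejects any re-entry.
type callStack struct {
	active map[string]bool
}

func (s *callStack) enter(module string) error {
	if s.active[module] {
		return errors.New("re-entrancy detected: " + module)
	}
	s.active[module] = true
	return nil
}

func (s *callStack) exit(module string) {
	delete(s.active, module)
}

// invoke runs fn on behalf of module, guarding against re-entry.
func (s *callStack) invoke(module string, fn func() error) error {
	if err := s.enter(module); err != nil {
		return err
	}
	defer s.exit(module)
	return fn()
}

func main() {
	s := &callStack{active: map[string]bool{}}

	// A -> B is fine; A -> B -> A is rejected.
	err := s.invoke("A", func() error {
		return s.invoke("B", func() error {
			return s.invoke("A", func() error { return nil })
		})
	})
	fmt.Println(err) // re-entrancy detected: A
}
```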
+
+#### Queries
+
+Queries in Cosmos SDK are generally un-permissioned so allowing one module to query another module should not pose
+any major security threats assuming basic precautions are taken. The basic precaution that the router system will
+need to take is making sure that the `sdk.Context` passed to query methods does not allow writing to the store. This
+can be done for now with a `CacheMultiStore` as is currently done for `BaseApp` queries.
+
+### Internal Methods
+
+In many cases, we may wish for modules to call methods on other modules which are not exposed to clients at all. For this
+purpose, we add the `InternalServer` method to `Configurator`:
+
+```go
+type Configurator interface {
+ MsgServer() grpc.Server
+ QueryServer() grpc.Server
+ InternalServer() grpc.Server
+}
+```
+
+As an example, x/slashing's Slash must call x/staking's Slash, but we don't want to expose x/staking's Slash to end users
+and clients.
+
+Internal protobuf services will be defined in a corresponding `internal.proto` file in the given module's
+proto package.
+
+Services registered against `InternalServer` will be callable from other modules but not by external clients.
+
+An alternative solution to internal-only methods could involve hooks / plugins as discussed [here](https://github.com/cosmos/cosmos-sdk/pull/7459#issuecomment-733807753).
+A more detailed evaluation of a hooks / plugin system will be addressed later in follow-ups to this ADR or as a separate
+ADR.
+
+### Authorization
+
+By default, the inter-module router requires that messages are sent by the first signer returned by `GetSigners`. The
+inter-module router should also accept authorization middleware such as that provided by [ADR 030](https://github.com/cosmos/cosmos-sdk/blob/main/docs/architecture/adr-030-authz-module.md).
+This middleware will allow accounts to authorize specific module accounts to perform actions on their behalf.
+Authorization middleware should take into account the need to grant certain modules effectively "admin" privileges to
+other modules. This will be addressed in separate ADRs or updates to this ADR.
+
+### Future Work
+
+Other future improvements may include:
+
+* custom code generation that:
+ * simplifies interfaces (ex. generates code with `sdk.Context` instead of `context.Context`)
+ * optimizes inter-module calls - for instance caching resolved methods after first invocation
+* combining `StoreKey`s and `ModuleKey`s into a single interface so that modules have a single OCAPs handle
+* code generation which makes inter-module communication more performant
+* decoupling `ModuleKey` creation from `AppModuleBasic.Name()` so that apps can override root module account names
+* inter-module hooks and plugins
+
+## Alternatives
+
+### MsgServices vs `x/capability`
+
+The `x/capability` module does provide a proper object-capability implementation that can be used by any module in the
+Cosmos SDK and could even be used for inter-module OCAPs as described in [\#5931](https://github.com/cosmos/cosmos-sdk/issues/5931).
+
+The advantages of the approach described in this ADR are mostly around how it integrates with other parts of the Cosmos SDK,
+specifically:
+
+* protobuf so that:
+ * code generation of interfaces can be leveraged for a better dev UX
+ * module interfaces are versioned and checked for breakage using [buf](https://docs.buf.build/breaking-overview)
+* sub-module accounts as per ADR 028
+* the general `Msg` passing paradigm and the way signers are specified by `GetSigners`
+
+Also, this is a complete replacement for keepers and could be applied to _all_ inter-module communication whereas the
+`x/capability` approach in #5931 would need to be applied method by method.
+
+## Consequences
+
+### Backwards Compatibility
+
+This ADR is intended to provide a pathway to a scenario where there is greater long term compatibility between modules.
+In the short-term, this will likely result in breaking certain `Keeper` interfaces which are too permissive and/or
+replacing `Keeper` interfaces altogether.
+
+### Positive
+
+* an alternative to keepers which can more easily lead to stable inter-module interfaces
+* proper inter-module OCAPs
+* improved module developer DevX, as commented on by several participants on
+ [Architecture Review Call, Dec 3](https://hackmd.io/E0wxxOvRQ5qVmTf6N_k84Q)
+* lays the groundwork for what can be a greatly simplified `app.go`
+* router can be setup to enforce atomic transactions for module-to-module calls
+
+### Negative
+
+* modules which adopt this will need significant refactoring
+
+### Neutral
+
+## Test Cases [optional]
+
+## References
+
+* [ADR 021](adr-021-protobuf-query-encoding.md)
+* [ADR 031](adr-031-msg-service.md)
+* [ADR 028](adr-028-public-key-addresses.md)
+* [ADR 030 draft](https://github.com/cosmos/cosmos-sdk/pull/7105)
+* [Object-Capability Model](https://docs.cosmos.network/main/core/ocap)
diff --git a/copy-of-sdk-versioned_docs/version-0.47/build/architecture/adr-034-account-rekeying.md b/copy-of-sdk-versioned_docs/version-0.47/build/architecture/adr-034-account-rekeying.md
new file mode 100644
index 00000000..cd9b9146
--- /dev/null
+++ b/copy-of-sdk-versioned_docs/version-0.47/build/architecture/adr-034-account-rekeying.md
@@ -0,0 +1,76 @@
+# ADR 034: Account Rekeying
+
+## Changelog
+
+* 30-09-2020: Initial Draft
+
+## Status
+
+PROPOSED
+
+## Abstract
+
+Account rekeying is a process that allows an account to replace its authentication pubkey with a new one.
+
+## Context
+
+Currently, in the Cosmos SDK, the address of an auth `BaseAccount` is based on the hash of the public key. Once an account is created, the public key for the account is set in stone and cannot be changed. This can be a problem for users, as key rotation is a useful security practice but is not currently possible. Furthermore, as multisigs are a type of pubkey, once a multisig for an account is set, it cannot be updated. This is problematic, as multisigs are often used by organizations or companies, who may need to change their set of multisig signers for internal reasons.
+
+Transferring all the assets of an account to a new account with the updated pubkey is not sufficient, because some "engagements" of an account are not easily transferable. For example, in staking, to transfer bonded Atoms, an account would have to unbond all delegations and wait the three-week unbonding period. Even more significantly, for validator operators, ownership over a validator is not transferable at all, meaning that the operator key for a validator can never be updated, leading to poor operational security for validators.
+
+## Decision
+
+We propose the addition of a new feature to `x/auth` that allows accounts to update the public key associated with their account, while keeping the address the same.
+
+This is possible because the Cosmos SDK `BaseAccount` stores the public key for an account in state, instead of making the assumption that the public key is included in the transaction (whether explicitly or implicitly through the signature) as in other blockchains such as Bitcoin and Ethereum. Because the public key is stored on chain, it is okay for the public key to not hash to the address of an account, as the address is not pertinent to the signature checking process.
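+
+The paragraph above can be demonstrated with a dependency-free Go sketch. The `account` struct and hash-truncation scheme are simplifications for illustration, not the SDK's actual account type:
+
+```go
+package main
+
+import (
+    "crypto/sha256"
+    "encoding/hex"
+    "fmt"
+)
+
+// account is a minimal sketch of a BaseAccount-like record: the address is
+// derived from the pubkey once, at creation, and then stored in state.
+type account struct {
+    Address string
+    PubKey  []byte
+}
+
+// newAccount derives the address from the initial pubkey (hash-based,
+// as in the SDK) and stores both.
+func newAccount(pubKey []byte) *account {
+    h := sha256.Sum256(pubKey)
+    return &account{Address: hex.EncodeToString(h[:20]), PubKey: pubKey}
+}
+
+// changePubKey swaps the in-state pubkey; the address is left untouched,
+// so after this call H(pubkey) no longer equals the address -- which is
++// fine, because signature checking reads the pubkey from state.
+func (a *account) changePubKey(newPubKey []byte) {
+    a.PubKey = newPubKey
+}
+
+func main() {
+    acc := newAccount([]byte("old-pubkey"))
+    before := acc.Address
+    acc.changePubKey([]byte("new-pubkey"))
+    fmt.Println("address unchanged:", acc.Address == before)
+}
+```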
+
+To build this system, we design a new Msg type as follows:
+
+```protobuf
+service Msg {
+ rpc ChangePubKey(MsgChangePubKey) returns (MsgChangePubKeyResponse);
+}
+
+message MsgChangePubKey {
+ string address = 1;
+ google.protobuf.Any pub_key = 2;
+}
+
+message MsgChangePubKeyResponse {}
+```
+
+The MsgChangePubKey transaction needs to be signed by the existing pubkey in state.
+
+Once approved, the handler for this message type, which takes in the AccountKeeper, will update the in-state pubkey for the account and replace it with the pubkey from the Msg.
+
+An account that has had its pubkey changed cannot be automatically pruned from state. This is because, if pruned, the original pubkey of the account would be needed to recreate the same address, but the owner of the address may no longer have the original pubkey. Currently, we do not automatically prune any accounts anyway, but we would like to keep this option open down the road (this is the purpose of account numbers). To resolve this, we charge an additional gas fee for this operation to compensate for this externality (this bounded gas amount is configured as the parameter `PubKeyChangeCost`). The bonus gas is charged inside the handler, using the `ConsumeGas` function. Furthermore, in the future, we can allow accounts that have rekeyed to manually prune themselves using a new Msg type such as `MsgDeleteAccount`. Manually pruning accounts can give a gas refund as an incentive for performing the action.
+
+```go
+ amount := ak.GetParams(ctx).PubKeyChangeCost
+ ctx.GasMeter().ConsumeGas(amount, "pubkey change fee")
+```
+
+Every time a key for an address is changed, we will store a log of this change in the state of the chain, thus creating a stack of all previous keys for an address and the time intervals for which they were active. This allows dapps and clients to easily query past keys for an account, which may be useful for features such as verifying timestamped off-chain signed messages.
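+
+A minimal sketch of such a key-history log follows. The record shape (block-height intervals, string pubkeys) is an assumption for illustration; the ADR does not prescribe a concrete schema:
+
+```go
+package main
+
+import "fmt"
+
+// keyRecord is a hypothetical entry in the per-address key history:
+// which pubkey was active and over which block-height interval.
+type keyRecord struct {
+    PubKey    string
+    FromBlock int64
+    ToBlock   int64 // 0 while the key is still active
+}
+
+type keyHistory struct {
+    records []keyRecord
+}
+
+// rotate closes the interval of the current key and pushes the new one.
+func (h *keyHistory) rotate(pubKey string, height int64) {
+    if n := len(h.records); n > 0 {
+        h.records[n-1].ToBlock = height
+    }
+    h.records = append(h.records, keyRecord{PubKey: pubKey, FromBlock: height})
+}
+
+// activeAt returns the key that was active at the given height, letting
+// clients verify signatures made under a previous key.
+func (h *keyHistory) activeAt(height int64) (string, bool) {
+    for _, r := range h.records {
+        if height >= r.FromBlock && (r.ToBlock == 0 || height < r.ToBlock) {
+            return r.PubKey, true
+        }
+    }
+    return "", false
+}
+
+func main() {
+    h := &keyHistory{}
+    h.rotate("pubkey-A", 1)
+    h.rotate("pubkey-B", 100)
+    fmt.Println(h.activeAt(50))
+    fmt.Println(h.activeAt(150))
+}
+```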
+
+## Consequences
+
+### Positive
+
+* Will allow users and validator operators to employ better operational security practices with key rotation.
+* Will allow organizations or groups to easily change and add/remove multisig signers.
+
+### Negative
+
+Breaks the current assumed relationship between address and pubkeys as H(pubkey) = address. This has a couple of consequences.
+
+* This makes wallets that support this feature more complicated. For example, if an address on chain was updated, the corresponding key in the CLI wallet also needs to be updated.
+* Cannot automatically prune accounts with 0 balance that have had their pubkey changed.
+
+### Neutral
+
+* While the purpose of this is intended to allow the owner of an account to update to a new pubkey they own, this could technically also be used to transfer ownership of an account to a new owner. For example, this could be used to sell a staked position without unbonding, or an account that has vesting tokens. However, the friction of this is very high, as it would essentially have to be done as a very specific OTC trade. Furthermore, additional constraints could be added to prevent accounts with vesting tokens from using this feature.
+* Will require that PubKeys for an account are included in the genesis exports.
+
+## References
+
+* https://www.algorand.com/resources/blog/announcing-rekeying
diff --git a/copy-of-sdk-versioned_docs/version-0.47/build/architecture/adr-035-rosetta-api-support.md b/copy-of-sdk-versioned_docs/version-0.47/build/architecture/adr-035-rosetta-api-support.md
new file mode 100644
index 00000000..01a81048
--- /dev/null
+++ b/copy-of-sdk-versioned_docs/version-0.47/build/architecture/adr-035-rosetta-api-support.md
@@ -0,0 +1,211 @@
+# ADR 035: Rosetta API Support
+
+## Authors
+
+* Jonathan Gimeno (@jgimeno)
+* David Grierson (@senormonito)
+* Alessio Treglia (@alessio)
+* Frojdy Dymylja (@fdymylja)
+
+## Changelog
+
+* 2021-05-12: the external library [cosmos-rosetta-gateway](https://github.com/tendermint/cosmos-rosetta-gateway) has been moved within the Cosmos SDK.
+
+## Context
+
+[Rosetta API](https://www.rosetta-api.org/) is an open-source specification and set of tools developed by Coinbase to
+standardise blockchain interactions.
+
+Through the use of a standard API for integrating blockchain applications it will
+
+* Be easier for a user to interact with a given blockchain
+* Allow exchanges to integrate new blockchains quickly and easily
+* Enable application developers to build cross-blockchain applications such as block explorers, wallets and dApps at
+ considerably lower cost and effort.
+
+## Decision
+
+It is clear that adding Rosetta API support to the Cosmos SDK will bring value to all the developers and
+Cosmos SDK based chains in the ecosystem. How it is implemented is key.
+
+The driving principles of the proposed design are:
+
+1. **Extensibility:** it must be as riskless and painless as possible for application developers to set up network
+ configurations to expose Rosetta API-compliant services.
+2. **Long term support:** This proposal aims to provide support for all the supported Cosmos SDK release series.
+3. **Cost-efficiency:** Backporting changes to Rosetta API specifications from `master` to the various stable
+ branches of Cosmos SDK is a cost that needs to be reduced.
+
+We will achieve delivery on these principles with the following:
+
+1. There will be a package `rosetta/lib`
+ for the implementation of the core Rosetta API features, particularly:
+ a. The types and interfaces (`Client`, `OfflineClient`...), this separates design from implementation detail.
+ b. The `Server` functionality as this is independent of the Cosmos SDK version.
+ c. The `Online/OfflineNetwork`, which is not exported, and implements the rosetta API using the `Client` interface to query the node, build tx and so on.
+ d. The `errors` package to extend rosetta errors.
+2. Due to differences between the Cosmos release series, each series will have its own specific implementation of `Client` interface.
+3. There will be two options for starting an API service in applications:
+ a. API shares the application process
+ b. API-specific process.
+
+## Architecture
+
+### The External Repo
+
+This section describes the proposed external library, including the service implementation, plus the defined types and interfaces.
+
+#### Server
+
+`Server` is a simple `struct` that is started and listens to the port specified in the settings. This is meant to be used across all the Cosmos SDK versions that are actively supported.
+
+The constructor follows:
+
+`func NewServer(settings Settings) (Server, error)`
+
+`Settings`, which are used to construct a new server, are the following:
+
+```go
+// Settings define the rosetta server settings
+type Settings struct {
+ // Network contains the information regarding the network
+ Network *types.NetworkIdentifier
+ // Client is the online API handler
+ Client crgtypes.Client
+ // Listen is the address the handler will listen at
+ Listen string
+ // Offline defines if the rosetta service should be exposed in offline mode
+ Offline bool
+ // Retries is the number of readiness checks that will be attempted when instantiating the handler
+ // valid only for online API
+ Retries int
+ // RetryWait is the time that will be waited between retries
+ RetryWait time.Duration
+}
+```
+
+#### Types
+
+Package types uses a mixture of rosetta types and custom-defined type wrappers that the client must parse and return while executing operations.
+
+##### Interfaces
+
+Every SDK version uses a different format to connect (RPC, gRPC, etc.), query, and build transactions; we have abstracted this into the `Client` interface.
+The client uses rosetta types, whilst the `Online/OfflineNetwork` takes care of returning correctly parsed rosetta responses and errors.
+
+Each Cosmos SDK release series will have its own `Client` implementation.
+Developers can implement their own custom `Client`s as required.
+
+```go
+// Client defines the API the client implementation should provide.
+type Client interface {
+ // Needed if the client needs to perform some action before connecting.
+ Bootstrap() error
+ // Ready checks if the servicer constraints for queries are satisfied
+ // for example the node might still not be ready, it's useful in process
+ // when the rosetta instance might come up before the node itself
+ // the servicer must return nil if the node is ready
+ Ready() error
+
+ // Data API
+
+ // Balances fetches the balance of the given address
+ // if height is not nil, then the balance will be displayed
+ // at the provided height, otherwise last block balance will be returned
+ Balances(ctx context.Context, addr string, height *int64) ([]*types.Amount, error)
+ // BlockByHash gets a block and its transactions given the block hash
+ BlockByHash(ctx context.Context, hash string) (BlockResponse, error)
+ // BlockByHeight gets a block given its height; if height is nil then the last block is returned
+ BlockByHeight(ctx context.Context, height *int64) (BlockResponse, error)
+ // BlockTransactionsByHash gets the block, parent block and transactions
+ // given the block hash.
+ BlockTransactionsByHash(ctx context.Context, hash string) (BlockTransactionsResponse, error)
+ // BlockTransactionsByHeight gets the block, parent block and transactions
+ // given the block height.
+ BlockTransactionsByHeight(ctx context.Context, height *int64) (BlockTransactionsResponse, error)
+ // GetTx gets a transaction given its hash
+ GetTx(ctx context.Context, hash string) (*types.Transaction, error)
+ // GetUnconfirmedTx gets an unconfirmed Tx given its hash
+ // NOTE(fdymylja): NOT IMPLEMENTED YET!
+ GetUnconfirmedTx(ctx context.Context, hash string) (*types.Transaction, error)
+ // Mempool returns the list of the current non confirmed transactions
+ Mempool(ctx context.Context) ([]*types.TransactionIdentifier, error)
+ // Peers gets the peers currently connected to the node
+ Peers(ctx context.Context) ([]*types.Peer, error)
+ // Status returns the node status, such as sync data, version etc
+ Status(ctx context.Context) (*types.SyncStatus, error)
+
+ // Construction API
+
+ // PostTx posts txBytes to the node and returns the transaction identifier plus metadata related
+ // to the transaction itself.
+ PostTx(txBytes []byte) (res *types.TransactionIdentifier, meta map[string]interface{}, err error)
+ // ConstructionMetadataFromOptions
+ ConstructionMetadataFromOptions(ctx context.Context, options map[string]interface{}) (meta map[string]interface{}, err error)
+ OfflineClient
+}
+
+// OfflineClient defines the functionalities supported without having access to the node
+type OfflineClient interface {
+ NetworkInformationProvider
+ // SignedTx returns the signed transaction given the tx bytes (msgs) plus the signatures
+ SignedTx(ctx context.Context, txBytes []byte, sigs []*types.Signature) (signedTxBytes []byte, err error)
+ // TxOperationsAndSignersAccountIdentifiers returns the operations related to a transaction and the account
+ // identifiers if the transaction is signed
+ TxOperationsAndSignersAccountIdentifiers(signed bool, hexBytes []byte) (ops []*types.Operation, signers []*types.AccountIdentifier, err error)
+ // ConstructionPayload returns the construction payload given the request
+ ConstructionPayload(ctx context.Context, req *types.ConstructionPayloadsRequest) (resp *types.ConstructionPayloadsResponse, err error)
+ // PreprocessOperationsToOptions returns the options given the preprocess operations
+ PreprocessOperationsToOptions(ctx context.Context, req *types.ConstructionPreprocessRequest) (options map[string]interface{}, err error)
+ // AccountIdentifierFromPublicKey returns the account identifier given the public key
+ AccountIdentifierFromPublicKey(pubKey *types.PublicKey) (*types.AccountIdentifier, error)
+}
+```
+
+### 2. Cosmos SDK Implementation
+
+The Cosmos SDK implementation, based on version, takes care of satisfying the `Client` interface.
+In Stargate, Launchpad and 0.37, we have introduced the concept of `rosetta.Msg`; this message is not in the shared repository, as the `sdk.Msg` type differs between Cosmos SDK versions.
+
+The rosetta.Msg interface follows:
+
+```go
+// Msg represents a cosmos-sdk message that can be converted from and to a rosetta operation.
+type Msg interface {
+ sdk.Msg
+ ToOperations(withStatus, hasError bool) []*types.Operation
+ FromOperations(ops []*types.Operation) (sdk.Msg, error)
+}
+```
+
+Hence developers who want to extend the rosetta set of supported operations just need to extend their module's sdk.Msgs with the `ToOperations` and `FromOperations` methods.
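+
+The conversion pattern can be sketched in a dependency-free way. The `msgSend` message, the operation type string, and the debit/credit mapping below are hypothetical stand-ins for the real sdk and rosetta types:
+
+```go
+package main
+
+import "fmt"
+
+// operation is a pared-down stand-in for the rosetta types.Operation.
+type operation struct {
+    Type    string
+    Account string
+    Amount  string
+}
+
+// msgSend is a hypothetical bank message illustrating the pattern:
+// one sdk.Msg maps to a pair of rosetta operations (debit + credit).
+type msgSend struct {
+    From, To, Amount string
+}
+
+// ToOperations converts the message into rosetta-style operations.
+func (m msgSend) ToOperations() []*operation {
+    return []*operation{
+        {Type: "cosmos.bank.v1beta1.MsgSend", Account: m.From, Amount: "-" + m.Amount},
+        {Type: "cosmos.bank.v1beta1.MsgSend", Account: m.To, Amount: m.Amount},
+    }
+}
+
+// fromOperations reverses the mapping, recovering the message from its ops.
+func fromOperations(ops []*operation) (msgSend, error) {
+    if len(ops) != 2 {
+        return msgSend{}, fmt.Errorf("expected 2 operations, got %d", len(ops))
+    }
+    return msgSend{From: ops[0].Account, To: ops[1].Account, Amount: ops[1].Amount}, nil
+}
+
+func main() {
+    msg := msgSend{From: "cosmos1aaa", To: "cosmos1bbb", Amount: "10uatom"}
+    ops := msg.ToOperations()
+    back, err := fromOperations(ops)
+    fmt.Println(len(ops), back == msg, err)
+}
+```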
+
+### 3. API service invocation
+
+As stated at the start, application developers will have two methods for invocation of the Rosetta API service:
+
+1. Shared process for both application and API
+2. Standalone API service
+
+#### Shared Process (Only Stargate)
+
+Rosetta API service could run within the same execution process as the application. This would be enabled via app.toml settings; if gRPC is not enabled, the rosetta instance would be spun up in offline mode (tx building capabilities only).
+
+#### Separate API service
+
+Client application developers can write a new command to launch a Rosetta API server as a separate process too, using the rosetta command contained in the `/server/rosetta` package. Construction of the command depends on Cosmos SDK version. Examples can be found inside `simd` for stargate, and `contrib/rosetta/simapp` for other release series.
+
+## Status
+
+Proposed
+
+## Consequences
+
+### Positive
+
+* Out-of-the-box Rosetta API support within Cosmos SDK.
+* Blockchain interface standardisation
+
+## References
+
+* https://www.rosetta-api.org/
diff --git a/copy-of-sdk-versioned_docs/version-0.47/build/architecture/adr-036-arbitrary-signature.md b/copy-of-sdk-versioned_docs/version-0.47/build/architecture/adr-036-arbitrary-signature.md
new file mode 100644
index 00000000..fe9dada5
--- /dev/null
+++ b/copy-of-sdk-versioned_docs/version-0.47/build/architecture/adr-036-arbitrary-signature.md
@@ -0,0 +1,132 @@
+# ADR 036: Arbitrary Message Signature Specification
+
+## Changelog
+
+* 28/10/2020 - Initial draft
+
+## Authors
+
+* Antoine Herzog (@antoineherzog)
+* Zaki Manian (@zmanian)
+* Aleksandr Bezobchuk (@alexanderbez) [1]
+* Frojdi Dymylja (@fdymylja)
+
+## Status
+
+Draft
+
+## Abstract
+
+Currently, in the Cosmos SDK, there is no convention for signing arbitrary messages as there is on Ethereum. With this specification, we propose a way for the Cosmos SDK ecosystem to sign and validate off-chain arbitrary messages.
+
+This specification serves the purpose of covering every use case; this means that Cosmos SDK application developers decide how to serialize and represent `Data` to users.
+
+## Context
+
+Having the ability to sign messages off-chain has proven to be a fundamental aspect of nearly any blockchain. The notion of signing messages off-chain has many added benefits, such as saving on computational costs and reducing transaction throughput and overhead. Within the context of the Cosmos, some of the major applications of signing such data include, but are not limited to, providing a cryptographically secure and verifiable means of proving validator identity and possibly associating it with some other framework or organization. Another application is the ability to sign Cosmos messages with a Ledger or similar HSM device.
+
+Further context and use cases can be found in the references links.
+
+## Decision
+
+The aim is to be able to sign arbitrary messages, even using a Ledger or similar HSM device.
+
+As a result signed messages should look roughly like Cosmos SDK messages but **must not** be a valid on-chain transaction. `chain-id`, `account_number` and `sequence` can all be assigned invalid values.
+
+Cosmos SDK 0.40 also introduces the concept of "auth_info", which can specify SIGN_MODEs.
+
+A spec should include an `auth_info` that supports SIGN_MODE_DIRECT and SIGN_MODE_LEGACY_AMINO.
+
+We create the `offchain` proto definitions and extend the auth module with an `offchain` package to offer functionalities to verify and sign offline messages.
+
+An offchain transaction follows these rules:
+
+* the memo must be empty
+* nonce, sequence number must be equal to 0
+* chain-id must be equal to “”
+* fee gas must be equal to 0
+* fee amount must be an empty array
+
+Verification of an offchain transaction follows the same rules as an onchain one, except for the spec differences highlighted above.
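+
+The rules above can be checked mechanically. The following is a self-contained sketch; the `offchainTx` struct is a simplified stand-in for the real tx envelope, carrying only the constrained fields:
+
+```go
+package main
+
+import (
+    "errors"
+    "fmt"
+)
+
+// offchainTx carries only the fields constrained by the offchain rules.
+type offchainTx struct {
+    Memo      string
+    Sequence  uint64
+    ChainID   string
+    FeeGas    uint64
+    FeeAmount []string
+}
+
+// validateOffchain enforces the spec: any value that would make the
+// envelope a valid on-chain transaction is rejected.
+func validateOffchain(tx offchainTx) error {
+    switch {
+    case tx.Memo != "":
+        return errors.New("memo must be empty")
+    case tx.Sequence != 0:
+        return errors.New("sequence must be 0")
+    case tx.ChainID != "":
+        return errors.New(`chain-id must be ""`)
+    case tx.FeeGas != 0:
+        return errors.New("fee gas must be 0")
+    case len(tx.FeeAmount) != 0:
+        return errors.New("fee amount must be empty")
+    }
+    return nil
+}
+
+func main() {
+    // A zero-valued envelope satisfies all the offchain rules.
+    fmt.Println(validateOffchain(offchainTx{}))
+    // A real chain-id would make it broadcastable, so it is rejected.
+    fmt.Println(validateOffchain(offchainTx{ChainID: "cosmoshub-4"}))
+}
+```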
+
+The first message added to the `offchain` package is `MsgSignData`.
+
+`MsgSignData` allows developers to sign arbitrary bytes that are valid offchain only. `Signer` is the account address of the signer. `Data` is arbitrary bytes which can represent `text`, `files`, or `object`s. It is the application developers' decision how `Data` should be serialized, deserialized, and which object it represents in their context.
+
+Proto definition:
+
+```protobuf
+// MsgSignData defines an arbitrary, general-purpose, off-chain message
+message MsgSignData {
+ // Signer is the sdk.AccAddress of the message signer
+ bytes Signer = 1 [(gogoproto.jsontag) = "signer", (gogoproto.casttype) = "github.com/cosmos/cosmos-sdk/types.AccAddress"];
+ // Data represents the raw bytes of the content that is signed (text, json, etc)
+ bytes Data = 2 [(gogoproto.jsontag) = "data"];
+}
+```
+
+Signed MsgSignData json example:
+
+```json
+{
+ "type": "cosmos-sdk/StdTx",
+ "value": {
+ "msg": [
+ {
+ "type": "sign/MsgSignData",
+ "value": {
+ "signer": "cosmos1hftz5ugqmpg9243xeegsqqav62f8hnywsjr4xr",
+ "data": "cmFuZG9t"
+ }
+ }
+ ],
+ "fee": {
+ "amount": [],
+ "gas": "0"
+ },
+ "signatures": [
+ {
+ "pub_key": {
+ "type": "tendermint/PubKeySecp256k1",
+ "value": "AqnDSiRoFmTPfq97xxEb2VkQ/Hm28cPsqsZm9jEVsYK9"
+ },
+ "signature": "8y8i34qJakkjse9pOD2De+dnlc4KvFgh0wQpes4eydN66D9kv7cmCEouRrkka9tlW9cAkIL52ErB+6ye7X5aEg=="
+ }
+ ],
+ "memo": ""
+ }
+}
+```
+
+## Consequences
+
+There is now a specification of how messages that are not meant to be broadcast to a live chain should be formed.
+
+### Backwards Compatibility
+
+Backwards compatibility is maintained as this is a new message spec definition.
+
+### Positive
+
+* A common format that can be used by multiple applications to sign and verify off-chain messages.
+* The specification is primitive which means it can cover every use case without limiting what is possible to fit inside it.
+* It gives room for other off-chain messages specifications that aim to target more specific and common use cases such as off-chain-based authN/authZ layers [2].
+
+### Negative
+
+* Current proposal requires a fixed relationship between an account address and a public key.
+* Doesn't work with multisig accounts.
+
+## Further discussion
+
+* Regarding security in `MsgSignData`, the developer using `MsgSignData` is in charge of making the content laying in `Data` non-replayable when, and if, needed.
+* The offchain package will be further extended with extra messages that target specific use cases such as, but not limited to, authentication in applications, payment channels, L2 solutions in general.
+
+## References
+
+1. https://github.com/cosmos/ics/pull/33
+2. https://github.com/cosmos/cosmos-sdk/pull/7727#discussion_r515668204
+3. https://github.com/cosmos/cosmos-sdk/pull/7727#issuecomment-722478477
+4. https://github.com/cosmos/cosmos-sdk/pull/7727#issuecomment-721062923
diff --git a/copy-of-sdk-versioned_docs/version-0.47/build/architecture/adr-037-gov-split-vote.md b/copy-of-sdk-versioned_docs/version-0.47/build/architecture/adr-037-gov-split-vote.md
new file mode 100644
index 00000000..0a3b9bc4
--- /dev/null
+++ b/copy-of-sdk-versioned_docs/version-0.47/build/architecture/adr-037-gov-split-vote.md
@@ -0,0 +1,111 @@
+# ADR 037: Governance split votes
+
+## Changelog
+
+* 2020/10/28: Initial draft
+
+## Status
+
+Accepted
+
+## Abstract
+
+This ADR defines a modification to the governance module that would allow a staker to split their votes into several voting options. For example, they could use 70% of their voting power to vote Yes and 30% of their voting power to vote No.
+
+## Context
+
+Currently, an address can cast a vote with only one option (Yes/No/Abstain/NoWithVeto) and must use their full voting power behind that choice.
+
+However, often the entity owning that address might not be a single individual. For example, a company might have different stakeholders who want to vote differently, so it makes sense to allow them to split their voting power. Another example use case is exchanges. Many centralized exchanges often stake a portion of their users' tokens in their custody. Currently, it is not possible for them to do "passthrough voting" and give their users voting rights over their tokens. However, with this system, exchanges can poll their users for voting preferences, and then vote on-chain proportionally to the results of the poll.
+
+## Decision
+
+We modify the vote structs to be
+
+```go
+type WeightedVoteOption struct {
+ Option string
+ Weight sdk.Dec
+}
+
+type Vote struct {
+ ProposalID int64
+ Voter sdk.Address
+ Options []WeightedVoteOption
+}
+```
+
+And for backwards compatibility, we introduce `MsgVoteWeighted` while keeping `MsgVote`.
+
+```go
+type MsgVote struct {
+ ProposalID int64
+ Voter sdk.Address
+ Option Option
+}
+
+type MsgVoteWeighted struct {
+ ProposalID int64
+ Voter sdk.Address
+ Options []WeightedVoteOption
+}
+```
+
+The `ValidateBasic` of a `MsgVoteWeighted` struct would require that
+
+1. The sum of all the Weights is equal to 1.0
+2. No Option is repeated
+
+The governance tally function will iterate over all the options in a vote and add to the tally the result of the voter's voting power multiplied by the weight for that option.
+
+```go
+tally() {
+ results := map[types.VoteOption]sdk.Dec
+
+ for _, vote := range votes {
+ for i, weightedOption := range vote.Options {
+ results[weightedOption.Option] += getVotingPower(vote.voter) * weightedOption.Weight
+ }
+ }
+}
+```
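+
+The pseudocode above can be made runnable. In this sketch, `float64` stands in for `sdk.Dec` and voting power is assumed to come from a simple map rather than the staking keeper:
+
+```go
+package main
+
+import "fmt"
+
+// weightedVoteOption mirrors the struct above, with a float weight
+// standing in for sdk.Dec to keep the sketch dependency-free.
+type weightedVoteOption struct {
+    Option string
+    Weight float64
+}
+
+type vote struct {
+    Voter   string
+    Options []weightedVoteOption
+}
+
+// tally accumulates votingPower * weight per option, as described above.
+func tally(votes []vote, votingPower map[string]float64) map[string]float64 {
+    results := map[string]float64{}
+    for _, v := range votes {
+        for _, wo := range v.Options {
+            results[wo.Option] += votingPower[v.Voter] * wo.Weight
+        }
+    }
+    return results
+}
+
+func main() {
+    votes := []vote{
+        {Voter: "alice", Options: []weightedVoteOption{{"yes", 0.7}, {"no", 0.3}}},
+        {Voter: "bob", Options: []weightedVoteOption{{"yes", 1.0}}},
+    }
+    // alice splits 100 power 70/30; bob puts all 50 power on yes.
+    power := map[string]float64{"alice": 100, "bob": 50}
+    fmt.Println(tally(votes, power))
+}
+```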
+
+The CLI command for creating a multi-option vote would be as such:
+
+```shell
+simd tx gov vote 1 "yes=0.6,no=0.3,abstain=0.05,no_with_veto=0.05" --from mykey
+```
+
+To create a single-option vote a user can do either
+
+```shell
+simd tx gov vote 1 "yes=1" --from mykey
+```
+
+or
+
+```shell
+simd tx gov vote 1 yes --from mykey
+```
+
+to maintain backwards compatibility.
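+
+A sketch of how the CLI argument could be parsed and checked against the `ValidateBasic` rules follows. It uses `float64` in place of `sdk.Dec`, so the sum check needs a small tolerance (with exact decimal arithmetic, as in `sdk.Dec`, the check would be strict equality); the function name and error messages are assumptions:
+
+```go
+package main
+
+import (
+    "fmt"
+    "math"
+    "strconv"
+    "strings"
+)
+
+// parseWeightedVote parses "yes=0.6,no=0.3,..." into option weights and
+// enforces the two ValidateBasic rules: weights sum to 1, no repeats.
+func parseWeightedVote(s string) (map[string]float64, error) {
+    opts := map[string]float64{}
+    sum := 0.0
+    for _, part := range strings.Split(s, ",") {
+        name, weightStr, found := strings.Cut(part, "=")
+        if !found {
+            // bare option, e.g. "yes", kept for backwards compatibility
+            name, weightStr = part, "1"
+        }
+        if _, dup := opts[name]; dup {
+            return nil, fmt.Errorf("option %q repeated", name)
+        }
+        w, err := strconv.ParseFloat(weightStr, 64)
+        if err != nil {
+            return nil, err
+        }
+        opts[name] = w
+        sum += w
+    }
+    if math.Abs(sum-1.0) > 1e-9 {
+        return nil, fmt.Errorf("weights sum to %v, want 1", sum)
+    }
+    return opts, nil
+}
+
+func main() {
+    opts, err := parseWeightedVote("yes=0.6,no=0.3,abstain=0.05,no_with_veto=0.05")
+    fmt.Println(opts, err)
+}
+```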
+
+## Consequences
+
+### Backwards Compatibility
+
+* Previous VoteMsg types will remain the same and so clients will not have to update their procedure unless they want to support the WeightedVoteMsg feature.
+* When querying a Vote struct from state, its structure will be different, and so clients wanting to display all voters and their respective votes will have to handle the new format and the fact that a single voter can have split votes.
+* The result of querying the tally function should have the same API for clients.
+
+### Positive
+
+* Can make the voting process more accurate for addresses representing multiple stakeholders, often some of the largest addresses.
+
+### Negative
+
+* Is more complex than simple voting, and so may be harder to explain to users. However, this is mostly mitigated because the feature is opt-in.
+
+### Neutral
+
+* Relatively minor change to governance tally function.
diff --git a/copy-of-sdk-versioned_docs/version-0.47/build/architecture/adr-038-state-listening.md b/copy-of-sdk-versioned_docs/version-0.47/build/architecture/adr-038-state-listening.md
new file mode 100644
index 00000000..212d275d
--- /dev/null
+++ b/copy-of-sdk-versioned_docs/version-0.47/build/architecture/adr-038-state-listening.md
@@ -0,0 +1,822 @@
+# ADR 038: KVStore state listening
+
+## Changelog
+
+* 11/23/2020: Initial draft
+* 10/06/2022: Introduce plugin system based on hashicorp/go-plugin
+* 10/14/2022:
+ * Add `ListenCommit`, flatten the state writes in a block to a single batch.
+ * Remove listeners from cache stores, should only listen to `rootmulti.Store`.
+ * Remove `HaltAppOnDeliveryError()`; the errors are propagated by default, and implementations should return nil if they don't want to propagate errors.
+
+## Status
+
+Proposed
+
+## Abstract
+
+This ADR defines a set of changes to enable listening to state changes of individual KVStores and exposing this data to consumers.
+
+## Context
+
+Currently, KVStore data can be remotely accessed through [Queries](https://github.com/cosmos/cosmos-sdk/blob/master/docs/building-modules/02-messages-and-queries.md#queries)
+which proceed either through Tendermint and the ABCI, or through the gRPC server.
+In addition to these request/response queries, it would be beneficial to have a means of listening to state changes as they occur in real time.
+
+## Decision
+
+We will modify the `CommitMultiStore` interface and its concrete (`rootmulti`) implementations and introduce a new `listenkv.Store` to allow listening to state changes in underlying KVStores. We don't need to listen to cache stores, because we can't be sure that the writes will be committed eventually, and the writes are duplicated in `rootmulti.Store` eventually, so we should only listen to `rootmulti.Store`.
+We will introduce a plugin system for configuring and running streaming services that write these state changes and their surrounding ABCI message context to different destinations.
+
+### Listening
+
+In a new file, `store/types/listening.go`, we will create a `MemoryListener` struct for streaming out protobuf encoded KV pairs state changes from a KVStore.
+The `MemoryListener` will be used internally by the concrete `rootmulti` implementation to collect state changes from KVStores.
+
+```go
+// MemoryListener listens to the state writes and accumulates the records in memory.
+type MemoryListener struct {
+ stateCache []StoreKVPair
+}
+
+// NewMemoryListener creates a listener that accumulates the state writes in memory.
+func NewMemoryListener() *MemoryListener {
+ return &MemoryListener{}
+}
+
+// OnWrite writes state change events to the internal cache
+func (fl *MemoryListener) OnWrite(storeKey StoreKey, key []byte, value []byte, delete bool) {
+ fl.stateCache = append(fl.stateCache, StoreKVPair{
+ StoreKey: storeKey.Name(),
+ Delete: delete,
+ Key: key,
+ Value: value,
+ })
+}
+
+// PopStateCache returns the current state cache and sets it to nil
+func (fl *MemoryListener) PopStateCache() []StoreKVPair {
+ res := fl.stateCache
+ fl.stateCache = nil
+ return res
+}
+```
+
+We will also define a protobuf type for the KV pairs. In addition to the key and value fields this message
+will include the StoreKey for the originating KVStore so that we can collect information from separate KVStores and determine the source of each KV pair.
+
+```protobuf
+message StoreKVPair {
+  optional string store_key = 1; // the store key for the KVStore this pair originates from
+  required bool delete = 2; // true indicates a delete operation, false indicates a set operation
+  required bytes key = 3;
+  required bytes value = 4;
+}
+```
+
+### ListenKVStore
+
+We will create a new `Store` type `listenkv.Store` that the `rootmulti` store will use to wrap a `KVStore` to enable state listening.
+We will configure the `Store` with a `MemoryListener` which will collect state changes for output to specific destinations.
+
+```go
+// Store implements the KVStore interface with listening enabled.
+// Operations are traced on each core KVStore call and written to the
+// underlying listener with the proper key and operation.
+type Store struct {
+    parent         types.KVStore
+    listener       *types.MemoryListener
+    parentStoreKey types.StoreKey
+}
+
+// NewStore returns a reference to a new listenkv.Store given a parent
+// KVStore implementation, its StoreKey, and a MemoryListener.
+func NewStore(parent types.KVStore, psk types.StoreKey, listener *types.MemoryListener) *Store {
+    return &Store{parent: parent, listener: listener, parentStoreKey: psk}
+}
+
+// Set implements the KVStore interface. It traces a write operation and
+// delegates the Set call to the parent KVStore.
+func (s *Store) Set(key []byte, value []byte) {
+    types.AssertValidKey(key)
+    s.parent.Set(key, value)
+    s.listener.OnWrite(s.parentStoreKey, key, value, false)
+}
+
+// Delete implements the KVStore interface. It traces a write operation and
+// delegates the Delete call to the parent KVStore.
+func (s *Store) Delete(key []byte) {
+    s.parent.Delete(key)
+    s.listener.OnWrite(s.parentStoreKey, key, nil, true)
+}
+```
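+To make the wrap-and-record pattern above concrete, here is a minimal, self-contained sketch. The `mapStore` and `listenStore` types are simplified stand-ins for illustration only, not the SDK's actual types: writes go to the parent store and are mirrored into the listener, and `PopStateCache` drains the accumulated pairs.
+
+```go
+package main
+
+import "fmt"
+
+// StoreKVPair and MemoryListener are simplified stand-ins for the SDK types above.
+type StoreKVPair struct {
+    StoreKey string
+    Delete   bool
+    Key      []byte
+    Value    []byte
+}
+
+type MemoryListener struct{ stateCache []StoreKVPair }
+
+func (l *MemoryListener) OnWrite(storeKey string, key, value []byte, delete bool) {
+    l.stateCache = append(l.stateCache, StoreKVPair{StoreKey: storeKey, Delete: delete, Key: key, Value: value})
+}
+
+func (l *MemoryListener) PopStateCache() []StoreKVPair {
+    res := l.stateCache
+    l.stateCache = nil
+    return res
+}
+
+// mapStore is a toy KVStore backed by a map.
+type mapStore map[string][]byte
+
+func (m mapStore) Set(key, value []byte) { m[string(key)] = value }
+func (m mapStore) Delete(key []byte)     { delete(m, string(key)) }
+
+// listenStore wraps a parent store and mirrors every write to the listener,
+// mimicking what listenkv.Store does.
+type listenStore struct {
+    parent   mapStore
+    listener *MemoryListener
+    storeKey string
+}
+
+func (s listenStore) Set(key, value []byte) {
+    s.parent.Set(key, value)
+    s.listener.OnWrite(s.storeKey, key, value, false)
+}
+
+func (s listenStore) Delete(key []byte) {
+    s.parent.Delete(key)
+    s.listener.OnWrite(s.storeKey, key, nil, true)
+}
+
+func main() {
+    listener := &MemoryListener{}
+    store := listenStore{parent: mapStore{}, listener: listener, storeKey: "bank"}
+
+    store.Set([]byte("balance/alice"), []byte("100"))
+    store.Delete([]byte("balance/bob"))
+
+    for _, pair := range listener.PopStateCache() {
+        fmt.Printf("%s delete=%v key=%s\n", pair.StoreKey, pair.Delete, pair.Key)
+    }
+    // After popping, the cache is empty.
+    fmt.Println("remaining:", len(listener.PopStateCache()))
+}
+```
+
+Note that the writes are recorded in call order, which is what lets a consumer replay the state transitions of a block deterministically.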
+
+### MultiStore interface updates
+
+We will update the `CommitMultiStore` interface to allow us to attach a `MemoryListener` to specific `KVStore`s.
+Note that the `MemoryListener` will be attached internally by the concrete `rootmulti` implementation.
+
+```go
+type CommitMultiStore interface {
+    ...
+
+    // AddListeners adds listeners for the KVStores belonging to the provided StoreKeys
+    AddListeners(keys []StoreKey)
+
+    // PopStateCache returns the accumulated state change messages from the MemoryListener
+    PopStateCache() []StoreKVPair
+}
+```
+
+
+### MultiStore implementation updates
+
+We will adjust the `rootmulti` `GetKVStore` method to wrap the returned `KVStore` with a `listenkv.Store` if listening is turned on for that `Store`.
+
+```go
+func (rs *Store) GetKVStore(key types.StoreKey) types.KVStore {
+    store := rs.stores[key].(types.KVStore)
+
+    if rs.TracingEnabled() {
+        store = tracekv.NewStore(store, rs.traceWriter, rs.traceContext)
+    }
+    if rs.ListeningEnabled(key) {
+        store = listenkv.NewStore(store, key, rs.listeners[key])
+    }
+
+    return store
+}
+```
+
+We will implement `AddListeners` to manage KVStore listeners internally and implement `PopStateCache`
+for a means of retrieving the current state.
+
+```go
+// AddListeners adds a state change listener for the provided KVStores.
+// Note that a single MemoryListener is shared across all of the provided keys.
+func (rs *Store) AddListeners(keys []types.StoreKey) {
+    listener := types.NewMemoryListener()
+    for i := range keys {
+        rs.listeners[keys[i]] = listener
+    }
+}
+```
+
+```go
+func (rs *Store) PopStateCache() []types.StoreKVPair {
+    var cache []types.StoreKVPair
+    for _, ls := range rs.listeners {
+        cache = append(cache, ls.PopStateCache()...)
+    }
+    sort.SliceStable(cache, func(i, j int) bool {
+        return cache[i].StoreKey < cache[j].StoreKey
+    })
+    return cache
+}
+```
+
+We will also adjust the `rootmulti` `CacheMultiStore` and `CacheMultiStoreWithVersion` methods to enable listening in
+the cache layer.
+
+```go
+func (rs *Store) CacheMultiStore() types.CacheMultiStore {
+    stores := make(map[types.StoreKey]types.CacheWrapper)
+    for k, v := range rs.stores {
+        store := v.(types.KVStore)
+        // Wire the listenkv.Store so that listeners observe writes flushed from the
+        // cache store; note that attaching the same listeners at additional layers
+        // would make them observe duplicated writes.
+        if rs.ListeningEnabled(k) {
+            store = listenkv.NewStore(store, k, rs.listeners[k])
+        }
+        stores[k] = store
+    }
+    return cachemulti.NewStore(rs.db, stores, rs.keysByName, rs.traceWriter, rs.getTracingContext())
+}
+```
+
+```go
+func (rs *Store) CacheMultiStoreWithVersion(version int64) (types.CacheMultiStore, error) {
+    // ...
+
+        // Wire the listenkv.Store so that listeners observe writes flushed from the
+        // cache store; note that attaching the same listeners at additional layers
+        // would make them observe duplicated writes.
+        if rs.ListeningEnabled(key) {
+            cacheStore = listenkv.NewStore(cacheStore, key, rs.listeners[key])
+        }
+
+        cachedStores[key] = cacheStore
+    }
+
+    return cachemulti.NewStore(rs.db, cachedStores, rs.keysByName, rs.traceWriter, rs.getTracingContext()), nil
+}
+```
+
+### Exposing the data
+
+#### Streaming Service
+
+We will introduce a new `ABCIListener` interface that plugs into the BaseApp and relays ABCI requests and responses
+so that the service can group the state changes with the ABCI requests.
+
+```go
+// baseapp/streaming.go
+
+// ABCIListener is the interface that we're exposing as a streaming service.
+type ABCIListener interface {
+    // ListenBeginBlock updates the streaming service with the latest BeginBlock messages
+    ListenBeginBlock(ctx context.Context, req abci.RequestBeginBlock, res abci.ResponseBeginBlock) error
+    // ListenEndBlock updates the streaming service with the latest EndBlock messages
+    ListenEndBlock(ctx context.Context, req abci.RequestEndBlock, res abci.ResponseEndBlock) error
+    // ListenDeliverTx updates the streaming service with the latest DeliverTx messages
+    ListenDeliverTx(ctx context.Context, req abci.RequestDeliverTx, res abci.ResponseDeliverTx) error
+    // ListenCommit updates the streaming service with the latest Commit messages and state changes
+    ListenCommit(ctx context.Context, res abci.ResponseCommit, changeSet []store.StoreKVPair) error
+}
+```
+
+#### BaseApp Registration
+
+We will add a new method to the `BaseApp` to enable the registration of streaming services:
+
+```go
+// SetStreamingService sets a streaming service into the BaseApp hooks
+func (app *BaseApp) SetStreamingService(s ABCIListener) {
+    // register the streaming service within the BaseApp;
+    // BaseApp will pass ABCI requests and responses to the registered streaming services
+    app.abciListeners = append(app.abciListeners, s)
+}
+```
+
+We will add two new fields to the `BaseApp` struct:
+
+```go
+type BaseApp struct {
+
+    ...
+
+    // abciListenersAsync determines whether abciListeners run asynchronously.
+    // When abciListenersAsync=false and stopNodeOnABCIListenerErr=false, listeners run synchronously but do not stop the node.
+    // When abciListenersAsync=true, stopNodeOnABCIListenerErr is ignored.
+    abciListenersAsync bool
+
+    // stopNodeOnABCIListenerErr halts the node when an ABCI streaming service listener returns an error.
+    // stopNodeOnABCIListenerErr=true must be paired with abciListenersAsync=false.
+    stopNodeOnABCIListenerErr bool
+}
+```
+
+#### ABCI Event Hooks
+
+We will modify the `BeginBlock`, `EndBlock`, `DeliverTx` and `Commit` methods to pass ABCI requests and responses
+to any streaming service hooks registered with the `BaseApp`.
+
+```go
+func (app *BaseApp) BeginBlock(req abci.RequestBeginBlock) (res abci.ResponseBeginBlock) {
+
+    ...
+
+    // call the streaming service hooks with the BeginBlock messages
+    for _, abciListener := range app.abciListeners {
+        ctx := app.deliverState.ctx
+        blockHeight := ctx.BlockHeight()
+        if app.abciListenersAsync {
+            go func(req abci.RequestBeginBlock, res abci.ResponseBeginBlock) {
+                if err := abciListener.ListenBeginBlock(ctx, req, res); err != nil {
+                    app.logger.Error("BeginBlock listening hook failed", "height", blockHeight, "err", err)
+                }
+            }(req, res)
+        } else {
+            if err := abciListener.ListenBeginBlock(ctx, req, res); err != nil {
+                app.logger.Error("BeginBlock listening hook failed", "height", blockHeight, "err", err)
+                if app.stopNodeOnABCIListenerErr {
+                    os.Exit(1)
+                }
+            }
+        }
+    }
+
+    return res
+}
+```
+
+```go
+func (app *BaseApp) EndBlock(req abci.RequestEndBlock) (res abci.ResponseEndBlock) {
+
+    ...
+
+    // call the streaming service hooks with the EndBlock messages
+    for _, abciListener := range app.abciListeners {
+        ctx := app.deliverState.ctx
+        blockHeight := ctx.BlockHeight()
+        if app.abciListenersAsync {
+            go func(req abci.RequestEndBlock, res abci.ResponseEndBlock) {
+                if err := abciListener.ListenEndBlock(ctx, req, res); err != nil {
+                    app.logger.Error("EndBlock listening hook failed", "height", blockHeight, "err", err)
+                }
+            }(req, res)
+        } else {
+            if err := abciListener.ListenEndBlock(ctx, req, res); err != nil {
+                app.logger.Error("EndBlock listening hook failed", "height", blockHeight, "err", err)
+                if app.stopNodeOnABCIListenerErr {
+                    os.Exit(1)
+                }
+            }
+        }
+    }
+
+    return res
+}
+```
+
+```go
+func (app *BaseApp) DeliverTx(req abci.RequestDeliverTx) abci.ResponseDeliverTx {
+
+    var abciRes abci.ResponseDeliverTx
+    defer func() {
+        // call the streaming service hooks with the DeliverTx messages
+        for _, abciListener := range app.abciListeners {
+            ctx := app.deliverState.ctx
+            blockHeight := ctx.BlockHeight()
+            if app.abciListenersAsync {
+                go func(req abci.RequestDeliverTx, res abci.ResponseDeliverTx) {
+                    if err := abciListener.ListenDeliverTx(ctx, req, res); err != nil {
+                        app.logger.Error("DeliverTx listening hook failed", "height", blockHeight, "err", err)
+                    }
+                }(req, abciRes)
+            } else {
+                if err := abciListener.ListenDeliverTx(ctx, req, abciRes); err != nil {
+                    app.logger.Error("DeliverTx listening hook failed", "height", blockHeight, "err", err)
+                    if app.stopNodeOnABCIListenerErr {
+                        os.Exit(1)
+                    }
+                }
+            }
+        }
+    }()
+
+    ...
+
+    return abciRes
+}
+```
+
+```go
+func (app *BaseApp) Commit() abci.ResponseCommit {
+
+    ...
+
+    res := abci.ResponseCommit{
+        Data:         commitID.Hash,
+        RetainHeight: retainHeight,
+    }
+
+    // call the streaming service hooks with the Commit messages
+    for _, abciListener := range app.abciListeners {
+        ctx := app.deliverState.ctx
+        blockHeight := ctx.BlockHeight()
+        changeSet := app.cms.PopStateCache()
+        if app.abciListenersAsync {
+            go func(res abci.ResponseCommit, changeSet []store.StoreKVPair) {
+                if err := abciListener.ListenCommit(ctx, res, changeSet); err != nil {
+                    app.logger.Error("Commit listening hook failed", "height", blockHeight, "err", err)
+                }
+            }(res, changeSet)
+        } else {
+            if err := abciListener.ListenCommit(ctx, res, changeSet); err != nil {
+                app.logger.Error("Commit listening hook failed", "height", blockHeight, "err", err)
+                if app.stopNodeOnABCIListenerErr {
+                    os.Exit(1)
+                }
+            }
+        }
+    }
+
+    ...
+
+    return res
+}
+```
+
+#### Go Plugin System
+
+We propose a plugin architecture for loading and running streaming plugins and other implementations. We will introduce a
+gRPC-based plugin system, built on [hashicorp/go-plugin](https://github.com/hashicorp/go-plugin), for loading and running Cosmos SDK plugins.
+Each plugin must have a struct that implements the `plugin.Plugin` interface and an `Impl` interface for processing messages over gRPC.
+Each plugin must also have a message protocol defined for the gRPC service:
+
+```go
+// streaming/plugins/abci/{plugin_version}/interface.go
+
+// Handshake is a common handshake that is shared by streaming and host.
+// This prevents users from executing bad plugins or executing a plugin
+// directly. It is a UX feature, not a security feature.
+var Handshake = plugin.HandshakeConfig{
+    ProtocolVersion:  1,
+    MagicCookieKey:   "ABCI_LISTENER_PLUGIN",
+    MagicCookieValue: "ef78114d-7bdf-411c-868f-347c99a78345",
+}
+
+// ListenerGRPCPlugin is the base struct for all kinds of go-plugin implementations.
+// It will be included in the interfaces of different plugins.
+type ListenerGRPCPlugin struct {
+    // GRPCPlugin must still implement the Plugin interface
+    plugin.Plugin
+    // Concrete implementation, written in Go. This is only used for plugins
+    // that are written in Go.
+    Impl baseapp.ABCIListener
+}
+
+func (p *ListenerGRPCPlugin) GRPCServer(_ *plugin.GRPCBroker, s *grpc.Server) error {
+    RegisterABCIListenerServiceServer(s, &GRPCServer{Impl: p.Impl})
+    return nil
+}
+
+func (p *ListenerGRPCPlugin) GRPCClient(
+    _ context.Context,
+    _ *plugin.GRPCBroker,
+    c *grpc.ClientConn,
+) (interface{}, error) {
+    return &GRPCClient{client: NewABCIListenerServiceClient(c)}, nil
+}
+```
+
+The `plugin.Plugin` interface has two methods, `Client` and `Server`. For our gRPC service these are `GRPCClient` and `GRPCServer`.
+The `Impl` field holds the concrete implementation of our `baseapp.ABCIListener` interface written in Go.
+Note: this is only used for plugin implementations written in Go.
+
+The advantage of having such a plugin system is that plugin authors can define the message protocol in a way that fits their use case.
+For example, when state change listening is desired, the `ABCIListener` message protocol can be defined as below (*for illustrative purposes only*).
+When state change listening is not desired, `ListenCommit` can be omitted from the protocol.
+
+```protobuf
+syntax = "proto3";
+
+...
+
+message Empty {}
+
+message ListenBeginBlockRequest {
+  RequestBeginBlock req = 1;
+  ResponseBeginBlock res = 2;
+}
+message ListenEndBlockRequest {
+  RequestEndBlock req = 1;
+  ResponseEndBlock res = 2;
+}
+message ListenDeliverTxRequest {
+  int64 block_height = 1;
+  RequestDeliverTx req = 2;
+  ResponseDeliverTx res = 3;
+}
+message ListenCommitRequest {
+  int64 block_height = 1;
+  ResponseCommit res = 2;
+  repeated StoreKVPair changeSet = 3;
+}
+
+// plugin that listens to state changes
+service ABCIListenerService {
+  rpc ListenBeginBlock(ListenBeginBlockRequest) returns (Empty);
+  rpc ListenEndBlock(ListenEndBlockRequest) returns (Empty);
+  rpc ListenDeliverTx(ListenDeliverTxRequest) returns (Empty);
+  rpc ListenCommit(ListenCommitRequest) returns (Empty);
+}
+```
+
+```protobuf
+...
+// plugin that doesn't listen to state changes
+service ABCIListenerService {
+  rpc ListenBeginBlock(ListenBeginBlockRequest) returns (Empty);
+  rpc ListenEndBlock(ListenEndBlockRequest) returns (Empty);
+  rpc ListenDeliverTx(ListenDeliverTxRequest) returns (Empty);
+}
+```
+
+Implementing the service above:
+
+```go
+// streaming/plugins/abci/{plugin_version}/grpc.go
+
+var (
+    _ baseapp.ABCIListener = (*GRPCClient)(nil)
+)
+
+// GRPCClient is an implementation of the ABCIListener interface that talks over RPC.
+type GRPCClient struct {
+    client ABCIListenerServiceClient
+}
+
+func (m *GRPCClient) ListenBeginBlock(ctx context.Context, req abci.RequestBeginBlock, res abci.ResponseBeginBlock) error {
+    _, err := m.client.ListenBeginBlock(ctx, &ListenBeginBlockRequest{Req: req, Res: res})
+    return err
+}
+
+func (m *GRPCClient) ListenEndBlock(ctx context.Context, req abci.RequestEndBlock, res abci.ResponseEndBlock) error {
+    _, err := m.client.ListenEndBlock(ctx, &ListenEndBlockRequest{Req: req, Res: res})
+    return err
+}
+
+func (m *GRPCClient) ListenDeliverTx(goCtx context.Context, req abci.RequestDeliverTx, res abci.ResponseDeliverTx) error {
+    ctx := sdk.UnwrapSDKContext(goCtx)
+    _, err := m.client.ListenDeliverTx(ctx, &ListenDeliverTxRequest{BlockHeight: ctx.BlockHeight(), Req: req, Res: res})
+    return err
+}
+
+func (m *GRPCClient) ListenCommit(goCtx context.Context, res abci.ResponseCommit, changeSet []store.StoreKVPair) error {
+    ctx := sdk.UnwrapSDKContext(goCtx)
+    _, err := m.client.ListenCommit(ctx, &ListenCommitRequest{BlockHeight: ctx.BlockHeight(), Res: res, ChangeSet: changeSet})
+    return err
+}
+
+// GRPCServer is the gRPC server that GRPCClient talks to.
+type GRPCServer struct {
+    // This is the real implementation
+    Impl baseapp.ABCIListener
+}
+
+func (m *GRPCServer) ListenBeginBlock(ctx context.Context, req *ListenBeginBlockRequest) (*Empty, error) {
+    return &Empty{}, m.Impl.ListenBeginBlock(ctx, req.Req, req.Res)
+}
+
+func (m *GRPCServer) ListenEndBlock(ctx context.Context, req *ListenEndBlockRequest) (*Empty, error) {
+    return &Empty{}, m.Impl.ListenEndBlock(ctx, req.Req, req.Res)
+}
+
+func (m *GRPCServer) ListenDeliverTx(ctx context.Context, req *ListenDeliverTxRequest) (*Empty, error) {
+    return &Empty{}, m.Impl.ListenDeliverTx(ctx, req.Req, req.Res)
+}
+
+func (m *GRPCServer) ListenCommit(ctx context.Context, req *ListenCommitRequest) (*Empty, error) {
+    return &Empty{}, m.Impl.ListenCommit(ctx, req.Res, req.ChangeSet)
+}
+```
+
+And the pre-compiled Go plugin `Impl` (*this is only used for plugins that are written in Go*):
+
+```go
+// streaming/plugins/abci/{plugin_version}/impl/plugin.go
+
+// Plugins are pre-compiled and loaded by the plugin system
+
+// ABCIListener is the implementation of the baseapp.ABCIListener interface
+type ABCIListener struct{}
+
+func (m *ABCIListener) ListenBeginBlock(ctx context.Context, req abci.RequestBeginBlock, res abci.ResponseBeginBlock) error {
+    // send data to external system
+    return nil
+}
+
+func (m *ABCIListener) ListenEndBlock(ctx context.Context, req abci.RequestEndBlock, res abci.ResponseEndBlock) error {
+    // send data to external system
+    return nil
+}
+
+func (m *ABCIListener) ListenDeliverTx(ctx context.Context, req abci.RequestDeliverTx, res abci.ResponseDeliverTx) error {
+    // send data to external system
+    return nil
+}
+
+func (m *ABCIListener) ListenCommit(ctx context.Context, res abci.ResponseCommit, changeSet []store.StoreKVPair) error {
+    // send data to external system
+    return nil
+}
+
+func main() {
+    plugin.Serve(&plugin.ServeConfig{
+        HandshakeConfig: grpc_abci_v1.Handshake,
+        Plugins: map[string]plugin.Plugin{
+            "grpc_plugin_v1": &grpc_abci_v1.ListenerGRPCPlugin{Impl: &ABCIListener{}},
+        },
+
+        // A non-nil value here enables gRPC serving for this plugin...
+        GRPCServer: plugin.DefaultGRPCServer,
+    })
+}
+```
+
+We will introduce a plugin loading system that will return `(interface{}, error)`.
+This allows for versioned plugins, where the plugin interface and gRPC protocol may change over time.
+In addition, it allows for building independent plugins that can expose different parts of the system over gRPC.
+
+```go
+func NewStreamingPlugin(name string, logLevel string) (interface{}, error) {
+    logger := hclog.New(&hclog.LoggerOptions{
+        Output: hclog.DefaultOutput,
+        Level:  toHclogLevel(logLevel),
+        Name:   fmt.Sprintf("plugin.%s", name),
+    })
+
+    // We're a host. Start by launching the streaming process.
+    env := os.Getenv(GetPluginEnvKey(name))
+    client := plugin.NewClient(&plugin.ClientConfig{
+        HandshakeConfig: HandshakeMap[name],
+        Plugins:         PluginMap,
+        Cmd:             exec.Command("sh", "-c", env),
+        Logger:          logger,
+        AllowedProtocols: []plugin.Protocol{
+            plugin.ProtocolNetRPC, plugin.ProtocolGRPC,
+        },
+    })
+
+    // Connect via RPC
+    rpcClient, err := client.Client()
+    if err != nil {
+        return nil, err
+    }
+
+    // Request the streaming plugin
+    return rpcClient.Dispense(name)
+}
+```
+
+We propose a `RegisterStreamingPlugin` function for the App to register `NewStreamingPlugin`s with its BaseApp.
+Streaming plugins can be of any type; therefore, the function takes an interface rather than a concrete type.
+For example, we could have plugins of type `ABCIListener`, `WasmListener` or `IBCListener`. Note that the `RegisterStreamingPlugin` function
+is a helper function and not a requirement. Plugin registration can easily be moved from the App to the BaseApp directly.
+
+```go
+// baseapp/streaming.go
+
+// RegisterStreamingPlugin registers streaming plugins with the App.
+// This method returns an error if a plugin is not supported.
+func RegisterStreamingPlugin(
+    bApp *BaseApp,
+    appOpts servertypes.AppOptions,
+    keys map[string]*types.KVStoreKey,
+    streamingPlugin interface{},
+) error {
+    switch t := streamingPlugin.(type) {
+    case ABCIListener:
+        registerABCIListenerPlugin(bApp, appOpts, keys, t)
+    default:
+        return fmt.Errorf("unexpected plugin type %T", t)
+    }
+    return nil
+}
+```
+
+```go
+func registerABCIListenerPlugin(
+    bApp *BaseApp,
+    appOpts servertypes.AppOptions,
+    keys map[string]*types.KVStoreKey,
+    abciListener ABCIListener,
+) {
+    asyncKey := fmt.Sprintf("%s.%s.%s", StreamingTomlKey, StreamingABCITomlKey, StreamingABCIAsync)
+    async := cast.ToBool(appOpts.Get(asyncKey))
+    stopNodeOnErrKey := fmt.Sprintf("%s.%s.%s", StreamingTomlKey, StreamingABCITomlKey, StreamingABCIStopNodeOnErrTomlKey)
+    stopNodeOnErr := cast.ToBool(appOpts.Get(stopNodeOnErrKey))
+    keysKey := fmt.Sprintf("%s.%s.%s", StreamingTomlKey, StreamingABCITomlKey, StreamingABCIKeysTomlKey)
+    exposeKeysStr := cast.ToStringSlice(appOpts.Get(keysKey))
+    exposedKeys := exposeStoreKeys(exposeKeysStr, keys)
+    bApp.cms.AddListeners(exposedKeys)
+    bApp.SetStreamingService(abciListener)
+    bApp.stopNodeOnABCIListenerErr = stopNodeOnErr
+    bApp.abciListenersAsync = async
+}
+```
+
+```go
+func exposeAll(list []string) bool {
+    for _, ele := range list {
+        if ele == "*" {
+            return true
+        }
+    }
+    return false
+}
+
+func exposeStoreKeys(keysStr []string, keys map[string]*types.KVStoreKey) []types.StoreKey {
+    var exposeStoreKeys []types.StoreKey
+    if exposeAll(keysStr) {
+        exposeStoreKeys = make([]types.StoreKey, 0, len(keys))
+        for _, storeKey := range keys {
+            exposeStoreKeys = append(exposeStoreKeys, storeKey)
+        }
+    } else {
+        exposeStoreKeys = make([]types.StoreKey, 0, len(keysStr))
+        for _, keyStr := range keysStr {
+            if storeKey, ok := keys[keyStr]; ok {
+                exposeStoreKeys = append(exposeStoreKeys, storeKey)
+            }
+        }
+    }
+    // sort storeKeys for deterministic output
+    sort.SliceStable(exposeStoreKeys, func(i, j int) bool {
+        return exposeStoreKeys[i].Name() < exposeStoreKeys[j].Name()
+    })
+
+    return exposeStoreKeys
+}
+```
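+The key-exposure helper can be exercised in isolation. The sketch below mirrors its wildcard and sorting behavior using a simplified `storeKey` stand-in (illustrative only, not the SDK's `types.KVStoreKey`):
+
+```go
+package main
+
+import (
+    "fmt"
+    "sort"
+)
+
+// storeKey is a simplified stand-in for types.KVStoreKey.
+type storeKey struct{ name string }
+
+func (k storeKey) Name() string { return k.name }
+
+func exposeAll(list []string) bool {
+    for _, ele := range list {
+        if ele == "*" {
+            return true
+        }
+    }
+    return false
+}
+
+// exposeStoreKeys mirrors the helper above: "*" exposes every key, otherwise
+// only the named keys are exposed; unknown names are skipped and the result
+// is sorted for deterministic output.
+func exposeStoreKeys(keysStr []string, keys map[string]storeKey) []storeKey {
+    var exposed []storeKey
+    if exposeAll(keysStr) {
+        for _, k := range keys {
+            exposed = append(exposed, k)
+        }
+    } else {
+        for _, s := range keysStr {
+            if k, ok := keys[s]; ok {
+                exposed = append(exposed, k)
+            }
+        }
+    }
+    sort.SliceStable(exposed, func(i, j int) bool { return exposed[i].Name() < exposed[j].Name() })
+    return exposed
+}
+
+func main() {
+    keys := map[string]storeKey{
+        "bank":    {"bank"},
+        "staking": {"staking"},
+        "auth":    {"auth"},
+    }
+    // Wildcard: all keys, sorted by name regardless of map iteration order.
+    for _, k := range exposeStoreKeys([]string{"*"}, keys) {
+        fmt.Println(k.Name())
+    }
+    // Explicit list: unknown keys are silently skipped.
+    fmt.Println(len(exposeStoreKeys([]string{"bank", "unknown"}, keys)))
+}
+```
+
+The sort matters because Go map iteration order is randomized; without it, the listener set (and thus any output keyed on it) would vary between runs.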
+
+The `NewStreamingPlugin` and `RegisterStreamingPlugin` functions are used to register a plugin with the App's BaseApp.
+
+e.g. in `NewSimApp`:
+
+```go
+func NewSimApp(
+    logger log.Logger,
+    db dbm.DB,
+    traceStore io.Writer,
+    loadLatest bool,
+    appOpts servertypes.AppOptions,
+    baseAppOptions ...func(*baseapp.BaseApp),
+) *SimApp {
+
+    ...
+
+    keys := sdk.NewKVStoreKeys(
+        authtypes.StoreKey, banktypes.StoreKey, stakingtypes.StoreKey,
+        minttypes.StoreKey, distrtypes.StoreKey, slashingtypes.StoreKey,
+        govtypes.StoreKey, paramstypes.StoreKey, ibchost.StoreKey, upgradetypes.StoreKey,
+        evidencetypes.StoreKey, ibctransfertypes.StoreKey, capabilitytypes.StoreKey,
+    )
+
+    ...
+
+    // register streaming services
+    streamingCfg := cast.ToStringMap(appOpts.Get(baseapp.StreamingTomlKey))
+    for service := range streamingCfg {
+        pluginKey := fmt.Sprintf("%s.%s.%s", baseapp.StreamingTomlKey, service, baseapp.StreamingPluginTomlKey)
+        pluginName := strings.TrimSpace(cast.ToString(appOpts.Get(pluginKey)))
+        if len(pluginName) > 0 {
+            logLevel := cast.ToString(appOpts.Get(flags.FlagLogLevel))
+            plugin, err := streaming.NewStreamingPlugin(pluginName, logLevel)
+            if err != nil {
+                tmos.Exit(err.Error())
+            }
+            if err := baseapp.RegisterStreamingPlugin(bApp, appOpts, keys, plugin); err != nil {
+                tmos.Exit(err.Error())
+            }
+        }
+    }
+
+    return app
+}
+```
+
+#### Configuration
+
+The plugin system will be configured within an App's TOML configuration files.
+
+```toml
+# gRPC streaming
+[streaming]
+
+# ABCI streaming service
+[streaming.abci]
+
+# The plugin version to use for ABCI listening
+plugin = "abci_v1"
+
+# List of kv store keys to listen to for state changes.
+# Set to ["*"] to expose all keys.
+keys = ["*"]
+
+# Enable abciListeners to run asynchronously.
+# When abciListenersAsync=false and stopNodeOnABCIListenerErr=false listeners will run synchronized but will not stop the node.
+# When abciListenersAsync=true stopNodeOnABCIListenerErr will be ignored.
+async = false
+
+# Whether to stop the node on message deliver error.
+stop-node-on-err = true
+```
+
+There will be four parameters for configuring the `ABCIListener` plugin: `streaming.abci.plugin`, `streaming.abci.keys`, `streaming.abci.async` and `streaming.abci.stop-node-on-err`.
+`streaming.abci.plugin` is the name of the plugin to use for streaming, `streaming.abci.keys` is the set of store keys to listen to,
+`streaming.abci.async` is a bool enabling asynchronous listening, and `streaming.abci.stop-node-on-err` is a bool that, when true and the service
+is running in synchronous mode (`streaming.abci.async=false`), halts the node on a listening error. Note that `streaming.abci.stop-node-on-err=true` will be ignored if `streaming.abci.async=true`.
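+The interaction between the two flags can be summarized as a small decision table. The `dispatchMode` helper below is a hypothetical illustration of the behavior described above, not SDK code:
+
+```go
+package main
+
+import "fmt"
+
+// dispatchMode sketches how the BaseApp hooks combine streaming.abci.async and
+// streaming.abci.stop-node-on-err (illustrative helper, not part of the SDK API).
+func dispatchMode(async, stopNodeOnErr bool) string {
+    switch {
+    case async:
+        // stop-node-on-err is ignored when listeners run asynchronously.
+        return "async: run listeners in goroutines, log errors"
+    case stopNodeOnErr:
+        return "sync: halt the node on a listener error"
+    default:
+        return "sync: log listener errors, keep the node running"
+    }
+}
+
+func main() {
+    fmt.Println(dispatchMode(false, true)) // the configuration shown above
+    fmt.Println(dispatchMode(true, true))  // stop-node-on-err is ignored
+}
+```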
+
+The configuration above supports additional streaming plugins: add the plugin to the `[streaming]` configuration section
+and register it with the `RegisterStreamingPlugin` helper function.
+
+Note that each plugin must include the `streaming.{service}.plugin` property, as it is required for looking up and registering the plugin
+with the App. All other properties are unique to the individual services.
+
+#### Encoding and decoding streams
+
+ADR-038 introduces the interfaces and types for streaming state changes out of KVStores, associating that
+data with its related ABCI requests and responses, and registering a service for consuming the data and streaming it to some destination in a final format.
+Rather than prescribing a final data format, this ADR leaves it to each specific plugin implementation to define and document its format.
+We take this approach because flexibility in the final format is necessary to support a wide range of streaming service plugins. For example,
+the data format for a streaming service that writes data out to a set of files will differ from that of one which writes to a Kafka topic.
+
+## Consequences
+
+These changes will provide a means of subscribing to KVStore state changes in real time.
+
+### Backwards Compatibility
+
+* This ADR changes the `CommitMultiStore` interface; implementations supporting the previous version of this interface will not support the new one
+
+### Positive
+
+* Ability to listen to KVStore state changes in real time and expose these events to external consumers
+
+### Negative
+
+* Changes `CommitMultiStore` interface and its implementations
+
+### Neutral
+
+* Introduces additional, but optional, complexity to configuring and running a Cosmos application
+* If an application developer opts to use these features to expose data, they need to be aware of the ramifications/risks of that data exposure as it pertains to the specifics of their application
diff --git a/copy-of-sdk-versioned_docs/version-0.47/build/architecture/adr-039-epoched-staking.md b/copy-of-sdk-versioned_docs/version-0.47/build/architecture/adr-039-epoched-staking.md
new file mode 100644
index 00000000..29418fc8
--- /dev/null
+++ b/copy-of-sdk-versioned_docs/version-0.47/build/architecture/adr-039-epoched-staking.md
@@ -0,0 +1,122 @@
+# ADR 039: Epoched Staking
+
+## Changelog
+
+* 10-Feb-2021: Initial Draft
+
+## Authors
+
+* Dev Ojha (@valardragon)
+* Sunny Aggarwal (@sunnya97)
+
+## Status
+
+Proposed
+
+## Abstract
+
+This ADR updates the proof of stake module to buffer the staking weight updates for a number of blocks before updating the consensus' staking weights. The length of the buffer is dubbed an epoch. The prior functionality of the staking module is then a special case of the abstracted module, with the epoch being set to 1 block.
+
+## Context
+
+The current proof of stake module takes the design decision to apply staking weight changes to the consensus engine immediately. This means that delegations and unbonds get applied immediately to the validator set. This decision was made primarily because it was the simplest to implement, and because we believed at the time that it would lead to better UX for clients.
+
+An alternative design choice is to allow buffering of staking updates (delegations, unbonds, validators joining) for a number of blocks. This 'epoched' proof of stake consensus provides the guarantee that the consensus weights for validators will not change mid-epoch, except in the event of a slash condition.
+
+Additionally, the UX hurdle may not be as significant as was previously thought. This is because it is possible to provide users immediate acknowledgement that their bond was recorded and will be executed.
+
+Furthermore, it has become clearer over time that immediate execution of staking events comes with limitations, such as:
+
+* Threshold-based cryptography. One of the main limitations is that because the validator set can change so regularly, running multiparty computation with a fixed validator set is difficult. Many threshold-based cryptographic features for blockchains, such as randomness beacons and threshold decryption, require a computationally expensive DKG process (which takes much longer than one block to complete). To use these productively, we need to guarantee that the result of the DKG will be used for a reasonably long time; it wouldn't be feasible to rerun the DKG every block. Epoching staking guarantees that we only need to run a new DKG once every epoch.
+
+* Light client efficiency. This would lessen the overhead for IBC when there is high churn in the validator set. In the Tendermint light client bisection algorithm, the number of headers you need to verify is related to bounding the difference in validator sets between a trusted header and the latest header. If the difference is too great, you verify more headers in between the two. By limiting the frequency of validator set changes, we can reduce the worst-case size of IBC light client proofs, which occurs when a validator set has high churn.
+
+* Fairness of deterministic leader election. Currently we have no way of reasoning about the fairness of deterministic leader election in the presence of staking changes without epochs (tendermint/spec#217). Breaking the fairness of leader election is profitable for validators, as they earn additional rewards from being the proposer. Adding epochs at least makes it easier for our deterministic leader election to match something we can prove secure. (Albeit, we still haven't proven whether our current algorithm is fair with > 2 validators in the presence of stake changes.)
+
+* Staking derivative design. Currently, reward distribution is done lazily using the F1 fee distribution. While saving computational complexity, lazy accounting requires a more stateful staking implementation. Right now, each delegation entry has to track the time of last withdrawal. Handling this can be a challenge for some staking derivatives designs that seek to provide fungibility for all tokens staked to a single validator. Force-withdrawing rewards to users can help solve this, however it is infeasible to force-withdraw rewards to users on a per block basis. With epochs, a chain could more easily alter the design to have rewards be forcefully withdrawn (iterating over delegator accounts only once per-epoch), and can thus remove delegation timing from state. This may be useful for certain staking derivative designs.
+
+## Design considerations
+
+### Slashing
+
+There is a design consideration for whether to apply a slash immediately or at the end of an epoch. A slash event should apply only to members who were actually staked at the time of the infraction, namely during the epoch in which the slash event occurred.
+
+Applying it immediately can be viewed as offering greater consensus-layer security, at potential cost to the aforementioned use cases. The benefits of immediate slashing for consensus-layer security can all be obtained by executing the validator jailing immediately (thus removing it from the validator set), and delaying the actual slash change to the validator's weight until the epoch boundary. For the use cases mentioned above, workarounds can be integrated to avoid problems, as follows:
+
+* For threshold-based cryptography, this setting will have the threshold cryptography use the original epoch weights, while consensus has an update that lets it more rapidly benefit from additional security. If the threshold-based cryptography blocks liveness of the chain, then we have effectively raised the liveness threshold of the remaining validators for the rest of the epoch. (Alternatively, jailed nodes could still contribute shares.) This plan will fail in the extreme case that more than one third of the validators have been jailed within a single epoch. For such an extreme scenario, the chain should already have its own custom incident response plan, and defining how to handle the threshold cryptography should be a part of it.
+* For light client efficiency, a bit can be included in the header indicating an intra-epoch slash (as in https://github.com/tendermint/spec/issues/199).
+* For fairness of deterministic leader election, applying a slash or jailing within an epoch would break the guarantee we were seeking to provide. This re-introduces a new (but significantly simpler) problem for trying to provide fairness guarantees: validators can adversarially elect to remove themselves from the set of proposers. From a security perspective, this could potentially be handled by two different mechanisms (or prove to still be too difficult to achieve). One is making a security statement acknowledging the ability for an adversary to force an ahead-of-time fixed threshold of users to drop out of the proposer set within an epoch. The second method would be to parameterize such that the cost of a slash within the epoch far outweighs the benefits of being a proposer. However, this latter criterion is quite dubious, since being a proposer can have many advantageous side effects in chains with complex state machines. (Namely, DeFi games such as Fomo3D.)
+* For staking derivative design, no issue is introduced. This does not increase the state size of staking records, since whether a slash has occurred is fully queryable given the validator address.
+
+### Token lockup
+
+When someone submits a transaction to delegate, even though they are not immediately staked, their tokens should be moved into a pool managed by the staking module, which will then be used at the end of the epoch. This prevents the situation where a user stakes, then spends those tokens without realizing they were already allocated for staking, and thus has their staking tx fail.
+
+### Pipelining the epochs
+
+For threshold based cryptography in particular, we need a pipeline for epoch changes. This is because when we are in epoch N, we want the epoch N+1 weights to be fixed so that the validator set can do the DKG accordingly. So if we are currently in epoch N, the stake weights for epoch N+1 should already be fixed, and new stake changes should be getting applied to epoch N + 2.
+
+This can be handled by making a parameter for the epoch pipeline length. This parameter should not be alterable except during hard forks, to mitigate implementation complexity of switching the pipeline length.
+
+With pipeline length 1, if I redelegate during epoch N, then my redelegation is applied prior to the beginning of epoch N+1.
+With pipeline length 2, if I redelegate during epoch N, then my redelegation is applied prior to the beginning of epoch N+2.
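+
+The mapping from submission epoch to application epoch described above can be sketched as a tiny helper (a hypothetical function for illustration, not part of the SDK):
+
+```go
+package main
+
+import "fmt"
+
+// applyAtEpoch returns the epoch at whose beginning a staking change
+// submitted during currentEpoch takes effect, for a given pipeline length.
+func applyAtEpoch(currentEpoch, pipelineLength uint64) uint64 {
+	return currentEpoch + pipelineLength
+}
+
+func main() {
+	// Pipeline length 1: a redelegation in epoch 10 applies before epoch 11.
+	fmt.Println(applyAtEpoch(10, 1)) // 11
+	// Pipeline length 2: the same redelegation applies before epoch 12.
+	fmt.Println(applyAtEpoch(10, 2)) // 12
+}
+```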
+
+### Rewards
+
+Even though all staking updates are applied at epoch boundaries, rewards can still be distributed immediately when they are claimed. This is because they do not affect the current stake weights, as we do not implement auto-bonding of rewards. If such a feature were to be implemented, it would have to be set up so that rewards are auto-bonded at the epoch boundary.
+
+### Parameterizing the epoch length
+
+When choosing the epoch length, there is a trade-off between queued state/computation buildup and countering the previously discussed limitations of immediate execution, if they apply to a given chain.
+
+Until an ABCI mechanism for variable block times is introduced, it is ill-advised to use long epoch lengths due to the computation buildup. This is because when a block's execution time is greater than the expected block time from Tendermint, rounds may increment.
+
+## Decision
+
+**Step-1**: Implement buffering of all staking and slashing messages.
+
+First we create a pool, called the `EpochDelegationPool`, for storing tokens that are being bonded but should only be applied at the epoch boundary. Then we have two separate queues, one for staking and one for slashing. We describe below what happens when each message is delivered:
+
+### Staking messages
+
+* **MsgCreateValidator**: Move the user's self-bond to the `EpochDelegationPool` immediately. Queue a message for the epoch boundary to handle the self-bond, taking the funds from the `EpochDelegationPool`. If epoch execution fails, return the funds from the `EpochDelegationPool` to the user's account.
+* **MsgEditValidator**: Validate the message and, if valid, queue it for execution at the end of the epoch.
+* **MsgDelegate**: Move the user's funds to the `EpochDelegationPool` immediately. Queue a message for the epoch boundary to handle the delegation, taking the funds from the `EpochDelegationPool`. If epoch execution fails, return the funds from the `EpochDelegationPool` to the user's account.
+* **MsgBeginRedelegate**: Validate the message and, if valid, queue it for execution at the end of the epoch.
+* **MsgUndelegate**: Validate the message and, if valid, queue it for execution at the end of the epoch.
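+
+The buffering behavior for `MsgDelegate` can be sketched as follows. The types and pool below are simplified stand-ins for illustration, not the actual SDK implementation:
+
+```go
+package main
+
+import "fmt"
+
+// Hypothetical types sketching the message buffering described above.
+type MsgDelegate struct {
+	Delegator string
+	Validator string
+	Amount    int64
+}
+
+type EpochDelegationPool struct{ balance int64 }
+
+type StakingQueue struct {
+	pool  *EpochDelegationPool
+	queue []MsgDelegate
+}
+
+// Delegate moves funds into the pool immediately and queues the
+// message for execution at the epoch boundary.
+func (q *StakingQueue) Delegate(msg MsgDelegate) {
+	q.pool.balance += msg.Amount // funds leave the user's account now
+	q.queue = append(q.queue, msg)
+}
+
+// EpochBoundary drains the queue, executing each delegation with
+// funds taken from the pool; a failed execution would refund the user.
+func (q *StakingQueue) EpochBoundary(execute func(MsgDelegate) error) {
+	for _, msg := range q.queue {
+		q.pool.balance -= msg.Amount
+		if err := execute(msg); err != nil {
+			// refund msg.Amount to msg.Delegator here
+			_ = err
+		}
+	}
+	q.queue = nil
+}
+
+func main() {
+	q := &StakingQueue{pool: &EpochDelegationPool{}}
+	q.Delegate(MsgDelegate{"alice", "valoper1", 100})
+	fmt.Println(q.pool.balance) // 100
+	q.EpochBoundary(func(MsgDelegate) error { return nil })
+	fmt.Println(q.pool.balance, len(q.queue)) // 0 0
+}
+```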
+
+### Slashing messages
+
+* **MsgUnjail**: Validate the message and, if valid, queue it for execution at the end of the epoch.
+* **Slash Event**: Whenever a slash event is created, it gets queued in the slashing module to apply at the end of the epoch. The queues should be set up such that this slash applies immediately.
+
+### Evidence Messages
+
+* **MsgSubmitEvidence**: This gets executed immediately, and the validator gets jailed immediately. However, in the slashing module, the actual slash event gets queued.
+
+Then we add methods to the end blockers, to ensure that at the epoch boundary the queues are cleared and delegation updates are applied.
+
+**Step-2**: Implement querying of queued staking txs.
+
+When querying the staking activity of a given address, the status should return not only the amount of tokens staked, but also whether there are any queued stake events for that address. This will require more work in the querying logic to trace the queued upcoming staking events.
+
+As an initial implementation, this can be done as a linear search over all queued staking events. However, chains that need long epochs should eventually build additional support so that nodes serving queries can produce results in constant time. (This is doable by maintaining an auxiliary hashmap indexing upcoming staking events by address.)
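+
+A sketch of such an auxiliary index (hypothetical types; the real query logic would live in the staking module):
+
+```go
+package main
+
+import "fmt"
+
+// QueuedEvent is a hypothetical record of an upcoming staking change.
+type QueuedEvent struct {
+	Delegator string
+	Amount    int64
+}
+
+// EventIndex keys upcoming staking events by delegator address,
+// giving constant-time lookup instead of a linear scan of the queue.
+type EventIndex struct {
+	byAddr map[string][]QueuedEvent
+}
+
+func NewEventIndex() *EventIndex {
+	return &EventIndex{byAddr: make(map[string][]QueuedEvent)}
+}
+
+// Add records an event under its delegator's address.
+func (ix *EventIndex) Add(ev QueuedEvent) {
+	ix.byAddr[ev.Delegator] = append(ix.byAddr[ev.Delegator], ev)
+}
+
+// Pending returns the queued events for an address via one map lookup.
+func (ix *EventIndex) Pending(addr string) []QueuedEvent {
+	return ix.byAddr[addr]
+}
+
+func main() {
+	ix := NewEventIndex()
+	ix.Add(QueuedEvent{"alice", 50})
+	ix.Add(QueuedEvent{"alice", 25})
+	fmt.Println(len(ix.Pending("alice"))) // 2
+	fmt.Println(len(ix.Pending("bob")))   // 0
+}
+```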
+
+**Step-3**: Adjust gas
+
+Currently, gas represents the cost of executing a transaction when it is executed immediately (merging together the costs of p2p overhead, state access overhead, and computational overhead). However, now a transaction can cause computation in a future block, namely at the epoch boundary.
+
+To handle this, we should initially include parameters for estimating the amount of future computation (denominated in gas), and add that as a flat charge needed for the message.
+We leave as out of scope how to weight future computation versus current computation in gas pricing, and set it such that they are weighted equally for now.
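+
+Since both kinds of computation are weighted equally for now, the flat charge simply adds the estimated future gas to the immediate gas (a hypothetical helper; the estimation parameters are not specified here):
+
+```go
+package main
+
+import "fmt"
+
+// totalGas adds a flat, parameter-estimated future-computation charge
+// (denominated in gas) to a message's immediate execution cost.
+// Future and current computation are weighted equally for now.
+func totalGas(immediateGas, estimatedFutureGas uint64) uint64 {
+	return immediateGas + estimatedFutureGas
+}
+
+func main() {
+	// e.g. a delegation costing 50,000 gas now, plus an estimated
+	// 20,000 gas of epoch-boundary work, is charged 70,000 upfront.
+	fmt.Println(totalGas(50000, 20000)) // 70000
+}
+```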
+
+## Consequences
+
+### Positive
+
+* Abstracts the proof-of-stake module in a way that retains the existing functionality
+* Enables new features such as validator-set based threshold cryptography
+
+### Negative
+
+* Increases complexity of integrating more complex gas pricing mechanisms, as they now have to consider future execution costs as well.
+* When the epoch length is greater than 1 block, validators can no longer leave the network immediately, and must wait until an epoch boundary.
diff --git a/copy-of-sdk-versioned_docs/version-0.47/build/architecture/adr-040-storage-and-smt-state-commitments.md b/copy-of-sdk-versioned_docs/version-0.47/build/architecture/adr-040-storage-and-smt-state-commitments.md
new file mode 100644
index 00000000..b089bfb1
--- /dev/null
+++ b/copy-of-sdk-versioned_docs/version-0.47/build/architecture/adr-040-storage-and-smt-state-commitments.md
@@ -0,0 +1,289 @@
+# ADR 040: Storage and SMT State Commitments
+
+## Changelog
+
+* 2020-01-15: Draft
+
+## Status
+
+DRAFT Not Implemented
+
+## Abstract
+
+Sparse Merkle Tree ([SMT](https://osf.io/8mcnh/)) is a version of a Merkle Tree with various storage and performance optimizations. This ADR defines a separation of state commitments from data storage and the Cosmos SDK transition from IAVL to SMT.
+
+## Context
+
+Currently, Cosmos SDK uses IAVL for both state [commitments](https://cryptography.fandom.com/wiki/Commitment_scheme) and data storage.
+
+IAVL has effectively become an orphaned project within the Cosmos ecosystem and it's proven to be an inefficient state commitment data structure.
+In the current design, IAVL is used for both data storage and as a Merkle Tree for state commitments. IAVL is meant to be a standalone Merkleized key/value database; however, it uses a KV DB engine to store all tree nodes, so each node is stored in a separate record in the KV DB. This causes many inefficiencies and problems:
+
+* Each object query requires a tree traversal from the root. Subsequent queries for the same object are cached on the Cosmos SDK level.
+* Each edge traversal requires a DB query.
+* Creating snapshots is [expensive](https://github.com/cosmos/cosmos-sdk/issues/7215#issuecomment-684804950). It takes about 30 seconds to export less than 100 MB of state (as of March 2020).
+* Updates in IAVL may trigger tree reorganization and possible O(log(n)) hashes re-computation, which can become a CPU bottleneck.
+* The node structure is pretty expensive - it contains standard tree node elements (key, value, left and right child) and additional metadata such as height and version (which is not required by the Cosmos SDK). The entire node is hashed, and that hash is used as the key in the underlying database, [ref](https://github.com/cosmos/iavl/blob/master/docs/node/03-node.md).
+
+Moreover, the IAVL project lacks support and a maintainer and we already see better and well-established alternatives. Instead of optimizing the IAVL, we are looking into other solutions for both storage and state commitments.
+
+## Decision
+
+We propose to separate the concerns of state commitment (**SC**), needed for consensus, and state storage (**SS**), needed for the state machine. Finally, we replace IAVL with [Celestia's SMT](https://github.com/lazyledger/smt). Celestia's SMT is based on the Diem (called jellyfish) design [*] - it uses a compute-optimised SMT by replacing subtrees that contain only default values with a single node (the same approach is used by Ethereum2) and implements compact proofs.
+
+The storage model presented here doesn't deal with data structures or serialization. It's a key-value database, where both the key and value are binary. The storage user is responsible for data serialization.
+
+### Decouple state commitment from storage
+
+Separation of storage and commitment (by the SMT) will allow the optimization of different components according to their usage and access patterns.
+
+`SC` (SMT) is used to commit to data and compute Merkle proofs. `SS` is used to directly access data. To avoid collisions, `SS` and `SC` will use separate storage namespaces (they could use the same database underneath). `SS` will store each record directly (mapping `(key, value)` as `key → value`).
+
+SMT is a merkle tree structure: we don't store keys directly. For every `(key, value)` pair, `hash(key)` is used as leaf path (we hash a key to uniformly distribute leaves in the tree) and `hash(value)` as the leaf contents. The tree structure is specified in more depth [below](#smt-for-state-commitment).
+
+For data access we propose 2 additional KV buckets (implemented as namespaces for the key-value pairs, sometimes called [column family](https://github.com/facebook/rocksdb/wiki/Terminology)):
+
+1. B1: `key → value`: the principal object storage, used by a state machine, behind the Cosmos SDK `KVStore` interface: provides direct access by key and allows prefix iteration (KV DB backend must support it).
+2. B2: `hash(key) → key`: a reverse index to get a key from an SMT path. Internally the SMT will store `(key, value)` as `prefix || hash(key) || hash(value)`. So, we can get an object value by composing `hash(key) → B2 → B1`.
+3. We could use more buckets to optimize the app usage if needed.
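+
+The two-bucket layout can be sketched with in-memory maps (illustrative only; the real buckets would be KV DB namespaces, and the key shown is hypothetical):
+
+```go
+package main
+
+import (
+	"crypto/sha256"
+	"fmt"
+)
+
+// set writes a record into the two SS buckets described above:
+// B1 maps key → value, and B2 maps hash(key) → key so that an
+// SMT leaf path can be resolved back to the stored object.
+func set(b1, b2 map[string][]byte, key, value []byte) {
+	h := sha256.Sum256(key)
+	b1[string(key)] = value
+	b2[string(h[:])] = key
+}
+
+// getByPath resolves an object from an SMT leaf path by composing
+// hash(key) → B2 → B1.
+func getByPath(b1, b2 map[string][]byte, path [32]byte) []byte {
+	key := b2[string(path[:])]
+	return b1[string(key)]
+}
+
+func main() {
+	b1 := map[string][]byte{}
+	b2 := map[string][]byte{}
+	set(b1, b2, []byte("balance/alice"), []byte("100"))
+
+	path := sha256.Sum256([]byte("balance/alice"))
+	fmt.Println(string(getByPath(b1, b2, path))) // 100
+}
+```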
+
+We propose to use a KV database for both `SS` and `SC`. The store interface will allow using the same physical DB backend for both `SS` and `SC`, as well as two separate DBs. The latter option allows for the separation of `SS` and `SC` into different hardware units, providing support for more complex setup scenarios and improving overall performance: one can use different backends (e.g. RocksDB and Badger) as well as independently tune the underlying DB configuration.
+
+### Requirements
+
+State Storage requirements:
+
+* range queries
+* quick (key, value) access
+* creating a snapshot
+* historical versioning
+* pruning (garbage collection)
+
+State Commitment requirements:
+
+* fast updates
+* tree path should be short
+* query historical commitment proofs using ICS-23 standard
+* pruning (garbage collection)
+
+### SMT for State Commitment
+
+A Sparse Merkle tree is based on the idea of a complete Merkle tree of an intractable size. The assumption here is that as the size of the tree is intractable, there would only be a few leaf nodes with valid data blocks relative to the tree size, rendering a sparse tree.
+
+The full specification can be found at [Celestia](https://github.com/celestiaorg/celestia-specs/blob/ec98170398dfc6394423ee79b00b71038879e211/src/specs/data_structures.md#sparse-merkle-tree). In summary:
+
+* The SMT consists of a binary Merkle tree, constructed in the same fashion as described in [Certificate Transparency (RFC-6962)](https://tools.ietf.org/html/rfc6962), but using as the hashing function SHA-2-256 as defined in [FIPS 180-4](https://doi.org/10.6028/NIST.FIPS.180-4).
+* Leaves and internal nodes are hashed differently: the one-byte `0x00` is prepended for leaf nodes while `0x01` is prepended for internal nodes.
+* Default values are given to leaf nodes with empty leaves.
+* While the above rule is sufficient to pre-compute the values of intermediate nodes that are roots of empty subtrees, a further simplification is to extend this default value to all nodes that are roots of empty subtrees. The 32-byte zero is used as the default value. This rule takes precedence over the above one.
+* An internal node that is the root of a subtree that contains exactly one non-empty leaf is replaced by that leaf's leaf node.
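+
+A minimal sketch of the hashing rules above (domain-separated leaf/internal hashing and the 32-byte zero default; for illustration only, not the Celestia implementation):
+
+```go
+package main
+
+import (
+	"bytes"
+	"crypto/sha256"
+	"fmt"
+)
+
+// defaultNode is the 32-byte zero used for roots of empty subtrees.
+var defaultNode = make([]byte, 32)
+
+// leafHash prepends the one-byte 0x00 domain separator to leaf data
+// before hashing with SHA-2-256, per RFC 6962-style hashing.
+func leafHash(data []byte) []byte {
+	h := sha256.Sum256(append([]byte{0x00}, data...))
+	return h[:]
+}
+
+// innerHash prepends 0x01 to the concatenated children, so internal
+// nodes can never collide with leaves over the same bytes.
+func innerHash(left, right []byte) []byte {
+	h := sha256.Sum256(append(append([]byte{0x01}, left...), right...))
+	return h[:]
+}
+
+func main() {
+	l := leafHash([]byte("value"))
+	// Domain separation: a leaf and an internal node hash differently.
+	fmt.Println(bytes.Equal(l, innerHash(l, defaultNode))) // false
+	fmt.Println(len(defaultNode))                          // 32
+}
+```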
+
+### Snapshots for storage sync and state versioning
+
+Below, by simple _snapshot_ we refer to a database snapshot mechanism, not to an _ABCI snapshot sync_. The latter will be referred to as _snapshot sync_ (which will directly use the DB snapshot as described below).
+
+A database snapshot is a view of the DB state at a certain time or transaction. It's not a full copy of a database (that would be too big). Usually a snapshot mechanism is based on _copy on write_, and it allows DB state to be efficiently delivered at a certain stage.
+Some DB engines support snapshotting. Hence, we propose to reuse that functionality for state sync and versioning (described below). We limit the supported DB engines to ones which efficiently implement snapshots. In the final section we discuss the evaluated DBs.
+
+One of the Stargate core features is a _snapshot sync_ delivered in the `/snapshot` package. It provides a way to trustlessly sync a blockchain without repeating all transactions from the genesis. This feature is implemented in Cosmos SDK and requires storage support. Currently IAVL is the only supported backend. It works by streaming to a client a snapshot of a `SS` at a certain version together with a header chain.
+
+A new database snapshot will be created in every `EndBlocker` and identified by a block height. The `root` store keeps track of the available snapshots to offer `SS` at a certain version. The `root` store implements the `RootStore` interface described below. In essence, `RootStore` encapsulates a `Committer` interface. `Committer` has a `Commit`, `SetPruning`, `GetPruning` functions which will be used for creating and removing snapshots. The `rootStore.Commit` function creates a new snapshot and increments the version on each call, and checks if it needs to remove old versions. We will need to update the SMT interface to implement the `Committer` interface.
+NOTE: `Commit` must be called exactly once per block. Otherwise we risk going out of sync for the version number and block height.
+NOTE: For the Cosmos SDK storage, we may consider splitting that interface into `Committer` and `PruningCommitter` - only the multiroot should implement `PruningCommitter` (cache and prefix store don't need pruning).
+
+The number of historical versions for `abci.RequestQuery` and state sync snapshots is part of a node configuration, not a chain configuration (configuration implied by the blockchain consensus). The configuration should allow specifying a number of past blocks, and a number of past blocks modulo some number (e.g.: 100 past blocks, and one snapshot every 100 blocks for the past 2000 blocks). Archival nodes can keep all past versions.
+
+Pruning old snapshots is effectively done by the database. Whenever we update a record in `SC`, the SMT won't update existing nodes - instead it creates new nodes on the update path, without removing the old ones. Since we are snapshotting each block, we need to change that mechanism to immediately remove orphaned nodes from the database. This is a safe operation - snapshots will keep track of the records and make them available when accessing past versions.
+
+To manage the active snapshots we will either use a DB _max number of snapshots_ option (if available), or we will remove DB snapshots in the `EndBlocker`. The latter option can be done efficiently by identifying snapshots with block height and calling a store function to remove past versions.
+
+#### Accessing old state versions
+
+One of the functional requirements is to access old state. This is done through `abci.RequestQuery` structure. The version is specified by a block height (so we query for an object by a key `K` at block height `H`). The number of old versions supported for `abci.RequestQuery` is configurable. Accessing an old state is done by using available snapshots.
+`abci.RequestQuery` doesn't need old state of `SC` unless the `prove=true` parameter is set. The SMT merkle proof must be included in the `abci.ResponseQuery` only if both `SC` and `SS` have a snapshot for requested version.
+
+Moreover, Cosmos SDK could provide a way to directly access a historical state. However, a state machine shouldn't do that - since the number of snapshots is configurable, it would lead to nondeterministic execution.
+
+We positively [validated](https://github.com/cosmos/cosmos-sdk/discussions/8297) a versioning and snapshot mechanism for querying old state with regard to the databases we evaluated.
+
+### State Proofs
+
+For any object stored in the State Store (`SS`), we have a corresponding object in `SC`. A proof for an object `V` identified by a key `K` is a branch of `SC`, where the path corresponds to the key `hash(K)`, and the leaf is `hash(K, V)`.
+
+### Rollbacks
+
+We need to be able to process transactions and roll back state updates if a transaction fails. This can be done in the following way: during transaction processing, we keep all state change requests (writes) in a `CacheWrapper` abstraction (as is done today). Once we finish the block processing, in the `EndBlocker`, we commit a root store - at that time, all changes are written to the SMT and to the `SS`, and a snapshot is created.
+
+### Committing to an object without saving it
+
+We identified use cases where modules will need to save an object commitment without storing the object itself. Sometimes clients receive complex objects, and they have no way to prove the correctness of such an object without knowing the storage layout. For those use cases it would be easier to commit to the object without storing it directly.
+
+### Refactor MultiStore
+
+The Stargate `/store` implementation (store/v1) adds an additional layer in the SDK store construction - the `MultiStore` structure. The multistore exists to support the modularity of the Cosmos SDK - each module uses its own instance of IAVL, but in the current implementation, all instances share the same database. The latter indicates, however, that the implementation doesn't provide true modularity. Instead it causes problems related to race conditions and atomic DB commits (see: [\#6370](https://github.com/cosmos/cosmos-sdk/issues/6370) and [discussion](https://github.com/cosmos/cosmos-sdk/discussions/8297#discussioncomment-757043)).
+
+We propose to remove the multistore concept from the SDK, and to use a single instance of `SC` and `SS` in a `RootStore` object. To avoid confusion, we should rename the `MultiStore` interface to `RootStore`. The `RootStore` will have the following interface; the methods for configuring tracing and listeners are omitted for brevity.
+
+```go
+// Used where read-only access to versions is needed.
+type BasicRootStore interface {
+ Store
+ GetKVStore(StoreKey) KVStore
+ CacheRootStore() CacheRootStore
+}
+
+// Used as the main app state, replacing CommitMultiStore.
+type CommitRootStore interface {
+ BasicRootStore
+ Committer
+ Snapshotter
+
+ GetVersion(uint64) (BasicRootStore, error)
+ SetInitialVersion(uint64) error
+
+ ... // Trace and Listen methods
+}
+
+// Replaces CacheMultiStore for branched state.
+type CacheRootStore interface {
+ BasicRootStore
+ Write()
+
+ ... // Trace and Listen methods
+}
+
+// Example of constructor parameters for the concrete type.
+type RootStoreConfig struct {
+ Upgrades *StoreUpgrades
+ InitialVersion uint64
+
+ ReservePrefix(StoreKey, StoreType)
+}
+```
+
+
+In contrast to `MultiStore`, `RootStore` doesn't allow dynamically mounting sub-stores or providing an arbitrary backing DB for individual sub-stores.
+
+NOTE: modules will be able to use a special commitment and their own DBs. For example: a module which will use ZK proofs for state can store and commit this proof in the `RootStore` (usually as a single record) and manage the specialized store privately or using the `SC` low level interface.
+
+#### Compatibility support
+
+To ease the transition to this new interface for users, we can create a shim which wraps a `CommitMultiStore` but provides a `CommitRootStore` interface, and expose functions to safely create and access the underlying `CommitMultiStore`.
+
+The new `RootStore` and supporting types can be implemented in a `store/v2alpha1` package to avoid breaking existing code.
+
+#### Merkle Proofs and IBC
+
+Currently, an IBC (v1.0) Merkle proof path consists of two elements (`["<store-key>", "<record-key>"]`), with each key corresponding to a separate proof. These are each verified according to individual [ICS-23 specs](https://github.com/cosmos/ibc-go/blob/f7051429e1cf833a6f65d51e6c3df1609290a549/modules/core/23-commitment/types/merkle.go#L17), and the result hash of each step is used as the committed value of the next step, until a root commitment hash is obtained.
+The root hash of the proof for `<record-key>` is hashed with the `<store-key>` to validate against the App Hash.
+
+This is not compatible with the `RootStore`, which stores all records in a single Merkle tree structure and won't produce separate proofs for the store- and record-key. Ideally, the store-key component of the proof could just be omitted and updated to use a "no-op" spec, so only the record-key is used. However, because the IBC verification code hardcodes the `"ibc"` prefix and applies it to the SDK proof as a separate element of the proof path, this isn't possible without a breaking change. Breaking this behavior would severely impact the Cosmos ecosystem, which has already widely adopted the IBC module. Requesting an update of the IBC module across chains is a time-consuming effort and not easily feasible.
+
+As a workaround, the `RootStore` will have to use two separate SMTs (they could use the same underlying DB): one for the IBC state and one for everything else. A simple Merkle map that references these SMTs will act as a Merkle tree to create the final App hash. The Merkle map is not stored in a DB - it's constructed at runtime. The IBC substore key must be `"ibc"`.
+
+The workaround can still guarantee atomic syncs: the [proposed DB backends](#evaluated-kv-databases) support atomic transactions and efficient rollbacks, which will be used in the commit phase.
+
+The presented workaround can be used until the IBC module is fully upgraded to support single-element commitment proofs.
+
+### Optimization: compress module key prefixes
+
+We consider compressing prefix keys by creating a mapping from module key to an integer, and serializing the integer using varint coding. Varint coding assures that different values don't have a common byte prefix. For Merkle proofs we can't use prefix compression - so it should only apply to the `SS` keys. Moreover, the prefix compression should only be applied to the module namespace. More precisely:
+
+* each module has its own namespace;
+* when accessing a module namespace we create a KVStore with embedded prefix;
+* that prefix will be compressed only when accessing and managing `SS`.
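+
+A sketch of the compression using Go's standard varint encoding (the module-to-code mapping below is hypothetical and, as noted, must never change once assigned):
+
+```go
+package main
+
+import (
+	"encoding/binary"
+	"fmt"
+)
+
+// moduleCodes is a hypothetical fixed mapping from module key to an
+// integer code; it could live in a static variable or under a special
+// SS key, and must stay stable across upgrades.
+var moduleCodes = map[string]uint64{"bank": 1, "staking": 2, "ibc": 3}
+
+// compressPrefix varint-encodes the module's code and prepends it to
+// the record key, shortening SS keys relative to the full module name.
+func compressPrefix(module string, key []byte) []byte {
+	buf := make([]byte, binary.MaxVarintLen64)
+	n := binary.PutUvarint(buf, moduleCodes[module])
+	return append(buf[:n], key...)
+}
+
+func main() {
+	k := compressPrefix("staking", []byte("validators/valoper1"))
+	// "staking" (7 bytes) collapses to a 1-byte varint prefix.
+	fmt.Println(len(k)) // 20
+}
+```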
+
+We need to assure that the codes won't change. We can fix the mapping in a static variable (provided by an app) or in the `SS` state under a special key.
+
+TODO: need to make decision about the key compression.
+
+### Optimization: SS key compression
+
+Some objects may be saved with a key which contains a Protobuf message type. Such keys are long. We could save a lot of space if we could map Protobuf message types to varints.
+
+TODO: finalize this or move to another ADR.
+
+## Migration
+
+Using the new store will require a migration. Two migrations are proposed:
+
+1. Genesis export -- it will reset the blockchain history.
+2. In place migration: we can reuse `UpgradeKeeper.SetUpgradeHandler` to provide the migration logic:
+
+```go
+app.UpgradeKeeper.SetUpgradeHandler("adr-40", func(ctx sdk.Context, plan upgradetypes.Plan, vm module.VersionMap) (module.VersionMap, error) {
+
+ storev2.Migrate(iavlstore, v2.store)
+
+ // RunMigrations returns the VersionMap
+ // with the updated module ConsensusVersions
+ return app.mm.RunMigrations(ctx, vm)
+})
+```
+
+The `Migrate` function will read all entries from a store/v1 DB and save them to the ADR-40 combined KV store.
+The cache layer should not be used, and the operation must finish with a single `Commit` call.
+
+Inserting records into the `SC` (SMT) component is the bottleneck. Unfortunately, SMT doesn't support batch transactions.
+Adding batch transactions to the `SC` layer is considered a feature for after the main release.
+
+## Consequences
+
+### Backwards Compatibility
+
+This ADR doesn't introduce any Cosmos SDK level API changes.
+
+We change the storage layout of the state machine, so a storage hard fork and network upgrade are required to incorporate these changes. SMT provides merkle proof functionality, however it is not compatible with ICS23. Updating the proofs for ICS23 compatibility is required.
+
+### Positive
+
+* Decoupling state from state commitment introduces better engineering opportunities for further optimizations and better storage patterns.
+* Performance improvements.
+* Joining the SMT-based camp, which has wider and more proven adoption than IAVL. Example projects which decided on SMT: Ethereum2, Diem (Libra), Trillian, Tezos, Celestia.
+* Multistore removal fixes a longstanding issue with the current MultiStore design.
+* Simplifies merkle proofs - all modules, except IBC, need only one pass for a merkle proof.
+
+### Negative
+
+* Storage migration
+* LL SMT doesn't support pruning - we will need to add and test that functionality.
+* `SS` keys will have an overhead of a key prefix. This doesn't impact `SC` because all keys in `SC` have same size (they are hashed).
+
+### Neutral
+
+* Deprecating IAVL, which is one of the core proposals of the Cosmos Whitepaper.
+
+## Alternative designs
+
+Most of the alternative designs were evaluated in [state commitments and storage report](https://paper.dropbox.com/published/State-commitments-and-storage-review--BDvA1MLwRtOx55KRihJ5xxLbBw-KeEB7eOd11pNrZvVtqUgL3h).
+
+Ethereum research published the [Verkle Trie](https://dankradfeist.de/ethereum/2021/06/18/verkle-trie-for-eth1.html) - an idea of combining polynomial commitments with a merkle tree in order to reduce the tree height. This concept has very good potential, but we think it's too early to implement it. The current, SMT-based design could be easily updated to the Verkle Trie once other research teams implement all the necessary libraries. The main advantage of the design described in this ADR is the separation of state commitments from the data storage and the design of a more powerful interface.
+
+## Further Discussions
+
+### Evaluated KV Databases
+
+We evaluated existing KV databases for snapshot support. The following databases provide an efficient snapshot mechanism: Badger, RocksDB, [Pebble](https://github.com/cockroachdb/pebble). Databases which don't provide such support or are not production ready: boltdb, leveldb, goleveldb, membdb, lmdb.
+
+### RDBMS
+
+Use of an RDBMS instead of a simple KV store for state would require a Cosmos SDK API breaking change (the `KVStore` interface) and would allow better data extraction and indexing solutions. Instead of saving an object as a single blob of bytes, we could save it as a record in a table in the state storage layer, and as `hash(key, protobuf(object))` in the SMT as outlined above. To verify that an object registered in the RDBMS is the same as the one committed to the SMT, one would need to load it from the RDBMS, marshal it using protobuf, hash it, and do an SMT search.
+
+### Off Chain Store
+
+We discussed a use case where modules can use a supporting database which is not automatically committed. The module will be responsible for having a sound storage model and can optionally use the feature discussed in the _Committing to an object without saving it_ section.
+
+## References
+
+* [IAVL What's Next?](https://github.com/cosmos/cosmos-sdk/issues/7100)
+* [IAVL overview](https://docs.google.com/document/d/16Z_hW2rSAmoyMENO-RlAhQjAG3mSNKsQueMnKpmcBv0/edit#heading=h.yd2th7x3o1iv) of its state as of v0.15
+* [State commitments and storage report](https://paper.dropbox.com/published/State-commitments-and-storage-review--BDvA1MLwRtOx55KRihJ5xxLbBw-KeEB7eOd11pNrZvVtqUgL3h)
+* [Celestia (LazyLedger) SMT](https://github.com/lazyledger/smt)
+* Facebook Diem (Libra) SMT [design](https://developers.diem.com/papers/jellyfish-merkle-tree/2021-01-14.pdf)
+* [Trillian Revocation Transparency](https://github.com/google/trillian/blob/master/docs/papers/RevocationTransparency.pdf), [Trillian Verifiable Data Structures](https://github.com/google/trillian/blob/master/docs/papers/VerifiableDataStructures.pdf).
+* Design and implementation [discussion](https://github.com/cosmos/cosmos-sdk/discussions/8297).
+* [How to Upgrade IBC Chains and their Clients](https://github.com/cosmos/ibc-go/blob/main/docs/ibc/upgrades/quick-guide.md)
+* [ADR-40 Effect on IBC](https://github.com/cosmos/ibc-go/discussions/256)
diff --git a/copy-of-sdk-versioned_docs/version-0.47/build/architecture/adr-041-in-place-store-migrations.md b/copy-of-sdk-versioned_docs/version-0.47/build/architecture/adr-041-in-place-store-migrations.md
new file mode 100644
index 00000000..2237b610
--- /dev/null
+++ b/copy-of-sdk-versioned_docs/version-0.47/build/architecture/adr-041-in-place-store-migrations.md
@@ -0,0 +1,167 @@
+# ADR 041: In-Place Store Migrations
+
+## Changelog
+
+* 17.02.2021: Initial Draft
+
+## Status
+
+Accepted
+
+## Abstract
+
+This ADR introduces a mechanism to perform in-place state store migrations during chain software upgrades.
+
+## Context
+
+When a chain upgrade introduces state-breaking changes inside modules, the current procedure consists of exporting the whole state into a JSON file (via the `simd export` command), running migration scripts on the JSON file (`simd genesis migrate` command), clearing the stores (`simd unsafe-reset-all` command), and starting a new chain with the migrated JSON file as new genesis (optionally with a custom initial block height). An example of such a procedure can be seen [in the Cosmos Hub 3->4 migration guide](https://github.com/cosmos/gaia/blob/v4.0.3/docs/migration/cosmoshub-3.md#upgrade-procedure).
+
+This procedure is cumbersome for multiple reasons:
+
+* The procedure takes time. It can take hours to run the `export` command, plus some additional hours to run `InitChain` on the fresh chain using the migrated JSON.
+* The exported JSON file can be heavy (~100MB-1GB), making it difficult to view, edit and transfer, which in turn introduces additional work to solve these problems (such as [streaming genesis](https://github.com/cosmos/cosmos-sdk/issues/6936)).
+
+## Decision
+
+We propose a migration procedure based on modifying the KV store in-place without involving the JSON export-process-import flow described above.
+
+### Module `ConsensusVersion`
+
+We introduce a new method on the `AppModule` interface:
+
+```go
+type AppModule interface {
+ // --snip--
+ ConsensusVersion() uint64
+}
+```
+
+This method returns a `uint64` which serves as the state-breaking version of the module. It MUST be incremented on each consensus-breaking change introduced by the module. To avoid potential errors with default values, the initial version of a module MUST be set to 1. In the Cosmos SDK, version 1 corresponds to the modules in the v0.41 series.
+
+### Module-Specific Migration Functions
+
+For each consensus-breaking change introduced by the module, a migration script from ConsensusVersion `N` to version `N+1` MUST be registered in the `Configurator` using its newly-added `RegisterMigration` method. All modules receive a reference to the configurator in their `RegisterServices` method on `AppModule`, and this is where the migration functions should be registered. The migration functions should be registered in increasing order.
+
+```go
+func (am AppModule) RegisterServices(cfg module.Configurator) {
+ // --snip--
+ cfg.RegisterMigration(types.ModuleName, 1, func(ctx sdk.Context) error {
+ // Perform in-place store migrations from ConsensusVersion 1 to 2.
+ })
+ cfg.RegisterMigration(types.ModuleName, 2, func(ctx sdk.Context) error {
+ // Perform in-place store migrations from ConsensusVersion 2 to 3.
+ })
+ // etc.
+}
+```
+
+In general, if the new ConsensusVersion of a module is `N`, then `N-1` migration functions MUST be registered in the configurator.
+
+In the Cosmos SDK, the migration functions are handled by each module's keeper, because the keeper holds the `sdk.StoreKey` used to perform in-place store migrations. To not overload the keeper, a `Migrator` wrapper is used by each module to handle the migration functions:
+
+```go
+// Migrator is a struct for handling in-place store migrations.
+type Migrator struct {
+ BaseKeeper
+}
+```
+
+Migration functions should live inside the `migrations/` folder of each module, and be called by the Migrator's methods. We propose the format `Migrate{M}to{N}` for method names.
+
+```go
+// Migrate1to2 migrates from version 1 to 2.
+func (m Migrator) Migrate1to2(ctx sdk.Context) error {
+ return v2bank.MigrateStore(ctx, m.keeper.storeKey) // v2bank is the package `x/bank/migrations/v2`.
+}
+```
+
+Each module's migration functions are specific to the module's store evolutions, and are not described in this ADR. An example of x/bank store key migrations after the introduction of ADR-028 length-prefixed addresses can be seen in this [store.go code](https://github.com/cosmos/cosmos-sdk/blob/36f68eb9e041e20a5bb47e216ac5eb8b91f95471/x/bank/legacy/v043/store.go#L41-L62).
+
+### Tracking Module Versions in `x/upgrade`
+
+We introduce a new prefix store in `x/upgrade`'s store. This store tracks each module's current version; it can be modeled as a `map[string]uint64` of module name to module ConsensusVersion, and is used when running the migrations (see next section for details). The key prefix used is `0x1`, and the key/value format is:
+
+```text
+0x1 | {bytes(module_name)} => BigEndian(module_consensus_version)
+```
+
+The initial state of the store is set from `app.go`'s `InitChainer` method.
+
+The UpgradeHandler signature needs to be updated to take a `VersionMap`, as well as return an upgraded `VersionMap` and an error:
+
+```diff
+- type UpgradeHandler func(ctx sdk.Context, plan Plan)
++ type UpgradeHandler func(ctx sdk.Context, plan Plan, versionMap VersionMap) (VersionMap, error)
+```
+
+To apply an upgrade, we query the `VersionMap` from the `x/upgrade` store and pass it into the handler. The handler runs the actual migration functions (see next section), and if successful, returns an updated `VersionMap` to be stored in state.
+
+```diff
+func (k UpgradeKeeper) ApplyUpgrade(ctx sdk.Context, plan types.Plan) {
+ // --snip--
+- handler(ctx, plan)
++ updatedVM, err := handler(ctx, plan, k.GetModuleVersionMap(ctx)) // k.GetModuleVersionMap() fetches the VersionMap stored in state.
++ if err != nil {
++ return err
++ }
++
++ // Set the updated consensus versions to state
++ k.SetModuleVersionMap(ctx, updatedVM)
+}
+```
+
+A gRPC query endpoint to query the `VersionMap` stored in `x/upgrade`'s state will also be added, so that app developers can double-check the `VersionMap` before the upgrade handler runs.
+
+### Running Migrations
+
+Once all the migration handlers are registered inside the configurator (which happens at startup), running migrations can happen by calling the `RunMigrations` method on `module.Manager`. This function will loop through all modules, and for each module:
+
+* Get the old ConsensusVersion of the module from its `VersionMap` argument (let's call it `M`).
+* Fetch the new ConsensusVersion of the module from the `ConsensusVersion()` method on `AppModule` (call it `N`).
+* If `N>M`, run all registered migrations for the module sequentially `M -> M+1 -> M+2...` until `N`.
+  * There is a special case where the module has no entry in the `VersionMap`, which means the module has been newly added during the upgrade. In this case, no migration function is run, and the module's current ConsensusVersion is saved to `x/upgrade`'s store.
+
+If a required migration is missing (e.g. if it has not been registered in the `Configurator`), then the `RunMigrations` function will error.
+
+In practice, the `RunMigrations` method should be called from inside an `UpgradeHandler`.
+
+```go
+app.UpgradeKeeper.SetUpgradeHandler("my-plan", func(ctx sdk.Context, plan upgradetypes.Plan, vm module.VersionMap) (module.VersionMap, error) {
+ return app.mm.RunMigrations(ctx, vm)
+})
+```
+
+Assuming a chain upgrades at block `N`, the procedure should run as follows:
+
+* the old binary will halt in `BeginBlock` when starting block `N`. In its store, the ConsensusVersions of the old binary's modules are stored.
+* the new binary will start at block `N`. The UpgradeHandler is set in the new binary, so it will run at `BeginBlock` of the new binary. Inside `x/upgrade`'s `ApplyUpgrade`, the `VersionMap` will be retrieved from the (old binary's) store, and passed into the `RunMigrations` function, migrating all module stores in-place before the modules' own `BeginBlock`s.
+
+## Consequences
+
+### Backwards Compatibility
+
+This ADR introduces a new method `ConsensusVersion()` on `AppModule`, which all modules need to implement. It also alters the UpgradeHandler function signature. As such, it is not backwards-compatible.
+
+While modules MUST register their migration functions when bumping ConsensusVersions, running those scripts using an upgrade handler is optional. An application may perfectly well decide not to call `RunMigrations` inside its upgrade handler, and continue using the legacy JSON migration path.
+
+### Positive
+
+* Perform chain upgrades without manipulating JSON files.
+* While no benchmark has been made yet, it is probable that in-place store migrations will take less time than JSON migrations. The main reason supporting this claim is that both the `simd export` command on the old binary and the `InitChain` function on the new binary will be skipped.
+
+### Negative
+
+* Module developers MUST correctly track consensus-breaking changes in their modules. If a consensus-breaking change is introduced in a module without its corresponding `ConsensusVersion()` bump, then the `RunMigrations` function won't detect the migration, and the chain upgrade might be unsuccessful. Documentation should clearly reflect this.
+
+### Neutral
+
+* The Cosmos SDK will continue to support JSON migrations via the existing `simd export` and `simd genesis migrate` commands.
+* The current ADR does not allow creating, renaming or deleting stores, only modifying existing store keys and values. The Cosmos SDK already has the `StoreLoader` for those operations.
+
+## Further Discussions
+
+## References
+
+* Initial discussion: https://github.com/cosmos/cosmos-sdk/discussions/8429
+* Implementation of `ConsensusVersion` and `RunMigrations`: https://github.com/cosmos/cosmos-sdk/pull/8485
+* Issue discussing `x/upgrade` design: https://github.com/cosmos/cosmos-sdk/issues/8514
diff --git a/copy-of-sdk-versioned_docs/version-0.47/build/architecture/adr-042-group-module.md b/copy-of-sdk-versioned_docs/version-0.47/build/architecture/adr-042-group-module.md
new file mode 100644
index 00000000..834ec455
--- /dev/null
+++ b/copy-of-sdk-versioned_docs/version-0.47/build/architecture/adr-042-group-module.md
@@ -0,0 +1,279 @@
+# ADR 042: Group Module
+
+## Changelog
+
+* 2020/04/09: Initial Draft
+
+## Status
+
+Draft
+
+## Abstract
+
+This ADR defines the `x/group` module which allows the creation and management of on-chain multi-signature accounts and enables voting for message execution based on configurable decision policies.
+
+## Context
+
+The legacy amino multi-signature mechanism of the Cosmos SDK has certain limitations:
+
+* Key rotation is not possible, although this can be solved with [account rekeying](adr-034-account-rekeying.md).
+* Thresholds can't be changed.
+* UX is cumbersome for non-technical users ([#5661](https://github.com/cosmos/cosmos-sdk/issues/5661)).
+* It requires `legacy_amino` sign mode ([#8141](https://github.com/cosmos/cosmos-sdk/issues/8141)).
+
+While the group module is not meant to be a total replacement for the current multi-signature accounts, it provides a solution to the limitations described above, with a more flexible key management system where keys can be added, updated or removed, as well as configurable thresholds.
+It's meant to be used with other access control modules such as [`x/feegrant`](adr-029-fee-grant-module.md) and [`x/authz`](adr-030-authz-module.md) to simplify key management for individuals and organizations.
+
+The proof of concept of the group module can be found in https://github.com/regen-network/regen-ledger/tree/master/proto/regen/group/v1alpha1 and https://github.com/regen-network/regen-ledger/tree/master/x/group.
+
+## Decision
+
+We propose merging the `x/group` module with its supporting [ORM/Table Store package](https://github.com/regen-network/regen-ledger/tree/master/orm) ([#7098](https://github.com/cosmos/cosmos-sdk/issues/7098)) into the Cosmos SDK and continuing development here. There will be a dedicated ADR for the ORM package.
+
+### Group
+
+A group is a composition of accounts with associated weights. It is not
+an account and doesn't have a balance. It doesn't in and of itself have any
+sort of voting or decision weight.
+Group members can create proposals and vote on them through group accounts using different decision policies.
+
+It has an `admin` account which can manage members in the group, update the group
+metadata and set a new admin.
+
+```protobuf
+message GroupInfo {
+
+ // group_id is the unique ID of this group.
+ uint64 group_id = 1;
+
+ // admin is the account address of the group's admin.
+ string admin = 2;
+
+  // metadata is any arbitrary metadata attached to the group.
+ bytes metadata = 3;
+
+ // version is used to track changes to a group's membership structure that
+ // would break existing proposals. Whenever a member weight has changed,
+ // or any member is added or removed, the version is incremented and will
+ // invalidate all proposals from older versions.
+ uint64 version = 4;
+
+ // total_weight is the sum of the group members' weights.
+ string total_weight = 5;
+}
+```
+
+```protobuf
+message GroupMember {
+
+ // group_id is the unique ID of the group.
+ uint64 group_id = 1;
+
+ // member is the member data.
+ Member member = 2;
+}
+
+// Member represents a group member with an account address,
+// non-zero weight and metadata.
+message Member {
+
+ // address is the member's account address.
+ string address = 1;
+
+ // weight is the member's voting weight that should be greater than 0.
+ string weight = 2;
+
+  // metadata is any arbitrary metadata attached to the member.
+ bytes metadata = 3;
+}
+```
+
+### Group Account
+
+A group account is an account associated with a group and a decision policy.
+A group account does have a balance.
+
+Group accounts are abstracted from groups because a single group may have
+multiple decision policies for different types of actions. Managing group
+membership separately from decision policies results in the least overhead
+and keeps membership consistent across different policies. The pattern that
+is recommended is to have a single master group account for a given group,
+and then to create separate group accounts with different decision policies
+and delegate the desired permissions from the master account to
+those "sub-accounts" using the [`x/authz` module](adr-030-authz-module.md).
+
+```protobuf
+message GroupAccountInfo {
+
+ // address is the group account address.
+ string address = 1;
+
+ // group_id is the ID of the Group the GroupAccount belongs to.
+ uint64 group_id = 2;
+
+ // admin is the account address of the group admin.
+ string admin = 3;
+
+ // metadata is any arbitrary metadata of this group account.
+ bytes metadata = 4;
+
+ // version is used to track changes to a group's GroupAccountInfo structure that
+  // invalidates active proposals from older versions.
+ uint64 version = 5;
+
+ // decision_policy specifies the group account's decision policy.
+ google.protobuf.Any decision_policy = 6 [(cosmos_proto.accepts_interface) = "cosmos.group.v1.DecisionPolicy"];
+}
+```
+
+Similarly to a group admin, a group account admin can update its metadata, decision policy or set a new group account admin.
+
+A group account can also be an admin or a member of a group.
+For instance, a group admin could be another group account which "elects" the members, or it could be the same group that elects itself.
+
+### Decision Policy
+
+A decision policy is the mechanism by which members of a group can vote on
+proposals.
+
+All decision policies should have a minimum and maximum voting window.
+The minimum voting window is the minimum duration that must pass in order
+for a proposal to potentially pass, and it may be set to 0. The maximum voting
+window is the maximum time that a proposal may be voted on and executed if
+it reached enough support before it is closed.
+Both of these values must be less than a chain-wide max voting window parameter.
+
+We define the `DecisionPolicy` interface that all decision policies must implement:
+
+```go
+type DecisionPolicy interface {
+ codec.ProtoMarshaler
+
+ ValidateBasic() error
+ GetTimeout() types.Duration
+ Allow(tally Tally, totalPower string, votingDuration time.Duration) (DecisionPolicyResult, error)
+ Validate(g GroupInfo) error
+}
+
+type DecisionPolicyResult struct {
+ Allow bool
+ Final bool
+}
+```
+
+#### Threshold decision policy
+
+A threshold decision policy defines a minimum weighted sum of support votes (_yes_),
+based on a tally of voter weights, required for a proposal to pass. For
+this decision policy, abstain and veto are treated as no support (_no_).
+
+```protobuf
+message ThresholdDecisionPolicy {
+
+ // threshold is the minimum weighted sum of support votes for a proposal to succeed.
+ string threshold = 1;
+
+  // voting_period is the duration from the submission of a proposal to the end of the voting period.
+ // Within this period, votes and exec messages can be submitted.
+ google.protobuf.Duration voting_period = 2 [(gogoproto.nullable) = false];
+}
+```
+
+### Proposal
+
+Any member of a group can submit a proposal for a group account to decide upon.
+A proposal consists of a set of `sdk.Msg`s that will be executed if the proposal
+passes, as well as any metadata associated with the proposal. These `sdk.Msg`s get validated as part of the `Msg/CreateProposal` request validation. They should also have their signer set as the group account.
+
+Internally, a proposal also tracks:
+
+* its current `Status`: submitted, closed or aborted
+* its `Result`: unfinalized, accepted or rejected
+* its `VoteState` in the form of a `Tally`, which is calculated on new votes and when executing the proposal.
+
+```protobuf
+// Tally represents the sum of weighted votes.
+message Tally {
+ option (gogoproto.goproto_getters) = false;
+
+ // yes_count is the weighted sum of yes votes.
+ string yes_count = 1;
+
+ // no_count is the weighted sum of no votes.
+ string no_count = 2;
+
+ // abstain_count is the weighted sum of abstainers.
+ string abstain_count = 3;
+
+ // veto_count is the weighted sum of vetoes.
+ string veto_count = 4;
+}
+```
+
+### Voting
+
+Members of a group can vote on proposals. There are four voting choices: yes, no, abstain and veto. Not
+all decision policies will support all of them. Votes can contain some optional metadata.
+In the current implementation, the voting window begins as soon as a proposal
+is submitted.
+
+Voting internally updates the proposal `VoteState` as well as `Status` and `Result` if needed.
+
+### Executing Proposals
+
+Proposals will not be automatically executed by the chain in this current design,
+but rather a user must submit a `Msg/Exec` transaction to attempt to execute the
+proposal based on the current votes and decision policy. A future upgrade could
+automate this and have the group account (or a fee granter) pay.
+
+#### Changing Group Membership
+
+In the current implementation, updating a group or a group account after submitting a proposal will make it invalid. It will simply fail if someone calls `Msg/Exec` and will eventually be garbage collected.
+
+### Notes on current implementation
+
+This section outlines the current implementation used in the proof of concept of the group module, but it could be subject to change and iterated on.
+
+#### ORM
+
+The [ORM package](https://github.com/cosmos/cosmos-sdk/discussions/9156) defines tables, sequences and secondary indexes which are used in the group module.
+
+Groups are stored in state as part of a `groupTable`, the `group_id` being an auto-increment integer. Group members are stored in a `groupMemberTable`.
+
+Group accounts are stored in a `groupAccountTable`. The group account address is generated based on an auto-increment integer which is used to derive the group module `RootModuleKey` into a `DerivedModuleKey`, as stated in [ADR-033](adr-033-protobuf-inter-module-comm.md#modulekeys-and-moduleids). The group account is added as a new `ModuleAccount` through `x/auth`.
+
+Proposals are stored as part of the `proposalTable` using the `Proposal` type. The `proposal_id` is an auto-increment integer.
+
+Votes are stored in the `voteTable`. The primary key is based on the vote's `proposal_id` and `voter` account address.
+
+#### ADR-033 to route proposal messages
+
+Inter-module communication introduced by [ADR-033](adr-033-protobuf-inter-module-comm.md) can be used to route a proposal's messages using the `DerivedModuleKey` corresponding to the proposal's group account.
+
+## Consequences
+
+### Positive
+
+* Improved UX for multi-signature accounts allowing key rotation and custom decision policies.
+
+### Negative
+
+### Neutral
+
+* It uses ADR 033 so it will need to be implemented within the Cosmos SDK, but this doesn't imply necessarily any large refactoring of existing Cosmos SDK modules.
+* The current implementation of the group module uses the ORM package.
+
+## Further Discussions
+
+* Convergence of `x/group` and `x/gov` as both support proposals and voting: https://github.com/cosmos/cosmos-sdk/discussions/9066
+* `x/group` possible future improvements:
+ * Execute proposals on submission (https://github.com/regen-network/regen-ledger/issues/288)
+ * Withdraw a proposal (https://github.com/regen-network/cosmos-modules/issues/41)
+ * Make `Tally` more flexible and support non-binary choices
+
+## References
+
+* Initial specification:
+ * https://gist.github.com/aaronc/b60628017352df5983791cad30babe56#group-module
+ * [#5236](https://github.com/cosmos/cosmos-sdk/pull/5236)
+* Proposal to add `x/group` into the Cosmos SDK: [#7633](https://github.com/cosmos/cosmos-sdk/issues/7633)
diff --git a/copy-of-sdk-versioned_docs/version-0.47/build/architecture/adr-043-nft-module.md b/copy-of-sdk-versioned_docs/version-0.47/build/architecture/adr-043-nft-module.md
new file mode 100644
index 00000000..87b4dbb5
--- /dev/null
+++ b/copy-of-sdk-versioned_docs/version-0.47/build/architecture/adr-043-nft-module.md
@@ -0,0 +1,349 @@
+# ADR 43: NFT Module
+
+## Changelog
+
+* 2021-05-01: Initial Draft
+* 2021-07-02: Review updates
+* 2022-06-15: Add batch operation
+* 2022-11-11: Remove strict validation of classID and tokenID
+
+## Status
+
+PROPOSED
+
+## Abstract
+
+This ADR defines the `x/nft` module which is a generic implementation of NFTs, roughly "compatible" with ERC721. **Applications using the `x/nft` module must implement the following functions**:
+
+* `MsgNewClass` - Receive the user's request to create a class, and call the `NewClass` of the `x/nft` module.
+* `MsgUpdateClass` - Receive the user's request to update a class, and call the `UpdateClass` of the `x/nft` module.
+* `MsgMintNFT` - Receive the user's request to mint an NFT, and call the `MintNFT` of the `x/nft` module.
+* `BurnNFT` - Receive the user's request to burn an NFT, and call the `BurnNFT` of the `x/nft` module.
+* `UpdateNFT` - Receive the user's request to update an NFT, and call the `UpdateNFT` of the `x/nft` module.
+
+## Context
+
+NFTs are more than just crypto art, which is very helpful for accruing value to the Cosmos ecosystem. As a result, Cosmos Hub should implement NFT functions and enable a unified mechanism for storing and sending the ownership representative of NFTs as discussed in https://github.com/cosmos/cosmos-sdk/discussions/9065.
+
+As discussed in [#9065](https://github.com/cosmos/cosmos-sdk/discussions/9065), several potential solutions can be considered:
+
+* irismod/nft and modules/incubator/nft
+* CW721
+* DID NFTs
+* interNFT
+
+Since functions/use cases of NFTs are tightly connected with their logic, it is almost impossible to support all the NFTs' use cases in one Cosmos SDK module by defining and implementing different transaction types.
+
+Considering generic usage and compatibility of interchain protocols including IBC and Gravity Bridge, it is preferred to have a generic NFT module design which handles generic NFT logic.
+This design enables composability: application-specific functions should be managed by other modules on Cosmos Hub or on other Zones by importing the NFT module.
+
+The current design is based on the work done by [IRISnet team](https://github.com/irisnet/irismod/tree/master/modules/nft) and an older implementation in the [Cosmos repository](https://github.com/cosmos/modules/tree/master/incubator/nft).
+
+## Decision
+
+We create a `x/nft` module, which contains the following functionality:
+
+* Store NFTs and track their ownership.
+* Expose `Keeper` interface for composing modules to transfer, mint and burn NFTs.
+* Expose external `Message` interface for users to transfer ownership of their NFTs.
+* Query NFTs and their supply information.
+
+The proposed module is a base module for NFT app logic. Its goal is to provide a common layer for storage, basic transfer functionality and IBC. The module should not be used standalone.
+Instead, an app should create a specialized module to handle app-specific logic (e.g. NFT ID construction, royalties), user-level minting and burning. Moreover, an app-specialized module should handle auxiliary data to support the app logic (e.g. indexes, ORM, business data).
+
+All data carried over IBC must be part of the `NFT` or `Class` type described below. The app-specific NFT data should be encoded in `NFT.data` for cross-chain integrity. Other objects related to NFTs, which are not important for integrity, can be part of the app-specific module.
+
+### Types
+
+We propose two main types:
+
+* `Class` -- describes NFT class. We can think about it as a smart contract address.
+* `NFT` -- an object representing a unique, non-fungible asset. Each NFT is associated with a Class.
+
+#### Class
+
+NFT **Class** is comparable to an ERC-721 smart contract (provides description of a smart contract), under which a collection of NFTs can be created and managed.
+
+```protobuf
+message Class {
+ string id = 1;
+ string name = 2;
+ string symbol = 3;
+ string description = 4;
+ string uri = 5;
+ string uri_hash = 6;
+ google.protobuf.Any data = 7;
+}
+```
+
+* `id` is used as the primary index for storing the class; _required_
+* `name` is a descriptive name of the NFT class; _optional_
+* `symbol` is the symbol usually shown on exchanges for the NFT class; _optional_
+* `description` is a detailed description of the NFT class; _optional_
+* `uri` is a URI for the class metadata stored off chain. It should be a JSON file that contains metadata about the NFT class and NFT data schema ([OpenSea example](https://docs.opensea.io/docs/contract-level-metadata)); _optional_
+* `uri_hash` is a hash of the document pointed by uri; _optional_
+* `data` is app specific metadata of the class; _optional_
+
+#### NFT
+
+We define a general model for `NFT` as follows.
+
+```protobuf
+message NFT {
+ string class_id = 1;
+ string id = 2;
+ string uri = 3;
+ string uri_hash = 4;
+ google.protobuf.Any data = 10;
+}
+```
+
+* `class_id` is the identifier of the NFT class where the NFT belongs; _required_
+* `id` is an identifier of the NFT, unique within the scope of its class. It is specified by the creator of the NFT and may be expanded to use DID in the future. `class_id` combined with `id` uniquely identifies an NFT and is used as the primary index for storing the NFT; _required_
+
+ ```text
+ {class_id}/{id} --> NFT (bytes)
+ ```
+
+* `uri` is a URI for the NFT metadata stored off chain. Should point to a JSON file that contains metadata about this NFT (Ref: [ERC721 standard and OpenSea extension](https://docs.opensea.io/docs/metadata-standards)); _required_
+* `uri_hash` is a hash of the document pointed by uri; _optional_
+* `data` is app-specific data of the NFT. CAN be used by composing modules to specify additional properties of the NFT; _optional_
+
+This ADR doesn't specify values that `data` can take; however, best practices recommend upper-level NFT modules clearly specify their contents. Although the value of this field doesn't provide the additional context required to manage NFT records, which means that the field can technically be removed from the specification, the field's existence allows basic informational/UI functionality.
+
+### `Keeper` Interface
+
+```go
+type Keeper interface {
+ NewClass(ctx sdk.Context,class Class)
+ UpdateClass(ctx sdk.Context,class Class)
+
+ Mint(ctx sdk.Context,nft NFT,receiver sdk.AccAddress) // updates totalSupply
+ BatchMint(ctx sdk.Context, tokens []NFT,receiver sdk.AccAddress) error
+
+ Burn(ctx sdk.Context, classId string, nftId string) // updates totalSupply
+ BatchBurn(ctx sdk.Context, classID string, nftIDs []string) error
+
+ Update(ctx sdk.Context, nft NFT)
+ BatchUpdate(ctx sdk.Context, tokens []NFT) error
+
+ Transfer(ctx sdk.Context, classId string, nftId string, receiver sdk.AccAddress)
+ BatchTransfer(ctx sdk.Context, classID string, nftIDs []string, receiver sdk.AccAddress) error
+
+ GetClass(ctx sdk.Context, classId string) Class
+ GetClasses(ctx sdk.Context) []Class
+
+ GetNFT(ctx sdk.Context, classId string, nftId string) NFT
+ GetNFTsOfClassByOwner(ctx sdk.Context, classId string, owner sdk.AccAddress) []NFT
+ GetNFTsOfClass(ctx sdk.Context, classId string) []NFT
+
+ GetOwner(ctx sdk.Context, classId string, nftId string) sdk.AccAddress
+ GetBalance(ctx sdk.Context, classId string, owner sdk.AccAddress) uint64
+ GetTotalSupply(ctx sdk.Context, classId string) uint64
+}
+```
+
+Other business logic implementations should be defined in composing modules that import `x/nft` and use its `Keeper`.
+
+### `Msg` Service
+
+```protobuf
+service Msg {
+ rpc Send(MsgSend) returns (MsgSendResponse);
+}
+
+message MsgSend {
+ string class_id = 1;
+ string id = 2;
+ string sender = 3;
+ string receiver = 4;
+}
+message MsgSendResponse {}
+```
+
+`MsgSend` can be used to transfer the ownership of an NFT to another address.
+
+The implementation outline of the server is as follows:
+
+```go
+type msgServer struct{
+ k Keeper
+}
+
+func (m msgServer) Send(ctx context.Context, msg *types.MsgSend) (*types.MsgSendResponse, error) {
+ // check current ownership
+ assertEqual(msg.Sender, m.k.GetOwner(msg.ClassId, msg.Id))
+
+ // transfer ownership
+ m.k.Transfer(msg.ClassId, msg.Id, msg.Receiver)
+
+ return &types.MsgSendResponse{}, nil
+}
+```
+
+The query service methods for the `x/nft` module are:
+
+```protobuf
+service Query {
+ // Balance queries the number of NFTs of a given class owned by the owner, same as balanceOf in ERC721
+ rpc Balance(QueryBalanceRequest) returns (QueryBalanceResponse) {
+ option (google.api.http).get = "/cosmos/nft/v1beta1/balance/{owner}/{class_id}";
+ }
+
+ // Owner queries the owner of the NFT based on its class and id, same as ownerOf in ERC721
+ rpc Owner(QueryOwnerRequest) returns (QueryOwnerResponse) {
+ option (google.api.http).get = "/cosmos/nft/v1beta1/owner/{class_id}/{id}";
+ }
+
+ // Supply queries the number of NFTs from the given class, same as totalSupply of ERC721.
+ rpc Supply(QuerySupplyRequest) returns (QuerySupplyResponse) {
+ option (google.api.http).get = "/cosmos/nft/v1beta1/supply/{class_id}";
+ }
+
+ // NFTs queries all NFTs of a given class or owner; choose at least one of the two. Similar to tokenByIndex in ERC721Enumerable
+ rpc NFTs(QueryNFTsRequest) returns (QueryNFTsResponse) {
+ option (google.api.http).get = "/cosmos/nft/v1beta1/nfts";
+ }
+
+ // NFT queries an NFT based on its class and id.
+ rpc NFT(QueryNFTRequest) returns (QueryNFTResponse) {
+ option (google.api.http).get = "/cosmos/nft/v1beta1/nfts/{class_id}/{id}";
+ }
+
+ // Class queries an NFT class based on its id
+ rpc Class(QueryClassRequest) returns (QueryClassResponse) {
+ option (google.api.http).get = "/cosmos/nft/v1beta1/classes/{class_id}";
+ }
+
+ // Classes queries all NFT classes
+ rpc Classes(QueryClassesRequest) returns (QueryClassesResponse) {
+ option (google.api.http).get = "/cosmos/nft/v1beta1/classes";
+ }
+}
+
+// QueryBalanceRequest is the request type for the Query/Balance RPC method
+message QueryBalanceRequest {
+ string class_id = 1;
+ string owner = 2;
+}
+
+// QueryBalanceResponse is the response type for the Query/Balance RPC method
+message QueryBalanceResponse {
+ uint64 amount = 1;
+}
+
+// QueryOwnerRequest is the request type for the Query/Owner RPC method
+message QueryOwnerRequest {
+ string class_id = 1;
+ string id = 2;
+}
+
+// QueryOwnerResponse is the response type for the Query/Owner RPC method
+message QueryOwnerResponse {
+ string owner = 1;
+}
+
+// QuerySupplyRequest is the request type for the Query/Supply RPC method
+message QuerySupplyRequest {
+ string class_id = 1;
+}
+
+// QuerySupplyResponse is the response type for the Query/Supply RPC method
+message QuerySupplyResponse {
+ uint64 amount = 1;
+}
+
+// QueryNFTsRequest is the request type for the Query/NFTs RPC method
+message QueryNFTsRequest {
+ string class_id = 1;
+ string owner = 2;
+ cosmos.base.query.v1beta1.PageRequest pagination = 3;
+}
+
+// QueryNFTsResponse is the response type for the Query/NFTs RPC method
+message QueryNFTsResponse {
+ repeated cosmos.nft.v1beta1.NFT nfts = 1;
+ cosmos.base.query.v1beta1.PageResponse pagination = 2;
+}
+
+// QueryNFTRequest is the request type for the Query/NFT RPC method
+message QueryNFTRequest {
+ string class_id = 1;
+ string id = 2;
+}
+
+// QueryNFTResponse is the response type for the Query/NFT RPC method
+message QueryNFTResponse {
+ cosmos.nft.v1beta1.NFT nft = 1;
+}
+
+// QueryClassRequest is the request type for the Query/Class RPC method
+message QueryClassRequest {
+ string class_id = 1;
+}
+
+// QueryClassResponse is the response type for the Query/Class RPC method
+message QueryClassResponse {
+ cosmos.nft.v1beta1.Class class = 1;
+}
+
+// QueryClassesRequest is the request type for the Query/Classes RPC method
+message QueryClassesRequest {
+ // pagination defines an optional pagination for the request.
+ cosmos.base.query.v1beta1.PageRequest pagination = 1;
+}
+
+// QueryClassesResponse is the response type for the Query/Classes RPC method
+message QueryClassesResponse {
+ repeated cosmos.nft.v1beta1.Class classes = 1;
+ cosmos.base.query.v1beta1.PageResponse pagination = 2;
+}
+```
+
+### Interoperability
+
+Interoperability is about reusing assets between modules and between chains. The former is addressed by ADR-033 (Protobuf client-server communication), which is not finalized at the time of writing. The latter is achieved through IBC, and is our focus here.
+IBC is implemented per module. We have aligned on recording and managing NFTs in `x/nft`. This requires the creation of a new IBC standard and an implementation of it.
+
+For IBC interoperability, NFT custom modules MUST use the NFT object type understood by the IBC client. So, for x/nft interoperability, custom NFT implementations (example: x/cryptokitty) should use the canonical x/nft module and proxy all NFT balance keeping functionality to x/nft or else re-implement all functionality using the NFT object type understood by the IBC client. In other words: x/nft becomes the standard NFT registry for all Cosmos NFTs (example: x/cryptokitty will register a kitty NFT in x/nft and use x/nft for book keeping). This was [discussed](https://github.com/cosmos/cosmos-sdk/discussions/9065#discussioncomment-873206) in the context of using x/bank as a general asset balance book. Not using x/nft will require implementing another module for IBC.
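+
+As a rough sketch of this proxying approach (all type and method names below are hypothetical, not the actual `x/nft` keeper API), a custom module can keep its app-specific data while delegating all balance keeping to the canonical registry:
+
```go
package main

import "fmt"

// NFTRegistry is a hypothetical, simplified view of the canonical x/nft
// keeper: the single book keeper for all NFT balances.
type NFTRegistry interface {
	Mint(classID, id, owner string) error
	GetOwner(classID, id string) (string, bool)
}

// memRegistry is an in-memory stand-in for x/nft used for this sketch.
type memRegistry struct {
	owners map[string]string // "classID/id" -> owner
}

func newMemRegistry() *memRegistry {
	return &memRegistry{owners: make(map[string]string)}
}

func (r *memRegistry) Mint(classID, id, owner string) error {
	key := classID + "/" + id
	if _, ok := r.owners[key]; ok {
		return fmt.Errorf("nft %s already exists", key)
	}
	r.owners[key] = owner
	return nil
}

func (r *memRegistry) GetOwner(classID, id string) (string, bool) {
	owner, ok := r.owners[classID+"/"+id]
	return owner, ok
}

// KittyKeeper is a hypothetical x/cryptokitty keeper. It stores its own
// app-specific data (genes) but proxies all balance keeping to the registry.
type KittyKeeper struct {
	registry NFTRegistry
	genes    map[string]string // kitty id -> genes
}

func (k *KittyKeeper) BreedKitty(id, owner, genes string) error {
	// Register the kitty in the canonical registry under a fixed class.
	if err := k.registry.Mint("cryptokitty", id, owner); err != nil {
		return err
	}
	k.genes[id] = genes
	return nil
}

func main() {
	reg := newMemRegistry()
	kk := &KittyKeeper{registry: reg, genes: make(map[string]string)}
	if err := kk.BreedKitty("kitty-1", "cosmos1alice", "aabbcc"); err != nil {
		panic(err)
	}
	// The registry, not the kitty module, tracks ownership.
	owner, _ := reg.GetOwner("cryptokitty", "kitty-1")
	fmt.Println(owner)
}
```
+
+Because ownership lives in one place, a single IBC application on top of the registry can transfer any registered NFT, whatever module created it.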
+
+## Consequences
+
+### Backward Compatibility
+
+No backward incompatibilities.
+
+### Forward Compatibility
+
+This specification conforms to the ERC-721 smart contract specification for NFT identifiers. Note that ERC-721 defines uniqueness based on (contract address, uint256 tokenId); we conform to this implicitly because a single module is currently intended to track NFT identifiers. Note: use of the (mutable) data field to determine uniqueness is not safe.
+
+### Positive
+
+* NFT identifiers available on Cosmos Hub.
+* Ability to build different NFT modules for the Cosmos Hub, e.g., ERC-721.
+* NFT module which supports interoperability with IBC and other cross-chain infrastructures like Gravity Bridge.
+
+### Negative
+
+* New IBC app is required for x/nft.
+* CW721 adapter is required.
+
+### Neutral
+
+* Other functions need more modules. For example, a custody module is needed for NFT trading functionality, and a collectible module is needed for defining NFT properties.
+
+## Further Discussions
+
+For other kinds of applications on the Hub, more app-specific modules can be developed in the future:
+
+* `x/nft/custody`: custody of NFTs to support trading functionality.
+* `x/nft/marketplace`: selling and buying NFTs using sdk.Coins.
+* `x/fractional`: a module to split ownership of an asset (NFT or other assets) among multiple stakeholders. `x/group` should work for most of the cases.
+
+Other networks in the Cosmos ecosystem could design and implement their own NFT modules for specific NFT applications and use cases.
+
+## References
+
+* Initial discussion: https://github.com/cosmos/cosmos-sdk/discussions/9065
+* x/nft: initialize module: https://github.com/cosmos/cosmos-sdk/pull/9174
+* [ADR 033](https://github.com/cosmos/cosmos-sdk/blob/main/docs/architecture/adr-033-protobuf-inter-module-comm.md)
diff --git a/copy-of-sdk-versioned_docs/version-0.47/build/architecture/adr-044-protobuf-updates-guidelines.md b/copy-of-sdk-versioned_docs/version-0.47/build/architecture/adr-044-protobuf-updates-guidelines.md
new file mode 100644
index 00000000..245adcff
--- /dev/null
+++ b/copy-of-sdk-versioned_docs/version-0.47/build/architecture/adr-044-protobuf-updates-guidelines.md
@@ -0,0 +1,129 @@
+# ADR 044: Guidelines for Updating Protobuf Definitions
+
+## Changelog
+
+* 28.06.2021: Initial Draft
+* 02.12.2021: Add `Since:` comment for new fields
+* 21.07.2022: Remove the rule of no new `Msg` in the same proto version.
+
+## Status
+
+Draft
+
+## Abstract
+
+This ADR provides guidelines and recommended practices when updating Protobuf definitions. These guidelines are targeting module developers.
+
+## Context
+
+The Cosmos SDK maintains a set of [Protobuf definitions](https://github.com/cosmos/cosmos-sdk/tree/main/proto/cosmos). It is important to correctly design Protobuf definitions to avoid any breaking changes within the same version. The reasons are to not break tooling (including indexers and explorers), wallets and other third-party integrations.
+
+When making changes to these Protobuf definitions, the Cosmos SDK currently only follows [Buf's](https://docs.buf.build/) recommendations. We noticed however that Buf's recommendations might still result in breaking changes in the SDK in some cases. For example:
+
+* Adding fields to `Msg`s. Adding fields is not a Protobuf spec-breaking operation. However, when adding new fields to `Msg`s, the unknown field rejection will throw an error when sending the new `Msg` to an older node.
+* Marking fields as `reserved`. Protobuf proposes the `reserved` keyword for removing fields without the need to bump the package version. However, by doing so, client backwards compatibility is broken as Protobuf doesn't generate anything for `reserved` fields. See [#9446](https://github.com/cosmos/cosmos-sdk/issues/9446) for more details on this issue.
+
+Moreover, module developers often face other questions around Protobuf definitions such as "Can I rename a field?" or "Can I deprecate a field?" This ADR aims to answer all these questions by providing clear guidelines about allowed updates for Protobuf definitions.
+
+## Decision
+
+We decide to keep [Buf's](https://docs.buf.build/) recommendations with the following exceptions:
+
+* `UNARY_RPC`: the Cosmos SDK currently does not support streaming RPCs.
+* `COMMENT_FIELD`: the Cosmos SDK allows fields with no comments.
+* `SERVICE_SUFFIX`: we use the `Query` and `Msg` service naming convention, which doesn't use the `-Service` suffix.
+* `PACKAGE_VERSION_SUFFIX`: some packages, such as `cosmos.crypto.ed25519`, don't use a version suffix.
+* `RPC_REQUEST_STANDARD_NAME`: Requests for the `Msg` service don't have the `-Request` suffix to keep backwards compatibility.
+
+On top of Buf's recommendations we add the following guidelines that are specific to the Cosmos SDK.
+
+### Updating Protobuf Definition Without Bumping Version
+
+#### 1. Module developers MAY add new Protobuf definitions
+
+Module developers MAY add new `message`s, new `Service`s, new `rpc` endpoints, and new fields to existing messages. This recommendation follows the Protobuf specification, but is added in this document for clarity, as the SDK requires one additional change.
+
+The SDK requires the Protobuf comment of the new addition to contain one line with the following format:
+
+```protobuf
+// Since: cosmos-sdk {, ...}
+```
+
+Where each `version` denotes a minor ("0.45") or patch ("0.44.5") version from which the field is available. This will greatly help client libraries, which can optionally use reflection or custom code generation to show/hide these fields depending on the targeted node version.
+
+As examples, the following comments are valid:
+
+```protobuf
+// Since: cosmos-sdk 0.44
+
+// Since: cosmos-sdk 0.42.11, 0.44.5
+```
+
+and the following ones are NOT valid:
+
+```protobuf
+// Since cosmos-sdk v0.44
+
+// since: cosmos-sdk 0.44
+
+// Since: cosmos-sdk 0.42.11 0.44.5
+
+// Since: Cosmos SDK 0.42.11, 0.44.5
+```
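+
+One way to make the accepted format concrete is a validation regexp. The pattern below is illustrative only (the SDK does not necessarily validate these comments this way), but it accepts exactly the valid examples above and rejects the invalid ones:
+
```go
package main

import (
	"fmt"
	"regexp"
)

// sinceComment matches the required annotation: the literal prefix
// "// Since: cosmos-sdk " followed by one or more comma-separated
// minor ("0.45") or patch ("0.44.5") versions.
var sinceComment = regexp.MustCompile(`^// Since: cosmos-sdk \d+\.\d+(\.\d+)?(, \d+\.\d+(\.\d+)?)*$`)

func validSince(line string) bool {
	return sinceComment.MatchString(line)
}

func main() {
	for _, line := range []string{
		"// Since: cosmos-sdk 0.44",            // valid
		"// Since: cosmos-sdk 0.42.11, 0.44.5", // valid
		"// Since cosmos-sdk v0.44",            // missing colon, "v" prefix
		"// since: cosmos-sdk 0.44",            // wrong capitalization
		"// Since: cosmos-sdk 0.42.11 0.44.5",  // missing comma
		"// Since: Cosmos SDK 0.42.11, 0.44.5", // wrong project name
	} {
		fmt.Printf("%-42q %v\n", line, validSince(line))
	}
}
```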
+
+#### 2. Fields MAY be marked as `deprecated`, and nodes MAY implement a protocol-breaking change for handling these fields
+
+Protobuf supports the [`deprecated` field option](https://developers.google.com/protocol-buffers/docs/proto#options), and this option MAY be used on any field, including `Msg` fields. If a node handles a Protobuf message with a non-empty deprecated field, the node MAY change its behavior upon processing it, even in a protocol-breaking way. When possible, the node MUST handle backwards compatibility without breaking the consensus (unless we increment the proto version).
+
+As an example, the Cosmos SDK v0.42 to v0.43 update contained two Protobuf-breaking changes, listed below. Instead of bumping the package versions from `v1beta1` to `v1`, the SDK team decided to follow this guideline, by reverting the breaking changes, marking those changes as deprecated, and modifying the node implementation when processing messages with deprecated fields. More specifically:
+
+* The Cosmos SDK recently removed support for [time-based software upgrades](https://github.com/cosmos/cosmos-sdk/pull/8849). As such, the `time` field has been marked as deprecated in `cosmos.upgrade.v1beta1.Plan`. Moreover, the node will reject any proposal containing an upgrade Plan whose `time` field is non-empty.
+* The Cosmos SDK now supports [governance split votes](adr-037-gov-split-vote.md). When querying for votes, the returned `cosmos.gov.v1beta1.Vote` message has its `option` field (used for a single vote option) deprecated in favor of its `options` field (allowing multiple vote options). Whenever possible, the SDK still populates the deprecated `option` field, that is, if and only if `len(options) == 1` and `options[0].Weight == 1.0`.
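+
+The rule for populating the deprecated `option` field can be sketched as follows. The types are simplified stand-ins for the generated `cosmos.gov.v1beta1` types (the real weight is an `sdk.Dec`, not a `float64`):
+
```go
package main

import "fmt"

// Simplified stand-ins for the generated cosmos.gov.v1beta1 types.
type WeightedVoteOption struct {
	Option string
	Weight float64 // the real type uses sdk.Dec; float64 keeps the sketch short
}

type Vote struct {
	Options []WeightedVoteOption
	Option  string // deprecated: only populated for single full-weight votes
}

// backfillDeprecatedOption populates the deprecated option field if and only
// if the vote has exactly one option with full weight.
func backfillDeprecatedOption(v *Vote) {
	if len(v.Options) == 1 && v.Options[0].Weight == 1.0 {
		v.Option = v.Options[0].Option
	}
}

func main() {
	single := &Vote{Options: []WeightedVoteOption{{Option: "VOTE_OPTION_YES", Weight: 1.0}}}
	backfillDeprecatedOption(single)
	fmt.Println(single.Option) // populated, so old clients keep working

	split := &Vote{Options: []WeightedVoteOption{
		{Option: "VOTE_OPTION_YES", Weight: 0.6},
		{Option: "VOTE_OPTION_NO", Weight: 0.4},
	}}
	backfillDeprecatedOption(split)
	fmt.Println(split.Option == "") // left empty: no faithful single-option view exists
}
```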
+
+#### 3. Fields MUST NOT be renamed
+
+Whereas the official Protobuf recommendations do not prohibit renaming fields, as it does not break the Protobuf binary representation, the SDK explicitly forbids renaming fields in Protobuf structs. The main reason for this choice is to avoid introducing breaking changes for clients, which often rely on hard-coded fields from generated types. Moreover, renaming fields will lead to client-breaking JSON representations of Protobuf definitions, used in REST endpoints and in the CLI.
+
+### Incrementing Protobuf Package Version
+
+TODO, needs architecture review. Some topics:
+
+* Bumping versions frequency
+* When bumping versions, should the Cosmos SDK support both versions?
+ * i.e. v1beta1 -> v1, should we have two folders in the Cosmos SDK, and handlers for both versions?
+* mention ADR-023 Protobuf naming
+
+## Consequences
+
+> This section describes the resulting context, after applying the decision. All consequences should be listed here, not just the "positive" ones. A particular decision may have positive, negative, and neutral consequences, but all of them affect the team and project in the future.
+
+### Backwards Compatibility
+
+> All ADRs that introduce backwards incompatibilities must include a section describing these incompatibilities and their severity. The ADR must explain how the author proposes to deal with these incompatibilities. ADR submissions without a sufficient backwards compatibility treatise may be rejected outright.
+
+### Positive
+
+* less pain to tool developers
+* more compatibility in the ecosystem
+* ...
+
+### Negative
+
+{negative consequences}
+
+### Neutral
+
+* more rigor in Protobuf review
+
+## Further Discussions
+
+This ADR is still in the DRAFT stage, and the "Incrementing Protobuf Package Version" will be filled in once we make a decision on how to correctly do it.
+
+## Test Cases [optional]
+
+Test cases for an implementation are mandatory for ADRs that are affecting consensus changes. Other ADRs can choose to include links to test cases if applicable.
+
+## References
+
+* [#9445](https://github.com/cosmos/cosmos-sdk/issues/9445) Release proto definitions v1
+* [#9446](https://github.com/cosmos/cosmos-sdk/issues/9446) Address v1beta1 proto breaking changes
diff --git a/copy-of-sdk-versioned_docs/version-0.47/build/architecture/adr-045-check-delivertx-middlewares.md b/copy-of-sdk-versioned_docs/version-0.47/build/architecture/adr-045-check-delivertx-middlewares.md
new file mode 100644
index 00000000..756fa5a2
--- /dev/null
+++ b/copy-of-sdk-versioned_docs/version-0.47/build/architecture/adr-045-check-delivertx-middlewares.md
@@ -0,0 +1,312 @@
+# ADR 045: BaseApp `{Check,Deliver}Tx` as Middlewares
+
+## Changelog
+
+* 20.08.2021: Initial draft.
+* 07.12.2021: Update `tx.Handler` interface ([\#10693](https://github.com/cosmos/cosmos-sdk/pull/10693)).
+* 17.05.2022: ADR is abandoned, as middlewares are deemed too hard to reason about.
+
+## Status
+
+ABANDONED. Replacement is being discussed in [#11955](https://github.com/cosmos/cosmos-sdk/issues/11955).
+
+## Abstract
+
+This ADR replaces the current BaseApp `runTx` and antehandlers design with a middleware-based design.
+
+## Context
+
+BaseApp's implementation of ABCI `{Check,Deliver}Tx()` and its own `Simulate()` method call the `runTx` method under the hood, which first runs antehandlers, then executes `Msg`s. However, the [transaction Tips](https://github.com/cosmos/cosmos-sdk/issues/9406) and [refunding unused gas](https://github.com/cosmos/cosmos-sdk/issues/2150) use cases require custom logic to be run after the `Msg`s execution. There is currently no way to achieve this.
+
+A naive solution would be to add post-`Msg` hooks to BaseApp. However, the Cosmos SDK team is thinking in parallel about the bigger picture of making app wiring simpler ([#9181](https://github.com/cosmos/cosmos-sdk/discussions/9182)), which includes making BaseApp more lightweight and modular.
+
+## Decision
+
+We decide to transform Baseapp's implementation of ABCI `{Check,Deliver}Tx` and its own `Simulate` methods to use a middleware-based design.
+
+The two following interfaces are the base of the middleware design, and are defined in `types/tx`:
+
+```go
+type Handler interface {
+ CheckTx(ctx context.Context, req Request, checkReq RequestCheckTx) (Response, ResponseCheckTx, error)
+ DeliverTx(ctx context.Context, req Request) (Response, error)
+ SimulateTx(ctx context.Context, req Request) (Response, error)
+}
+
+type Middleware func(Handler) Handler
+```
+
+where we define the following arguments and return types:
+
+```go
+type Request struct {
+ Tx sdk.Tx
+ TxBytes []byte
+}
+
+type Response struct {
+ GasWanted uint64
+ GasUsed uint64
+ // MsgResponses is an array containing each Msg service handler's response
+ // type, packed in an Any. This will get proto-serialized into the `Data` field
+ // in the ABCI Check/DeliverTx responses.
+ MsgResponses []*codectypes.Any
+ Log string
+ Events []abci.Event
+}
+
+type RequestCheckTx struct {
+ Type abci.CheckTxType
+}
+
+type ResponseCheckTx struct {
+ Priority int64
+}
+```
+
+Please note that because CheckTx handles separate logic related to mempool prioritization, its signature is different from DeliverTx and SimulateTx.
+
+BaseApp holds a reference to a `tx.Handler`:
+
+```go
+type BaseApp struct {
+ // other fields
+ txHandler tx.Handler
+}
+```
+
+Baseapp's ABCI `{Check,Deliver}Tx()` and `Simulate()` methods simply call `app.txHandler.{Check,Deliver,Simulate}Tx()` with the relevant arguments. For example, for `DeliverTx`:
+
+```go
+func (app *BaseApp) DeliverTx(req abci.RequestDeliverTx) abci.ResponseDeliverTx {
+ var abciRes abci.ResponseDeliverTx
+ ctx := app.getContextForTx(runTxModeDeliver, req.Tx)
+ res, err := app.txHandler.DeliverTx(ctx, tx.Request{TxBytes: req.Tx})
+ if err != nil {
+ abciRes = sdkerrors.ResponseDeliverTx(err, uint64(res.GasUsed), uint64(res.GasWanted), app.trace)
+ return abciRes
+ }
+
+ abciRes, err = convertTxResponseToDeliverTx(res)
+ if err != nil {
+ return sdkerrors.ResponseDeliverTx(err, uint64(res.GasUsed), uint64(res.GasWanted), app.trace)
+ }
+
+ return abciRes
+}
+
+// convertTxResponseToDeliverTx converts a tx.Response into a abci.ResponseDeliverTx.
+func convertTxResponseToDeliverTx(txRes tx.Response) (abci.ResponseDeliverTx, error) {
+ data, err := makeABCIData(txRes)
+ if err != nil {
+ return abci.ResponseDeliverTx{}, err
+ }
+
+ return abci.ResponseDeliverTx{
+ Data: data,
+ Log: txRes.Log,
+ Events: txRes.Events,
+ }, nil
+}
+
+// makeABCIData generates the Data field to be sent to ABCI Check/DeliverTx.
+func makeABCIData(txRes tx.Response) ([]byte, error) {
+ return proto.Marshal(&sdk.TxMsgData{MsgResponses: txRes.MsgResponses})
+}
+```
+
+The implementations are similar for `BaseApp.CheckTx` and `BaseApp.Simulate`.
+
+`baseapp.txHandler`'s three methods' implementations can obviously be monolithic functions, but for modularity we propose a middleware composition design, where a middleware is simply a function that takes a `tx.Handler`, and returns another `tx.Handler` wrapped around the previous one.
+
+### Implementing a Middleware
+
+In practice, middlewares are created by a Go function that takes as arguments the parameters needed by the middleware, and returns a `tx.Middleware`.
+
+For example, for creating an arbitrary `MyMiddleware`, we can implement:
+
+```go
+// myTxHandler is the tx.Handler of this middleware. Note that it holds a
+// reference to the next tx.Handler in the stack.
+type myTxHandler struct {
+ // next is the next tx.Handler in the middleware stack.
+ next tx.Handler
+ // some other fields that are relevant to the middleware can be added here
+}
+
+// NewMyMiddleware returns a middleware that does this and that.
+func NewMyMiddleware(arg1, arg2) tx.Middleware {
+ return func (txh tx.Handler) tx.Handler {
+ return myTxHandler{
+ next: txh,
+ // optionally, set arg1, arg2... if they are needed in the middleware
+ }
+ }
+}
+
+// Assert myTxHandler is a tx.Handler.
+var _ tx.Handler = myTxHandler{}
+
+func (h myTxHandler) CheckTx(ctx context.Context, req Request, checkReq RequestCheckTx) (Response, ResponseCheckTx, error) {
+ // CheckTx specific pre-processing logic
+
+ // run the next middleware
+ res, checkRes, err := h.next.CheckTx(ctx, req, checkReq)
+
+ // CheckTx specific post-processing logic
+
+ return res, checkRes, err
+}
+
+func (h myTxHandler) DeliverTx(ctx context.Context, req Request) (Response, error) {
+ // DeliverTx specific pre-processing logic
+
+ // run the next middleware
+ res, err := h.next.DeliverTx(ctx, req)
+
+ // DeliverTx specific post-processing logic
+
+ return res, err
+}
+
+func (h myTxHandler) SimulateTx(ctx context.Context, req Request) (Response, error) {
+ // SimulateTx specific pre-processing logic
+
+ // run the next middleware
+ res, err := h.next.SimulateTx(ctx, req)
+
+ // SimulateTx specific post-processing logic
+
+ return res, err
+}
+```
+
+### Composing Middlewares
+
+While BaseApp simply holds a reference to a `tx.Handler`, this `tx.Handler` itself is defined using a middleware stack. The Cosmos SDK exposes a base (i.e. innermost) `tx.Handler` called `RunMsgsTxHandler`, which executes messages.
+
+Then, the app developer can compose multiple middlewares on top of the base `tx.Handler`. Each middleware can run pre- and post-processing logic around its next middleware, as described in the section above. Conceptually, as an example, given the middlewares `A`, `B`, and `C` and the base `tx.Handler` `H`, the stack looks like:
+
+```text
+A.pre
+ B.pre
+ C.pre
+ H # The base tx.handler, for example `RunMsgsTxHandler`
+ C.post
+ B.post
+A.post
+```
+
+We define a `ComposeMiddlewares` function for composing middlewares. It takes the base handler as first argument, and middlewares in the "outer to inner" order. For the above stack, the final `tx.Handler` is:
+
+```go
+txHandler := middleware.ComposeMiddlewares(H, A, B, C)
+```
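+
+As an illustration of the composition order, here is a minimal, self-contained sketch of how such a `ComposeMiddlewares` helper could be implemented. The `Handler` interface is reduced to a single string-based `DeliverTx` method; the real signatures live in `types/tx`:
+
```go
package main

import "fmt"

// Handler is a simplified stand-in for tx.Handler: a single DeliverTx path.
type Handler interface {
	DeliverTx(tx string) string
}

type Middleware func(Handler) Handler

// ComposeMiddlewares wraps the base handler with middlewares given in
// "outer to inner" order, so the first middleware runs first.
func ComposeMiddlewares(base Handler, mws ...Middleware) Handler {
	h := base
	// Apply in reverse so mws[0] ends up outermost.
	for i := len(mws) - 1; i >= 0; i-- {
		h = mws[i](h)
	}
	return h
}

// runMsgs is the base handler H: it just "executes" the tx.
type runMsgs struct{}

func (runMsgs) DeliverTx(tx string) string { return "H(" + tx + ")" }

// named returns a middleware that records its pre/post processing in the result.
func named(name string) Middleware {
	return func(next Handler) Handler {
		return tagHandler{name: name, next: next}
	}
}

type tagHandler struct {
	name string
	next Handler
}

func (t tagHandler) DeliverTx(tx string) string {
	// pre-processing, then the next handler in the stack, then post-processing.
	return t.name + ".pre " + t.next.DeliverTx(tx) + " " + t.name + ".post"
}

func main() {
	h := ComposeMiddlewares(runMsgs{}, named("A"), named("B"), named("C"))
	fmt.Println(h.DeliverTx("tx"))
	// A.pre B.pre C.pre H(tx) C.post B.post A.post
}
```
+
+Applying the middlewares in reverse ensures that the first middleware after the base handler ends up outermost, matching the `A.pre ... A.post` stack shown above.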
+
+The middleware is set in BaseApp via its `SetTxHandler` setter:
+
+```go
+// simapp/app.go
+
+txHandler := middleware.ComposeMiddlewares(...)
+app.SetTxHandler(txHandler)
+```
+
+The app developer can define their own middlewares, or use the Cosmos SDK's pre-defined middlewares from `middleware.NewDefaultTxHandler()`.
+
+### Middlewares Maintained by the Cosmos SDK
+
+While the app developer can define and compose the middlewares of their choice, the Cosmos SDK provides a set of middlewares that caters for the ecosystem's most common use cases. These middlewares are:
+
+| Middleware | Description |
+| ----------------------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
+| RunMsgsTxHandler | This is the base `tx.Handler`. It replaces the old baseapp's `runMsgs`, and executes a transaction's `Msg`s. |
+| TxDecoderMiddleware | This middleware takes in transaction raw bytes, and decodes them into a `sdk.Tx`. It replaces the `baseapp.txDecoder` field, so that BaseApp stays as thin as possible. Since most middlewares read the contents of the `sdk.Tx`, the TxDecoderMiddleware should be run first in the middleware stack. |
+| {Antehandlers} | Each antehandler is converted to its own middleware. These middlewares perform signature verification, fee deductions and other validations on the incoming transaction. |
+| IndexEventsTxMiddleware | This is a simple middleware that chooses which events to index in Tendermint. Replaces `baseapp.indexEvents` (which unfortunately still exists in baseapp too, because it's used to index Begin/EndBlock events) |
+| RecoveryTxMiddleware    | This middleware recovers from panics. It replaces baseapp.runTx's panic recovery described in [ADR-022](adr-022-custom-panic-handling.md). |
+| GasTxMiddleware | This replaces the [`Setup`](https://github.com/cosmos/cosmos-sdk/blob/v0.43.0/x/auth/ante/setup.go) Antehandler. It sets a GasMeter on sdk.Context. Note that before, GasMeter was set on sdk.Context inside the antehandlers, and there was some mess around the fact that antehandlers had their own panic recovery system so that the GasMeter could be read by baseapp's recovery system. Now, this mess is all removed: one middleware sets GasMeter, another one handles recovery. |
+
+### Similarities and Differences between Antehandlers and Middlewares
+
+The middleware-based design builds upon the existing antehandlers design described in [ADR-010](adr-010-modular-antehandler.md). Even though the final decision of ADR-010 was to go with the "Simple Decorators" approach, the middleware design is actually very similar to the other [Decorator Pattern](adr-010-modular-antehandler.md#decorator-pattern) proposal, also used in [weave](https://github.com/iov-one/weave).
+
+#### Similarities with Antehandlers
+
+* Designed as chaining/composing small modular pieces.
+* Allow code reuse for `{Check,Deliver}Tx` and for `Simulate`.
+* Set up in `app.go`, and easily customizable by app developers.
+* Order is important.
+
+#### Differences with Antehandlers
+
+* The Antehandlers are run before `Msg` execution, whereas middlewares can run before and after.
+* The middleware approach uses separate methods for `{Check,Deliver,Simulate}Tx`, whereas the antehandlers pass a `simulate bool` flag and use the `sdkCtx.Is{Check,Recheck}Tx()` flags to determine the current transaction mode.
+* The middleware design lets each middleware hold a reference to the next middleware, whereas the antehandlers pass a `next` argument in the `AnteHandle` method.
+* The middleware design uses Go's standard `context.Context`, whereas the antehandlers use `sdk.Context`.
+
+## Consequences
+
+### Backwards Compatibility
+
+Since this refactor moves some logic out of BaseApp and into middlewares, it introduces API-breaking changes for app developers. Most notably, instead of creating an antehandler chain in `app.go`, app developers need to create a middleware stack:
+
+```diff
+- anteHandler, err := ante.NewAnteHandler(
+- ante.HandlerOptions{
+- AccountKeeper: app.AccountKeeper,
+- BankKeeper: app.BankKeeper,
+- SignModeHandler: encodingConfig.TxConfig.SignModeHandler(),
+- FeegrantKeeper: app.FeeGrantKeeper,
+- SigGasConsumer: ante.DefaultSigVerificationGasConsumer,
+- },
+-)
++txHandler, err := authmiddleware.NewDefaultTxHandler(authmiddleware.TxHandlerOptions{
++ Debug: app.Trace(),
++ IndexEvents: indexEvents,
++ LegacyRouter: app.legacyRouter,
++ MsgServiceRouter: app.msgSvcRouter,
++ LegacyAnteHandler: anteHandler,
++ TxDecoder: encodingConfig.TxConfig.TxDecoder,
++})
+if err != nil {
+ panic(err)
+}
+- app.SetAnteHandler(anteHandler)
++ app.SetTxHandler(txHandler)
+```
+
+Other more minor API breaking changes will also be provided in the CHANGELOG. As usual, the Cosmos SDK will provide a release migration document for app developers.
+
+This ADR does not introduce any state-machine-, client- or CLI-breaking changes.
+
+### Positive
+
+* Allow custom logic to be run before and after `Msg` execution. This enables the [tips](https://github.com/cosmos/cosmos-sdk/issues/9406) and [gas refund](https://github.com/cosmos/cosmos-sdk/issues/2150) use cases, and possibly other ones.
+* Make BaseApp more lightweight, and defer complex logic to small modular components.
+* Separate paths for `{Check,Deliver,Simulate}Tx` with different return types. This allows for improved readability (replace `if sdkCtx.IsRecheckTx() && !simulate {...}` with separate methods) and more flexibility (e.g. returning a `priority` in `ResponseCheckTx`).
+
+### Negative
+
+* It is hard to understand at first glance the state updates that would occur after a middleware runs given the `sdk.Context` and `tx`. A middleware can have an arbitrary number of nested middleware being called within its function body, each possibly doing some pre- and post-processing before calling the next middleware on the chain. Thus to understand what a middleware is doing, one must also understand what every other middleware further along the chain is also doing, and the order of middlewares matters. This can get quite complicated to understand.
+* API-breaking changes for app developers.
+
+### Neutral
+
+No neutral consequences.
+
+## Further Discussions
+
+* [#9934](https://github.com/cosmos/cosmos-sdk/discussions/9934) Decomposing BaseApp's other ABCI methods into middlewares.
+* Replace `sdk.Tx` interface with the concrete protobuf Tx type in the `tx.Handler` methods signature.
+
+## Test Cases
+
+We update the existing baseapp and antehandlers tests to use the new middleware API, but keep the same test cases and logic, to avoid introducing regressions. Existing CLI tests will also be left untouched.
+
+For new middlewares, we introduce unit tests. Since middlewares are purposefully small, unit tests suit well.
+
+## References
+
+* Initial discussion: https://github.com/cosmos/cosmos-sdk/issues/9585
+* Implementation: [#9920 BaseApp refactor](https://github.com/cosmos/cosmos-sdk/pull/9920) and [#10028 Antehandlers migration](https://github.com/cosmos/cosmos-sdk/pull/10028)
diff --git a/copy-of-sdk-versioned_docs/version-0.47/build/architecture/adr-046-module-params.md b/copy-of-sdk-versioned_docs/version-0.47/build/architecture/adr-046-module-params.md
new file mode 100644
index 00000000..369cd043
--- /dev/null
+++ b/copy-of-sdk-versioned_docs/version-0.47/build/architecture/adr-046-module-params.md
@@ -0,0 +1,184 @@
+# ADR 046: Module Params
+
+## Changelog
+
+* Sep 22, 2021: Initial Draft
+
+## Status
+
+Proposed
+
+## Abstract
+
+This ADR describes an alternative approach to how Cosmos SDK modules use, interact with,
+and store their respective parameters.
+
+## Context
+
+Currently, in the Cosmos SDK, modules that require the use of parameters use the
+`x/params` module. The `x/params` module works by having modules define parameters,
+typically via a simple `Params` structure, and registering that structure in
+the `x/params` module via a unique `Subspace` that belongs to the respective
+registering module. The registering module then has unique access to its respective
+`Subspace`. Through this `Subspace`, the module can get and set its `Params`
+structure.
+
+In addition, the Cosmos SDK's `x/gov` module has direct support for changing
+parameters on-chain via a `ParamChangeProposal` governance proposal type, where
+stakeholders can vote on suggested parameter changes.
+
+There are various tradeoffs to using the `x/params` module to manage individual
+module parameters. Namely, managing parameters essentially comes for "free" in
+that developers only need to define the `Params` struct, the `Subspace`, and the
+various auxiliary functions, e.g. `ParamSetPairs`, on the `Params` type. However,
+there are some notable drawbacks. These drawbacks include the fact that parameters
+are serialized in state via JSON which is extremely slow. In addition, parameter
+changes via `ParamChangeProposal` governance proposals have no way of reading from
+or writing to state. In other words, it is currently not possible to have any
+state transitions in the application during an attempt to change param(s).
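+
+To make the serialization drawback concrete, here is a sketch of the storage pattern: each parameter value is stored under its own key, JSON-encoded. The toy `store` below stands in for the module's KV store; `x/params` itself uses legacy amino JSON, but the shape is the same:
+
```go
package main

import (
	"encoding/json"
	"fmt"
)

// store is a toy stand-in for a module's KV store.
type store map[string][]byte

// setParam mimics how an x/params Subspace persists a value: each parameter
// is stored under its own key, JSON-encoded.
func setParam(s store, key string, value interface{}) {
	bz, err := json.Marshal(value)
	if err != nil {
		panic(err)
	}
	s[key] = bz
}

// getParam pays a JSON decode on every read; protobuf-encoded params avoid this.
func getParam(s store, key string, out interface{}) {
	if err := json.Unmarshal(s[key], out); err != nil {
		panic(err)
	}
}

func main() {
	s := make(store)
	setParam(s, "MaxMemoCharacters", uint64(256))

	var max uint64
	getParam(s, "MaxMemoCharacters", &max)
	fmt.Println(max, string(s["MaxMemoCharacters"]))
}
```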
+
+## Decision
+
+We will build off of the alignment of `x/gov` and `x/authz` work per
+[#9810](https://github.com/cosmos/cosmos-sdk/pull/9810). Namely, module developers
+will create one or more unique parameter data structures that must be serialized
+to state. The Param data structures must implement `sdk.Msg` interface with respective
+Protobuf Msg service method which will validate and update the parameters with all
+necessary changes. The `x/gov` module via the work done in
+[#9810](https://github.com/cosmos/cosmos-sdk/pull/9810), will dispatch Param
+messages, which will be handled by Protobuf Msg services.
+
+Note, it is up to developers to decide how to structure their parameters and
+the respective `sdk.Msg` messages. Consider the parameters currently defined in
+`x/auth` using the `x/params` module for parameter management:
+
+```protobuf
+message Params {
+ uint64 max_memo_characters = 1;
+ uint64 tx_sig_limit = 2;
+ uint64 tx_size_cost_per_byte = 3;
+ uint64 sig_verify_cost_ed25519 = 4;
+ uint64 sig_verify_cost_secp256k1 = 5;
+}
+```
+
+Developers can choose to either create a unique data structure for every field in
+`Params` or they can create a single `Params` structure as outlined above in the
+case of `x/auth`.
+
+In the former approach, a `sdk.Msg` would need to be created for every single
+field along with a handler. This can become burdensome if there are a lot of
+parameter fields. In the latter case, there is only a single data structure and
+thus only a single message handler, however, the message handler might have to be
+more sophisticated in that it might need to understand what parameters are being
+changed vs what parameters are untouched.
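+
+For instance, a single-struct handler might diff the proposed parameters against the stored ones to decide what actually changed. The sketch below is purely illustrative and uses a trimmed-down `Params`:
+
```go
package main

import "fmt"

// Params is a trimmed-down, x/auth-style parameter struct.
type Params struct {
	MaxMemoCharacters uint64
	TxSigLimit        uint64
}

// changedFields is the kind of bookkeeping a single-message handler may need:
// it reports which parameters a proposed update actually modifies.
func changedFields(current, proposed Params) []string {
	var changed []string
	if current.MaxMemoCharacters != proposed.MaxMemoCharacters {
		changed = append(changed, "max_memo_characters")
	}
	if current.TxSigLimit != proposed.TxSigLimit {
		changed = append(changed, "tx_sig_limit")
	}
	return changed
}

func main() {
	current := Params{MaxMemoCharacters: 256, TxSigLimit: 7}
	proposed := Params{MaxMemoCharacters: 512, TxSigLimit: 7}
	fmt.Println(changedFields(current, proposed))
	// [max_memo_characters]
}
```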
+
+Params change proposals are made using the `x/gov` module. Execution is done through
+`x/authz` authorization to the root `x/gov` module's account.
+
+Continuing to use `x/auth`, we demonstrate a more complete example:
+
+```go
+type Params struct {
+ MaxMemoCharacters uint64
+ TxSigLimit uint64
+ TxSizeCostPerByte uint64
+ SigVerifyCostED25519 uint64
+ SigVerifyCostSecp256k1 uint64
+}
+
+type MsgUpdateParams struct {
+ MaxMemoCharacters uint64
+ TxSigLimit uint64
+ TxSizeCostPerByte uint64
+ SigVerifyCostED25519 uint64
+ SigVerifyCostSecp256k1 uint64
+}
+
+type MsgUpdateParamsResponse struct {}
+
+func (ms msgServer) UpdateParams(goCtx context.Context, msg *types.MsgUpdateParams) (*types.MsgUpdateParamsResponse, error) {
+ ctx := sdk.UnwrapSDKContext(goCtx)
+
+ // verification logic...
+
+ // persist params
+ params := ParamsFromMsg(msg)
+ ms.SaveParams(ctx, params)
+
+ return &types.MsgUpdateParamsResponse{}, nil
+}
+
+func ParamsFromMsg(msg *types.MsgUpdateParams) Params {
+ // ...
+}
+```
+
+A gRPC `Service` query should also be provided, for example:
+
+```protobuf
+service Query {
+ // ...
+
+ rpc Params(QueryParamsRequest) returns (QueryParamsResponse) {
+  option (google.api.http).get = "/cosmos/{module}/v1beta1/params";
+ }
+}
+
+message QueryParamsResponse {
+ Params params = 1 [(gogoproto.nullable) = false];
+}
+```
+
+## Consequences
+
+As a result of implementing the module parameter methodology, we gain the ability
+for module parameter changes to be stateful and extensible to fit nearly every
+application's use case. We will be able to emit events (and trigger hooks registered
+to those events using the work proposed in [event hooks](https://github.com/cosmos/cosmos-sdk/discussions/9656)),
+call other Msg service methods, or perform migrations.
+In addition, there will be significant gains in performance when it comes to reading
+and writing parameters from and to state, especially if a specific set of parameters
+are read on a consistent basis.
+
+However, this methodology will require developers to implement more types and
+Msg service methods, which can become burdensome if many parameters exist. In addition,
+developers are required to implement the persistence logic of module parameters,
+though this should be trivial.
+
+### Backwards Compatibility
+
+The new method for working with module parameters is naturally not backwards
+compatible with the existing `x/params` module. However, the `x/params` will
+remain in the Cosmos SDK and will be marked as deprecated with no additional
+functionality being added apart from potential bug fixes. Note, the `x/params`
+module may be removed entirely in a future release.
+
+### Positive
+
+* Module parameters are serialized more efficiently.
+* Modules are able to react to parameter changes and perform additional actions.
+* Special events can be emitted, allowing hooks to be triggered.
+
+### Negative
+
+* Managing module parameters becomes slightly more burdensome for module developers:
+ * Modules are now responsible for persisting and retrieving parameter state
+ * Modules are now required to have unique message handlers to handle parameter
+ changes per unique parameter data structure.
+
+### Neutral
+
+* Requires [#9810](https://github.com/cosmos/cosmos-sdk/pull/9810) to be reviewed
+ and merged.
+
+
+## References
+
+* https://github.com/cosmos/cosmos-sdk/pull/9810
+* https://github.com/cosmos/cosmos-sdk/issues/9438
+* https://github.com/cosmos/cosmos-sdk/discussions/9913
diff --git a/copy-of-sdk-versioned_docs/version-0.47/build/architecture/adr-047-extend-upgrade-plan.md b/copy-of-sdk-versioned_docs/version-0.47/build/architecture/adr-047-extend-upgrade-plan.md
new file mode 100644
index 00000000..3a4f3aac
--- /dev/null
+++ b/copy-of-sdk-versioned_docs/version-0.47/build/architecture/adr-047-extend-upgrade-plan.md
@@ -0,0 +1,250 @@
+# ADR 047: Extend Upgrade Plan
+
+## Changelog
+
+* Nov 23, 2021: Initial Draft
+
+## Status
+
+PROPOSED Not Implemented
+
+## Abstract
+
+This ADR expands the existing x/upgrade `Plan` proto message to include new fields for defining pre-run and post-run processes within upgrade tooling.
+It also defines a structure for providing downloadable artifacts involved in an upgrade.
+
+## Context
+
+The `upgrade` module, in conjunction with Cosmovisor, is designed to facilitate and automate a blockchain's transition from one version to another.
+
+Users submit a software upgrade governance proposal containing an upgrade `Plan`.
+The [Plan](https://github.com/cosmos/cosmos-sdk/blob/v0.44.5/proto/cosmos/upgrade/v1beta1/upgrade.proto#L12) currently contains the following fields:
+* `name`: A short string identifying the new version.
+* `height`: The chain height at which the upgrade is to be performed.
+* `info`: A string containing information about the upgrade.
+
+The `info` string can be anything.
+However, Cosmovisor will try to use the `info` field to automatically download a new version of the blockchain executable.
+For the auto-download to work, Cosmovisor expects it to be either a stringified JSON object (with a specific structure defined through documentation), or a URL that will return such JSON.
+The JSON object identifies URLs used to download the new blockchain executable for different platforms (OS and Architecture, e.g. "linux/amd64").
+Such a URL can either return the executable file directly or can return an archive containing the executable and possibly other assets.
+
+If the URL returns an archive, it is decompressed into `{DAEMON_HOME}/cosmovisor/{upgrade name}`.
+Then, if `{DAEMON_HOME}/cosmovisor/{upgrade name}/bin/{DAEMON_NAME}` does not exist, but `{DAEMON_HOME}/cosmovisor/{upgrade name}/{DAEMON_NAME}` does, the latter is copied to the former.
+If the URL returns something other than an archive, it is downloaded to `{DAEMON_HOME}/cosmovisor/{upgrade name}/bin/{DAEMON_NAME}`.
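+
+The path rules above can be sketched in a few lines of Python (an illustration of
+the documented layout only, not Cosmovisor's actual Go code; `gaiad` below is
+just a stand-in daemon name):
+
+```python
+import os
+import shutil
+
+def resolved_binary_path(daemon_home, upgrade_name, daemon_name):
+    """Where Cosmovisor expects the new executable to end up."""
+    return os.path.join(daemon_home, "cosmovisor", upgrade_name, "bin", daemon_name)
+
+def normalize_extracted(root, daemon_name):
+    """If bin/{DAEMON_NAME} is missing after extraction but {DAEMON_NAME}
+    exists at the archive root, copy the latter into place."""
+    target = os.path.join(root, "bin", daemon_name)
+    fallback = os.path.join(root, daemon_name)
+    if not os.path.exists(target) and os.path.exists(fallback):
+        os.makedirs(os.path.dirname(target), exist_ok=True)
+        shutil.copy2(fallback, target)
+```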
+
+If an upgrade height is reached and the new version of the executable isn't available, Cosmovisor will stop running.
+
+Both `DAEMON_HOME` and `DAEMON_NAME` are [environment variables used to configure Cosmovisor](https://github.com/cosmos/cosmos-sdk/blob/cosmovisor/v1.0.0/cosmovisor/README.md#command-line-arguments-and-environment-variables).
+
+Currently, there is no mechanism that makes Cosmovisor run a command after the upgraded chain has been restarted.
+
+The current upgrade process has this timeline:
+
+1. An upgrade governance proposal is submitted and approved.
+1. The upgrade height is reached.
+1. The `x/upgrade` module writes the `upgrade_info.json` file.
+1. The chain halts.
+1. Cosmovisor backs up the data directory (if set up to do so).
+1. Cosmovisor downloads the new executable (if not already in place).
+1. Cosmovisor executes the `${DAEMON_NAME} pre-upgrade` command.
+1. Cosmovisor restarts the app using the new version and same args originally provided.
+
+## Decision
+
+### Protobuf Updates
+
+We will update the `x/upgrade.Plan` message to provide upgrade instructions.
+The upgrade instructions will contain a list of artifacts available for each platform,
+and allow for the definition of pre-run and post-run commands.
+These commands are not consensus guaranteed; they will be executed by Cosmovisor (or other tooling) during its upgrade handling.
+
+```protobuf
+message Plan {
+ // ... (existing fields)
+
+ UpgradeInstructions instructions = 6;
+}
+```
+
+The new `UpgradeInstructions instructions` field MUST be optional.
+
+```protobuf
+message UpgradeInstructions {
+ string pre_run = 1;
+ string post_run = 2;
+ repeated Artifact artifacts = 3;
+ string description = 4;
+}
+```
+
+All fields in the `UpgradeInstructions` are optional.
+* `pre_run` is a command to run prior to the upgraded chain restarting.
+ If defined, it will be executed after halting and downloading the new artifact but before restarting the upgraded chain.
+ The working directory this command runs from MUST be `{DAEMON_HOME}/cosmovisor/{upgrade name}`.
+ This command MUST behave the same as the current [pre-upgrade](https://github.com/cosmos/cosmos-sdk/blob/v0.44.5/docs/migrations/pre-upgrade.md) command.
+ It does not take in any command-line arguments and is expected to terminate with the following exit codes:
+
+  | Exit status code | How it is handled in Cosmovisor |
+  |------------------|---------------------------------|
+  | `0` | Assumes the `pre-upgrade` command executed successfully and continues the upgrade. |
+  | `1` | Default exit code when the `pre-upgrade` command has not been implemented. |
+  | `30` | The `pre-upgrade` command was executed but failed. This fails the entire upgrade. |
+  | `31` | The `pre-upgrade` command was executed but failed. The command is retried until exit code `1` or `30` is returned. |
+
+  If defined, then app supervisors (e.g. Cosmovisor) MUST NOT run the `app pre-upgrade` command.
+* `post_run` is a command to run after the upgraded chain has been started. If defined, this command MUST be only executed at most once by an upgrading node.
+ The output and exit code SHOULD be logged but SHOULD NOT affect the running of the upgraded chain.
+ The working directory this command runs from MUST be `{DAEMON_HOME}/cosmovisor/{upgrade name}`.
+* `artifacts` define items to be downloaded.
+ It SHOULD have only one entry per platform.
+* `description` contains human-readable information about the upgrade and might contain references to external resources.
+ It SHOULD NOT be used for structured processing information.
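+
+The exit-code contract for `pre_run` can be sketched as a small supervisor loop
+(hypothetical Python standing in for Cosmovisor's Go logic; `run_once`
+represents one execution of the configured command):
+
+```python
+SUCCESS, NOT_IMPLEMENTED, FAILED, RETRY = 0, 1, 30, 31
+
+def handle_pre_run(run_once):
+    """Return True if the upgrade should continue, False if it must abort."""
+    while True:
+        code = run_once()
+        if code in (SUCCESS, NOT_IMPLEMENTED):
+            return True   # 0 or 1: continue the upgrade
+        if code == FAILED:
+            return False  # 30: fail the entire upgrade
+        if code == RETRY:
+            continue      # 31: run the command again
+        return False      # unknown exit code: abort conservatively
+```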
+
+```protobuf
+message Artifact {
+ string platform = 1;
+ string url = 2;
+ string checksum = 3;
+ string checksum_algo = 4;
+}
+```
+
+* `platform` is a required string that SHOULD be in the format `{OS}/{CPU}`, e.g. `"linux/amd64"`.
+ The string `"any"` SHOULD also be allowed.
+ An `Artifact` with a `platform` of `"any"` SHOULD be used as a fallback when a specific `{OS}/{CPU}` entry is not found.
+ That is, if an `Artifact` exists with a `platform` that matches the system's OS and CPU, that should be used;
+ otherwise, if an `Artifact` exists with a `platform` of `any`, that should be used;
+ otherwise no artifact should be downloaded.
+* `url` is a required URL string that MUST conform to [RFC 1738: Uniform Resource Locators](https://www.ietf.org/rfc/rfc1738.txt).
+ A request to this `url` MUST return either an executable file or an archive containing either `bin/{DAEMON_NAME}` or `{DAEMON_NAME}`.
+  The URL should not contain a checksum; it should be specified by the `checksum` attribute.
+* `checksum` is a checksum of the expected result of a request to the `url`.
+ It is not required, but is recommended.
+ If provided, it MUST be a hex encoded checksum string.
+ Tools utilizing these `UpgradeInstructions` MUST fail if a `checksum` is provided but is different from the checksum of the result returned by the `url`.
+* `checksum_algo` is a string identifying the algorithm used to generate the `checksum`.
+ Recommended algorithms: `sha256`, `sha512`.
+ Algorithms also supported (but not recommended): `sha1`, `md5`.
+ If a `checksum` is provided, a `checksum_algo` MUST also be provided.
+
+A `url` is not required to contain a `checksum` query parameter.
+If the `url` does contain a `checksum` query parameter, the `checksum` and `checksum_algo` fields MUST also be populated, and their values MUST match the value of the query parameter.
+For example, if the `url` is `"https://example.com?checksum=md5:d41d8cd98f00b204e9800998ecf8427e"`, then the `checksum` field must be `"d41d8cd98f00b204e9800998ecf8427e"` and the `checksum_algo` field must be `"md5"`.
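+
+Taken together, the `platform` fallback and the checksum rules suggest logic
+like the following (a Python sketch of the field definitions above, not the
+actual tooling implementation):
+
+```python
+import hashlib
+from urllib.parse import urlparse, parse_qs
+
+def pick_artifact(artifacts, platform):
+    """Prefer an exact {OS}/{CPU} match, fall back to "any", else None."""
+    by_platform = {a["platform"]: a for a in artifacts}
+    return by_platform.get(platform) or by_platform.get("any")
+
+def parse_checksum_param(url):
+    """Extract (algo, hex) from a checksum=algo:hex query parameter, if any."""
+    qs = parse_qs(urlparse(url).query)
+    if "checksum" not in qs:
+        return None
+    algo, _, hexsum = qs["checksum"][0].partition(":")
+    return algo, hexsum
+
+def checksum_matches(artifact, payload):
+    """Verify a downloaded payload against the declared checksum."""
+    digest = hashlib.new(artifact["checksum_algo"], payload).hexdigest()
+    return digest == artifact["checksum"]
+```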
+
+### Upgrade Module Updates
+
+If an upgrade `Plan` does not use the new `UpgradeInstructions` field, existing functionality will be maintained.
+The parsing of the `info` field as either a URL or `binaries` JSON will be deprecated.
+During validation, if the `info` field is used as such, a warning will be issued, but not an error.
+
+We will update the creation of the `upgrade-info.json` file to include the `UpgradeInstructions`.
+
+We will update the optional validation available via CLI to account for the new `Plan` structure.
+We will add the following validation:
+
+1. If `UpgradeInstructions` are provided:
+ 1. There MUST be at least one entry in `artifacts`.
+ 1. All of the `artifacts` MUST have a unique `platform`.
+ 1. For each `Artifact`, if the `url` contains a `checksum` query parameter:
+ 1. The `checksum` query parameter value MUST be in the format of `{checksum_algo}:{checksum}`.
+ 1. The `{checksum}` from the query parameter MUST equal the `checksum` provided in the `Artifact`.
+ 1. The `{checksum_algo}` from the query parameter MUST equal the `checksum_algo` provided in the `Artifact`.
+1. The following validation is currently done using the `info` field. We will apply similar validation to the `UpgradeInstructions`.
+ For each `Artifact`:
+ 1. The `platform` MUST have the format `{OS}/{CPU}` or be `"any"`.
+ 1. The `url` field MUST NOT be empty.
+ 1. The `url` field MUST be a proper URL.
+ 1. A `checksum` MUST be provided either in the `checksum` field or as a query parameter in the `url`.
+ 1. If the `checksum` field has a value and the `url` also has a `checksum` query parameter, the two values MUST be equal.
+ 1. The `url` MUST return either a file or an archive containing either `bin/{DAEMON_NAME}` or `{DAEMON_NAME}`.
+ 1. If a `checksum` is provided (in the field or as a query param), the checksum of the result of the `url` MUST equal the provided checksum.
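+
+A sketch of the per-`Artifact` validation rules above (illustrative Python; the
+real CLI validation would live in the upgrade module's Go code, and the
+network-dependent checks on the `url`'s contents are omitted):
+
+```python
+import re
+from urllib.parse import urlparse, parse_qs
+
+PLATFORM_RE = re.compile(r"^[^/]+/[^/]+$")
+
+def validate_artifact(artifact):
+    """Apply the per-Artifact rules; return a list of problems (empty if valid)."""
+    errs = []
+    platform = artifact.get("platform", "")
+    if platform != "any" and not PLATFORM_RE.match(platform):
+        errs.append('platform must be {OS}/{CPU} or "any"')
+    url = artifact.get("url", "")
+    if not url:
+        errs.append("url must not be empty")
+    parsed = urlparse(url)
+    if url and not (parsed.scheme and parsed.netloc):
+        errs.append("url must be a proper URL")
+    query_sum = parse_qs(parsed.query).get("checksum", [""])[0].partition(":")[2]
+    field_sum = artifact.get("checksum", "")
+    if not query_sum and not field_sum:
+        errs.append("a checksum must be provided")
+    if query_sum and field_sum and query_sum != field_sum:
+        errs.append("url checksum and checksum field must match")
+    return errs
+```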
+
+Downloading of an `Artifact` will happen the same way that URLs from `info` are currently downloaded.
+
+### Cosmovisor Updates
+
+If the `upgrade-info.json` file does not contain any `UpgradeInstructions`, existing functionality will be maintained.
+
+We will update Cosmovisor to look for and handle the new `UpgradeInstructions` in `upgrade-info.json`.
+If the `UpgradeInstructions` are provided, we will do the following:
+
+1. The `info` field will be ignored.
+1. The `artifacts` field will be used to identify the artifact to download based on the `platform` that Cosmovisor is running in.
+1. If a `checksum` is provided (either in the field or as a query param in the `url`), and the downloaded artifact has a different checksum, the upgrade process will be interrupted and Cosmovisor will exit with an error.
+1. If a `pre_run` command is defined, it will be executed at the same point in the process where the `app pre-upgrade` command would have been executed.
+ It will be executed using the same environment as other commands run by Cosmovisor.
+1. If a `post_run` command is defined, it will be executed after executing the command that restarts the chain.
+ It will be executed in a background process using the same environment as the other commands.
+ Any output generated by the command will be logged.
+ Once complete, the exit code will be logged.
+
+We will deprecate the use of the `info` field for anything other than human-readable information.
+A warning will be logged if the `info` field is used to define the assets (either by URL or JSON).
+
+The new upgrade timeline is very similar to the current one. Changes are in bold:
+
+1. An upgrade governance proposal is submitted and approved.
+1. The upgrade height is reached.
+1. The `x/upgrade` module writes the `upgrade_info.json` file **(now possibly with `UpgradeInstructions`)**.
+1. The chain halts.
+1. Cosmovisor backs up the data directory (if set up to do so).
+1. Cosmovisor downloads the new executable (if not already in place).
+1. Cosmovisor executes **the `pre_run` command if provided**, or else the `${DAEMON_NAME} pre-upgrade` command.
+1. Cosmovisor restarts the app using the new version and same args originally provided.
+1. **Cosmovisor immediately runs the `post_run` command in a detached process.**
+
+## Consequences
+
+### Backwards Compatibility
+
+Since the only change to existing definitions is the addition of the `instructions` field to the `Plan` message, and that field is optional, there are no backwards incompatibilities with respect to the proto messages.
+Additionally, current behavior will be maintained when no `UpgradeInstructions` are provided, so there are no backwards incompatibilities with respect to either the upgrade module or Cosmovisor.
+
+### Forwards Compatibility
+
+In order to utilize the `UpgradeInstructions` as part of a software upgrade, both of the following must be true:
+
+1. The chain must already be using a sufficiently advanced version of the Cosmos SDK.
+1. The chain's nodes must be using a sufficiently advanced version of Cosmovisor.
+
+### Positive
+
+1. The structure for defining artifacts is clearer since it is now defined in the proto instead of in documentation.
+1. Availability of a pre-run command becomes more obvious.
+1. A post-run command becomes possible.
+
+### Negative
+
+1. The `Plan` message becomes larger. This is negligible because A) the `x/upgrades` module only stores at most one upgrade plan, and B) upgrades are rare enough that the increased gas cost isn't a concern.
+1. There is no option for providing a URL that will return the `UpgradeInstructions`.
+1. The only way to provide multiple assets (executables and other files) for a platform is to use an archive as the platform's artifact.
+
+### Neutral
+
+1. Existing functionality of the `info` field is maintained when the `UpgradeInstructions` aren't provided.
+
+## Further Discussions
+
+1. [Draft PR #10032 Comment](https://github.com/cosmos/cosmos-sdk/pull/10032/files?authenticity_token=pLtzpnXJJB%2Fif2UWiTp9Td3MvRrBF04DvjSuEjf1azoWdLF%2BSNymVYw9Ic7VkqHgNLhNj6iq9bHQYnVLzMXd4g%3D%3D&file-filters%5B%5D=.go&file-filters%5B%5D=.proto#r698708349):
+ Consider different names for `UpgradeInstructions instructions` (either the message type or field name).
+1. [Draft PR #10032 Comment](https://github.com/cosmos/cosmos-sdk/pull/10032/files?authenticity_token=pLtzpnXJJB%2Fif2UWiTp9Td3MvRrBF04DvjSuEjf1azoWdLF%2BSNymVYw9Ic7VkqHgNLhNj6iq9bHQYnVLzMXd4g%3D%3D&file-filters%5B%5D=.go&file-filters%5B%5D=.proto#r754655072):
+ 1. Consider putting the `string platform` field inside `UpgradeInstructions` and make `UpgradeInstructions` a repeated field in `Plan`.
+ 1. Consider using a `oneof` field in the `Plan` which could either be `UpgradeInstructions` or else a URL that should return the `UpgradeInstructions`.
+ 1. Consider allowing `info` to either be a JSON serialized version of `UpgradeInstructions` or else a URL that returns that.
+1. [Draft PR #10032 Comment](https://github.com/cosmos/cosmos-sdk/pull/10032/files?authenticity_token=pLtzpnXJJB%2Fif2UWiTp9Td3MvRrBF04DvjSuEjf1azoWdLF%2BSNymVYw9Ic7VkqHgNLhNj6iq9bHQYnVLzMXd4g%3D%3D&file-filters%5B%5D=.go&file-filters%5B%5D=.proto#r755462876):
+ Consider not including the `UpgradeInstructions.description` field, using the `info` field for that purpose instead.
+1. [Draft PR #10032 Comment](https://github.com/cosmos/cosmos-sdk/pull/10032/files?authenticity_token=pLtzpnXJJB%2Fif2UWiTp9Td3MvRrBF04DvjSuEjf1azoWdLF%2BSNymVYw9Ic7VkqHgNLhNj6iq9bHQYnVLzMXd4g%3D%3D&file-filters%5B%5D=.go&file-filters%5B%5D=.proto#r754643691):
+ Consider allowing multiple artifacts to be downloaded for any given `platform` by adding a `name` field to the `Artifact` message.
+1. [PR #10502 Comment](https://github.com/cosmos/cosmos-sdk/pull/10602#discussion_r781438288)
+ Allow the new `UpgradeInstructions` to be provided via URL.
+1. [PR #10502 Comment](https://github.com/cosmos/cosmos-sdk/pull/10602#discussion_r781438288)
+ Allow definition of a `signer` for assets (as an alternative to using a `checksum`).
+
+## References
+
+* [Current upgrade.proto](https://github.com/cosmos/cosmos-sdk/blob/v0.44.5/proto/cosmos/upgrade/v1beta1/upgrade.proto)
+* [Upgrade Module README](https://github.com/cosmos/cosmos-sdk/blob/v0.44.5/x/upgrade/spec/README.md)
+* [Cosmovisor README](https://github.com/cosmos/cosmos-sdk/blob/cosmovisor/v1.0.0/cosmovisor/README.md)
+* [Pre-upgrade README](https://github.com/cosmos/cosmos-sdk/blob/v0.44.5/docs/migrations/pre-upgrade.md)
+* [Draft/POC PR #10032](https://github.com/cosmos/cosmos-sdk/pull/10032)
+* [RFC 1738: Uniform Resource Locators](https://www.ietf.org/rfc/rfc1738.txt)
diff --git a/copy-of-sdk-versioned_docs/version-0.47/build/architecture/adr-048-consensus-fees.md b/copy-of-sdk-versioned_docs/version-0.47/build/architecture/adr-048-consensus-fees.md
new file mode 100644
index 00000000..f1c6065c
--- /dev/null
+++ b/copy-of-sdk-versioned_docs/version-0.47/build/architecture/adr-048-consensus-fees.md
@@ -0,0 +1,204 @@
+# ADR 048: Multi-Tier Gas Price System
+
+## Changelog
+
+* Dec 1, 2021: Initial Draft
+
+## Status
+
+Rejected
+
+## Abstract
+
+This ADR describes a flexible mechanism to maintain consensus-level gas prices, in which one can choose a multi-tier gas price system or an EIP-1559-like one through configuration.
+
+## Context
+
+Currently, each validator configures its own `minimal-gas-prices` in `app.toml`. But setting a proper minimal gas price is critical to protect the network from DoS attacks, and it's hard for all the validators to pick a sensible value, so we propose to maintain a gas price at the consensus level.
+
+Since Tendermint 0.34.20 supports mempool prioritization, we can take advantage of that to implement a more sophisticated gas fee system.
+
+## Multi-Tier Price System
+
+We propose a multi-tier price system on consensus to provide maximum flexibility:
+
+* Tier 1: a constant gas price, which could only be modified occasionally through governance proposal.
+* Tier 2: a dynamic gas price which is adjusted according to previous block load.
+* Tier 3: a dynamic gas price which is adjusted according to previous block load at a higher speed.
+
+The gas price of a higher tier should be bigger than that of a lower tier.
+
+The transaction fees are charged with the exact gas price calculated on consensus.
+
+The parameter schema is like this:
+
+```protobuf
+message TierParams {
+  uint32 priority = 1; // priority in the tendermint mempool
+  Coin initial_gas_price = 2;
+  uint32 parent_gas_target = 3; // the target saturation of the block
+  uint32 change_denominator = 4; // decides the change speed
+  Coin min_gas_price = 5; // optional lower bound of the price adjustment
+  Coin max_gas_price = 6; // optional upper bound of the price adjustment
+}
+
+message Params {
+ repeated TierParams tiers = 1;
+}
+```
+
+### Extension Options
+
+We need to allow users to specify the tier of service for a transaction. To support this in an extensible way, we add an extension option in `AuthInfo`:
+
+```protobuf
+message ExtensionOptionsTieredTx {
+  uint32 fee_tier = 1;
+}
+```
+
+The value of `fee_tier` is just the index into the `tiers` parameter list.
+
+We also change the semantics of the existing `fee` field of `Tx`: instead of charging the user the exact `fee` amount, we treat it as a fee cap, while the actual amount of fee charged is decided dynamically. If the `fee` is smaller than the dynamic one, the transaction won't be included in the current block and ideally should stay in the mempool until the consensus gas price drops. The mempool can eventually prune old transactions.
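+
+In other words, `fee` becomes an upper bound. A toy illustration of the new
+semantics (hypothetical numbers, not SDK code):
+
+```python
+def admit_tx(fee_cap, consensus_gas_price, gas_limit):
+    """Include the tx only if its fee cap covers the dynamic price;
+    the amount actually charged follows the consensus price, not the cap."""
+    required = consensus_gas_price * gas_limit
+    if fee_cap < required:
+        return None       # tx waits in the mempool until the price drops
+    return required       # actual fee charged
+
+print(admit_tx(1000, 2, 400))  # 800: charged the dynamic fee, not the 1000 cap
+print(admit_tx(500, 2, 400))   # None: cap below the dynamic fee, not included
+```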
+
+### Tx Prioritization
+
+Transactions are prioritized based on the tier: the higher the tier, the higher the priority.
+
+Within the same tier, follow the default Tendermint order (currently FIFO). Be aware that the mempool tx ordering logic is not part of consensus and can be modified by a malicious validator.
+
+This mechanism can be easily composed with other prioritization mechanisms:
+
+* we can add extra tiers out of the user's control:
+  * Example 1: the user can set tier 0, 10 or 20, but the protocol will create tiers 0, 1, 2 ... 29. For example, IBC transactions will go to tier `user_tier + 5`: if the user selected tier 10, then the transaction will go to tier 15.
+  * Example 2: we can reserve tiers 4, 5, ... only for special transaction types. For example, tier 5 is reserved for evidence txs. So if a user submits a `bank.Send` transaction and sets tier 5, it will be delegated to tier 3 (the max tier level available for any transaction).
+  * Example 3: we can enforce that all transactions of a specific type go to a specific tier. For example, tier 100 could be reserved for evidence transactions, and all evidence transactions will always go to that tier.
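+
+These composition rules amount to a small mapping function. A hypothetical
+sketch (the tier numbers and transaction types are only examples echoing the
+list above, not a concrete design):
+
+```python
+MAX_USER_TIER = 3                 # highest tier an ordinary tx may use
+FORCED_TIERS = {"evidence": 100}  # tx types pinned to a reserved tier
+
+def effective_tier(requested, tx_type="bank.Send"):
+    """Compose protocol tier rules with the tier the user requested."""
+    if tx_type in FORCED_TIERS:
+        return FORCED_TIERS[tx_type]      # rule: forced tier per tx type
+    return min(requested, MAX_USER_TIER)  # rule: downgrade reserved tiers
+```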
+
+### `min-gas-prices`
+
+Deprecate the current per-validator `min-gas-prices` configuration, since it would be confusing for it to work together with the consensus gas price.
+
+### Adjust For Block Load
+
+For tier 2 and tier 3 transactions, the gas price is adjusted according to the previous block load; the logic could be similar to EIP-1559:
+
+```python
+def adjust_gas_price(gas_price, parent_gas_used, tier):
+ if parent_gas_used == tier.parent_gas_target:
+ return gas_price
+ elif parent_gas_used > tier.parent_gas_target:
+ gas_used_delta = parent_gas_used - tier.parent_gas_target
+ gas_price_delta = max(gas_price * gas_used_delta // tier.parent_gas_target // tier.change_speed, 1)
+ return gas_price + gas_price_delta
+    else:
+        gas_used_delta = tier.parent_gas_target - parent_gas_used
+        gas_price_delta = gas_price * gas_used_delta // tier.parent_gas_target // tier.change_speed
+        return gas_price - gas_price_delta
+```
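+
+Plugging numbers in shows the behavior (the function is restated here so the
+example runs standalone; the target and change speed values are arbitrary):
+
+```python
+class Tier:
+    parent_gas_target = 10_000_000
+    change_speed = 8
+
+def adjust_gas_price(gas_price, parent_gas_used, tier):
+    # Same logic as the pseudocode above.
+    if parent_gas_used == tier.parent_gas_target:
+        return gas_price
+    elif parent_gas_used > tier.parent_gas_target:
+        gas_used_delta = parent_gas_used - tier.parent_gas_target
+        gas_price_delta = max(gas_price * gas_used_delta // tier.parent_gas_target // tier.change_speed, 1)
+        return gas_price + gas_price_delta
+    else:
+        gas_used_delta = tier.parent_gas_target - parent_gas_used
+        gas_price_delta = gas_price * gas_used_delta // tier.parent_gas_target // tier.change_speed
+        return gas_price - gas_price_delta
+
+tier = Tier()
+print(adjust_gas_price(100, 15_000_000, tier))  # 106: fuller block, price rises
+print(adjust_gas_price(100, 5_000_000, tier))   # 94: emptier block, price falls
+```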
+
+### Block Segment Reservation
+
+Ideally we should reserve block segments for each tier, so that lower-tier transactions won't be completely squeezed out by higher-tier transactions, which would force users to use higher tiers and degrade the system to a single tier.
+
+We need help from Tendermint to implement this.
+
+## Implementation
+
+We can make each tier's gas price strategy fully configurable in protocol parameters, while providing a sensible default one.
+
+Pseudocode in python-like syntax:
+
+```python
+interface TieredTx:
+ def tier(self) -> int:
+ pass
+
+def tx_tier(tx):
+ if isinstance(tx, TieredTx):
+ return tx.tier()
+ else:
+ # default tier for custom transactions
+ return 0
+ # NOTE: we can add more rules here per "Tx Prioritization" section
+
+class TierParams:
+ 'gas price strategy parameters of one tier'
+ priority: int # priority in tendermint mempool
+ initial_gas_price: Coin
+ parent_gas_target: int
+ change_speed: Decimal # 0 means don't adjust for block load.
+
+class Params:
+ 'protocol parameters'
+ tiers: List[TierParams]
+
+class State:
+ 'consensus state'
+ # total gas used in last block, None when it's the first block
+ parent_gas_used: Optional[int]
+ # gas prices of last block for all tiers
+ gas_prices: List[Coin]
+
+def begin_block():
+ 'Adjust gas prices'
+ for i, tier in enumerate(Params.tiers):
+ if State.parent_gas_used is None:
+ # initialized gas price for the first block
+ State.gas_prices[i] = tier.initial_gas_price
+ else:
+ # adjust gas price according to gas used in previous block
+ State.gas_prices[i] = adjust_gas_price(State.gas_prices[i], State.parent_gas_used, tier)
+
+def mempoolFeeTxHandler_checkTx(ctx, tx):
+ # the minimal-gas-price configured by validator, zero in deliver_tx context
+ validator_price = ctx.MinGasPrice()
+ consensus_price = State.gas_prices[tx_tier(tx)]
+ min_price = max(validator_price, consensus_price)
+
+ # zero means infinity for gas price cap
+ if tx.gas_price() > 0 and tx.gas_price() < min_price:
+ return 'insufficient fees'
+ return next_CheckTx(ctx, tx)
+
+def txPriorityHandler_checkTx(ctx, tx):
+    res, err = next_CheckTx(ctx, tx)
+ # pass priority to tendermint
+ res.Priority = Params.tiers[tx_tier(tx)].priority
+ return res, err
+
+def end_block():
+ 'Update block gas used'
+ State.parent_gas_used = block_gas_meter.consumed()
+```
+
+### DoS attack protection
+
+To fully saturate the blocks and prevent other transactions from executing, an attacker would need to use transactions of the highest tier, and the cost would be significantly higher than the default tier.
+
+If an attacker spams with lower-tier transactions, users can mitigate this by sending higher-tier transactions.
+
+## Consequences
+
+### Backwards Compatibility
+
+* New protocol parameters.
+* New consensus states.
+* New/changed fields in transaction body.
+
+### Positive
+
+* The default tier keeps the same predictable gas price experience for clients.
+* The higher tiers' gas prices can adapt to block load.
+* No priority conflicts with custom priorities based on transaction types, since this proposal only occupies three priority levels.
+* Possibility to compose different priority rules with tiers.
+
+### Negative
+
+* Wallets & tools need to update to support the new `tier` parameter, and the semantics of the `fee` field are changed.
+
+### Neutral
+
+## References
+
+* https://eips.ethereum.org/EIPS/eip-1559
+* https://iohk.io/en/blog/posts/2021/11/26/network-traffic-and-tiered-pricing/
diff --git a/copy-of-sdk-versioned_docs/version-0.47/build/architecture/adr-049-state-sync-hooks.md b/copy-of-sdk-versioned_docs/version-0.47/build/architecture/adr-049-state-sync-hooks.md
new file mode 100644
index 00000000..c7353aa3
--- /dev/null
+++ b/copy-of-sdk-versioned_docs/version-0.47/build/architecture/adr-049-state-sync-hooks.md
@@ -0,0 +1,174 @@
+# ADR 049: State Sync Hooks
+
+## Changelog
+
+* Jan 19, 2022: Initial Draft
+* Apr 29, 2022: Safer extension snapshotter interface
+
+## Status
+
+Implemented
+
+## Abstract
+
+This ADR outlines a hooks-based mechanism for application modules to provide additional state (outside of the IAVL tree) to be used
+during state sync.
+
+## Context
+
+New clients use state-sync to download snapshots of module state from peers. Currently, the snapshot consists of a
+stream of `SnapshotStoreItem` and `SnapshotIAVLItem`, which means that application modules that define their state outside of the IAVL
+tree cannot include their state as part of the state-sync process.
+
+Note, even though the module state data is outside of the tree, for determinism we require that the hash of the external
+data be posted in the IAVL tree.
+
+## Decision
+
+A simple proposal based on our existing implementation is that we can add two new message types, `SnapshotExtensionMeta`
+and `SnapshotExtensionPayload`, which are appended to the existing multi-store stream, with `SnapshotExtensionMeta`
+acting as a delimiter between extensions. As the chunk hashes should be able to ensure data integrity, we don't need
+a delimiter to mark the end of the snapshot stream.
+
+Besides, we provide the `Snapshotter` and `ExtensionSnapshotter` interfaces for modules to implement snapshotters, which will handle both taking
+snapshots and restoration. Each module could have multiple snapshotters, and modules with additional state should
+implement `ExtensionSnapshotter` as extension snapshotters. When setting up the application, the snapshot `Manager` should call
+`RegisterExtensions([]ExtensionSnapshotter…)` to register all the extension snapshotters.
+
+```protobuf
+// SnapshotItem is an item contained in a rootmulti.Store snapshot.
+// On top of the existing SnapshotStoreItem and SnapshotIAVLItem, we add two new options for the item.
+message SnapshotItem {
+ // item is the specific type of snapshot item.
+ oneof item {
+ SnapshotStoreItem store = 1;
+ SnapshotIAVLItem iavl = 2 [(gogoproto.customname) = "IAVL"];
+ SnapshotExtensionMeta extension = 3;
+ SnapshotExtensionPayload extension_payload = 4;
+ }
+}
+
+// SnapshotExtensionMeta contains metadata about an external snapshotter.
+// One module may need multiple snapshotters, so each module may have multiple SnapshotExtensionMeta.
+message SnapshotExtensionMeta {
+  // the name of the ExtensionSnapshotter, which is registered with the snapshot manager when setting up the application.
+  // The name should be unique for each ExtensionSnapshotter, as we need to order their snapshots alphabetically to get a
+  // deterministic snapshot stream.
+ string name = 1;
+  // this is used by each ExtensionSnapshotter to decide the format of payloads included in the SnapshotExtensionPayload message.
+  // It is scoped to the snapshotter/namespace, not global across all modules.
+ uint32 format = 2;
+}
+
+// SnapshotExtensionPayload contains payloads of an external snapshotter.
+message SnapshotExtensionPayload {
+ bytes payload = 1;
+}
+```
+
+When we create a snapshot stream, the `multistore` snapshot is always placed at the beginning of the binary stream, and other extension snapshots are alphabetically ordered by the name of the corresponding `ExtensionSnapshotter`.
+
+The snapshot stream would look as follows:
+
+```go
+// multi-store snapshot
+{SnapshotStoreItem | SnapshotIAVLItem, ...}
+// extension1 snapshot
+SnapshotExtensionMeta
+{SnapshotExtensionPayload, ...}
+// extension2 snapshot
+SnapshotExtensionMeta
+{SnapshotExtensionPayload, ...}
+```
+
+We add an `extensions` field to the snapshot `Manager` for extension snapshotters. The `multistore` snapshotter is a special case: it doesn't need a name because it is always placed at the beginning of the binary stream.
+
+```go
+type Manager struct {
+ store *Store
+ multistore types.Snapshotter
+ extensions map[string]types.ExtensionSnapshotter
+ mtx sync.Mutex
+ operation operation
+ chRestore chan<- io.ReadCloser
+ chRestoreDone <-chan restoreDone
+ restoreChunkHashes [][]byte
+ restoreChunkIndex uint32
+}
+```
+
+For extension snapshotters that implement the `ExtensionSnapshotter` interface, their names should be registered with the snapshot `Manager` by
+calling `RegisterExtensions` when setting up the application. The snapshotters will handle both taking snapshots and restoration.
+
+```go
+// RegisterExtensions registers extension snapshotters with the manager
+func (m *Manager) RegisterExtensions(extensions ...types.ExtensionSnapshotter) error
+```
+
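As a rough sketch with simplified local stand-in types (the real `Manager` and `types.ExtensionSnapshotter` live in the SDK's snapshot packages), registration can reject duplicate names, since extension snapshots are ordered alphabetically by name to keep the stream deterministic:

```go
package main

import (
	"fmt"
	"sort"
)

// Simplified stand-in for types.ExtensionSnapshotter; illustrative only.
type ExtensionSnapshotter interface {
	SnapshotName() string
}

type Manager struct {
	extensions map[string]ExtensionSnapshotter
}

// RegisterExtensions registers extension snapshotters, rejecting duplicate
// names: snapshots are ordered alphabetically by name, so names must be
// unique for the stream to stay deterministic.
func (m *Manager) RegisterExtensions(extensions ...ExtensionSnapshotter) error {
	if m.extensions == nil {
		m.extensions = make(map[string]ExtensionSnapshotter, len(extensions))
	}
	for _, ext := range extensions {
		name := ext.SnapshotName()
		if _, ok := m.extensions[name]; ok {
			return fmt.Errorf("duplicate extension snapshotter name %q", name)
		}
		m.extensions[name] = ext
	}
	return nil
}

// sortedExtensionNames returns the deterministic stream order of extensions.
func (m *Manager) sortedExtensionNames() []string {
	names := make([]string, 0, len(m.extensions))
	for name := range m.extensions {
		names = append(names, name)
	}
	sort.Strings(names)
	return names
}

// namedSnapshotter is a trivial ExtensionSnapshotter for demonstration.
type namedSnapshotter string

func (n namedSnapshotter) SnapshotName() string { return string(n) }

func main() {
	m := &Manager{}
	if err := m.RegisterExtensions(namedSnapshotter("wasm"), namedSnapshotter("icahost")); err != nil {
		panic(err)
	}
	fmt.Println(m.sortedExtensionNames()) // [icahost wasm]
}
```
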
+On top of the existing `Snapshotter` interface for the `multistore`, we add an `ExtensionSnapshotter` interface for the extension snapshotters. Three more function signatures, `SnapshotFormat()`, `SupportedFormats()`, and `SnapshotName()`, are added to `ExtensionSnapshotter`.
+
+```go
+// ExtensionPayloadReader reads extension payloads;
+// it returns io.EOF when it reaches either the end of the stream or the extension boundary.
+type ExtensionPayloadReader = func() ([]byte, error)
+
+// ExtensionPayloadWriter is a helper to write extension payloads to underlying stream.
+type ExtensionPayloadWriter = func([]byte) error
+
+// ExtensionSnapshotter is an extension Snapshotter that is appended to the snapshot stream.
+// ExtensionSnapshotter has a unique name and manages its own internal formats.
+type ExtensionSnapshotter interface {
+	// SnapshotName returns the name of the snapshotter; it should be unique within the manager.
+ SnapshotName() string
+
+ // SnapshotFormat returns the default format used to take a snapshot.
+ SnapshotFormat() uint32
+
+ // SupportedFormats returns a list of formats it can restore from.
+ SupportedFormats() []uint32
+
+ // SnapshotExtension writes extension payloads into the underlying protobuf stream.
+ SnapshotExtension(height uint64, payloadWriter ExtensionPayloadWriter) error
+
+	// RestoreExtension restores an extension state snapshot;
+	// the payload reader returns `io.EOF` when it reaches the extension boundary.
+	RestoreExtension(height uint64, format uint32, payloadReader ExtensionPayloadReader) error
+}
+```
+
+## Consequences
+
+As a result of this implementation, we are able to create snapshots of a binary chunk stream for state that we maintain outside of the IAVL tree, such as CosmWasm blobs. New clients are able to fetch snapshots of state for all modules that have implemented the corresponding interface from peer nodes.
+
+
+### Backwards Compatibility
+
+This ADR introduces new proto message types, adds an `extensions` field to the snapshot `Manager`, and adds a new `ExtensionSnapshotter` interface, so it is not backwards compatible when extensions are present.
+
+But for applications that do not have state data outside of the IAVL tree for any module, the snapshot stream is backwards-compatible.
+
+### Positive
+
+* State maintained outside of the IAVL tree, such as CosmWasm blobs, can be snapshotted by implementing extension snapshotters, and fetched by new clients via state-sync.
+
+### Negative
+
+### Neutral
+
+* All modules that maintain state outside of the IAVL tree need to implement `ExtensionSnapshotter`, and the snapshot `Manager` needs to call `RegisterExtensions` when setting up the application.
+
+## Further Discussions
+
+While an ADR is in the DRAFT or PROPOSED stage, this section should contain a summary of issues to be solved in future iterations (usually referencing comments from a pull-request discussion).
+Later, this section can optionally list ideas or improvements the author or reviewers found during the analysis of this ADR.
+
+## Test Cases [optional]
+
+Test cases for an implementation are mandatory for ADRs that are affecting consensus changes. Other ADRs can choose to include links to test cases if applicable.
+
+## References
+
+* https://github.com/cosmos/cosmos-sdk/pull/10961
+* https://github.com/cosmos/cosmos-sdk/issues/7340
+* https://hackmd.io/gJoyev6DSmqqkO667WQlGw
diff --git a/copy-of-sdk-versioned_docs/version-0.47/build/architecture/adr-050-sign-mode-textual-annex1.md b/copy-of-sdk-versioned_docs/version-0.47/build/architecture/adr-050-sign-mode-textual-annex1.md
new file mode 100644
index 00000000..13deec92
--- /dev/null
+++ b/copy-of-sdk-versioned_docs/version-0.47/build/architecture/adr-050-sign-mode-textual-annex1.md
@@ -0,0 +1,358 @@
+# ADR 050: SIGN_MODE_TEXTUAL: Annex 1 Value Renderers
+
+## Changelog
+
+* Dec 06, 2021: Initial Draft
+* Feb 07, 2022: Draft read and concept-ACKed by the Ledger team.
+* Dec 01, 2022: Remove `Object: ` prefix on Any header screen.
+* Dec 13, 2022: Sign over bytes hash when bytes length > 32.
+* Mar 27, 2023: Update `Any` value renderer to omit message header screen.
+
+## Status
+
+Accepted. Implementation started. Some small value-renderer details still need to be polished.
+
+## Abstract
+
+This Annex describes value renderers, which are used for displaying Protobuf values in a human-friendly way using a string array.
+
+## Value Renderers
+
+Value Renderers describe how values of different Protobuf types should be encoded as a string array. Value renderers can be formalized as a set of bijective functions `func renderT(value T) []string`, where `T` is one of the below Protobuf types for which this spec is defined.
+
+### Protobuf `number`
+
+* Applies to:
+ * protobuf numeric integer types (`int{32,64}`, `uint{32,64}`, `sint{32,64}`, `fixed{32,64}`, `sfixed{32,64}`)
+ * strings whose `customtype` is `github.com/cosmos/cosmos-sdk/types.Int` or `github.com/cosmos/cosmos-sdk/types.Dec`
+ * bytes whose `customtype` is `github.com/cosmos/cosmos-sdk/types.Int` or `github.com/cosmos/cosmos-sdk/types.Dec`
+* Trailing decimal zeroes are always removed
+* Formatting with `'`s for every three integral digits.
+* Usage of `.` to denote the decimal delimiter.
+
+#### Examples
+
+* `1000` (uint64) -> `1'000`
+* `"1000000.00"` (string representing a Dec) -> `1'000'000`
+* `"1000000.10"` (string representing a Dec) -> `1'000'000.1`
+
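These rules can be sketched as follows (the function name is illustrative, not the SDK's actual API):

```go
package main

import (
	"fmt"
	"strings"
)

// renderNumber formats a decimal string per the rules above: trailing decimal
// zeroes removed, integral digits grouped in threes with `'`, and `.` kept as
// the decimal delimiter.
func renderNumber(s string) string {
	intPart, fracPart := s, ""
	if i := strings.IndexByte(s, '.'); i >= 0 {
		intPart, fracPart = s[:i], s[i+1:]
	}
	// Trailing decimal zeroes are always removed.
	fracPart = strings.TrimRight(fracPart, "0")
	// Group integral digits in threes, right to left.
	var groups []string
	for len(intPart) > 3 {
		groups = append([]string{intPart[len(intPart)-3:]}, groups...)
		intPart = intPart[:len(intPart)-3]
	}
	groups = append([]string{intPart}, groups...)
	out := strings.Join(groups, "'")
	if fracPart != "" {
		out += "." + fracPart
	}
	return out
}

func main() {
	fmt.Println(renderNumber("1000"))       // 1'000
	fmt.Println(renderNumber("1000000.00")) // 1'000'000
	fmt.Println(renderNumber("1000000.10")) // 1'000'000.1
}
```
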
+### `coin`
+
+* Applies to `cosmos.base.v1beta1.Coin`.
+* Denoms are converted to `display` denoms using `Metadata` (if available). **This requires a state query**. The definition of `Metadata` can be found in the [bank protobuf definition](https://buf.build/cosmos/cosmos-sdk/docs/main:cosmos.bank.v1beta1#cosmos.bank.v1beta1.Metadata). If the `display` field is empty or nil, then we do not perform any denom conversion.
+* Amounts are converted to `display` denom amounts and rendered as `number`s above
+  * We do not change the capitalization of the denom. In practice, `display` denoms are stored in lowercase in state (e.g. `10 atom`), however they are often shown in UPPERCASE in everyday life (e.g. `10 ATOM`). Value renderers keep the case used in state, but we may recommend that chains change the denom metadata to uppercase for better user display.
+* One space between the denom and amount (e.g. `10 atom`).
+* In the future, IBC denoms could maybe be converted to DID/IIDs, if we can find a robust way for doing this (ex. `cosmos:cosmos:hub:bank:denom:atom`)
+
+#### Examples
+
+* `1000000000uatom` -> `["1'000 atom"]`, because atom is the metadata's display denom.
+
+### `coins`
+
+* an array of `coin` is displayed as the concatenation of each `coin` encoded as per the specification above, then joined together with the delimiter `", "` (a comma and a space, no quotes around).
+* the list of coins is ordered by unicode code point of the display denom: `A-Z` < `a-z`. For example, the string `aAbBcC` would be sorted `ABCabc`.
+  * if the coins list has 0 items in it, it is rendered as `zero`
+
+### Example
+
+* `["3cosm", "2000000uatom"]` -> `3 COSM, 2 atom` (assuming the display denoms are `atom` and `COSM`)
+* `["10atom", "20Acoin"]` -> `20 Acoin, 10 atom` (assuming the display denoms are `atom` and `Acoin`)
+* `[]` -> `zero`
+
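A sketch of the ordering and joining rules, assuming display-denom conversion has already happened (in the real SDK that step requires a state query):

```go
package main

import (
	"fmt"
	"sort"
	"strings"
)

// Coin holds an already-converted display amount and denom.
type Coin struct {
	Amount string
	Denom  string
}

// renderCoins sorts by Unicode code point of the display denom (for ASCII,
// Go's byte-wise string comparison matches code-point order, so A-Z < a-z)
// and joins with ", "; an empty list renders as "zero".
func renderCoins(coins []Coin) string {
	if len(coins) == 0 {
		return "zero"
	}
	sort.Slice(coins, func(i, j int) bool { return coins[i].Denom < coins[j].Denom })
	parts := make([]string, len(coins))
	for i, c := range coins {
		parts[i] = c.Amount + " " + c.Denom
	}
	return strings.Join(parts, ", ")
}

func main() {
	fmt.Println(renderCoins([]Coin{{"10", "atom"}, {"20", "Acoin"}})) // 20 Acoin, 10 atom
	fmt.Println(renderCoins(nil))                                    // zero
}
```
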
+### `repeated`
+
+* Applies to all `repeated` fields, except `cosmos.tx.v1beta1.TxBody#Messages`, which has a particular encoding (see [ADR-050](adr-050-sign-mode-textual.md)).
+* A repeated type has the following template:
+
+```
+<field_name>: <int> <field_kind>
+<field_name> (<index>/<int>): <value rendered 1st line>
+<optional value rendered next lines>
+<field_name> (<index>/<int>): <value rendered 1st line>
+<optional value rendered next lines>
+End of <field_name>.
+```
+
+where:
+
+* `field_name` is the Protobuf field name of the repeated field
+* `field_kind`:
+ * if the type of the repeated field is a message, `field_kind` is the message name
+ * if the type of the repeated field is an enum, `field_kind` is the enum name
+ * in any other case, `field_kind` is the protobuf primitive type (e.g. "string" or "bytes")
+* `int` is the length of the array
+* `index` is the 1-based index of the element in the repeated field
+
+#### Examples
+
+Given the proto definition:
+
+```protobuf
+message AllowedMsgAllowance {
+ repeated string allowed_messages = 1;
+}
+```
+
+and initializing with:
+
+```go
+x := AllowedMsgAllowance{AllowedMessages: []string{"cosmos.bank.v1beta1.MsgSend", "cosmos.gov.v1.MsgVote"}}
+```
+
+we have the following value-rendered encoding:
+
+```
+Allowed messages: 2 strings
+Allowed messages (1/2): cosmos.bank.v1beta1.MsgSend
+Allowed messages (2/2): cosmos.gov.v1.MsgVote
+End of Allowed messages
+```
+
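A sketch of the repeated-field template, with already-rendered element values passed in (names are illustrative):

```go
package main

import "fmt"

// renderRepeated expands the repeated-field template: a header line with the
// element count and kind, one line per element with a 1-based index, and a
// closing line.
func renderRepeated(fieldName, fieldKind string, values []string) []string {
	n := len(values)
	out := []string{fmt.Sprintf("%s: %d %ss", fieldName, n, fieldKind)}
	for i, v := range values {
		out = append(out, fmt.Sprintf("%s (%d/%d): %s", fieldName, i+1, n, v))
	}
	return append(out, "End of "+fieldName)
}

func main() {
	lines := renderRepeated("Allowed messages", "string",
		[]string{"cosmos.bank.v1beta1.MsgSend", "cosmos.gov.v1.MsgVote"})
	for _, l := range lines {
		fmt.Println(l)
	}
}
```
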
+### `message`
+
+* Applies to all Protobuf messages that do not have a custom encoding.
+* Field names follow [sentence case](https://en.wiktionary.org/wiki/sentence_case)
+ * replace each `_` with a space
+ * capitalize first letter of the sentence
+* Field names are ordered by their Protobuf field number
+* Screen title is the field name, and screen content is the value.
+* Nesting:
+ * if a field contains a nested message, we value-render the underlying message using the template:
+
+ ```
+    <field_name>: <1st line of value-rendered message>
+    > <lines 2-n of value-rendered message> // Notice the `>` prefix.
+ ```
+
+ * `>` character is used to denote nesting. For each additional level of nesting, add `>`.
+
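The sentence-case rule for field names can be sketched as:

```go
package main

import (
	"fmt"
	"strings"
)

// fieldNameToSentenceCase replaces underscores with spaces and capitalizes
// the first letter, per the `message` rendering rules above.
func fieldNameToSentenceCase(name string) string {
	s := strings.ReplaceAll(name, "_", " ")
	if s == "" {
		return s
	}
	return strings.ToUpper(s[:1]) + s[1:]
}

func main() {
	fmt.Println(fieldNameToSentenceCase("proposal_id")) // Proposal id
}
```
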
+#### Examples
+
+Given the following Protobuf messages:
+
+```protobuf
+enum VoteOption {
+ VOTE_OPTION_UNSPECIFIED = 0;
+ VOTE_OPTION_YES = 1;
+ VOTE_OPTION_ABSTAIN = 2;
+ VOTE_OPTION_NO = 3;
+ VOTE_OPTION_NO_WITH_VETO = 4;
+}
+
+message WeightedVoteOption {
+ VoteOption option = 1;
+ string weight = 2 [(cosmos_proto.scalar) = "cosmos.Dec"];
+}
+
+message Vote {
+ uint64 proposal_id = 1;
+ string voter = 2 [(cosmos_proto.scalar) = "cosmos.AddressString"];
+ reserved 3;
+ repeated WeightedVoteOption options = 4;
+}
+```
+
+we get the following encoding for the `Vote` message:
+
+```
+Vote object
+> Proposal id: 4
+> Voter: cosmos1abc...def
+> Options: 2 WeightedVoteOptions
+> Options (1/2): WeightedVoteOption object
+>> Option: VOTE_OPTION_YES
+>> Weight: 0.7
+> Options (2/2): WeightedVoteOption object
+>> Option: VOTE_OPTION_NO
+>> Weight: 0.3
+> End of Options
+```
+
+### Enums
+
+* Show the enum variant name as string.
+
+#### Examples
+
+See example above with `message Vote{}`.
+
+### `google.protobuf.Any`
+
+* Applies to `google.protobuf.Any`
+* Rendered as:
+
+```
+<type_url>
+> <value-rendered underlying message>
+```
+
+There is however one exception: when the underlying message is a Protobuf message that does not have a custom encoding, then the message header screen is omitted, and one level of indentation is removed.
+
+Messages that have a custom encoding, including `google.protobuf.Timestamp`, `google.protobuf.Duration`, `google.protobuf.Any`, `cosmos.base.v1beta1.Coin`, and messages that have an app-defined custom encoding, will preserve their header and indentation level.
+
+#### Examples
+
+Message header screen is stripped, one-level of indentation removed:
+```
+/cosmos.gov.v1.Vote
+> Proposal id: 4
+> Voter: cosmos1abc...def
+> Options: 2 WeightedVoteOptions
+> Options (1/2): WeightedVoteOption object
+>> Option: Yes
+>> Weight: 0.7
+> Options (2/2): WeightedVoteOption object
+>> Option: No
+>> Weight: 0.3
+> End of Options
+```
+
+Message with custom encoding:
+```
+/cosmos.base.v1beta1.Coin
+> 10uatom
+```
+
+### `google.protobuf.Timestamp`
+
+Rendered using [RFC 3339](https://www.rfc-editor.org/rfc/rfc3339) (a
+simplification of ISO 8601), which is the current recommendation for portable
+time values. The rendering always uses "Z" (UTC) as the timezone. It uses only
+the necessary fractional digits of a second, omitting the fractional part
+entirely if the timestamp has no fractional seconds. (The resulting timestamps
+are not automatically sortable by standard lexicographic order, but we favor
+the legibility of the shorter string.)
+
+#### Examples
+
+The timestamp with 1136214245 seconds and 700000000 nanoseconds is rendered
+as `2006-01-02T15:04:05.7Z`.
+The timestamp with 1136214245 seconds and zero nanoseconds is rendered
+as `2006-01-02T15:04:05Z`.
+
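Under the assumption that the input is a protobuf `Timestamp`-style seconds/nanos pair, the rendering can be sketched as:

```go
package main

import (
	"fmt"
	"strings"
	"time"
)

// renderTimestamp renders seconds/nanos as RFC 3339 in UTC ("Z"), with only
// the necessary fractional-second digits, omitted entirely when zero.
func renderTimestamp(seconds int64, nanos int32) string {
	s := time.Unix(seconds, 0).UTC().Format("2006-01-02T15:04:05")
	if nanos != 0 {
		s += "." + strings.TrimRight(fmt.Sprintf("%09d", nanos), "0")
	}
	return s + "Z"
}

func main() {
	fmt.Println(renderTimestamp(1136214245, 700000000)) // 2006-01-02T15:04:05.7Z
	fmt.Println(renderTimestamp(1136214245, 0))         // 2006-01-02T15:04:05Z
}
```
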
+### `google.protobuf.Duration`
+
+The duration proto expresses a raw number of seconds and nanoseconds.
+This will be rendered as longer time units of days, hours, and minutes,
+plus any remaining seconds, in that order.
+Leading and trailing zero-quantity units will be omitted, but all
+units in between nonzero units will be shown, e.g. `3 days, 0 hours, 0 minutes, 5 seconds`.
+
+Even longer time units such as months or years are imprecise.
+Weeks are precise, but not commonly used - `91 days` is more immediately
+legible than `13 weeks`. Although `days` can be problematic,
+e.g. noon to noon on subsequent days can be 23 or 25 hours depending on
+daylight savings transitions, there is significant advantage in using
+strict 24-hour days over using only hours (e.g. `91 days` vs `2184 hours`).
+
+When nanoseconds are nonzero, they will be shown as fractional seconds,
+with only the minimum number of digits, e.g. `0.5 seconds`.
+
+A duration of exactly zero is shown as `0 seconds`.
+
+Units will be given as singular (no trailing `s`) when the quantity is exactly one,
+and will be shown in plural otherwise.
+
+Negative durations will be indicated with a leading minus sign (`-`).
+
+Examples:
+
+* `1 day`
+* `30 days`
+* `-1 day, 12 hours`
+* `3 hours, 0 minutes, 53.025 seconds`
+
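A sketch of these rules, assuming the protobuf `Duration` sign convention (nanos carries the same sign as seconds):

```go
package main

import (
	"fmt"
	"strings"
)

// renderDuration renders seconds/nanos as days, hours, minutes, and seconds,
// omitting leading and trailing zero-quantity units but keeping interior
// zeroes, with singular/plural handling and minimal fractional digits.
func renderDuration(seconds int64, nanos int32) string {
	neg := seconds < 0 || nanos < 0
	if neg {
		seconds, nanos = -seconds, -nanos
	}
	days := seconds / 86400
	hours := (seconds % 86400) / 3600
	minutes := (seconds % 3600) / 60
	secs := seconds % 60

	unit := func(q int64, name string) string {
		if q == 1 {
			return "1 " + name
		}
		return fmt.Sprintf("%d %ss", q, name)
	}
	secPart := unit(secs, "second")
	if nanos != 0 {
		// Fractional seconds with the minimum number of digits; always plural.
		frac := strings.TrimRight(fmt.Sprintf("%09d", nanos), "0")
		secPart = fmt.Sprintf("%d.%s seconds", secs, frac)
	}
	parts := []string{unit(days, "day"), unit(hours, "hour"), unit(minutes, "minute"), secPart}
	// Drop leading zero-quantity units...
	for len(parts) > 1 && strings.HasPrefix(parts[0], "0 ") {
		parts = parts[1:]
	}
	// ...and trailing ones, keeping at least "0 seconds" for a zero duration.
	for len(parts) > 1 && strings.HasPrefix(parts[len(parts)-1], "0 ") {
		parts = parts[:len(parts)-1]
	}
	out := strings.Join(parts, ", ")
	if neg {
		out = "-" + out
	}
	return out
}

func main() {
	fmt.Println(renderDuration(86400, 0))            // 1 day
	fmt.Println(renderDuration(-(86400 + 43200), 0)) // -1 day, 12 hours
	fmt.Println(renderDuration(3*3600+53, 25000000)) // 3 hours, 0 minutes, 53.025 seconds
	fmt.Println(renderDuration(0, 0))                // 0 seconds
}
```
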
+### bytes
+
+* Bytes of length shorter or equal to 35 are rendered in hexadecimal, all capital letters, without the `0x` prefix.
+* Bytes of length greater than 35 are hashed using SHA256. The rendered text is `SHA-256=`, followed by the 32-byte hash, in hexadecimal, all capital letters, without the `0x` prefix.
+* The hexadecimal string is finally separated into groups of 4 digits, with a space `' '` as separator. If the bytes length is odd, the 2 remaining hexadecimal characters are at the end.
+
+The number 35 was chosen because it is the longest length where the hashed-and-prefixed representation is longer than the original data directly formatted, using the 3 rules above. More specifically:
+
+* a 35-byte array will have 70 hex characters, plus 17 space characters, resulting in 87 characters.
+* byte arrays of length 36 and above will be hashed to 32 bytes, which is 64 hex characters plus 15 spaces, and with the `SHA-256=` prefix, it takes 87 characters.
+
+Also, secp256k1 public keys have length 33, so their Textual representation is not their hashed value, which we would like to avoid.
+
+Note: Data longer than 35 bytes is not rendered in a way that can be inverted. See ADR-050's [section about invertibility](adr-050-sign-mode-textual.md#invertible-rendering) for a discussion.
+
+#### Examples
+
+Inputs are displayed as byte arrays.
+
+* `[0]`: `00`
+* `[0,1,2]`: `0001 02`
+* `[0,1,2,..,34]`: `0001 0203 0405 0607 0809 0A0B 0C0D 0E0F 1011 1213 1415 1617 1819 1A1B 1C1D 1E1F 2021 22`
+* `[0,1,2,..,35]`: `SHA-256=5D7E 2D9B 1DCB C85E 7C89 0036 A2CF 2F9F E7B6 6554 F2DF 08CE C6AA 9C0A 25C9 9C21`
+
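A sketch of the three rules (hash threshold at 35 bytes, uppercase hex, groups of 4 digits):

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"strings"
)

// renderBytes hex-encodes short byte strings, hashes longer ones with SHA-256
// behind a "SHA-256=" prefix, and groups the hex digits in fours, with any
// odd remainder at the end.
func renderBytes(data []byte) string {
	prefix := ""
	if len(data) > 35 {
		sum := sha256.Sum256(data)
		data = sum[:]
		prefix = "SHA-256="
	}
	h := strings.ToUpper(hex.EncodeToString(data))
	var groups []string
	for len(h) > 4 {
		groups = append(groups, h[:4])
		h = h[4:]
	}
	groups = append(groups, h)
	return prefix + strings.Join(groups, " ")
}

func main() {
	fmt.Println(renderBytes([]byte{0}))       // 00
	fmt.Println(renderBytes([]byte{0, 1, 2})) // 0001 02
}
```
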
+### address bytes
+
+We currently use `string` types in protobuf for addresses, so this may not be needed; but if any address bytes are used in SIGN_MODE_TEXTUAL, they should be rendered with bech32 formatting.
+
+### strings
+
+Strings are rendered as-is.
+
+### Default Values
+
+* Default Protobuf values for each field are skipped.
+
+#### Example
+
+```protobuf
+message TestData {
+ string signer = 1;
+ string metadata = 2;
+}
+```
+
+```go
+myTestData := TestData{
+ Signer: "cosmos1abc"
+}
+```
+
+We get the following encoding for the `TestData` message:
+
+```
+TestData object
+> Signer: cosmos1abc
+```
+
+### bool
+
+Boolean values are rendered as `True` or `False`.
+
+### [ABANDONED] Custom `msg_title` instead of Msg `type_url`
+
+_This paragraph is in the Annex for informational purposes only, and will be removed in a next update of the ADR._
+
+
+
+* all protobuf messages to be used with `SIGN_MODE_TEXTUAL` CAN have a short title associated with them that can be used in format strings whenever the type URL is explicitly referenced via the `cosmos.msg.v1.textual.msg_title` Protobuf message option.
+* if this option is not specified for a Msg, then the Protobuf fully qualified name will be used.
+
+```protobuf
+message MsgSend {
+ option (cosmos.msg.v1.textual.msg_title) = "bank send coins";
+}
+```
+
+* they MUST be unique per message, per chain
+
+#### Examples
+
+* `cosmos.gov.v1.MsgVote` -> `governance v1 vote`
+
+#### Best Practices
+
+We recommend using this option only for `Msg`s whose Protobuf fully qualified name can be hard to understand. As such, the two examples above (`MsgSend` and `MsgVote`) are not good candidates for `msg_title`. We still allow `msg_title` for chains whose `Msg`s have complex or non-obvious names.
+
+In those cases, we recommend dropping the version (e.g. `v1`) in the string if there's only one version of the module on chain. This way, the bijective mapping can figure out which message each string corresponds to. If multiple Protobuf versions of the same module exist on the same chain, we recommend keeping the first `msg_title` without the version, and including the version (e.g. `v2`) in the second `msg_title`:
+
+* `mychain.mymodule.v1.MsgDo` -> `mymodule do something`
+* `mychain.mymodule.v2.MsgDo` -> `mymodule v2 do something`
+
+
diff --git a/copy-of-sdk-versioned_docs/version-0.47/build/architecture/adr-050-sign-mode-textual-annex2.md b/copy-of-sdk-versioned_docs/version-0.47/build/architecture/adr-050-sign-mode-textual-annex2.md
new file mode 100644
index 00000000..9bd0f3f4
--- /dev/null
+++ b/copy-of-sdk-versioned_docs/version-0.47/build/architecture/adr-050-sign-mode-textual-annex2.md
@@ -0,0 +1,122 @@
+# ADR 050: SIGN_MODE_TEXTUAL: Annex 2 XXX
+
+## Changelog
+
+* Oct 3, 2022: Initial Draft
+
+## Status
+
+DRAFT
+
+## Abstract
+
+This annex provides normative guidance on how devices should render a
+`SIGN_MODE_TEXTUAL` document.
+
+## Context
+
+`SIGN_MODE_TEXTUAL` allows a legible version of a transaction to be signed
+on a hardware security device, such as a Ledger. Early versions of the
+design rendered transactions directly to lines of ASCII text, but this
+proved awkward due to its in-band signaling and the need to display
+Unicode text within the transaction.
+
+## Decision
+
+`SIGN_MODE_TEXTUAL` renders to an abstract representation, leaving it
+up to device-specific software how to present this representation given the
+capabilities, limitations, and conventions of the device.
+
+We offer the following normative guidance:
+
+1. The presentation should be as legible as possible to the user, given
+the capabilities of the device. If legibility could be sacrificed for other
+properties, we would recommend just using some other signing mode.
+Legibility should focus on the common case - it is okay for unusual cases
+to be less legible.
+
+2. The presentation should be invertible if possible without substantial
+sacrifice of legibility. Any change to the rendered data should result
+in a visible change to the presentation. This extends the integrity of the
+signing to user-visible presentation.
+
+3. The presentation should follow normal conventions of the device,
+without sacrificing legibility or invertibility.
+
+As an illustration of these principles, here is an example algorithm
+for presentation on a device which can display a single 80-character
+line of printable ASCII characters:
+
+* The presentation is broken into lines, and each line is presented in
+sequence, with user controls for going forward or backward a line.
+
+* Expert mode screens are only presented if the device is in expert mode.
+
+* Each line of the screen starts with a number of `>` characters equal
+to the screen's indentation level, followed by a `+` character if this
+isn't the first line of the screen, followed by a space if either a
+`>` or a `+` has been emitted,
+or if this header is followed by a `>`, `+`, or space.
+
+* If the line ends with whitespace or an `@` character, an additional `@`
+character is appended to the line.
+
+* The following ASCII control characters or backslash (`\`) are converted
+to a backslash followed by a letter code, in the manner of string literals
+in many languages:
+
+ * a: U+0007 alert or bell
+ * b: U+0008 backspace
+ * f: U+000C form feed
+ * n: U+000A line feed
+ * r: U+000D carriage return
+ * t: U+0009 horizontal tab
+ * v: U+000B vertical tab
+ * `\`: U+005C backslash
+
+* All other ASCII control characters, plus non-ASCII Unicode code points,
+are shown as either:
+
+  * `\u` followed by 4 uppercase hex characters for code points
+ in the basic multilingual plane (BMP).
+
+ * `\U` followed by 8 uppercase hex characters for other code points.
+
+* The screen will be broken into multiple lines to fit the 80-character
+limit, considering the above transformations in a way that attempts to
+minimize the number of lines generated. Expanded control or Unicode characters
+are never split across lines.
+
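A sketch of the escaping rules above (the `@` terminator and the backslash/Unicode escapes); line-wrapping and indentation handling are omitted:

```go
package main

import "fmt"

// escapeText applies the backslash escapes for the listed control characters
// and the \u/\U forms for other control or non-ASCII code points.
func escapeText(s string) string {
	escapes := map[rune]string{
		'\a': `\a`, '\b': `\b`, '\f': `\f`, '\n': `\n`,
		'\r': `\r`, '\t': `\t`, '\v': `\v`, '\\': `\\`,
	}
	out := ""
	for _, r := range s {
		switch {
		case escapes[r] != "":
			out += escapes[r]
		case r < 0x20 || r > 0x7E: // other controls (incl. DEL) and non-ASCII
			if r <= 0xFFFF {
				out += fmt.Sprintf(`\u%04X`, r)
			} else {
				out += fmt.Sprintf(`\U%08X`, r)
			}
		default:
			out += string(r)
		}
	}
	return out
}

// terminateLine appends `@` when the rendered line ends in a space or `@`.
func terminateLine(s string) string {
	if n := len(s); n > 0 && (s[n-1] == ' ' || s[n-1] == '@') {
		return s + "@"
	}
	return s
}

func main() {
	fmt.Println(escapeText("please coöperate"))       // please co\u00F6perate
	fmt.Println(terminateLine("ends in whitespace ")) // ends in whitespace @
}
```
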
+Example output:
+
+```
+An introductory line.
+key1: 123456
+key2: a string that ends in whitespace @
+key3: a string that ends in a single ampersand - @@
+ >tricky key4<: note the leading space in the presentation
+introducing an aggregate
+> key5: false
+> key6: a very long line of text, please co\u00F6perate and break into
+>+ multiple lines.
+> Can we do further nesting?
+>> You bet we can!
+```
+
+The inverse mapping gives us the only input which could have
+generated this output (JSON notation for string data):
+
+```
+Indent Text
+------ ----
+0 "An introductory line."
+0 "key1: 123456"
+0 "key2: a string that ends in whitespace "
+0 "key3: a string that ends in a single ampersand - @"
+0 ">tricky key4<: note the leading space in the presentation"
+0 "introducing an aggregate"
+1 "key5: false"
+1 "key6: a very long line of text, please coöperate and break into multiple lines."
+1 "Can we do further nesting?"
+2 "You bet we can!"
+```
diff --git a/copy-of-sdk-versioned_docs/version-0.47/build/architecture/adr-050-sign-mode-textual.md b/copy-of-sdk-versioned_docs/version-0.47/build/architecture/adr-050-sign-mode-textual.md
new file mode 100644
index 00000000..efa4ace4
--- /dev/null
+++ b/copy-of-sdk-versioned_docs/version-0.47/build/architecture/adr-050-sign-mode-textual.md
@@ -0,0 +1,369 @@
+# ADR 050: SIGN_MODE_TEXTUAL
+
+## Changelog
+
+* Dec 06, 2021: Initial Draft.
+* Feb 07, 2022: Draft read and concept-ACKed by the Ledger team.
+* May 16, 2022: Change status to Accepted.
+* Aug 11, 2022: Require signing over tx raw bytes.
+* Sep 07, 2022: Add custom `Msg`-renderers.
+* Sep 18, 2022: Structured format instead of lines of text
+* Nov 23, 2022: Specify CBOR encoding.
+* Dec 01, 2022: Link to examples in separate JSON file.
+* Dec 06, 2022: Re-ordering of envelope screens.
+* Dec 14, 2022: Mention exceptions for invertability.
+* Jan 23, 2023: Switch Screen.Text to Title+Content.
+* Mar 07, 2023: Change SignDoc from array to struct containing array.
+* Mar 20, 2023: Introduce a spec version initialized to 0.
+
+## Status
+
+Accepted. Implementation started. Some small value-renderer details still need to be polished.
+
+Spec version: 0.
+
+## Abstract
+
+This ADR specifies SIGN_MODE_TEXTUAL, a new string-based sign mode that is targeted at signing with hardware devices.
+
+## Context
+
+Protobuf-based SIGN_MODE_DIRECT was introduced in [ADR-020](adr-020-protobuf-transaction-encoding.md) and is intended to replace SIGN_MODE_LEGACY_AMINO_JSON in most situations, such as mobile wallets and CLI keyrings. However, the [Ledger](https://www.ledger.com/) hardware wallet is still using SIGN_MODE_LEGACY_AMINO_JSON for displaying the sign bytes to the user. Hardware wallets cannot transition to SIGN_MODE_DIRECT as:
+
+* SIGN_MODE_DIRECT is binary-based and thus not suitable for display to end-users. Technically, hardware wallets could simply display the sign bytes to the user, but this would be considered blind signing, which is a security concern.
+* hardware cannot decode the protobuf sign bytes due to memory constraints, as the Protobuf definitions would need to be embedded on the hardware device.
+
+In an effort to remove Amino from the SDK, a new sign mode needs to be created for hardware devices. [Initial discussions](https://github.com/cosmos/cosmos-sdk/issues/6513) propose a text-based sign mode, which this ADR formally specifies.
+
+## Decision
+
+In SIGN_MODE_TEXTUAL, a transaction is rendered into a textual representation,
+which is then sent to a secure device or subsystem for the user to review and sign.
+Unlike `SIGN_MODE_DIRECT`, the transmitted data can be simply decoded into legible text
+even on devices with limited processing and display.
+
+The textual representation is a sequence of _screens_.
+Each screen is meant to be displayed in its entirety (if possible) even on a small device like a Ledger.
+A screen is roughly equivalent to a short line of text.
+Large screens can be displayed in several pieces,
+much as long lines of text are wrapped,
+so no hard guidance is given, though 40 characters is a good target.
+A screen is used to display a single key/value pair for scalar values
+(or composite values with a compact notation, such as `Coins`)
+or to introduce or conclude a larger grouping.
+
+The text can contain the full range of Unicode code points, including control characters and nul.
+The device is responsible for deciding how to display characters it cannot render natively.
+See [annex 2](adr-050-sign-mode-textual-annex2.md) for guidance.
+
+Screens have a non-negative indentation level to signal composite or nested structures.
+Indentation level zero is the top level.
+Indentation is displayed via some device-specific mechanism.
+Message quotation notation is an appropriate model, such as
+leading `>` characters or vertical bars on more capable displays.
+
+Some screens are marked as _expert_ screens,
+meant to be displayed only if the viewer chooses to opt in for the extra detail.
+Expert screens are meant for information that is rarely useful,
+or needs to be present only for signature integrity (see below).
+
+### Invertible Rendering
+
+We require that the rendering of the transaction be invertible:
+there must be a parsing function such that for every transaction,
+when rendered to the textual representation,
+parsing that representation yields a proto message equivalent
+to the original under proto equality.
+
+Note that this inverse function does not need to perform correct
+parsing or error signaling for the whole domain of textual data.
+Merely that the range of valid transactions be invertible under
+the composition of rendering and parsing.
+
+Note that the existence of an inverse function ensures that the
+rendered text contains the full information of the original transaction,
+not a hash or subset.
+
+We make an exception for invertibility for data which are too large to
+meaningfully display, such as byte strings longer than 32 bytes. We may then
+selectively render them with a cryptographically-strong hash. In these cases,
+it is still computationally infeasible to find a different transaction which
+has the same rendering. However, we must ensure that the hash computation is
+simple enough to be reliably executed independently, so at least the hash is
+itself reasonably verifiable when the raw byte string is not.
+
+### Chain State
+
+The rendering function (and parsing function) may depend on the current chain state.
+This is useful for reading parameters, such as coin display metadata,
+or for reading user-specific preferences such as language or address aliases.
+Note that if the observed state changes between signature generation
+and the transaction's inclusion in a block, the delivery-time rendering
+might differ. If so, the signature will be invalid and the transaction
+will be rejected.
+
+### Signature and Security
+
+For security, transaction signatures should have three properties:
+
+1. Given the transaction, signatures, and chain state, it must be possible to validate that the signatures match the transaction,
+to verify that the signers must have known their respective secret keys.
+
+2. It must be computationally infeasible to find a substantially different transaction for which the given signatures are valid, given the same chain state.
+
+3. The user should be able to give informed consent to the signed data via a simple, secure device with limited display capabilities.
+
+The correctness and security of `SIGN_MODE_TEXTUAL` is guaranteed by demonstrating an inverse function from the rendering to transaction protos.
+This means that it is impossible for a different protocol buffer message to render to the same text.
+
+### Transaction Hash Malleability
+
+When client software forms a transaction, the "raw" transaction (`TxRaw`) is serialized as a proto
+and a hash of the resulting byte sequence is computed.
+This is the `TxHash`, and is used by various services to track the submitted transaction through its lifecycle.
+Various misbehavior is possible if one can generate a modified transaction with a different TxHash
+but for which the signature still checks out.
+
+SIGN_MODE_TEXTUAL prevents this transaction malleability by including the TxHash as an expert screen
+in the rendering.
+
+### SignDoc
+
+The SignDoc for `SIGN_MODE_TEXTUAL` is formed from a data structure like:
+
+```go
+type Screen struct {
+	Title   string // possibly size-limited; advised max 64 characters
+	Content string // possibly size-limited; advised max 255 characters
+ Indent uint8 // size limited to something small like 16 or 32
+ Expert bool
+}
+
+type SignDocTextual struct {
+ Screens []Screen
+}
+```
+
+We do not plan to use protobuf serialization to form the sequence of bytes
+that will be transmitted and signed, in order to keep the decoder simple.
+We will use [CBOR](https://cbor.io) ([RFC 8949](https://www.rfc-editor.org/rfc/rfc8949.html)) instead.
+The encoding is defined by the following CDDL ([RFC 8610](https://www.rfc-editor.org/rfc/rfc8610)):
+
+```
+;;; CDDL (RFC 8610) Specification of SignDoc for SIGN_MODE_TEXTUAL.
+;;; Must be encoded using CBOR deterministic encoding (RFC 8949, section 4.2.1).
+
+;; A Textual document is a struct containing one field: an array of screens.
+sign_doc = {
+ screens_key: [* screen],
+}
+
+;; The key is an integer to keep the encoding small.
+screens_key = 1
+
+;; A screen consists of a text string, an indentation, and the expert flag,
+;; represented as an integer-keyed map. All entries are optional
+;; and MUST be omitted from the encoding if empty, zero, or false.
+;; Text defaults to the empty string, indent defaults to zero,
+;; and expert defaults to false.
+screen = {
+ ? title_key: tstr,
+ ? content_key: tstr,
+ ? indent_key: uint,
+ ? expert_key: bool,
+}
+
+;; Keys are small integers to keep the encoding small.
+title_key = 1
+content_key = 2
+indent_key = 3
+expert_key = 4
+```
+
+Defining the sign_doc as directly an array of screens has also been considered. However, given the possibility of future iterations of this specification, using a single-keyed struct has been chosen over the former proposal, as structs allow for easier backwards-compatibility.
+
+## Details
+
+In the examples that follow, screens will be shown as lines of text,
+indentation is indicated with a leading '>',
+and expert screens are marked with a leading `*`.
+
+### Encoding of the Transaction Envelope
+
+We define the "transaction envelope" as all data in a transaction that is not in the `TxBody.Messages` field. The transaction envelope includes the fee, signer infos and memo, but does not include `Msg`s. `//` denotes comments, which are not shown on the Ledger device.
+
+```
+Chain ID: <string>
+Account number: <uint64>
+Sequence: <uint64>
+Address: <string>
+*Public Key: <Any>
+This transaction has <int> Message(s)         // Pluralize "Message" only when int>1
+> Message (<int>/<int>): <Any>                // See value renderers for Any rendering.
+End of Message
+Memo: <string>                                // Skipped if no memo set.
+Fee: <coins>                                  // See value renderers for coins rendering.
+*Fee payer: <string>                          // Skipped if no fee_payer set.
+*Fee granter: <string>                        // Skipped if no fee_granter set.
+Tip: <coins>                                  // Skipped if no tip.
+Tipper: <string>
+*Gas Limit: <uint64>
+*Timeout Height: <uint64>                     // Skipped if no timeout_height set.
+*Other signer: <int> SignerInfo               // Skipped if the transaction only has 1 signer.
+*> Other signer (<int>/<int>): <SignerInfo>