# EigenLayer Rewards Calculation - Foundation Incentives

The previous rewards calculation doc is [here](https://hackmd.io/u-NHKEvtQ7m7CVDb4_42bA). A diff is [here]()
[TOC]

# Overview
The EigenLayer rewards calculation is a set of SQL queries that calculates the distribution of rewards made by AVSs to stakers and operators via the `RewardsCoordinator`. The data is ingested into Postgres tables and transformed to calculate final rewards.

The queries run in a daily job that uses snapshots of core contract state to calculate rewards made from any active rewards submissions.
## Calculation Process

The calculation proceeds in three stages and is run daily:

1. Data Extraction: Extracts data from event logs. These are referred to as the bronze tables. As part of this process, we reconcile all event data, including RPC data, with an additional data provider to ensure consistency.
2. Data Transformation: Transforms data from the bronze tables into daily snapshots of on-chain state.
3. Reward Calculation: Cross-references snapshots with active rewards submissions to calculate rewards to stakers and operators.

The pipeline then aggregates all rewards up to `lastRewardTimestamp + calculationIntervalSeconds` and submits a root that merkleizes the cumulative sum of each earner to the `RewardsCoordinator`.
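As an illustration of this aggregation step, a minimal Python sketch (all addresses and amounts are hypothetical; the real pipeline does this in SQL over the gold table):

```python
from collections import defaultdict
from datetime import date

# Hypothetical per-snapshot reward rows: (earner, token, snapshot, amount).
reward_snapshots = [
    ("0xabc", "0xtok", date(2024, 4, 24), 100),
    ("0xabc", "0xtok", date(2024, 4, 25), 150),
    ("0xdef", "0xtok", date(2024, 4, 25), 50),
    ("0xabc", "0xtok", date(2024, 4, 26), 75),  # after the cutoff, excluded
]

def cumulative_rewards(rows, cutoff):
    """Sum each earner's rewards over all snapshots up to the cutoff.

    The resulting (earner, token) -> cumulative amount mapping is what
    gets merkleized into the root posted to the RewardsCoordinator.
    """
    totals = defaultdict(int)
    for earner, token, snapshot, amount in rows:
        if snapshot <= cutoff:
            totals[(earner, token)] += amount
    return dict(totals)

totals = cumulative_rewards(reward_snapshots, cutoff=date(2024, 4, 25))
```

Because the root commits to cumulative sums, an earner who misses one claim can always claim the full outstanding balance against a later root.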
## Job Sequencing

The Reward Calculation is an Airflow pipeline that runs daily at 16:00 UTC. Queries to on-chain events and event calculations are all rounded down to 0:00 UTC. That is, **if the pipeline is run on 4-27 at 16:00 UTC, the `cutoff_date` parameter is set to 4-26 at 0:00 UTC.**

We handle reorgs by running the daily pipeline several hours after 0:00 UTC, giving our reorg handler enough time to heal state.
## Key Considerations

Each of the three sections below details key considerations to be mindful of when reading the queries and understanding the calculation. In summary:

- Snapshots are taken of core contract state every 24 hours: `SNAPSHOT_CADENCE` in `RewardsCoordinator`
- Snapshots from on-chain state are *rounded up* to the nearest day at 0:00 UTC. The one exception is operator<>avs deregistrations, which are rounded *down* to the nearest day at 0:00 UTC
- Since snapshots are rounded up, we only care about the *latest state update* from a single day
- The reward distribution to all earners must be <= the amount paid for a rewards submission
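The rounding rules above can be sketched as follows (illustrative only; the pipeline applies these rules in SQL, and the behavior exactly at the midnight boundary here is an assumption):

```python
from datetime import datetime, timedelta, timezone

def snapshot_date(event_time: datetime, round_up: bool = True) -> datetime:
    """Round an on-chain event time to a snapshot boundary at 0:00 UTC.

    Most state updates round *up* to the next day; operator<>avs
    deregistrations round *down* to the current day.
    """
    day = event_time.replace(hour=0, minute=0, second=0, microsecond=0)
    if round_up and event_time != day:
        day += timedelta(days=1)
    return day

t = datetime(2024, 4, 26, 13, 37, tzinfo=timezone.utc)
up = snapshot_date(t)                     # registration-style events
down = snapshot_date(t, round_up=False)   # deregistrations
```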
## Glossary

- `earner`: The entity, a staker or operator, receiving a reward
- `calculationIntervalSeconds`: The interval that the duration of a rewards submission must be a multiple of
- `SNAPSHOT_CADENCE`: The cadence at which snapshots of EigenLayer core contract state are taken
- `rewardSnapshot`: The reward to earners on a snapshot
- `cutoff-date`: The date at which transformations are run. Always set to the *previous* day at 0:00 UTC
- `run`: An iteration of the daily rewards pipeline job
- `stakeWeight`: How an AVS values its earners' stake, given by multipliers for each strategy of the reward
- `gold_table`: The table that contains the `rewardSnapshots`. Its columns are `earner`, `amount`, `token`, `snapshot`, `reward_hash`
# Data Extraction

## Key Considerations

Shares are transformed into Decimal(78,0), a data type that can hold the full range of a uint256. The tokens that are whitelisted for deposit (all LSTs & Eigen) & Native ETH should not have an issue with truncation.
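As a quick check on the type width: the maximum value of a uint256 has exactly 78 decimal digits, so it fits in a Decimal(78,0) without truncation:

```python
# 2^256 - 1 is the largest value a uint256 can hold.
UINT256_MAX = 2**256 - 1

# NUMERIC(78,0) stores integers of up to 78 decimal digits.
digits = len(str(UINT256_MAX))
```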
## Cutoff Date

We set the cutoff date at the beginning of each run with the following logic:

```python=
from datetime import datetime, timedelta, timezone

def get_cutoff_date():
    # get current time in utc
    ts = datetime.now(timezone.utc)
    # round down to 00:00 UTC of the current day
    ts = ts.replace(hour=0, minute=0, second=0, microsecond=0)
    # subtract 1 day
    ts = ts - timedelta(days=1)
    return ts
```
## Airflow Variables

At the daily run of the pipeline, we read the variables passed in if the run is a backfill. On a backfill run, we enforce that the start & end dates are valid, namely that the end date is not after the cutoff and that the start date is not after the end date.

Backfills are run in worst-case scenarios where events were missed in a pipeline run. We run reconciliation with multiple data vendors to ensure this should not have to be done. In addition, we run a sanity check query at the end of the pipeline which ensures that:

1. The cumulative rewards for earners never decrease
2. The tokens per day of an AVS is always >= sum(earner payouts) for a given snapshot and reward hash
3. The number of rows of (`earner`, `reward_hash`, and `snapshot`) never decreases
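A minimal Python sketch of these three checks over two consecutive runs (the real check is a SQL query; the table shapes and names here are hypothetical):

```python
def run_sanity_checks(prev_rows, curr_rows, tokens_per_day):
    """Check a new gold-table run against the previous one.

    Rows are keyed by (earner, reward_hash, snapshot) -> amount.
    tokens_per_day maps reward_hash -> tokens paid per snapshot.
    """
    # 1. Cumulative rewards for an earner never decrease.
    def totals_by_earner(rows):
        totals = {}
        for (earner, _, _), amount in rows.items():
            totals[earner] = totals.get(earner, 0) + amount
        return totals

    prev_totals = totals_by_earner(prev_rows)
    curr_totals = totals_by_earner(curr_rows)
    for earner, prev_total in prev_totals.items():
        assert curr_totals.get(earner, 0) >= prev_total

    # 2. Tokens per day >= sum(earner payouts) per (reward_hash, snapshot).
    paid = {}
    for (_, reward_hash, snapshot), amount in curr_rows.items():
        key = (reward_hash, snapshot)
        paid[key] = paid.get(key, 0) + amount
    for (reward_hash, _), total in paid.items():
        assert total <= tokens_per_day[reward_hash]

    # 3. The number of (earner, reward_hash, snapshot) rows never decreases.
    assert len(curr_rows) >= len(prev_rows)
    return True

prev = {("0xabc", "0xhash", "2024-04-25"): 100}
curr = {
    ("0xabc", "0xhash", "2024-04-25"): 100,
    ("0xabc", "0xhash", "2024-04-26"): 100,
    ("0xdef", "0xhash", "2024-04-26"): 50,
}
ok = run_sanity_checks(prev, curr, tokens_per_day={"0xhash": 200})
```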
```python=
def get_gold_calculation_dates(**kwargs):
    # Cutoff Date
    cutoff_date = get_cutoff_date()
    cutoff_date_str = cutoff_date.strftime('%Y-%m-%d %H:%M:%S')

    # Backfill Date
    dag_run = kwargs.get('dag_run')
    if dag_run is not None:
        start_date_str = dag_run.conf.get('start_date', '1970-01-01 00:00:00')
        end_date_str = dag_run.conf.get('end_date', cutoff_date_str)
        is_backfill = str.lower(dag_run.conf.get('is_backfill', 'false'))
    else:
        raise ValueError('Dag run is None')

    # Sanitize the start and end dates
    start_datetime = datetime.strptime(start_date_str, '%Y-%m-%d %H:%M:%S')
    end_datetime = datetime.strptime(end_date_str, '%Y-%m-%d %H:%M:%S')
    cutoff_datetime = datetime.strptime(cutoff_date_str, '%Y-%m-%d %H:%M:%S')
    if start_datetime >= end_datetime:
        raise ValueError('Start date must be before end date')
    if end_datetime > cutoff_datetime:
        raise ValueError('End date must be less than or equal to cutoff date')

    # Push to XCom
    kwargs['ti'].xcom_push(key='cutoff_date', value=end_date_str)
    kwargs['ti'].xcom_push(key='rewards_start', value=start_date_str)
    kwargs['ti'].xcom_push(key='is_backfill', value=is_backfill)
```
## Queries

This set of queries extracts event data from EigenLayer core contracts. The event logs are automatically decoded from the contract ABIs [here](https://github.com/Layr-Labs/eigenlayer-contracts). Running `forge build` will build the contracts, and the ABIs are stored in the `/out` folder.

In the queries below, `block_date` is the date of the block, whereas `block_time` is the full date + time of the block.
### Staker State

#### Deposits

```sql=
SELECT
    lower(coalesce(t.output_data ->> 'depositor', t.output_data ->> 'staker')) as staker,
    lower(t.output_data ->> 'strategy') as strategy,
    (t.output_data ->> 'shares')::numeric(78,0) as shares,
    t.transaction_hash,
    t.log_index,
    b.block_time,
    to_char(b.block_time, 'YYYY-MM-DD') AS block_date,
    t.block_number
FROM transaction_logs as t
LEFT JOIN blocks as b ON (t.block_sequence_id = b.id)
WHERE t.address = '{{ var('strategy_manager_address') }}'
AND t.event_name = 'Deposit'
AND date_trunc('day', b.block_time) < TIMESTAMP '{{ var("cutoff_date") }}'
```
#### EigenPodShares

*Note: Shares can be negative*

```sql=
SELECT
    lower(t.arguments #>> '{0,Value}') AS staker,
    (t.output_data ->> 'sharesDelta')::numeric(78,0) as shares,
    t.transaction_hash,
    t.log_index,
    b.block_time,
    to_char(b.block_time, 'YYYY-MM-DD') AS block_date,
    t.block_number
FROM transaction_logs t
LEFT JOIN blocks b ON t.block_sequence_id = b.id
WHERE t.address = '{{ var('eigen_pod_manager_address') }}'
AND t.event_name = 'PodSharesUpdated'
AND date_trunc('day', b.block_time) < TIMESTAMP '{{ var("cutoff_date") }}'
```
#### M1 Withdrawals

Withdrawals in M1 were routed through the StrategyManager. Note that we remove the single withdrawal completed as shares in M1, as there was no deposit event for this code path.

```sql=
SELECT
    lower(coalesce(t.output_data ->> 'depositor', t.output_data ->> 'staker')) as staker,
    lower(t.output_data ->> 'strategy') as strategy,
    (t.output_data ->> 'shares')::numeric(78,0) as shares,
    t.transaction_hash,
    t.log_index,
    b.block_time,
    to_char(b.block_time, 'YYYY-MM-DD') AS block_date,
    t.block_number
FROM transaction_logs t
LEFT JOIN blocks b ON t.block_sequence_id = b.id
WHERE t.address = '{{ var('strategy_manager_address') }}'
AND t.event_name = 'ShareWithdrawalQueued'
AND date_trunc('day', b.block_time) < TIMESTAMP '{{ var("cutoff_date") }}'
-- Remove this transaction hash as it is the only withdrawal on m1 that was completed as shares. There is no corresponding deposit event. The withdrawal was completed to the same staker address.
AND t.transaction_hash != '0x62eb0d0865b2636c74ed146e2d161e39e42b09bac7f86b8905fc7a830935dc1e'
```
#### M2 Withdrawals

Unlike M1 withdrawal events, M2 withdrawal events return a tuple with a list of strategies and shares. Thus, we unwind the tuple into individual rows to create (staker, strategy$_0$, share$_0$), (staker, strategy$_1$, share$_1$). We discard all M2 withdrawals that were migrated from M1 so we do not double count a withdrawal.
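In Python terms, the unwind is equivalent to zipping the two arrays by index (illustrative only; the query does this pairing with `WITH ORDINALITY`):

```python
# A hypothetical decoded WithdrawalQueued payload.
withdrawal = {
    "staker": "0xstaker",
    "strategies": ["0xstrat0", "0xstrat1"],
    "shares": [1000, 2500],
}

# One row per (staker, strategy_i, share_i) pair.
rows = [
    (withdrawal["staker"], strategy, share)
    for strategy, share in zip(withdrawal["strategies"], withdrawal["shares"])
]
```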
```sql=
WITH migrations AS (
    SELECT
        (
            SELECT lower(string_agg(lpad(to_hex(elem::int), 2, '0'), ''))
            FROM jsonb_array_elements_text(t.output_data->'oldWithdrawalRoot') AS elem
        ) AS m1_withdrawal_root,
        (
            SELECT lower(string_agg(lpad(to_hex(elem::int), 2, '0'), ''))
            FROM jsonb_array_elements_text(t.output_data->'newWithdrawalRoot') AS elem
        ) AS m2_withdrawal_root
    FROM transaction_logs t
    WHERE t.address = '{{ var('delegation_manager_address') }}'
    AND t.event_name = 'WithdrawalMigrated'
),
full_m2_withdrawals AS (
    SELECT
        lower(t.output_data #>> '{withdrawal}') as withdrawals,
        (
            SELECT lower(string_agg(lpad(to_hex(elem::int), 2, '0'), ''))
            FROM jsonb_array_elements_text(t.output_data ->'withdrawalRoot') AS elem
        ) AS withdrawal_root,
        lower(t.output_data #>> '{withdrawal, staker}') AS staker,
        lower(t_strategy.strategy) AS strategy,
        (t_share.share)::numeric(78,0) AS shares,
        t_strategy.strategy_index,
        t_share.share_index,
        t.transaction_hash,
        t.log_index,
        b.block_time::timestamp(6),
        to_char(b.block_time, 'YYYY-MM-DD') AS block_date,
        t.block_number
    FROM transaction_logs t
    LEFT JOIN blocks b ON t.block_sequence_id = b.id,
    jsonb_array_elements_text(t.output_data #> '{withdrawal, strategies}') WITH ORDINALITY AS t_strategy(strategy, strategy_index),
    jsonb_array_elements_text(t.output_data #> '{withdrawal, shares}') WITH ORDINALITY AS t_share(share, share_index)
    WHERE t.address = '{{ var('delegation_manager_address') }}'
    AND t.event_name = 'WithdrawalQueued'
    AND date_trunc('day', b.block_time) < TIMESTAMP '{{ var("cutoff_date") }}'
    AND t_strategy.strategy_index = t_share.share_index
)
-- Parse out the m2 withdrawals that were migrated from m1
SELECT
    full_m2_withdrawals.*
FROM full_m2_withdrawals
LEFT JOIN migrations
    ON full_m2_withdrawals.withdrawal_root = migrations.m2_withdrawal_root
WHERE migrations.m2_withdrawal_root IS NULL
```
### Operator State

Operator state is made up of the stake delegated to operators by stakers.

#### Operator Shares Increased

```sql=
SELECT
    lower(t.arguments #>> '{0,Value}') as operator,
    lower(t.output_data ->> 'strategy') as strategy,
    (t.output_data ->> 'shares')::numeric(78,0) as shares,
    t.transaction_hash,
    t.log_index,
    b.block_time,
    to_char(b.block_time, 'YYYY-MM-DD') AS block_date,
    t.block_number
FROM transaction_logs t
LEFT JOIN blocks b ON t.block_sequence_id = b.id
WHERE t.address = '{{ var('delegation_manager_address') }}'
AND t.event_name = 'OperatorSharesIncreased'
AND date_trunc('day', b.block_time) < TIMESTAMP '{{ var("cutoff_date") }}'
```
#### Operator Shares Decreased

```sql=
SELECT
    lower(t.arguments #>> '{0,Value}') as operator,
    lower(t.output_data ->> 'strategy') as strategy,
    (t.output_data ->> 'shares')::numeric(78,0) as shares,
    t.transaction_hash,
    t.log_index,
    b.block_time,
    to_char(b.block_time, 'YYYY-MM-DD') AS block_date,
    t.block_number
FROM transaction_logs t
LEFT JOIN blocks b ON t.block_sequence_id = b.id
WHERE t.address = '{{ var('delegation_manager_address') }}'
AND t.event_name = 'OperatorSharesDecreased'
AND date_trunc('day', b.block_time) < TIMESTAMP '{{ var("cutoff_date") }}'
```
### Staker Delegations

#### Staker Delegated

```sql=
SELECT
    lower(t.arguments #>> '{0,Value}') AS staker,
    lower(t.arguments #>> '{1,Value}') AS operator,
    t.transaction_hash,
    t.log_index,
    b.block_time,
    to_char(b.block_time, 'YYYY-MM-DD') AS block_date,
    t.block_number
FROM transaction_logs t
LEFT JOIN blocks b ON t.block_sequence_id = b.id
WHERE t.address = '{{ var('delegation_manager_address') }}'
AND t.event_name = 'StakerDelegated'
AND date_trunc('day', b.block_time) < TIMESTAMP '{{ var("cutoff_date") }}'
```
#### Staker Undelegated

```sql=
SELECT
    lower(t.arguments #>> '{0,Value}') AS staker,
    lower(t.arguments #>> '{1,Value}') AS operator,
    t.transaction_hash,
    t.log_index,
    b.block_time,
    to_char(b.block_time, 'YYYY-MM-DD') AS block_date,
    t.block_number
FROM transaction_logs t
LEFT JOIN blocks b ON t.block_sequence_id = b.id
WHERE t.address = '{{ var('delegation_manager_address') }}'
AND t.event_name = 'StakerUndelegated'
AND date_trunc('day', b.block_time) < TIMESTAMP '{{ var("cutoff_date") }}'
```
### Rewards Submissions

There are two types of rewards submissions in the protocol:

1. AVS Rewards Submission: Permissionless function called by any AVS
2. Reward for All: Permissioned reward to all stakers of the protocol

*Note: The amount in the RewardsCoordinator has a max value of $1e38-1$, which allows us to store it as a DECIMAL(38,0).*
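The bound checks out: $1e38-1$ is the largest 38-digit integer, so any valid amount fits in a DECIMAL(38,0):

```python
# The RewardsCoordinator caps submission amounts at 1e38 - 1.
MAX_REWARD_AMOUNT = 10**38 - 1

# DECIMAL(38,0) stores integers of up to 38 decimal digits.
digits = len(str(MAX_REWARD_AMOUNT))
```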
#### AVS Rewards Submissions

For each rewards submission, we extract each (strategy, multiplier) pair into a separate row for easier accounting.
```sql=
SELECT
    lower(tl.arguments #>> '{0,Value}') AS avs,
    lower(tl.arguments #>> '{2,Value}') AS reward_hash,
    coalesce(lower(tl.output_data #>> '{rewardsSubmission}'), lower(tl.output_data #>> '{rangePayment}')) as rewards_submission,
    coalesce(lower(tl.output_data #>> '{rewardsSubmission, token}'), lower(tl.output_data #>> '{rangePayment, token}')) as token,
    coalesce(tl.output_data #>> '{rewardsSubmission,amount}', tl.output_data #>> '{rangePayment,amount}')::numeric(78,0) as amount,
    to_timestamp(coalesce(tl.output_data #>> '{rewardsSubmission,startTimestamp}', tl.output_data #>> '{rangePayment,startTimestamp}')::bigint)::timestamp(6) as start_timestamp,
    coalesce(tl.output_data #>> '{rewardsSubmission,duration}', tl.output_data #>> '{rangePayment,duration}')::bigint as duration,
    to_timestamp(
        coalesce(tl.output_data #>> '{rewardsSubmission,startTimestamp}', tl.output_data #>> '{rangePayment,startTimestamp}')::bigint
        + coalesce(tl.output_data #>> '{rewardsSubmission,duration}', tl.output_data #>> '{rangePayment,duration}')::bigint
    )::timestamp(6) as end_timestamp,
    lower(t.entry ->> 'strategy') as strategy,
    (t.entry ->> 'multiplier')::numeric(78,0) as multiplier,
    t.strategy_index as strategy_index,
    tl.transaction_hash,
    tl.log_index,
    b.block_time::timestamp(6),
    to_char(b.block_time, 'YYYY-MM-DD') AS block_date,
    tl.block_number
FROM transaction_logs tl
LEFT JOIN blocks b ON (tl.block_sequence_id = b.id)
CROSS JOIN LATERAL jsonb_array_elements(
    coalesce(tl.output_data #> '{rewardsSubmission,strategiesAndMultipliers}', tl.output_data #> '{rangePayment,strategiesAndMultipliers}')
) WITH ORDINALITY AS t(entry, strategy_index)
WHERE address = '{{ var('rewards_coordinator_address') }}'
AND event_name = 'AVSRewardsSubmissionCreated'
AND date_trunc('day', b.block_time) < TIMESTAMP '{{ var("cutoff_date") }}'
```
#### Rewards for All Submissions
#### Rewards for All Submissions
```sql=
SELECT
lower(tl.arguments #>> '{0,Value}') AS avs, -- Keeping as AVS for compatibility with unioning on range_payments.
lower(tl.arguments #>> '{2,Value}') AS reward_hash,
coalesce(lower(tl.output_data #>> '{rewardsSubmission}'), lower(tl.output_data #>> '{rangePayment}')) as rewards_submission,
coalesce(lower(tl.output_data #>> '{rewardsSubmission, token}'), lower(tl.output_data #>> '{rangePayment, token}')) as token,
coalesce(tl.output_data #>> '{rewardsSubmission,amount}', tl.output_data #>> '{rangePayment,amount}')::numeric(78,0) as amount,
to_timestamp(coalesce(tl.output_data #>> '{rewardsSubmission,startTimestamp}', tl.output_data #>> '{rangePayment,startTimestamp}')::bigint)::timestamp(6) as start_timestamp,
coalesce(tl.output_data #>> '{rewardsSubmission,duration}', tl.output_data #>> '{rangePayment,duration}')::bigint as duration,
to_timestamp(
coalesce(tl.output_data #>> '{rewardsSubmission,startTimestamp}', tl.output_data #>> '{rangePayment,startTimestamp}')::bigint
+ coalesce(tl.output_data #>> '{rewardsSubmission,duration}', tl.output_data #>> '{rangePayment,duration}')::bigint
)::timestamp(6) as end_timestamp,
lower(t.entry ->> 'strategy') as strategy,
(t.entry ->> 'multiplier')::numeric(78,0) as multiplier,
t.strategy_index as strategy_index,
tl.transaction_hash,
tl.log_index,
b.block_time::timestamp(6),
to_char(b.block_time, 'YYYY-MM-DD') AS block_date,
tl.block_number
FROM transaction_logs tl
LEFT JOIN blocks b ON tl.block_sequence_id = b.id
CROSS JOIN LATERAL jsonb_array_elements(
coalesce(tl.output_data #> '{rewardsSubmission,strategiesAndMultipliers}',tl.output_data #> '{rangePayment,strategiesAndMultipliers}')
) WITH ORDINALITY AS t(entry, strategy_index)
WHERE address = '{{ var('rewards_coordinator_address') }}'
AND event_name = 'RewardsSubmissionForAllCreated'
AND date_trunc('day', b.block_time) < TIMESTAMP '{{ var("cutoff_date") }}'
```
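The `CROSS JOIN LATERAL jsonb_array_elements(...) WITH ORDINALITY` clause fans each submission event out into one row per `(strategy, multiplier)` entry. A minimal Python sketch of the same flattening, applied to a hypothetical decoded event payload (field names follow the query above; this is illustrative, not part of the pipeline):

```python
from datetime import datetime, timezone

def flatten_submission(output_data: dict) -> list[dict]:
    """Fan a rewards-submission event out into one row per strategy,
    mirroring jsonb_array_elements(...) WITH ORDINALITY."""
    # Newer events carry 'rewardsSubmission'; legacy ones carry 'rangePayment'.
    sub = output_data.get("rewardsSubmission") or output_data["rangePayment"]
    start = int(sub["startTimestamp"])
    duration = int(sub["duration"])
    rows = []
    for idx, entry in enumerate(sub["strategiesAndMultipliers"], start=1):
        rows.append({
            "token": sub["token"].lower(),
            "amount": int(sub["amount"]),
            "start_timestamp": datetime.fromtimestamp(start, tz=timezone.utc),
            "end_timestamp": datetime.fromtimestamp(start + duration, tz=timezone.utc),
            "strategy": entry["strategy"].lower(),
            "multiplier": int(entry["multiplier"]),
            "strategy_index": idx,  # WITH ORDINALITY is 1-based
        })
    return rows
```

Each submission with *n* strategies therefore produces *n* rows sharing the same `reward_hash`, amount, and time range.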
#### Rewards for All Earners Submissions
Rewards for All Earners submissions are parsed identically to AVS rewards submissions; only the event name differs (`RewardsSubmissionForAllEarnersCreated`).
```sql=
SELECT
lower(tl.arguments #>> '{0,Value}') AS avs, -- Keeping as AVS for compatibility with unioning on range_payments.
lower(tl.arguments #>> '{2,Value}') AS reward_hash,
coalesce(lower(tl.output_data #>> '{rewardsSubmission}'), lower(tl.output_data #>> '{rangePayment}')) as rewards_submission,
coalesce(lower(tl.output_data #>> '{rewardsSubmission, token}'), lower(tl.output_data #>> '{rangePayment, token}')) as token,
coalesce(tl.output_data #>> '{rewardsSubmission,amount}', tl.output_data #>> '{rangePayment,amount}')::numeric(78,0) as amount,
to_timestamp(coalesce(tl.output_data #>> '{rewardsSubmission,startTimestamp}', tl.output_data #>> '{rangePayment,startTimestamp}')::bigint)::timestamp(6) as start_timestamp,
coalesce(tl.output_data #>> '{rewardsSubmission,duration}', tl.output_data #>> '{rangePayment,duration}')::bigint as duration,
to_timestamp(
coalesce(tl.output_data #>> '{rewardsSubmission,startTimestamp}', tl.output_data #>> '{rangePayment,startTimestamp}')::bigint
+ coalesce(tl.output_data #>> '{rewardsSubmission,duration}', tl.output_data #>> '{rangePayment,duration}')::bigint
)::timestamp(6) as end_timestamp,
lower(t.entry ->> 'strategy') as strategy,
(t.entry ->> 'multiplier')::numeric(78,0) as multiplier,
t.strategy_index as strategy_index,
tl.transaction_hash,
tl.log_index,
b.block_time::timestamp(6),
to_char(b.block_time, 'YYYY-MM-DD') AS block_date,
tl.block_number
FROM transaction_logs tl
LEFT JOIN blocks b ON tl.block_sequence_id = b.id
CROSS JOIN LATERAL jsonb_array_elements(
coalesce(tl.output_data #> '{rewardsSubmission,strategiesAndMultipliers}',tl.output_data #> '{rangePayment,strategiesAndMultipliers}')
) WITH ORDINALITY AS t(entry, strategy_index)
WHERE address = '{{ var('rewards_coordinator_address') }}'
AND event_name = 'RewardsSubmissionForAllEarnersCreated'
AND date_trunc('day', b.block_time) < TIMESTAMP '{{ var("cutoff_date") }}'
```
### Operator<>AVS State
Every registration and deregistration between an operator and an AVS is recorded in the AVSDirectory.
#### Operator Registrations
```sql=
SELECT
lower(t.arguments #>> '{0,Value}') as operator,
lower(t.arguments #>> '{1,Value}') as avs,
(t.output_data -> 'status')::int as status,
t.transaction_hash,
t.log_index,
b.block_time,
to_char(b.block_time, 'YYYY-MM-DD') AS block_date,
t.block_number
FROM transaction_logs t
LEFT JOIN blocks b ON t.block_sequence_id = b.id
WHERE t.address = '{{ var('avs_directory_address') }}'
AND t.event_name = 'OperatorAVSRegistrationStatusUpdated'
AND date_trunc('day', b.block_time) < TIMESTAMP '{{ var("cutoff_date") }}'
```
#### Operator Restaked Strategies
The AVSDirectory **does not** emit an event for the strategies that an operator restakes or un-restakes on an AVS. To retrieve this information we run a cron job every 3600 blocks (i.e. `blockNum % 3600 = 0`), starting from when the AVSDirectory was deployed, that:
1. Retrieves all operators restaked on the AVS
2. Calls `getOperatorRestakedStrategies(address operator) returns (address[])` on each AVS's ServiceManager contract

AVSs are required to be compliant with this interface, as referenced in our [docs](https://docs.eigenlayer.xyz/eigenlayer/avs-guides/avs-dashboard-onboarding).

Assuming that an operator is registered to an AVS at timestamp $t$, an example output of this cron job is:

| Operator | AVS | Strategy | Block Time |
| -------- | ------- | ------- | ------- |
| Operator1 | AVS-A | stETH | t |
| Operator1 | AVS-A | rETH | t |
| Operator2 | AVS-A | rETH | t |
| Operator3 | AVS-B | cbETH | t |
# Data Transformation
Once we extract all logs and relevant storage of EigenLayer core contracts and AVSs, we transform this data to create daily snapshots of state in two parts:
1. Aggregating extracted data into on-chain contract state
2. Combining state into ranges and unwinding them into daily snapshots
## Key Considerations
In part 2, once state has been aggregated, we unwind the ranges of state into daily snapshots.
***Snapshots of state are rounded up to the nearest day, except for operator<>avs deregistrations, which are rounded down***. Let's assume we have the following range, with events A & B being updates to a staker's shares:
```
GENESIS_TIMESTAMP---------------------Day1---------------------Day2
        ^                 ^
      A=100             B=200
```
The output of the snapshot transformation should denote that on Day1 the staker has 200 shares. More generally, we take the *latest* update in the [Day$_{i-1}$, Day$_i$] range and set that as the state on Day$_i$. We refer to the reward on a given day as a *reward snapshot*.
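The round-up rule can be sketched in Python (a hypothetical illustration, not the production SQL): round each update's timestamp up to the next day boundary, then let the latest update per snapshot day win.

```python
from datetime import datetime, timedelta

def snapshot_rounded_up(updates):
    """updates: list of (timestamp, shares), ordered by (block_time, log_index).
    Returns {snapshot_day: shares}: each update is rounded *up* to the next
    day boundary, and the latest update per day overwrites earlier ones."""
    snapshots = {}
    for ts, shares in updates:
        day = ts.date()
        # An update exactly at midnight already sits on a boundary;
        # otherwise it rounds up to the next day.
        if ts != datetime.combine(day, datetime.min.time()):
            day = day + timedelta(days=1)
        snapshots[day] = shares  # later updates overwrite earlier ones
    return snapshots
```

With the A=100 / B=200 example above, both updates round up to Day1 and B, being the latest, sets the Day1 snapshot to 200.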
### Operator<>AVS Registration/Deregistration
In the case of an operator registration and deregistration:
```
GENESIS_TIMESTAMP---------------------Day1---------------------Day2
              ^                                 ^
           Register                         Deregister
```
The end state is that the operator has registered & deregistered on Day1, resulting in no reward being made to the operator. We add this mechanism as a protection against operators gaining extra days of rewards if we were to round up deregistrations. The side effect is the following:
```
--------------Day1--------------Day2--------------Day3--------------Day4
                     ^                                   ^
                  Register                           Deregister
```
The operator in this case will be deregistered on Day3, resulting in the operator not receiving any reward for the [Day3, Day4] range during which it was securing the AVS. Rounding down deregistrations is why the `cutoff_date` must be the *previous day* at 0:00 UTC.
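The asymmetric rounding can be made concrete with a small sketch (the helper and its name are illustrative, not from the pipeline): registrations round up to the next day boundary, deregistrations truncate down to the previous one, and the operator earns only on days in between.

```python
from datetime import datetime, timedelta

def active_snapshot_days(register_ts, deregister_ts):
    """Registration rounds *up* to the next day boundary; deregistration
    rounds *down* (truncates) to the previous one. Returns the snapshot
    days on which the operator is considered registered."""
    start = register_ts.date()
    if register_ts != datetime.combine(start, datetime.min.time()):
        start = start + timedelta(days=1)  # round registration up
    end = deregister_ts.date()             # truncation == round down
    days = []
    d = start
    while d <= end:
        days.append(d)
        d += timedelta(days=1)
    return days
```

A register and deregister within the same day yields an empty range (no reward), matching the first diagram; registering on Day1.5 and deregistering on Day3.5 yields only Day2 and Day3, matching the second.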
## Part 1: Aggregation
### Staker Shares
The LST shares for a staker, $s$, and strategy, $y$, are given by:

Shares$_{s,y}$ = Deposits$_{s,y}$ $-$ M1Withdrawals$_{s,y}$ $-$ M2Withdrawals$_{s,y}$

The Native ETH shares for a staker are the sum of all `PodSharesUpdated` events for that staker, less M1 and M2 withdrawals. Note that shares *can be negative in this event*.

NativeETHShares$_s$ = $\sum_{i=0}^{n}$ PodSharesUpdated$_i$ $-$ M1Withdrawals$_i$ $-$ M2Withdrawals$_i$

Combining these two gives us the shares for a staker for every strategy at every update.
The key part of this query is:
```sql=
SUM(shares) OVER (PARTITION BY staker, strategy ORDER BY block_time, log_index) AS shares,
```
which gets the ***running*** sum for every (staker, strategy) pair at every update.
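This window function is just a cumulative sum per (staker, strategy) key. A minimal Python equivalent, for intuition only (not part of the pipeline):

```python
from collections import defaultdict

def running_shares(events):
    """events: list of (staker, strategy, delta_shares), ordered by
    (block_time, log_index). Deposits are positive deltas, withdrawals
    negative. Returns (staker, strategy, cumulative_shares) per event,
    like SUM(shares) OVER (PARTITION BY staker, strategy ORDER BY ...)."""
    totals = defaultdict(int)
    out = []
    for staker, strategy, delta in events:
        totals[(staker, strategy)] += delta
        out.append((staker, strategy, totals[(staker, strategy)]))
    return out
```

Each row thus carries the staker's total shares in that strategy as of that exact update, not just the delta.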
```sql=
SELECT
staker,
strategy,
-- Sum each share amount over the window to get total shares for each (staker, strategy) at every update
SUM(shares) OVER (PARTITION BY staker, strategy ORDER BY block_time, log_index) AS shares,
transaction_hash,
log_index,
strategy_index,
block_time,
block_date,
block_number
FROM (
SELECT staker, strategy, shares, 0 as strategy_index, transaction_hash, log_index, block_time, block_date, block_number
FROM {{ ref('staker_deposits') }}
UNION ALL
-- Subtract m1 & m2 withdrawals
SELECT staker, strategy, shares * -1, 0 as strategy_index, transaction_hash, log_index, block_time, block_date, block_number
FROM {{ ref('m1_staker_withdrawals') }}
UNION ALL
SELECT staker, strategy, shares * -1, strategy_index, transaction_hash, log_index, block_time, block_date, block_number
FROM {{ ref('m2_staker_withdrawals') }}
UNION ALL
-- Shares in an EigenPod can be positive or negative, so no need to multiply by -1
SELECT staker, '0xbeac0eeeeeeeeeeeeeeeeeeeeeeeeeeeeeebeac0' as strategy, shares, 0 as strategy_index, transaction_hash, log_index, block_time, block_date, block_number
FROM {{ ref('eigenpod_shares') }}
) combined_staker_shares
```
***Note: Rewards For All will not pay out stakers who have not proven their beacon chain balances to the execution layer.***
### Operator Shares
The shares for an operator, $o$, and strategy, $y$, are given by:

$Shares_{o,y} = ShareIncrease_{o,y} - ShareDecrease_{o,y}$

As with staker shares, the query takes the running sum of share increases and decreases for each (operator, strategy) pair at every update.
```sql=
SELECT
operator,
strategy,
-- Sum each share amount over the window to get total shares for each (operator, strategy) at every update
SUM(shares) OVER (PARTITION BY operator, strategy ORDER BY block_time, log_index) AS shares,
transaction_hash,
log_index,
block_time,
block_date,
block_number
FROM (
SELECT operator, strategy, shares, transaction_hash, log_index, block_time, block_date, block_number
FROM {{ ref('operator_share_increases') }}
UNION ALL
SELECT operator, strategy, shares * -1 AS shares, transaction_hash, log_index, block_time, block_date, block_number
FROM {{ ref('operator_share_decreases') }}
) combined_shares
```
### Staker Delegation Status
Here, we aggregate each delegation and undelegation into a single view. When a staker is undelegated, we mark its operator as `0x0000000000000000000000000000000000000000`.
```sql=
SELECT
staker,
CASE when src = 'undelegations' THEN '0x0000000000000000000000000000000000000000' ELSE operator END AS operator,
transaction_hash,
log_index,
block_time,
block_date,
block_number
FROM (
SELECT *, 'undelegations' AS src FROM {{ ref('staker_undelegations') }}
UNION ALL
SELECT *, 'delegations' AS src FROM {{ ref('staker_delegations') }}
) as delegations_combined
```
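Downstream, the latest row per staker in this view determines the staker's current operator. A small Python sketch of that end state (illustrative only, not the production query):

```python
ZERO_ADDRESS = "0x0000000000000000000000000000000000000000"

def latest_delegation(events):
    """events: list of (staker, operator, src), ordered by
    (block_time, log_index); src is 'delegations' or 'undelegations'.
    Returns the current operator per staker, mapping undelegations
    to the zero address."""
    state = {}
    for staker, operator, src in events:
        state[staker] = ZERO_ADDRESS if src == "undelegations" else operator
    return state
```

A staker who delegates and later undelegates therefore ends up at the zero address, i.e. delegated to no operator.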
### Combined Rewards Submissions
Combines AVS Rewards Submissions and Rewards for All Submissions into one view