Data in MYID

Original Events / Data Sources Layer

In this layer, MYID relies on existing data registries and indexing services to obtain original user data, offloading data provision to external data providers such as Etherscan, The Graph, OnFinality, and others.

The original data here refers to existing data generated by the user and recorded by open data registries, especially traceable and unalterable data recorded on blockchains, such as chain states, historical transactions, or emitted events. Original identity data can also be provided by Web 2.0 APIs such as Twitter, Facebook, and Discord.

Every single piece of data can be served by different data endpoints, and each data analyzer chooses which endpoint to fetch the data from.
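As a minimal sketch of this endpoint choice, the snippet below models several interchangeable data endpoints serving the same piece of original data, with an analyzer trying them in its preferred order and falling back on failure. The endpoint names, the fetch interface, and the returned fields are illustrative assumptions, not part of the MYID specification; real implementations would call the providers' actual APIs.

```python
# Hypothetical sketch: an analyzer selecting among interchangeable data
# endpoints for the same piece of original data. The fetch interface and
# returned fields are illustrative stand-ins, not real provider APIs.

from typing import Callable, Dict, List

# Each endpoint answers the same query; stubbed here instead of real
# Etherscan / The Graph / OnFinality calls.
ENDPOINTS: Dict[str, Callable[[str], dict]] = {
    "etherscan": lambda addr: {"source": "etherscan", "tx_count": 42},
    "thegraph": lambda addr: {"source": "thegraph", "tx_count": 42},
}

def fetch_original_data(address: str, preferred: List[str]) -> dict:
    """Try endpoints in the analyzer's preferred order, falling back on failure."""
    for name in preferred:
        try:
            return ENDPOINTS[name](address)
        except Exception:
            continue  # endpoint unavailable; try the next one
    raise RuntimeError("no data endpoint available")

# The analyzer is free to prefer any endpoint that serves the same data.
result = fetch_original_data("0xabc...", ["thegraph", "etherscan"])
```

Because every endpoint serves the same underlying data, swapping one provider for another changes only availability and latency, not the analysis result.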

Address Analysis Layer

This layer processes the original identity data and produces analyzed data for Identity Aggregation. Identity computing services consume significant resources, so this computation is performed off-chain, separated from consensus.

This layer decouples the computation of identity analysis from identity aggregation. It improves computation efficiency and network throughput by providing redundancy in both structured identity data and computing resources.

Data Analyzers

Data Analyzers are external nodes providing identity-related data analysis services for the MYID Network. Each data analyzer works independently and earns rewards by providing honest identity analysis results. The more data analyzers in the MYID Network, the higher the efficiency and availability of the address analysis layer.

How it works

In this layer, data analyzers are incentivized to process randomly assigned tasks and compute results for Identification Events. Following the analysis methods specified in each identification event, data analyzers primarily handle data indexing and provide a simple data-calculation service.
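The task flow above can be sketched as follows: an identification event names an analysis method, and the analyzer runs that method over its indexed original data. The method names, task shape, and transaction records are hypothetical assumptions for illustration; the actual analysis methods are defined by the identification events themselves.

```python
# Hypothetical sketch of an analyzer processing a task from an
# Identification Event. The method registry, task fields, and stubbed
# transaction data are illustrative assumptions, not the MYID protocol.

TRANSACTIONS = [  # pre-indexed original data (stubbed)
    {"from": "0xA", "to": "0xB", "value": 10},
    {"from": "0xA", "to": "0xC", "value": 5},
    {"from": "0xD", "to": "0xA", "value": 7},
]

# Registry of simple analysis methods an event can reference by name.
METHODS = {
    "outgoing_total": lambda addr, txs: sum(
        t["value"] for t in txs if t["from"] == addr
    ),
    "tx_count": lambda addr, txs: sum(
        1 for t in txs if addr in (t["from"], t["to"])
    ),
}

def process_task(event: dict) -> dict:
    """Run the analysis method named by the identification event."""
    method = METHODS[event["method"]]
    value = method(event["address"], TRANSACTIONS)
    return {"event_id": event["id"], "address": event["address"], "result": value}

task = {"id": 1, "address": "0xA", "method": "outgoing_total"}
print(process_task(task))  # → {'event_id': 1, 'address': '0xA', 'result': 15}
```

Because the heavy lifting is indexing already-recorded data, the per-task computation itself stays simple and cheap to redo on other analyzers.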

Data integrity

The generated identity-related data is signed by each analyzer to ensure data integrity and is further validated in the next layer. Each task is always executed by multiple analyzers, though not necessarily by all of them, to ensure availability.
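A minimal sketch of this signing-and-validation step is shown below. HMAC with a per-analyzer secret stands in for whatever asymmetric signature scheme the network actually uses, and the three-analyzer setup is an illustrative assumption about redundant execution; only the pattern (each analyzer signs its result, the next layer verifies each signature) reflects the text above.

```python
# Hypothetical sketch of analyzer result signing and validation.
# HMAC-SHA256 with per-analyzer secrets is a stdlib stand-in for the
# real (asymmetric) signature scheme; names and keys are made up.

import hashlib
import hmac
import json

def sign_result(analyzer_key: bytes, result: dict) -> str:
    """Sign a canonical JSON encoding of the analysis result."""
    payload = json.dumps(result, sort_keys=True).encode()
    return hmac.new(analyzer_key, payload, hashlib.sha256).hexdigest()

def verify_result(analyzer_key: bytes, result: dict, signature: str) -> bool:
    """Recompute the signature and compare in constant time."""
    return hmac.compare_digest(sign_result(analyzer_key, result), signature)

# Three analyzers independently execute the same task and sign their results...
keys = {name: name.encode() for name in ("alice", "bob", "carol")}
result = {"event_id": 1, "address": "0xA", "result": 15}
submissions = {n: (result, sign_result(k, result)) for n, k in keys.items()}

# ...and the next layer validates every signature before accepting the data.
valid = all(verify_result(keys[n], r, s) for n, (r, s) in submissions.items())
```

Signing a canonical (sorted-keys) encoding matters here: two analyzers producing the same result must produce byte-identical payloads for their submissions to be comparable downstream.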