Raven Protocol

www.ravenprotocol.com

Raven is creating a network of compute nodes that uses idle compute power for AI training, where speed is key. AI companies will be able to train models better and faster. We developed a completely new approach to distribution that brings a training run over 1M images down to a few hours. We address latency by chunking the data into very small pieces (bytes), preserving each chunk's identity, and distributing the chunks across the participating devices with a single call to action: gradient calculation. Other solutions require high-end compute power; our approach has no dependency on the system specs of each compute node in the network. We can therefore use idle compute power on ordinary desktops, laptops, and mobile devices, allowing anyone in the world to contribute to the Raven Protocol network. This brings costs down to a fraction of what traditional cloud services charge. Most importantly, it means Raven will create the first truly distributed and scalable solution for AI training.

Our consensus mechanism is something we call Proof-of-Calculation. It is the primary guideline for regulating and distributing incentives to the compute nodes in the network. The two prime deciders for incentive distribution are:

  • Speed: how fast a node can perform gradient calculations (for a neural network) and return them to the Gradient Collector.
  • Redundancy: only the three fastest redundant calculations qualify for the incentive, which ensures that the returned gradients are genuine and of the highest quality.

A sketch of how these pieces could fit together follows below.
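To make the mechanics concrete, here is a minimal, self-contained Python sketch of such a pipeline. Everything in it is an assumption for illustration: the names chunk_data, GradientCollector, and simulate_node, the 4 KB chunk size, and the 5-way redundancy are hypothetical and not drawn from Raven Protocol's actual implementation. Only the ideas come from the description above: byte-level chunks with a stable identity, gradient results returned to a collector, and an incentive for the three fastest matching results.

```python
import hashlib
import time

import numpy as np

# Hypothetical sketch, not Raven Protocol's API: chunks with a stable
# identity are fanned out redundantly to nodes for gradient calculation,
# and a collector rewards the three fastest matching results per chunk.

CHUNK_BYTES = 4096   # assumed chunk size, for illustration only
REDUNDANCY = 5       # assumed number of nodes given the same chunk
REWARDED = 3         # per the description: only the 3 fastest matching results qualify


def chunk_data(data: bytes, size: int = CHUNK_BYTES):
    """Split raw training data into small chunks, each with a stable identity."""
    for offset in range(0, len(data), size):
        piece = data[offset:offset + size]
        yield hashlib.sha256(piece).hexdigest(), piece  # (chunk_id, payload)


def simulate_node(node_id: str, piece: bytes):
    """Stand-in for a real node: 'compute' a gradient and time the work."""
    start = time.monotonic()
    # A real node would run a forward/backward pass; a byte-mean stands in here.
    gradient = np.frombuffer(piece, dtype=np.uint8).astype(np.float32).mean()
    return node_id, np.array([gradient]), time.monotonic() - start


class GradientCollector:
    """Gathers redundant gradient submissions and settles incentives."""

    def __init__(self):
        self.results = {}  # chunk_id -> list of (node_id, gradient, elapsed_seconds)

    def submit(self, chunk_id, node_id, gradient, elapsed):
        self.results.setdefault(chunk_id, []).append((node_id, gradient, elapsed))

    def settle(self, chunk_id):
        """Reward the three fastest submissions that agree with the majority."""
        submissions = self.results.get(chunk_id, [])
        if len(submissions) < REWARDED:
            return []  # not enough redundancy collected yet

        def fingerprint(gradient):
            return hashlib.sha256(np.asarray(gradient).tobytes()).hexdigest()

        # Genuineness check: only gradients matching the majority result count.
        counts = {}
        for _, gradient, _ in submissions:
            counts[fingerprint(gradient)] = counts.get(fingerprint(gradient), 0) + 1
        majority = max(counts, key=counts.get)
        honest = [s for s in submissions if fingerprint(s[1]) == majority]

        honest.sort(key=lambda s: s[2])  # fastest first
        return [node_id for node_id, _, _ in honest[:REWARDED]]


if __name__ == "__main__":
    collector = GradientCollector()
    data = bytes(range(256)) * 64  # toy stand-in for training data
    for chunk_id, piece in chunk_data(data):
        for n in range(REDUNDANCY):
            node_id, gradient, elapsed = simulate_node(f"node-{n}", piece)
            collector.submit(chunk_id, node_id, gradient, elapsed)
        print(chunk_id[:8], "rewarded:", collector.settle(chunk_id))
```

In this sketch the top-3 rule couples the two deciders: the majority check filters out bogus gradients, and sorting the agreeing results by elapsed time makes raw speed the tiebreaker for who earns the incentive.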



Country:
City (Headquarters): Hong Kong
Employees: 1-10
Founded: 2017


