
Near Protocol's Ambitious Plan: Building the World's Largest Open-Source AI Model

Nov 11

2 min read


Near Protocol has unveiled a project that could transform the landscape of artificial intelligence. At the Redacted conference in Bangkok, Thailand, Near announced plans to create the world's largest open-source AI model, with a staggering 1.4 trillion parameters. The initiative aims to surpass Meta's open-source Llama model, currently the leading open-source model at roughly 400 billion parameters; Near's 1.4-trillion-parameter model would be about 3.5 times larger, setting a new benchmark for open-source AI.


The project is not a solo endeavor; it relies heavily on crowdsourced research and development through Near's newly established Near AI Research hub. Thousands of contributors will collaborate on the development, and participants have already been invited to start training a preliminary model with 500 million parameters. As the project progresses, only the top contributors will move on to increasingly complex models, scaling up in size and sophistication across a planned series of seven models.

To ensure data privacy and incentivize continual updates, the project will use encrypted Trusted Execution Environments (TEEs). This privacy-preserving technology will help reward contributors while keeping user data secure, in line with Near's commitment to decentralization. Notably, funding for the expensive training and computational resources required, estimated at around $160 million, will be raised through token sales. Near Protocol co-founder Illia Polosukhin elaborated on the funding strategy, describing a business model in which tokenholders are compensated through inferences made by the AI model. This structure aims to create a self-sustaining loop, with revenue reinvested into further model development.


The scale of this project places Near among the few blockchain initiatives capable of such an ambitious undertaking. Co-founder Alex Skidanov, formerly of OpenAI, now leads Near AI and is determined to see the project succeed alongside Polosukhin, who co-authored the pioneering transformer research paper behind ChatGPT. Skidanov acknowledges the immense challenges ahead, however, particularly the need for "tens of thousands of GPUs in one place" for model training, which could be difficult to achieve with decentralized computing resources. Still, emerging research from DeepMind suggests that such a distributed approach is feasible, providing a promising path forward.

Decentralization remains a core pillar of this project. Polosukhin stressed the importance of decentralized AI in ensuring that no single entity holds exclusive control over AI advancements, which he views as critical to maintaining the philosophical relevance of Web3. He warned that centralization could lead to a world where "we effectively do whatever that company says," undermining both AI and economic decentralization. Fellow conference speaker Edward Snowden also highlighted the potential risks of centralized AI, likening it to a global surveillance state and calling for digital sovereignty through systems based on cryptographic principles.


In a separate development, Near Protocol announced another milestone: its mainnet is now fully compatible with MetaMask and all Ethereum wallets. This integration, facilitated by Aurora Labs, extends the Near mainnet's accessibility, enabling Ethereum users to benefit from Chain Signatures and Chain Abstraction apps without needing new wallets or assets. Developers can also incorporate Ethereum wallet access directly into their applications, allowing seamless use of MetaMask or any preferred Ethereum wallet.
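To give a sense of what that developer-side wallet access typically involves, the sketch below shows a generic browser flow for connecting an injected Ethereum wallet such as MetaMask using the standard EIP-1193 `eth_requestAccounts` request. It is a minimal illustration of the general pattern only: the `Eip1193Provider` interface and `connectEthereumWallet` helper are hypothetical names chosen for this example, and Near's and Aurora Labs' actual integration APIs are not shown here.

```typescript
// Minimal sketch: connecting an injected EIP-1193 wallet (e.g. MetaMask)
// from a browser dapp. This is a generic Ethereum wallet-connection flow,
// not Near's or Aurora's specific integration API.

// Assumed shape of the injected provider (hypothetical interface name).
interface Eip1193Provider {
  request(args: { method: string; params?: unknown[] }): Promise<unknown>;
}

async function connectEthereumWallet(): Promise<string> {
  // MetaMask and compatible wallets inject a provider at window.ethereum.
  const provider = (window as { ethereum?: Eip1193Provider }).ethereum;
  if (!provider) {
    throw new Error("No EIP-1193 wallet (e.g. MetaMask) detected");
  }

  // Prompt the user to connect; resolves with the approved account addresses.
  const accounts = (await provider.request({
    method: "eth_requestAccounts",
  })) as string[];

  return accounts[0];
}

// Usage: log the connected address, or report why the connection failed.
connectEthereumWallet()
  .then((address) => console.log("Connected Ethereum account:", address))
  .catch((err) => console.error("Wallet connection failed:", err));
```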

In Near's words, this compatibility signals a new chapter for Ethereum and Near Protocol users. By expanding its compatibility with Ethereum's ecosystem, Near Protocol is not only enhancing accessibility but is also positioning itself as a robust, privacy-focused AI and blockchain project.

