Macrocosmos: Bittensor's decentralized OpenAI?
July 18, 2025

Macrocosmos is a company that develops a complete pipeline for AI model creation and operates five subnets on Bittensor (TAO). In this analysis, we take a look at Macrocosmos and its subnets to understand how they work, what they are for, and where they stand in the market.
Introduction and Context on Bittensor
In this analysis, we will discuss a company building its solution on Bittensor. To begin, it is important to recall what this entails. Bittensor is a decentralized protocol designed to transform the use and exchange of AI resources worldwide.
Bittensor is based on a model of subnets—specialized networks focused on one or several missions (inference, fine-tuning, GPU computation, etc.) that operate much like startups.
The Bittensor ecosystem is structured around several key stakeholders: miners ensure the operation of the subnets and execute the tasks, while validators evaluate their performance.
At the heart of Bittensor lies TAO, a token that powers staking, rewards, and governance. Since the Dynamic TAO (dTAO) upgrade on February 13, 2025, users can delegate their TAO to the subnets they consider most deserving, directing rewards toward them.
Each subnet has its own token, which can be purchased in exchange for TAO. In other words, it has become possible to “invest” in the startups building Bittensor’s subnets.
→ Want to learn more about Bittensor (TAO)? See our full presentation.
Introducing Macrocosmos
What is Macrocosmos?
Macrocosmos is a company developing open source, decentralized, and contributive solutions in the field of Artificial Intelligence (AI). It is currently one of the most significant players in the Bittensor ecosystem.
The Macrocosmos team aims to demonstrate that it is possible to design, train, and deploy open-source AI models on the blockchain, capable of matching or even surpassing the closed standards of Web2, which are often opaque by nature.
This positioning is based on a core observation: today's leading AI models (OpenAI, Google, Anthropic) are the result of enormous resources but operate in closed silos where model ownership, data control, and final usage remain exclusively in the hands of the publisher.
Macrocosmos seeks to break with this paradigm, leveraging Bittensor’s infrastructure to build incentive systems that encourage engineers and researchers worldwide to contribute to their solutions, while ensuring continuous improvement through the Bittensor miner network.
This analysis will focus on Macrocosmos’s vision, the role and tools offered by each subnet, as well as a general opinion from our contributor on the company.
The Five Macrocosmos Subnets
Macrocosmos stands out through the depth and coherence of its offering: the team currently operates five distinct subnets on Bittensor, making it the protocol’s main contributor by number of active networks.
These subnets cover the entire value creation pipeline for AI, from data collection to concrete scientific application:
- Subnet 13 – “Data Universe”: dedicated to large-scale data collection, especially from social networks and other public sources, which is an essential building block for creating an AI model.
- Subnet 9 – “IOTA”: specialized in pre-training foundation models, i.e., general-purpose AI models built from large datasets.
- Subnet 37 – “Fine-Tuning”: enables adaptation and specialization of pre-trained models for specific tasks, according to particular needs.
- Subnet 1 – “Apex”: focused on inference, i.e., the actual use of AI models by applications and end-users.
- Subnet 25 – “Mainframe”: dedicated to solving complex scientific problems, notably through use cases in biomedical research (e.g., protein folding).
In summary, the first four subnets cover the entire pathway required to build high-performing AI: from data collection to training, specialization, and inference. The Mainframe subnet serves as a vertical use case, illustrating the concrete value of these AI components in domains like biomedical research.
Constellation: Toward a Cross-Platform Super-App
To make all these solutions accessible and interoperable, Macrocosmos is currently developing Constellation, a central application designed to bring together the interfaces and uses of its different subnets.
This super-app, still in beta, aims to offer a unified user experience, providing access to models, data collection, fine-tuning tools, and scientific use cases—all in one place. This facilitates onboarding for users and enhances value circulation between the protocol’s components.
Throughout this analysis, we will detail the existing or planned integrations for each subnet within Constellation.
A Structured and Experienced Team
Macrocosmos’s strength also lies in a team of nearly 35 full-time members, making the company one of the most robust organizations in the Bittensor ecosystem.
- Steffen Cruz (Co-founder): PhD in Physics (University of British Columbia), former Head of Machine Learning at Solid State AI, ex-CTO of the Opentensor Foundation, and creator of Bittensor’s very first subnet. This profile gives Macrocosmos rare mastery of the protocol’s technical challenges.
- Will Squires (Co-founder): Over 10 years of experience at Atkins (international engineering group), where he led digital strategy and launched an AI accelerator of 60+ people; advisor to the Opentensor Foundation, with a key role in Bittensor’s multi-subnet scaling (“Revolution” update).
The team mainly comprises engineers and researchers specialized in machine learning and AI, from reputable academic and industrial backgrounds. This technical density, combined with a strong grasp of Web3 dynamics, puts Macrocosmos in a strong position to execute its vision at scale.
Subnet 13: Data Universe
General Overview
The first step in any complete AI process is data collection. Indeed, models require enormous amounts of data to be trained effectively: the larger, more diverse, and more recent the data, the more likely the training is to result in high-performing, adaptive AI.
The problem is that access to specialized datasets is particularly complex, especially those sourced from social networks (X, Reddit, YouTube). It is also a costly and often restricted process: the official X (formerly Twitter) API, for instance, charges up to $54,000 per year for access capped at 1 million posts per month.
Subnet 13, named Data Universe, aims to address this issue by providing a decentralized infrastructure for scraping and aggregating data, with a special focus on social networks. Its ambition is to become the largest source of social data on Bittensor and beyond.
How It Works & Incentive Mechanisms
The operation relies on Bittensor’s typical incentive architecture (a toy scoring sketch follows the list):
- Miners: rewarded for collecting fresh and relevant data, mainly from X, Reddit, and more recently, YouTube (transcripts, descriptions, metrics).
- Validators: check the quality, originality, and usefulness of the collected data, then score and reward the most effective miners.
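To make this concrete, below is a minimal, purely illustrative sketch of how a validator might weigh one scraped item. The factors (freshness, duplication, source) come from the description above; the exact formula and weights are our own assumptions, not Macrocosmos’s actual scoring code.

```python
import math
import time

# Toy sketch of a data validator's scoring, NOT Macrocosmos's real code.
# Assumptions: exponential freshness decay, zero reward for duplicates,
# and per-source weights chosen arbitrarily for illustration.

def score_submission(scraped_at: float, is_duplicate: bool, source: str) -> float:
    """Return a reward score in [0, 1] for one scraped item."""
    age_hours = (time.time() - scraped_at) / 3600
    freshness = math.exp(-age_hours / 24)      # decays over roughly a day
    uniqueness = 0.0 if is_duplicate else 1.0  # duplicates earn nothing
    source_weight = {"x": 1.0, "reddit": 0.9, "youtube": 0.8}.get(source, 0.5)
    return freshness * uniqueness * source_weight

# Example: a 10-minute-old, unique X post scores close to 1.0.
print(score_submission(time.time() - 600, False, "x"))
```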
This competitive dynamic has enabled Data Universe to amass an impressive corpus: nearly 40 billion rows of data collected—about 100 times more than its main Bittensor competitor (Subnet 42 by Masa, at roughly 400 million data points). Today, collection is gradually expanding to other social networks and data sources.
Use Cases and Clients
The use cases for Subnet 13 are broad. First, it can be used to create custom datasets for AI training (at much lower cost than centralized solutions).
Second, these social network data can be used for trend or sentiment analysis across various topics—ranging from anticipating presidential election results to market monitoring or tracking the mindshare of tokens or crypto narratives.
Finally, it can also be used to improve predictive models, whether for price analysis, sports results, weather forecasts, etc. In summary, data from Macrocosmos’s Subnet 13 can be used to train virtually any AI model.
Macrocosmos has already attracted other Bittensor subnets: Score (Subnet 44, sports prediction), Gaia (Subnet 57, weather), and Squad (AI agent platform on Chutes, Subnet 64), which rely on its data to strengthen their models.
The Data Universe API is also used to integrate data into third-party solutions, with a key promise: massive scraping at much lower cost than centralized alternatives, while federating different channels (X, Reddit, YouTube, etc.).
Integration in Constellation
Macrocosmos leverages Data Universe via Constellation, the super-app grouping access to all company modules. Two products are currently offered:
- Gravity: a tool for customized data collection. The user can specify, via an interface (or using Mission Commander, an AI agent connected to Apex), the topics, hashtags, or keywords targeted, and then trigger the creation of their dataset in one click. Miners take over to scrape, build, and prepare the dataset, which the user can then download or visualize.
- Nebula: an advanced visualization platform allowing users to explore the collected data, semantically organize it, and interact in natural language with an AI agent guiding exploration (trend detection, insight extraction, etc.).
These products, still in beta, already offer an API accessible to developers; a purely hypothetical call is sketched below. Macrocosmos also plans to list them on SaaS marketplaces to broaden adoption.
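As an illustration of what creating a Gravity collection task over HTTP could look like, here is a hypothetical sketch; the endpoint, payload fields, and authentication header are placeholders, not the real Gravity schema (refer to Macrocosmos’s developer documentation for that).

```python
import requests

# Hypothetical sketch only: the URL, payload schema, and response fields
# below are invented for illustration and are NOT the actual Gravity API.

API_URL = "https://example.invalid/gravity/tasks"  # placeholder endpoint

task_spec = {
    "sources": ["x", "reddit"],          # channels to scrape
    "keywords": ["bittensor", "dTAO"],   # topics/keywords to target
    "hashtags": ["#TAO"],
    "max_rows": 100_000,                 # desired dataset size
}

resp = requests.post(API_URL, json=task_spec,
                     headers={"Authorization": "Bearer <YOUR_KEY>"})
resp.raise_for_status()
print("task id:", resp.json().get("task_id"))  # poll this id, then download
```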
Subnet 9: IOTA
General Overview & Issues
After massive data collection, the second key step in the AI creation cycle is pre-training: the foundational training phase that produces so-called “foundation models.”
These general-purpose models, such as GPT-4 (OpenAI) or Gemini (Google), require colossal hardware resources and volumes of data, historically reserved for a very limited number of actors with major financial and technical means.
This concentration of compute power creates an oligopoly around cutting-edge AI models: training a single model can mobilize several thousand GPUs for weeks or even months. As a result, access to these technologies remains largely closed, and the open-source dynamic is hindered by the material entry barrier.
This is precisely the problem addressed by Macrocosmos’s Subnet 9, called IOTA (Incentivized Orchestrated Training Architecture). The idea is to enable large-scale, truly decentralized, collaborative training of foundation models, open to any contributor with the required resources—leveraging, among others, data collected by Subnet 13.
IOTA Architecture & Innovations
Historically, the first version of Macrocosmos’s training subnet operated on a “winner-takes-all” model: only one miner was rewarded—the one who managed to train the best complete model.
This design, though simple, faced obvious limits in terms of scalability and collaboration: it favored solitary competition and made it almost impossible to train very large models in a distributed way.
It is with this in mind that Macrocosmos completely redesigned its architecture in May 2025 with the launch of IOTA. The founding principle of IOTA is orchestrated and collaborative training.
Concretely, each miner trains only a fraction of the global model, and “swarm learning” coordination allows the work of hundreds, even thousands, of contributors worldwide to be pooled.
The final model construction is thus based on the dynamic aggregation of these fragments, through a series of technological innovations (a toy partitioning sketch follows the list):
- Intelligent model partitioning: the model structure is divided into layers or segments distributed to different miners, each participant training a specific part in parallel.
- Compression & synchronization: to ensure network efficiency and limit communication costs, advanced compression techniques are used to synchronize gradients and parameters throughout the iterations.
- Advanced incentive mechanisms: each contribution is evaluated by validators; miners are rewarded based on the quality of their training (performance improvements on common benchmarks), encouraging the emergence of local “leaders” without sacrificing global collaboration.
- Fusion & robustness: the final assembly method incorporates consistency and robustness checks, resulting in a unified model as performant—or even more so—than one trained centrally.
Initial Results and Outlook
This decentralized training model enabled Macrocosmos to launch the pre-training of a 15-billion-parameter model, surpassing the main Bittensor competitor (Templar).
The short-term goal: to demonstrate the feasibility of 70-billion-parameter models; medium term: to target unprecedented sizes (100, 500, even 1,000 billion parameters)—a milestone that, if achieved, would mark a turning point in open-source AI history.
To ensure transparency, Macrocosmos offers a dedicated dashboard:
- Cylinder view: each point represents a miner, grouped in layers corresponding to a specific model partition.
- Geographical monitoring: global mapping of miner distribution.
- Competition monitoring: score, activity, and evolution of each contributor in real time.
Roadmap & Economic Model
The IOTA roadmap is divided into three major phases:
- Phase 1: finalize architecture stabilization, fix bugs, and adjust orchestration mechanics.
- Phase 2: optimize compression and reduce hardware requirements to open access to more participants (goal: make training accessible from a regular personal computer).
- Phase 3: scale up to train even larger models and enable the first commercial or shared use cases.
Economically, two options are considered:
- Open model: any user can use the model on a pay-per-use basis (via alpha token), with a specific license for large-scale commercial uses, in the spirit of Meta’s licenses.
- Mutualized model: possibility for several organizations to co-finance and co-train a model, each having shared access and rights proportional to their contribution.
Subnet 37: Fine-Tuning
General Overview
In the production cycle of a high-performing AI model, fine-tuning is the crucial step that follows what we have just described. Concretely, it allows a generalist foundation model to be specialized for specific tasks or domains.
This personalization is now essential: the majority of modern AI applications (chatbots, assistants, search tools, etc.) rely on models first pre-trained and then refined to maximize their relevance for a particular use case.
As its name suggests, Macrocosmos’s Subnet 37, “Fine-Tuning,” was designed to address this need. It provides a decentralized infrastructure for model adjustment based on specific datasets, in an open and contributive logic.
Operation and Organization
The subnet runs as an open competition: Macrocosmos outsources model fine-tuning through an incentive system in which developers from around the world compete to produce the best models and be rewarded for it (a toy evaluation sketch follows the list):
- Miners: submit fine-tuned versions of models, based on datasets tailored to specific use cases (dialogue, code, mathematics, etc.). The goal is to provide measurable improvements over the base model, on defined benchmarks.
- Validators: evaluate the quality of the models produced, based on performance, robustness, and relevance to the targeted task. Rewards are distributed to miners achieving the best results.
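In miniature, the winner-takes-all logic looks like the sketch below: each submitted fine-tune is scored against the base model on a benchmark, and the full reward goes to the top submission. The scores are placeholders; real validators would run actual benchmark suites.

```python
# Toy sketch of winner-takes-all evaluation. Benchmark scores are made up
# for illustration; this is not Subnet 37's actual validator code.

submissions = {
    "miner_a": 0.71,  # benchmark accuracy of miner_a's fine-tuned model
    "miner_b": 0.74,
    "miner_c": 0.69,
}
base_model_score = 0.65  # the pre-trained model's score on the same task

# Only models that improve on the base model are eligible; the single
# best submission takes the entire reward.
eligible = {m: s for m, s in submissions.items() if s > base_model_score}
winner = max(eligible, key=eligible.get)
rewards = {m: (1.0 if m == winner else 0.0) for m in submissions}
print(rewards)  # {'miner_a': 0.0, 'miner_b': 1.0, 'miner_c': 0.0}
```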
Initially, Macrocosmos envisioned the Fine-Tuning subnet as a pool of ready-to-use models, whose quality and diversity would improve through ongoing competitions.
Current Limitations and Outlook
However, it must be noted that Subnet 37 is currently experiencing slowed momentum. Firstly, Macrocosmos has somewhat neglected the subnet in recent months, focusing on the redesign of the training system (IOTA). Secondly, the incentive mechanism remains based on a “winner-takes-all” design, which is less effective than the collaborative systems recently adopted (notably on IOTA, Subnet 9).
As a result, the market cap of Subnet 37’s token is significantly lower than that of Macrocosmos’s other subnets.
While this situation may seem surprising, it is explained by the need for Macrocosmos to concentrate resources on the technical challenges of collaborative pre-training, which is a major issue for the project.
Nevertheless, Subnet 37’s potential remains real: an evolution of the design, inspired by IOTA’s advances, could reignite momentum—especially since the need for specialized models remains largely unmet in the ecosystem.
Use Cases and Ecosystem Integration
The fine-tuned models produced on Subnet 37 are used:
- To power verticalized applications (conversational agents, copilots, analytics tools, etc.)
- And potentially, in the future, to be deployed via Subnet 1 – Apex (inference), ensuring technical continuity between training, specialization, and actual use.
The catalog of fine-tuned models remains accessible via Macrocosmos’s dashboard, providing an overview of submissions, performance, and availability for the ecosystem.
Subnet 1: Apex
General Overview
The final stage in the AI value chain, inference is the phase where a model is actually used. In other words, this is when the previously trained and possibly fine-tuned model is made available to answer concrete requests, whether for text generation, search, analysis, or decision-making.
When you use ChatGPT, it performs inference to respond to you.
Macrocosmos’s Subnet 1, named Apex, aims to provide a decentralized, scalable inference solution that can compete with Web2 standards. Beyond mere hosting of open-source models, Apex aims to create a genuine ecosystem of AI agents capable of delivering smarter, faster, and more tailored responses to real user needs.
Operation and Technical Innovations
Apex’s approach stands out through several innovations, aiming to surpass basic inference. The first and probably most important is “test-time compute.” In concrete terms, this means that models are not limited to classic inference (instant response from a static model), but can “think” longer on each request.
In practice, the system can mobilize varying amounts of compute and external tools to generate the most relevant answer. This enables advanced features such as real-time web searches, database access, and multi-step reasoning; the conceptual sketch below illustrates the idea.
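Conceptually, test-time compute can be pictured as a budgeted loop rather than a single forward pass. The stub functions below are stand-ins, not Apex’s actual pipeline.

```python
# Conceptual sketch of "test-time compute": spend a variable budget on
# extra reasoning steps and tool calls before answering. All functions
# here are stand-in stubs; Apex's real pipeline is not shown.

def generate(prompt: str, context: str = "") -> str:
    """Stub for a model call; a real system would query an LLM."""
    return f"answer({prompt[:24]}..., with_context={bool(context)})"

def web_search(query: str) -> str:
    """Stub for a web-search tool call."""
    return f"search results for {query!r}"

def answer_with_budget(query: str, budget_steps: int) -> str:
    draft = generate(query)                     # cheap first pass
    for _ in range(budget_steps):               # spend extra compute
        critique = generate(f"critique: {draft}")
        draft = generate(query, context=critique + web_search(query))
    return draft

print(answer_with_budget("What moved TAO this week?", budget_steps=2))
```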
To improve this system, Apex introduces an incentive architecture adapted to Bittensor and inspired by Generative Adversarial Networks (GANs). Two types of miners coexist:
- Generator miners: produce answers to a given query, aiming to come as close as possible to the “ideal” answer (as generated by validators).
- Discriminator miners: evaluate whether an answer comes from a miner or a validator, thus rewarding the continuous improvement of generated responses.
The system is designed to incentivize responses that are both fast (faster than the validators) and of a quality indistinguishable from the sector’s strictest standards.
This competition-emulation mechanism enables the subnet to progress incrementally, with each generation of miners seeking to approach validator-level performance while optimizing response speed; the toy loop below illustrates the scoring.
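A toy version of that scoring loop, with illustrative rewards and a stub discriminator, might look like this:

```python
import random

# Toy sketch of the GAN-inspired incentive: a discriminator miner guesses
# whether each answer came from a validator or a generator miner.
# Rewards and data are illustrative assumptions, not Apex's real logic.

answers = [
    {"text": "...", "from_validator": True},
    {"text": "...", "from_validator": False},  # a generator miner's answer
]

def discriminator_guess(answer: dict) -> bool:
    """Stub discriminator: a real one would inspect the answer itself."""
    return random.random() < 0.5  # guesses "validator" half the time

gen_reward = disc_reward = 0.0
for ans in answers:
    guessed_validator = discriminator_guess(ans)
    if guessed_validator == ans["from_validator"]:
        disc_reward += 1.0  # discriminator paid for spotting the origin
    if not ans["from_validator"] and guessed_validator:
        gen_reward += 1.0   # generator paid for fooling the discriminator

print(f"generator reward: {gen_reward}, discriminator reward: {disc_reward}")
```

The intuition mirrors GAN training: as discriminators get better at spotting miner-generated answers, generators are pushed to produce answers ever closer to validator quality.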
Use Cases and Concrete Applications
Apex is intended to become the preferred inference engine for the Macrocosmos ecosystem and, more broadly, for third-party applications seeking to integrate cutting-edge AI into their products.
- Inference API: Macrocosmos provides an API enabling developers to easily integrate Apex’s intelligence into their own solutions (chatbots, copilots, conversational agents, augmented search tools, etc.).
- Advanced scenarios: The test-time compute and reasoning features pave the way for use cases requiring complex reasoning, documentary research, or enriched contextual interactions—positioning Apex as a credible open-source and decentralized alternative to solutions like ChatGPT.
- Interoperability with other subnets: In practice, Apex already serves as the interface for other Macrocosmos products, such as Mission Commander (AI assistant for data collection on Gravity) or for consulting fine-tuned models from Subnet 37.
Integration in Constellation
Apex has its own dedicated application within Constellation:
- Several inference modes are available, from the most basic (direct inference from an open-source model) to enhanced variants (selection of the best answer among several models, “web-enhanced” inference integrating search results, or “Reasoning” mode leveraging the GAN system).
- The user interface allows experimentation with Apex’s different approaches, with full transparency on the type of intelligence mobilized.
- The Apex API is accessible to developers for deep integration into business workflows or custom tools.
At this stage, Apex’s positioning is still under development, with teams actively working on performance optimization, broadening the supported model catalog, and progressively monetizing the service.
It is important to remember that Macrocosmos’s goal is not to compete head-on, overnight, with giants like OpenAI or Anthropic, but rather to explore the possibilities offered by an open, scalable inference system based on miner collaboration.
Subnet 25: Mainframe
General Overview & Positioning
As seen with the four previous subnets, Macrocosmos offers a full stack for decentralized AI. However, what better way to showcase your tools than by using them for advanced scientific research?
With Mainframe (Subnet 25), Macrocosmos extends its value proposition beyond the traditional AI cycle to tackle a high-potential market segment: scientific research and distributed computing applied to complex problems—specifically here, protein folding.
Protein folding is the process by which a protein acquires its three-dimensional structure, which determines how it behaves in the body. Predicting it can help researchers better understand diseases and develop drugs.
Originally dedicated to protein folding, the subnet quickly evolved to offer a more generalist infrastructure, capable of handling an ever-increasing diversity of scientific workloads, notably in computational biology, drug discovery, chemistry, or molecular modeling.
Operation and Incentive Mechanisms
Mainframe’s economic and incentive model transposes Bittensor’s principles to the scientific sector (a hypothetical job-submission sketch follows the list):
- Miners: allocate their computing power to execute simulation or analysis tasks, submitted by users via the Mainframe API (e.g., protein folding simulation, molecular docking, etc.).
- Validators: evaluate the relevance, accuracy, and efficiency of results generated by miners, determining the reward distribution based on the quality and speed of task resolution.
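For illustration, submitting a job to such an API could look like the sketch below; the base URL, fields, and status values are hypothetical placeholders rather than Mainframe’s actual interface.

```python
import time
import requests

# Hypothetical sketch of submitting a folding job and polling for results.
# The URL, request fields, and statuses are invented for illustration;
# consult Macrocosmos's documentation for the real Mainframe API.

API = "https://example.invalid/mainframe"  # placeholder base URL

job = requests.post(f"{API}/jobs", json={
    "task": "protein_folding",
    "pdb_id": "1UBQ",                 # ubiquitin, a common small test protein
    "config": {"temperature_K": 300},
}).json()

while True:                            # poll until miners return a result
    status = requests.get(f"{API}/jobs/{job['id']}").json()
    if status["state"] in ("done", "failed"):
        break
    time.sleep(30)

print(status["state"], status.get("result_url"))
```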
Since its launch, Subnet 25 has enabled over 180,000 protein folding simulations: still fewer than industry giants (Google DeepMind’s AlphaFold), but already more than pioneering distributed-computing initiatives like Folding@home—demonstrating the robustness and competitiveness of the Macrocosmos model.
Diversification of Use Cases: Protein Folding, Docking, and Beyond
- Protein folding: Predicting the 3D structure of proteins remains a central challenge at the crossroads of biomedical research, drug discovery, and biotechnology. Mainframe offers an open solution for large-scale simulations, via an API accessible to the scientific community.
- Molecular docking: In response to requests from partner institutes (e.g., Max Planck, Sussex), Macrocosmos added the ability to simulate interactions between molecules (such as a drug and its protein target), paving the way for concrete applications in pharmaceutical research and new molecule design.
- Generalist scientific computing: Mainframe intends to progressively expand its scope to all types of scientific tasks requiring distributed computation: molecular modeling, quantum chemistry, physical simulation, etc. This diversification is also reflected in the name change from “Protein Folding” to “Mainframe.”
Macrocosmos has already established partnerships with renowned research institutes (Max Planck, Sussex), and counts among its first clients Rowan Scientific, a cloud platform specializing in molecular computation and AI for drug discovery. Revenues from these collaborations are reinvested in the subnet, consolidating its economic model and legitimacy among researchers.
Integration in Constellation and Scientific UX
Mainframe is accessible via Constellation, with a dedicated application:
- Users (researchers, data scientists, etc.) can submit a protein to fold, select the configuration of interest, launch the simulation, then analyze results and explore docking possibilities.
- The interface, although technical and still improving, allows visualization of miner competition and access to advanced scientific simulation features without the need for coding skills.
Macrocosmos also provides a public API, enabling Mainframe integration into existing R&D workflows, and plans to broaden its listing on SaaS marketplaces in the sector.
How to Invest in Macrocosmos Subnet Tokens?
If you want to invest in Bittensor's subnets, the best option is to go through Mentat Minds. This is a referral link from Victor, so feel free to use it to support him.
Market Overview
Each of Macrocosmos’s five subnets has its own native token (also called an “alpha token”), reflecting the value and traction of its network. These tokens can be acquired in exchange for TAO via major Bittensor ecosystem platforms (Taostats, official dApp interface, etc.), or indirectly through certain subnet ETF strategies.
At the time of writing, here are the market metrics:
| Name | Market Cap ($) | FDV ($) | Perf. (30D) |
|---|---|---|---|
| Data Universe (13) | $15.2M | $177M | -23% |
| IOTA (9) | $24.6M | $288M | -17% |
| Fine-Tuning (37) | $4.6M | $54M | +24% |
| Apex (1) | $15.4M | $190M | -18% |
| Mainframe (25) | $5.2M | $60M | -36% |
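One way to read these figures: since FDV is price times maximum supply while market cap is price times circulating supply, their ratio approximates each alpha token’s circulating share, as the quick calculation below shows.

```python
# Sanity check on the table above: market cap / FDV approximates the
# circulating share of each alpha token's supply. Figures in $M, as quoted
# at the time of writing.

metrics = {
    "Data Universe (13)": (15.2, 177),
    "IOTA (9)":           (24.6, 288),
    "Fine-Tuning (37)":   (4.6, 54),
    "Apex (1)":           (15.4, 190),
    "Mainframe (25)":     (5.2, 60),
}

for name, (mc, fdv) in metrics.items():
    print(f"{name}: ~{mc / fdv:.1%} of supply circulating")
# Each subnet sits near 8-9%, i.e. most of the supply is not yet liquid.
```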
The verdict is clear: the majority of Macrocosmos tokens, like those of other Bittensor subnets, are currently trading near annual lows. This situation is explained by an unfavorable environment for altcoins and low liquidity in secondary markets, combined with a lack of strong AI narratives since the beginning of the year.
This broad pullback phase may offer attractive entry points for long-term investors convinced by the potential of Bittensor and its leading subnets, including Macrocosmos. A resurgence of the AI narrative or major technical catalysts could trigger a significant rebound—provided the overall context becomes favorable for altcoins again.
Investment Strategies
Several approaches are possible to gain exposure to Macrocosmos subnets:
- Subnet picking:
You can buy one or several specific tokens, selecting the most promising subnets according to your convictions (technical potential, usage volume, team, etc.). This approach requires active monitoring of performance, updates, and the roadmap for each subnet.
- Subnet ETF strategies:
Certain platforms (e.g., Mentat Minds) offer turnkey investment products, allowing exposure to a weighted basket of the five Macrocosmos tokens. The allocation can be automatically adjusted based on each subnet’s performance, with yield optimization (APY) managed by the platform.
- Root staking (Optimized Root, Protected Alpha):
For more cautious investors, it is possible to stick with more global or protected exposure strategies:
  - Optimized Root allows you to maximize TAO yield without taking risks on the subnets.
  - Protected Alpha invests only the dividends generated by root staking into the top 15 subnets, limiting downside exposure on the initial capital.
It is important to remember that investing in Bittensor subnets is particularly risky. The shallow market depth can lead to rapid and unpredictable price movements, and a new bearish phase for altcoins could drive prices down further.
Moreover, the Bittensor subnet market is highly dependent on Bittensor’s overall success and on whether an AI narrative gains traction again in 2025. Note as well that the difficulty in investing in these tokens remains a barrier to market growth.
Finally, it is essential to remember that any investment decision is the sole responsibility of the investor. We recommend thorough analysis and regular monitoring of fundamentals before taking any significant position.