July 18, 2025

Macrocosmos is a company that develops a complete pipeline for AI model creation and operates five subnets on Bittensor (TAO). In this analysis, we take a look at Macrocosmos and its subnets to understand how they work, their purpose and their status on the market.
Since Macrocosmos builds its entire offering on Bittensor, it is worth recalling what that entails. Bittensor is a decentralized protocol designed to transform how AI resources are used and exchanged worldwide.
Bittensor is based on a model of subnets: specialized networks focused on one or several missions (inference, fine-tuning, GPU computation, etc.) that operate much like startups.
The Bittensor ecosystem is structured around several key stakeholders: miners ensure the operation of the subnets and execute the tasks, while validators evaluate their performance.
At the heart of Bittensor lies TAO, a token that powers staking, rewards, and governance. Since the Dynamic TAO (dTAO) upgrade on February 13, 2025, users can delegate their TAO to vote for the subnets they believe deserve the most rewards.
Each subnet has its own token, which can be purchased in exchange for TAO. In other words, it has become possible to “invest” in the startups building Bittensor’s subnets.
Macrocosmos is a company developing open source, decentralized, and contributive solutions in the field of Artificial Intelligence (AI). It is currently one of the most significant players in the Bittensor ecosystem.
The Macrocosmos team aims to demonstrate that it is possible to design, train, and deploy open-source AI models on the blockchain, capable of matching or even surpassing the closed standards of Web2, which are often opaque by nature.
This positioning is based on a core observation: today's leading AI models (OpenAI, Google, Anthropic) are the result of enormous resources but operate in closed silos where model ownership, data control, and final usage remain exclusively in the hands of the publisher.
Macrocosmos seeks to break with this paradigm, leveraging Bittensor’s infrastructure to build incentive systems that encourage engineers and researchers worldwide to contribute to their solutions, while ensuring continuous improvement through the Bittensor miner network.
This analysis will focus on Macrocosmos’s vision, the role and tools offered by each subnet, as well as a general opinion from our contributor on the company.
Macrocosmos stands out through the depth and coherence of its offering: the team currently operates five distinct subnets on Bittensor, making it the protocol’s main contributor by number of active networks.
These subnets cover the entire value creation pipeline for AI, from data collection to concrete scientific application:
In summary, the first four subnets cover the entire pathway required to build high-performing AI: from data collection to training, specialization, and inference. The Mainframe subnet serves as a vertical use case, illustrating the concrete value of these AI components in domains like biomedical research.
To make all these solutions accessible and interoperable, Macrocosmos is currently developing Constellation, a central application designed to bring together the interfaces and uses of its different subnets.
This super-app, still in beta, aims to offer a unified user experience, providing access to models, data collection, fine-tuning tools, and scientific use cases, all in one place. This facilitates onboarding for users and enhances value circulation between the protocol’s components.
Throughout this analysis, we will detail the existing or planned integrations for each subnet within Constellation.
Macrocosmos’s strength also lies in a team of nearly 35 full-time members, making the company one of the most robust organizations in the Bittensor ecosystem.
The team mainly comprises engineers and researchers specialized in machine learning and AI, from reputable academic and industrial backgrounds. This technical density, combined with a strong grasp of Web3 dynamics, puts Macrocosmos in a strong position to execute its vision at scale.
The first step in any complete AI process is data collection. Indeed, models require enormous amounts of data to be trained effectively: the larger, more diverse, and more recent the data, the more likely the training is to result in high-performing, adaptive AI.
The problem is that access to specialized datasets is particularly complex, especially those sourced from social networks (X, Reddit, YouTube). Moreover, it is a costly and often restricted process: for instance, the official X (ex-Twitter) API charges up to $54,000 per year for limited access to 1 million posts per month.
Subnet 13, named Data Universe, aims to address this issue by providing a decentralized infrastructure for scraping and aggregating data, with a special focus on social networks. Its ambition is to become the largest source of social data on Bittensor and beyond.
The operation relies on Bittensor’s typical incentive architecture:
This competitive dynamic has enabled Data Universe to amass an impressive stock: nearly 40 billion rows of data collected, about 100 times more than its main Bittensor competitor (Subnet 42 by Masa, with roughly 400 million data points). Today, the collection is gradually expanding to other social networks and data sources.
The use cases for Subnet 13 are broad. First, it can be used to create custom datasets for AI training (at much lower cost than centralized solutions).
Second, these social network data can be used for trend or sentiment analysis across various topics, ranging from anticipating presidential election results to market monitoring or tracking the mindshare of tokens and crypto narratives.
Finally, it can also be used to improve predictive models, whether for price analysis, sports results, weather forecasts, etc. In summary, the data from Macrocosmos’s Subnet 13 can be used to train virtually any AI model.
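As a toy illustration of the sentiment-analysis use case, the sketch below tallies crude positive and negative keyword hits over a batch of scraped posts. A real pipeline would use a trained classifier; the keyword lists and posts here are invented for illustration only.

```python
# Toy sentiment tally over scraped social posts (illustrative only).
POS = {"bullish", "great", "up"}
NEG = {"bearish", "down", "scam"}

def sentiment_score(posts: list[str]) -> float:
    """Return net sentiment in [-1, 1] over a batch of posts."""
    pos = sum(any(w in p.lower().split() for w in POS) for p in posts)
    neg = sum(any(w in p.lower().split() for w in NEG) for p in posts)
    total = pos + neg
    return 0.0 if total == 0 else (pos - neg) / total

posts = ["TAO looking bullish", "bearish on alts", "great subnet launch"]
print(sentiment_score(posts))  # 2 positive, 1 negative -> 1/3
```

In practice the scoring function is where the real modeling effort goes; the value of a subnet like Data Universe is in supplying the `posts` input at scale.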
Macrocosmos has already attracted other Bittensor subnets: Score (Subnet 44, sports prediction), Gaia (Subnet 57, weather), and Squad (AI agent platform on Chutes, Subnet 64), which rely on its data to strengthen their models.
The Data Universe API is also used to integrate data into third-party solutions, with a key promise: massive scraping at much lower cost than centralized alternatives, while federating different channels (X, Reddit, YouTube, etc.).
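To give a feel for what such an integration might look like, here is a minimal sketch of building a query against a scraped-data endpoint. The URL, parameter names, and response shape are hypothetical assumptions for illustration, not the actual Data Universe API.

```python
# Hypothetical query builder for a scraped-data API.
# Endpoint and parameter names are assumptions, not the real API.
import urllib.parse

def build_query(source: str, keywords: list[str], since: str, limit: int = 100) -> str:
    """Build a URL for a hypothetical social-data endpoint."""
    params = {
        "source": source,               # e.g. "x", "reddit", "youtube"
        "keywords": ",".join(keywords), # comma-separated search terms
        "since": since,                 # ISO date lower bound
        "limit": str(limit),
    }
    return "https://api.example.com/v1/data?" + urllib.parse.urlencode(params)

url = build_query("x", ["bittensor", "tao"], "2025-01-01")
print(url)
```

The point of the promise described above is that the same query shape can federate several sources (X, Reddit, YouTube) behind one endpoint, at a fraction of the cost of each platform’s official API.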
Macrocosmos leverages Data Universe via Constellation, the super-app grouping access to all company modules. Two products are currently offered:
These products, still in beta, already offer an API accessible to developers. Macrocosmos also plans to list them on SaaS marketplaces to broaden adoption.
After massive data collection, the second key step in the AI creation cycle is pre-training. This is the foundation for training AI models and is what enables the creation of so-called “foundation models.”
These general-purpose models, such as GPT-4 (OpenAI) or Gemini (Google), require colossal hardware resources and volumes of data, historically reserved for a very limited number of actors with major financial and technical means.
This concentration of compute power creates an oligopoly around cutting-edge AI models: training a single model can mobilize several thousand GPUs for weeks or even months. As a result, access to these technologies remains largely closed, and the open-source dynamic is hindered by the material entry barrier.
This is precisely the problem addressed by Macrocosmos’s Subnet 9, called IOTA (Incentivized Orchestrated Training Architecture). The idea is to enable large-scale, truly decentralized, collaborative training of foundation models, open to any contributor with the required resources, leveraging, among other sources, data collected by Subnet 13.
Historically, the first version of Macrocosmos’s training subnet operated on a “winner-takes-all” model: only one miner was rewarded-the one who managed to train the best complete model.
This design, though simple, faced obvious limits in terms of scalability and collaboration: it favored solitary competition and made it almost impossible to train very large models in a distributed way.
It is with this in mind that Macrocosmos completely redesigned its architecture in May 2025 with the launch of IOTA. The founding principle of IOTA is orchestrated and collaborative training.
Concretely, each miner trains only a fraction of the global model, and “swarm learning” coordination allows the work of hundreds, even thousands, of contributors worldwide to be pooled.
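The fraction-per-miner idea can be sketched as a simple layer partition: the model’s layers are split into contiguous shards, one per miner, and a forward pass is pipelined through them. The assignment scheme below is a simplified assumption for illustration, not Macrocosmos’s actual orchestration protocol.

```python
# Minimal sketch of sharding a model's layers across miners.
# The even-split assignment is an illustrative assumption.

def partition_layers(num_layers: int, num_miners: int) -> list[list[int]]:
    """Assign contiguous layer indices to each miner, as evenly as possible."""
    base, extra = divmod(num_layers, num_miners)
    shards, start = [], 0
    for m in range(num_miners):
        size = base + (1 if m < extra else 0)
        shards.append(list(range(start, start + size)))
        start += size
    return shards

# 48 layers split across 5 miners: each miner trains only its shard.
shards = partition_layers(48, 5)
print([len(s) for s in shards])  # [10, 10, 10, 9, 9]
```

Each miner then only needs the memory and compute for its own shard, which is what makes very large models reachable for contributors who could never train the full model alone.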
The final model construction is thus based on the dynamic aggregation of these fragments, through a series of technological innovations:
This decentralized training model enabled Macrocosmos to launch the pre-training of a 15-billion-parameter model, surpassing the main Bittensor competitor (Templar).
The short-term goal is to demonstrate the feasibility of 70-billion-parameter models; in the medium term, the team targets unprecedented sizes (100, 500, even 1,000 billion parameters), a milestone that, if achieved, would mark a turning point in the history of open-source AI.
To ensure transparency, Macrocosmos offers a dedicated dashboard:
The IOTA roadmap is divided into three major phases:
Economically, two options are considered:
In the production cycle of a high-performing AI model, fine-tuning is the crucial step that follows what we have just described. Concretely, it allows a generalist foundation model to be specialized for specific tasks or domains.
This personalization is now essential: the majority of modern AI applications (chatbots, assistants, search tools, etc.) rely on models first pre-trained and then refined to maximize their relevance for a particular use case.
As its name suggests, Macrocosmos’s Subnet 37, “Fine-Tuning,” was designed to address this need. It provides a decentralized infrastructure for model adjustment based on specific datasets, in an open and contributive logic.
The subnet operates on open competition, where Macrocosmos has managed to outsource model fine-tuning through an incentive system, with developers from around the world competing to produce the best models and get rewarded for it:
Initially, Macrocosmos envisioned the Fine-Tuning subnet as a pool of ready-to-use models, whose quality and diversity would improve through ongoing competitions.
However, Subnet 37 has recently lost momentum. First, Macrocosmos has somewhat neglected the subnet in recent months, focusing instead on the redesign of its training system (IOTA). Second, the incentive mechanism still relies on a “winner-takes-all” design, which is less effective than the collaborative systems recently adopted (notably on IOTA, Subnet 9).
As a result, the market cap of Subnet 37’s token is significantly lower than that of Macrocosmos’s other subnets.
While this situation may seem surprising, it is explained by the need for Macrocosmos to concentrate resources on the technical challenges of collaborative pre-training, which is a major issue for the project.
Nevertheless, Subnet 37’s potential remains real: an evolution of the design, inspired by IOTA’s advances, could reignite momentum, especially since the need for specialized models remains largely unmet in the ecosystem.
The fine-tuned models produced on Subnet 37 are used:
The catalog of fine-tuned models remains accessible via Macrocosmos’s dashboard, providing an overview of submissions, performance, and availability for the ecosystem.
The final stage in the AI value chain, inference is the phase where a model is actually used. In other words, this is when the previously trained and possibly fine-tuned model is made available to answer concrete requests, whether for text generation, search, analysis, or decision-making.
When you use ChatGPT, it performs inference to respond to you.
Macrocosmos’s Subnet 1, named Apex, aims to provide a decentralized, scalable inference solution that can compete with Web2 standards. Beyond mere hosting of open-source models, Apex aims to create a genuine ecosystem of AI agents capable of delivering smarter, faster, and more tailored responses to real user needs.
Apex’s approach stands out through several innovations, aiming to surpass basic inference. The first and probably most important is “test-time compute.” In concrete terms, this means that models are not limited to classic inference (instant response from a static model), but can “think” longer on each request.
In practice, the system can mobilize varying amounts of compute and external tools to generate the most relevant answer. This allows advanced features such as real-time web searches, database access, multi-step reasoning, etc.
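The “think longer” idea can be sketched as a loop that spends a variable number of refinement steps per request, stopping early once an answer looks good enough or the compute budget runs out. The confidence function below is a stand-in simulation, not Apex’s actual mechanism.

```python
# Illustrative sketch of test-time compute: variable refinement steps
# per request under a budget. The confidence model is simulated.
import random

def answer_with_budget(question: str, max_steps: int = 8, target: float = 0.9):
    """Refine an answer until confidence reaches `target` or budget runs out."""
    rng = random.Random(question)  # deterministic per question
    draft, confidence, steps = f"draft answer to {question!r}", 0.0, 0
    for steps in range(1, max_steps + 1):
        # Each extra step could call tools, search the web, or re-reason;
        # here we simply simulate a rising confidence score.
        confidence = min(1.0, confidence + rng.uniform(0.1, 0.4))
        if confidence >= target:
            break
    return draft, steps

answer, steps_used = answer_with_budget("what is TAO?")
print(steps_used)  # somewhere between 1 and max_steps
```

The economically interesting property is that easy requests exit after one or two steps while hard ones consume the full budget, so compute is spent where it actually improves the answer.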
To improve this system, Apex introduces an incentive architecture adapted to Bittensor and inspired by Generative Adversarial Networks (GANs). Two types of miners coexist:
The system is designed to incentivize responses that are both fast (faster than validators) and of a quality considered indistinguishable from those produced by the sector’s strictest standards.
This competition-emulation mechanism enables the subnet to progress incrementally, with each generation of miners seeking to approach validator-level performance while optimizing response speed.
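A hedged sketch of what a GAN-inspired reward could look like: a generator miner is scored on how hard its answer is to tell apart from a reference answer, with a bonus for beating a latency target. The weights and formula are purely illustrative assumptions, not Apex’s actual incentive function.

```python
# Illustrative GAN-inspired reward: quality (indistinguishability)
# plus a speed bonus. Weights and formula are assumptions.

def miner_reward(indistinguishability: float, latency_s: float,
                 target_latency_s: float = 2.0) -> float:
    """indistinguishability in [0, 1]: the discriminator's failure rate."""
    speed_bonus = max(0.0, 1.0 - latency_s / target_latency_s)
    return 0.8 * indistinguishability + 0.2 * speed_bonus

print(miner_reward(0.9, 1.0))  # 0.8*0.9 + 0.2*0.5 = 0.82
```

Under such a scheme, a miner cannot win on speed alone: most of the reward comes from producing answers the discriminator cannot distinguish from the reference standard.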
Apex is intended to become the preferred inference engine for the Macrocosmos ecosystem and, more broadly, for third-party applications seeking to integrate cutting-edge AI into their products.
Apex has its own dedicated application within Constellation:
At this stage, Apex’s positioning is still under development, with teams actively working on performance optimization, broadening the supported model catalog, and progressively monetizing the service.
It is important to remember that Macrocosmos’s goal is not to compete head-on, overnight, with giants like OpenAI or Anthropic, but rather to explore the possibilities offered by an open, scalable inference system based on miner collaboration.
As seen with the four previous subnets, Macrocosmos offers a full stack for decentralized AI. However, what better way to showcase your tools than by using them for advanced scientific research?
With Mainframe (Subnet 25), Macrocosmos extends its value proposition beyond the traditional AI cycle to tackle a high-potential market segment: scientific research and distributed computing applied to complex problems, starting with protein folding.
Protein folding is the process by which a protein’s chain of amino acids takes on its three-dimensional structure; predicting that structure helps researchers better understand diseases and develop drugs.
Originally dedicated to protein folding, the subnet quickly evolved to offer a more generalist infrastructure, capable of handling an ever-increasing diversity of scientific workloads, notably in computational biology, drug discovery, chemistry, or molecular modeling.
Mainframe’s economic and incentive model transposes Bittensor’s principles to the scientific sector:
Since its launch, Subnet 25 has enabled the simulation of over 180,000 protein foldings: still fewer than industry giants (Google DeepMind’s AlphaFold), but already more than pioneering distributed-computing initiatives like Folding@home, demonstrating the robustness and competitiveness of the Macrocosmos model.
Macrocosmos has already established partnerships with renowned research institutes (Max Planck, Sussex), and counts among its first clients Rowan Scientific, a cloud platform specializing in molecular computation and AI for drug discovery. Revenues from these collaborations are reinvested in the subnet, consolidating its economic model and legitimacy among researchers.
Mainframe is accessible via Constellation, with a dedicated application:
Macrocosmos also provides a public API, enabling Mainframe integration into existing R&D workflows, and plans to broaden its listing on SaaS marketplaces in the sector.
If you want to invest in Bittensor's subnets, the best option is to go through Mentat Minds. This is a referral link from Victor, so feel free to use it to support him.
Each of Macrocosmos’s five subnets has its own native token (also called an “alpha token”), reflecting the value and traction of its network. These tokens can be acquired in exchange for TAO via major Bittensor ecosystem platforms (Taostats, official dApp interface, etc.), or indirectly through certain subnet ETF strategies.
At the time of writing, here are the market metrics:
| Name | Market Cap ($) | FDV ($) | Perf. (30D) |
|---|---|---|---|
| Data Universe (13) | $15.2M | $177M | -23% |
| IOTA (9) | $24.6M | $288M | -17% |
| Fine-Tuning (37) | $4.6M | $54M | +24% |
| Apex (1) | $15.4M | $190M | -18% |
| Mainframe (25) | $5.2M | $60M | -36% |
The verdict is clear: the majority of Macrocosmos tokens, like those of other Bittensor subnets, are currently trading near annual lows. This situation is explained by an unfavorable environment for altcoins and low liquidity in secondary markets, combined with a lack of strong AI narratives since the beginning of the year.
This broad pullback phase may offer attractive entry points for long-term investors convinced by the potential of Bittensor and its leading subnets, including Macrocosmos. A resurgence of the AI narrative or major technical catalysts could trigger a significant rebound, provided the overall context becomes favorable for altcoins again.
Several approaches are possible to gain exposure to Macrocosmos subnets:
You can buy one or several specific tokens, selecting the most promising subnets according to your convictions (technical potential, usage volume, team, etc.). This approach requires active monitoring of performance, updates, and the roadmap for each subnet.
Certain platforms (e.g., Mentat Minds) offer turnkey investment products, allowing exposure to a weighted basket of the five Macrocosmos tokens. The allocation can be automatically adjusted based on each subnet’s performance, with yield optimization (APY) managed by the platform.
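For intuition, here is the bare arithmetic of a market-cap-weighted basket of the five alpha tokens, using the market caps from the table above (in $M). Actual rebalancing and yield rules of platforms like Mentat Minds are their own; this only shows how the weights fall out.

```python
# Market-cap-weighted basket of the five Macrocosmos alpha tokens,
# using the market caps ($M) quoted in the table above.

market_caps = {
    "Data Universe (13)": 15.2,
    "IOTA (9)": 24.6,
    "Fine-Tuning (37)": 4.6,
    "Apex (1)": 15.4,
    "Mainframe (25)": 5.2,
}

total = sum(market_caps.values())
weights = {name: cap / total for name, cap in market_caps.items()}
for name, w in weights.items():
    print(f"{name}: {w:.1%}")  # IOTA dominates at ~37.8%
```

Note how cap-weighting concentrates the basket in IOTA and leaves Fine-Tuning and Mainframe as small positions, which is exactly the trade-off versus an equal-weight allocation.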
For more cautious investors, it is possible to stick with more global or protected exposure strategies:
It is important to remember that investing in Bittensor subnets is particularly risky. The shallow market depth can lead to rapid and unpredictable price movements, and a new bearish phase for altcoins could drive prices down further.
Moreover, the Bittensor subnet market is highly dependent on Bittensor’s overall success and on whether an AI narrative gains traction again in 2025. Note as well that the friction involved in buying these tokens remains a barrier to market growth.
Finally, it is essential to remember that any investment decision is the sole responsibility of the investor. We recommend thorough analysis and regular monitoring of fundamentals before taking any significant position.