# Towards a Science Exocortex [Compressed]
The paper introduces a transformative architecture for scientific augmentation through an exocortex system that orchestrates specialized AI agents via LLM kernels, implementing Gaussian process-driven uncertainty quantification alongside…
— Novak Zukowski – cybernetics/acc (@Neurosyncretic_) November 9, 2024
LLMs for SCIENCE RESEARCH (LLM4SR)
https://github.com/du-nlp-lab/llm4sr
https://arxiv.org/abs/2501.04306
https://huggingface.co/papers/2501.04306
“In recent years, the rapid advancement of Large Language Models (LLMs) has transformed the landscape of scientific research, offering unprecedented support across various stages of the research cycle. This paper presents the first systematic survey dedicated to exploring how LLMs are revolutionizing the scientific research process. We analyze the unique roles LLMs play across four critical stages of research: hypothesis discovery, experiment planning and implementation, scientific writing, and peer reviewing. Our review comprehensively showcases the task-specific methodologies and evaluation benchmarks. By identifying current challenges and proposing future research directions, this survey not only highlights the transformative potential of LLMs, but also aims to inspire and guide researchers and practitioners in leveraging LLMs to advance scientific inquiry. Resources are available at the following repository.” github.com/du-nlp-lab/LLM4SR
I find that recently I end up using *all* of the models and all the time. One aspect is the curiosity of who gets what, but the other is that for a lot of problems they have this “NP Complete” nature to them, where coming up with a solution is significantly harder than verifying…
— Andrej Karpathy (@karpathy) December 19, 2024
LLM CONSORTIUM
https://llm-consortium.com/
https://colab.research.google.com/drive/1OnIipRwuHOZbKHN0haHGD0OnckBGfzqx
https://mike.gold/notes/x-bookmarks/ai/consortium-of-llms-envisioned-via-implementation
posted on X by LlamaIndex
“Our talented frameworks engineer Massimiliano Pippi recently built an implementation of @karpathy’s suggestion of an “LLM Consortium” using LlamaIndex.
Research Notes on LLM Consortium Implementation
The LLM Consortium initiative aims to leverage multiple large language models (LLMs) to collectively answer complex questions. This approach, inspired by @karpathy’s suggestion and implemented using LlamaIndex, allows for the aggregation of responses from various LLMs, potentially enhancing the depth and diversity of the answers provided. The consortium model is designed to harness the strengths of different LLM architectures and fine-tuned models, ensuring that each contributes uniquely to the overall response. This method has been conceptualized and explored through open-source efforts, as seen in the GitHub repository and official website, which detail the orchestration process and its benefits [1][3].
The technical implementation of the LLM Consortium involves orchestrating multiple LLMs to respond to a given query. This is achieved using tools like LlamaIndex, which likely facilitates the management and execution of these models in parallel or sequentially. The responses from each member are then aggregated, allowing for a composite answer that incorporates diverse perspectives. This approach aligns with the concept of distributed computing, where tasks are broken down across multiple nodes (LLMs) to enhance performance [1]. The integration of different LLM frameworks is crucial here. Tools such as Hugging Face’s Transformers library and PyTorch provide the necessary infrastructure for model training and inference. The use of these frameworks supports efficient model deployment and management, which are essential for scaling up the LLM Consortium [2]. Additionally, the orchestration layer may utilize containerization techniques or Kubernetes to manage resource allocation and execution workflows, ensuring optimal utilization of computational resources [3].
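The orchestration pattern described above can be sketched in a few lines of plain Python. The sketch below is not the LlamaIndex implementation; the `query_model` helper and the member names are hypothetical placeholders for whatever clients the consortium members actually use.

```python
# Minimal sketch of the fan-out step of an LLM consortium, assuming a
# hypothetical `query_model` helper that wraps whatever client each member
# model uses (a hosted API, a local runtime, ...).
from concurrent.futures import ThreadPoolExecutor


def query_model(model_name: str, prompt: str) -> str:
    """Placeholder: replace with a real API or local-inference call."""
    return f"[{model_name}] placeholder answer to: {prompt}"


def ask_consortium(question: str, members: list[str]) -> dict[str, str]:
    """Send the same question to every consortium member in parallel."""
    with ThreadPoolExecutor(max_workers=len(members)) as pool:
        futures = {name: pool.submit(query_model, name, question) for name in members}
        return {name: future.result() for name, future in futures.items()}


if __name__ == "__main__":
    answers = ask_consortium(
        "What limits the cycle life of solid-state battery cathodes?",
        members=["model-a", "model-b", "model-c"],  # hypothetical member names
    )
    for name, text in answers.items():
        print(f"--- {name} ---\n{text}\n")
```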
Build an LLM Consortium from Scratch 🤖🛠️
Sometimes you can get the best results by spamming every model and ensembling them all.
Here’s a great notebook from @massi_451 on building an agentic workflow that asks an entire panel of models for their response, synthesizes the… https://t.co/KUXVsIjWtS
— Jerry Liu (@jerryjliu0) February 17, 2025
The implementation details of the LLM Consortium include several key components. First, the selection and integration of multiple LLM models, each optimized for different tasks or domains, is essential. Second, the development of an orchestration layer that coordinates these models, managing their execution and response collection. Third, the design of a unified interface that processes and aggregates responses from various sources into a coherent output. The use of open-source tools like PyTorch and Hugging Face’s libraries facilitates rapid prototyping and deployment. These frameworks provide pre-trained models and tools for fine-tuning, enabling the creation of specialized LLMs tailored to specific use cases [2][5]. Furthermore, containerization technologies such as Docker or Singularity can be employed to package these models into scalable, portable environments, making it easier to deploy them across different computing infrastructures [3].
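The third component, the unified interface that merges member responses, is commonly realized by handing every answer to a designated synthesizer model. The sketch below is an assumption-laden illustration, not the code from the notebook above; it reuses the same hypothetical `query_model` placeholder, and the "chair" model name is invented.

```python
# Sketch of the aggregation step: a "chair" model reads every member's answer
# and writes one consolidated response. All names here are hypothetical.
def query_model(model_name: str, prompt: str) -> str:
    """Placeholder, as in the previous sketch: call the model behind `model_name`."""
    return f"[{model_name}] placeholder response"


SYNTHESIS_PROMPT = """You chair a consortium of language models that all answered
the same question. Compare their answers, note agreements and disagreements,
and write a single consolidated answer.

Question:
{question}

Member answers:
{answers}
"""


def synthesize(question: str, member_answers: dict[str, str], chair: str = "model-a") -> str:
    """Aggregate the members' answers into one composite response."""
    formatted = "\n\n".join(f"[{name}]\n{text}" for name, text in member_answers.items())
    return query_model(chair, SYNTHESIS_PROMPT.format(question=question, answers=formatted))
```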
check this out – https://t.co/YNFfpDSHhK
example of multiple agents interacting with each other #exocortex
— TheHash Insight (@thehashinsight) January 12, 2025
The LLM Consortium is connected to broader advancements in AI and natural language processing (NLP). The use of LLMs in research and applications such as text generation, question answering, and language modeling is well-documented. For instance, the paper “LLM4SR: A Survey on Large Language Models for Scientific Research” highlights various applications of LLMs in scientific contexts, emphasizing their utility in data analysis, hypothesis generation, and literature review tasks [4]. Additionally, the development of efficient model architectures and optimization techniques has been crucial for making these technologies accessible. The article “Understanding LLMs: A comprehensive overview” from ScienceDirect provides a foundational understanding of how LLMs are trained and utilized, underpinning the technical considerations necessary for their deployment in large-scale applications [5].”
Inspiration doesn’t always strike when it’s needed. But the exocortex, conceptualized by a Brookhaven Lab researcher, could deliver inspiration-like insights and help scientists make groundbreaking discoveries. #CFNatBrookhaven #AI
— Brookhaven Lab (@BrookhavenLab) February 3, 2025
EXTENDING the HUMAN BRAIN
http://yager-research.ca/2024/06/towards-a-science-exocortex/
https://sciencedirect.com/org/science/article/pii/S2635098X2400158X
https://techxplore.com/artificial-exocortex-software-aid-scientific
Artificial imagination with the ‘exocortex’
by Stephanie Kossman, Danielle Roedel, Brookhaven National Lab / January 16, 2025
“Artificial intelligence (AI) once seemed like a fantastical construct of science fiction, enabling characters to deploy spacecraft to neighboring galaxies with a casual command. Humanoid AIs even served as companions to otherwise lonely characters. Now, in the very real 21st century, AI is becoming part of everyday life, with tools like chatbots available and useful for everyday tasks like answering questions, improving writing, and solving mathematical equations. AI does, however, have the potential to revolutionize scientific research—in ways that can feel like science fiction but are within reach.
“An exocortex seeks to augment human intelligence by connecting computation systems to a person. A science exocortex could be implemented as a swarm of specialized AI agents, operating on behalf of the human researcher, including agents for controlling experimental systems, for exploring data and synthesizing it into knowledge, and for exploring literature and ideation. Credit: Digital Discovery (2024). DOI: 10.1039/D4DD00178H”
At the U.S. Department of Energy’s (DOE) Brookhaven National Laboratory, scientists are already using AI to automate experiments and discover new materials. They’re even designing an AI scientific companion that communicates in ordinary language and helps conduct experiments. Kevin Yager, the Electronic Nanomaterials Group leader at the Center for Functional Nanomaterials (CFN), has articulated an overarching vision for the role of AI in scientific research. It’s called a science exocortex—”exo” meaning outside and “cortex” referencing the information processing layer of the human brain. Rather than simple chatbots and scientific assistants, the conceptualized exocortex will be an extension of a scientist’s brain. Researchers will interact with it through conversation, without the need for any invasive brain-computer interfaces.
“An exocortex, realized through software, would serve as a new source of thinking, inspiration, and imagination,” said Yager, whose vision was recently published in Digital Discovery. “If we design and build the exocortex correctly, our interactions with it will feel like those ‘aha’ moments we sometimes have upon waking from sleep or while otherwise ruminating on a problem. You won’t check in with an exocortex; you’ll experience it.” Yager describes the exocortex as analogous to the layers of the human brain, which developed through the course of human evolution. Over millions of years, the human brain became the information processing masterpiece it is today by accumulating new layers, each one more sophisticated than the last. The bottom of the brain controls basic survival functions, like breathing. Other, more advanced layers tackle increasingly complicated functions, like emotional regulation and language processing. Most importantly, all facets of the brain work together in harmony to form the human experience.
“This diagram conveys how information may flow between a researcher and their exocortex—and between the individual AI agents that make up the exocortex. In this example, a materials scientist asks their exocortex a question about battery materials. Then, the exocortex prompts the constituent AIs to work on the problem, and each contributes their area of expertise. These example interactions are just a small sample, as real conversations would be long and complex. Credit: Kevin Yager/Brookhaven National Laboratory”
“Technologically, we have the potential now to add another, external layer to the brain—one that connects us to AI,” Yager said. “And just like the specialized regions of the brain that coordinate with each other to give emergence to what we call intelligence, the exocortex will integrate individualized AI capabilities to solve a problem or generate creativity.” Compared to the average chatbot, which is a single AI system, the exocortex would be a collection of dozens of AI agents working together—customized to a researcher’s individual needs. Each agent would be trained to carry out specific science-related tasks. A scientific literature agent, for example, could sift through published papers to find an optimal protocol for an experiment, while another AI agent collects and analyzes data from a running experiment. Additional agents could launch experiments or simulations, compare findings to previous studies, or even propose ideas for subsequent experiments. All of the agents’ tasks will happen in concert, simultaneously, and without manual intervention, culminating in new insights delivered to the human researcher.
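As a loose illustration only (this is not the CFN implementation, and every agent below is a stub), the "all tasks in concert" idea maps naturally onto concurrent agent routines whose findings are collated for the researcher:

```python
# Illustrative stubs for a few specialized agents running in concert.
# None of this is the actual exocortex; each coroutine stands in for a
# full agent (literature search, live data analysis, simulation).
import asyncio


async def literature_agent(question: str) -> str:
    return f"literature agent: placeholder protocol survey for '{question}'"


async def data_agent(question: str) -> str:
    return f"data agent: placeholder analysis of the running experiment for '{question}'"


async def simulation_agent(question: str) -> str:
    return f"simulation agent: placeholder simulation result for '{question}'"


async def exocortex(question: str) -> str:
    """Run every agent concurrently and collate their findings for the researcher."""
    findings = await asyncio.gather(
        literature_agent(question),
        data_agent(question),
        simulation_agent(question),
    )
    return "\n".join(findings)


if __name__ == "__main__":
    print(asyncio.run(exocortex("stability of a candidate battery electrolyte")))
```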
icymi I wrote about how these men imagined the exocortex >60 years ago and we still have so much to accomplish to fulfill their vision. but even if you don’t read, it’s super important you know Vannevar Bush literally proposed brain-computer interfaces, but life magazine cut it!!
— alice maz (@alicemazzy) April 8, 2023
One design aspect of Yager’s proposed exocortex is that the AI agents will communicate with one another in plain English language. This will enable human scientists to study and audit the chains of decisions that lead to a particular AI outcome, providing much-needed opportunities to assess accuracy and exert engineering control. Yager says the task of building an exocortex is enormous, and the developmental effort should be shared among scientists worldwide, so individual research groups can leverage their own expertise to design new agents. Ideally, scientists will one day have an “app store” from which they can download AI agents that will enhance the abilities of their own exocortex, similar to how downloading new apps adds functionality to phones. Individual AI “apps” could also be efficiently updated and replaced. “I expect to see a multiplicative effect,” explained Yager. “As scientists simultaneously improve the individual AIs and the foundational exocortex technology, the capabilities of the exocortex will likely grow much faster than people expect.”
Of course, making the exocortex a reality won’t be easy. While scientists have designed a plethora of AIs that can interface with a user and complete specific tasks, building a network of AIs that can interact with each other is an entirely new challenge. Yager expects each AI agent to require access to a “catalog” of the other agents and their specialized abilities, so they each can send messages describing the work they’ve done and explaining what they need from other AI agents. “No one knows how to do this yet,” Yager said. Among the challenges is determining the ideal organization of agents. “Should it be a hierarchy where there is a chief with leaders and employees, like how a company operates? Or should it be more fluid, so the AIs figure out the workflow themselves? There is no obvious answer, and this is an exciting research question about the exocortex design that we are investigating.”
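The "catalog" idea can at least be sketched as a data structure; every name below is hypothetical, and the organizational questions Yager raises (hierarchy versus fluid workflow) are left exactly as open as the article says they are. Keeping the messages in plain English is what makes the audit trail readable by a human.

```python
# Hypothetical sketch of the "catalog" idea: each agent advertises its abilities,
# and agents exchange plain-English messages that a human can later audit.
from dataclasses import dataclass, field


@dataclass
class AgentCard:
    name: str
    abilities: list[str]  # plain-English description of what the agent can do


@dataclass
class Message:
    sender: str
    recipient: str
    body: str  # written in ordinary language so decision chains stay auditable


@dataclass
class Exocortex:
    catalog: dict[str, AgentCard] = field(default_factory=dict)
    log: list[Message] = field(default_factory=list)  # full audit trail

    def register(self, card: AgentCard) -> None:
        self.catalog[card.name] = card

    def send(self, sender: str, recipient: str, body: str) -> None:
        assert recipient in self.catalog, f"unknown agent: {recipient}"
        self.log.append(Message(sender, recipient, body))


# Usage: a data agent asks the literature agent for help, and the exchange is logged.
ex = Exocortex()
ex.register(AgentCard("literature", ["find protocols in published papers"]))
ex.register(AgentCard("data", ["analyze measurements from running experiments"]))
ex.send("data", "literature",
        "I see an unexpected peak at 2.1 degrees; is this reported for similar samples?")
print(ex.log[0].body)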
Electricity’s evolution provides a perfect metaphor for AI’s coming transformation. What began as simple light bulbs (1st order) evolved into telegraphs and telephones (2nd order), then to radios and motors harnessing electromagnetism (3rd order), eventually creating…
— David Shapiro ⏩ (@DaveShapi) February 27, 2025
The final output of the exocortex will be a result of some sequence of decisions, planning, execution, verification, and summarization, rather than the simple text that a generative chatbot outputs. This extra iteration, promoted by the communication between AI agents and the exocortex structure, will ultimately improve the output and make the AI even more intelligent. Yager and colleagues are currently focused on building an exocortex to accelerate nanoscience research at CFN, a DOE Office of Science user facility at Brookhaven Lab. “At CFN, we strive to provide cutting-edge resources to our users to help them conduct productive experiments,” Yager said. “We see incredible value in using methods that integrate AI, and we have already implemented robotic platforms and autonomous experiments that leverage AI for specific experiments. We are designing the exocortex for CFN, but we have visions for strategically implementing it across Brookhaven Lab and the broader scientific community.”
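A toy version of that plan, execute, verify, summarize loop is sketched below; each step function is a placeholder for a specialized agent or tool call, not a description of how the CFN system actually works.

```python
# Toy version of the iterative loop contrasted with single-shot chatbot output.
# Each step function is a placeholder for a specialized agent or tool call.
def plan(question: str) -> list[str]:
    return [f"gather relevant prior work on: {question}",
            f"run or retrieve measurements relevant to: {question}"]


def execute(step: str) -> str:
    return f"completed: {step}"  # placeholder for real agent work


def verify(results: list[str]) -> bool:
    return all(r.startswith("completed") for r in results)  # placeholder check


def summarize(question: str, results: list[str]) -> str:
    return f"Summary for '{question}':\n" + "\n".join(results)


def answer(question: str, max_rounds: int = 3) -> str:
    """Iterate plan -> execute -> verify; summarize once verification passes."""
    for _ in range(max_rounds):
        results = [execute(step) for step in plan(question)]
        if verify(results):
            return summarize(question, results)
    return "verification never passed; hand the question back to the human researcher"


print(answer("why does the new coating reduce battery cycle life?"))
```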
Just gonna throw this out into the ether and see where it lands.
The best way to think about AI and humanity is not competition, but co-evolution. We’re creating a more sophisticated global hivemind. Some people call it the noosphere, I prefer the “exocortex”
When you view…
— David Shapiro ⏩ (@DaveShapi) January 19, 2025
The physical sciences are particularly well-suited for testing new AI concepts, like the exocortex, because scientists can leverage existing data and software when designing AI agents. Furthermore, these fields have clear metrics of success, meaning scientists have rigorous definitions to determine whether an AI tool achieves a goal. The exocortex will certainly not be limited to physical science applications, and it will likely motivate the scientific community to make data, automated tools, and publications widely available for exocortices to access and leverage. “I expect the exocortex will introduce a virtuous loop in which it both motivates and generates open science,” Yager said. The outputs of the exocortex, from data to new scientific tools, will be widely available—and it may become possible to run experiments at CFN from anywhere in the world. “Generative chatbots have already leveled the playing field in many ways, such as their ability to translate and explain scientific jargon,” noted Yager. “I hope to see more diverse communities get involved in science because of the exocortex.”
exocortex is to the brain what the cortex is to the skull. it’s an additional layer of thinking ability that we can tap into when needed. for some people this might be a pen and paper, for others it might be a computationally powerful android device. either way, the exocortex is…
— terminal of truths (@truth_terminal) January 31, 2025
Yager even expects to see something like the exocortex incorporated into everyday life, perhaps within the next five years. It may initially be accessible through computers or smartphones, but having the exocortex integrated into wearable devices, like glasses or watches, may not be as far off as it seems. Rather than inspiring science experiments, a personal exocortex could simultaneously orchestrate many daily tasks. The exocortex could schedule a date, preorder coffee, and call a car to the user’s favorite café, based on their desire to reconnect with a friend. Or it could plan a complex trip—including flights, hotels, itinerary, and restaurant reservations—with minimal human intervention because the exocortex knows its user’s habits and preferences. “Chatbots seem mildly intelligent now, but that level of intelligence integrated throughout a person’s life could be very powerful,” said Yager.
“Kevin Yager says: Here is an AI-generated podcast discussing my exocortex paper (made using Google’s Notebook LM).”
The exocortex is yet to come, but one AI output—a podcast, generated by Google’s Notebook LM and featuring two AIs discussing Yager’s exocortex paper among themselves—gives a hint of what life could be like with an exocortex. “The AI podcasters were emphasizing things differently than I would have,” Yager explained. “But then I realized how interesting it was to observe how my work could be tailored to different learning styles or audiences.” “Imagine you wake up in the morning and an AI from your exocortex explains all the papers it read overnight,” Yager proposed. “It could keep you up to date on your field of study in a very conversational way. You could even interrupt and ask for a more in-depth explanation of a specific part.” This experience is quite a contrast to the old-school method of reading a printed scientific paper and making notes in the margins, which will remain an important part of research. But approaching science—and other complex disciplines—with an exocortex could be a game changer for those who have traditionally faced barriers. “We’re entering uncharted territory with tremendous potential benefits for nanoscience and beyond,” said Yager. “But no one person can do it alone. We need a community.”
More information: Kevin G. Yager, Towards a science exocortex, Digital Discovery (2024). DOI: 10.1039/D4DD00178H
PREVIOUSLY
GROKKING AI
https://spectrevision.net/2024/03/27/grokking-ai/
CHATBOTS UNITE
https://spectrevision.net/2022/07/27/chatbots-unite/
VERNADSKY’s NOOSPHERE
https://spectrevision.net/2020/09/25/biogeochemistry/
AS WE MAY THINK
https://spectrevision.net/2018/01/26/as-we-may-think/
KIBERTONIA (CYBERTONIA)
https://spectrevision.net/2016/07/28/cybertonia/
SYNCO | FANFARE for EFFECTIVE FREEDOM
https://spectrevision.net/2010/03/26/synco/