GALACTICA: Better than ChatGPT for Quality Research  


GALACTICA, the general-purpose scientific language model from Meta, is a remarkable technological achievement that changes how scientists interact with data and information. The model is the product of years of research and development, trained on a vast corpus of scientific text and data spanning a wide range of fields, from physics and chemistry to biology and neuroscience.


Thanks to its natural language processing capabilities, GALACTICA can perform a variety of scientific tasks with a high degree of accuracy. For instance, it can predict citations and references, helping scientists quickly identify relevant studies and build on existing research. It can also carry out mathematical reasoning tasks, such as solving equations step by step, making it a valuable tool for researchers in fields like engineering and computer science.


GALACTICA can also predict molecular properties and protein annotations, which is particularly useful in fields like drug discovery and biochemistry. These capabilities help scientists accelerate their research and surface new insights from the literature.


To learn more about GALACTICA and its capabilities, visit the Galactica demo site, where you can find detailed information about the model, its training data, and its applications across scientific fields. Whether you are a seasoned researcher or a curious novice, GALACTICA's versatility can support your own scientific work.


Requirements:

  • torch>=1.12
  • transformers==4.25.1
  • tokenizers
  • parallelformers==1.2.7
  • accelerate
  • markdown>=3.4
  • bleach[css]~=5.0.1
  • psutil


From pip:

pip install galai

From the repository:

pip install git+
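Once installed, a model can be loaded with galai's `load_model`. A minimal sketch follows; note that the call downloads several gigabytes of weights on first use, and the larger checkpoints effectively require a GPU:

```python
# Minimal loading sketch using galai's load_model. "standard" is the
# 6.7B checkpoint used in the prompt examples later in this article.
MODEL_NAME = "standard"  # one of: mini, base, standard, large, huge

def load_galactica(name: str = MODEL_NAME):
    import galai as gal  # imported lazily: galai pulls in torch
    return gal.load_model(name)
```

With the model loaded, `model.generate(...)` calls like the ones below can be run directly.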

Models available:

There are five GALACTICA models, which we detail below:

  • mini: 125 M parameters
  • base: 1.3 B parameters
  • standard: 6.7 B parameters
  • large: 30 B parameters
  • huge: 120 B parameters


  • GALACTICA is a stand-alone language model designed for a wide range of scientific tasks, from citation prediction and mathematical reasoning to molecular property prediction and protein annotation.
  • To get the most out of it, however, you need to use the correct prompts and tokens: GALACTICA is not instruction-tuned, so it requires specifically formatted input to produce accurate and reliable results.
  • In this article, we walk through the special tokens and prompt styles needed to get the best possible results, with practical examples using the standard (6.7B) model.
  • By following these guidelines, whether you are a seasoned researcher or a curious novice, you will be able to get the most out of GALACTICA in your own scientific research.
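For reference, the special tokens used throughout the examples below can be collected in one place. This mapping is just a convenience for this article, not part of the galai API:

```python
# Prompt tokens that steer GALACTICA toward each task (collected from
# the examples in this article; the dict itself is not part of galai).
SPECIAL_TOKENS = {
    "citation": "[START_REF]",      # the model closes it with [END_REF]
    "reasoning": "<work>",
    "protein": "[START_AMINO]",     # paired with [END_AMINO]
    "molecule": "[START_I_SMILES]", # paired with [END_I_SMILES]
}
print(SPECIAL_TOKENS["reasoning"])  # <work>
```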


  1. Predict Citations:

To predict a citation, use the [START_REF] token:

model.generate("The Transformer architecture [START_REF]")
# The Transformer architecture [START_REF] Attention is All you Need, Vaswani[END_REF] is a sequence-to-sequence model that uses self-attention to capture long-range dependencies between input and output tokens. The Transformer has been shown to achieve state-of-the-art results on a wide range of natural.
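To use a predicted citation programmatically, the reference text can be pulled out of the [START_REF]…[END_REF] span. A small post-processing helper (hypothetical, not part of galai):

```python
# Hypothetical helper: extract the reference the model emitted between
# the [START_REF] and [END_REF] tokens.
def extract_reference(generated: str) -> str:
    start = generated.index("[START_REF]") + len("[START_REF]")
    end = generated.index("[END_REF]", start)
    return generated[start:end].strip()

sample = "The Transformer architecture [START_REF] Attention is All you Need, Vaswani[END_REF] is a ..."
print(extract_reference(sample))  # Attention is All you Need, Vaswani
```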

  2. Give Reasoning:

Reasoning uses the special <work> token:

model.generate("A force of 0.6 N is applied to an object, which accelerates at 3 m/s². What is its mass? <work>")
# What force should be applied to accelerate an object of mass 3kg to 10m/s? <work>\nWe can use Newton's second law: F = ma. We can substitute variables to get:\n\n\\[ F = \\left(66kg.
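As a sanity check on the physics in this prompt, independent of the model, Newton's second law gives the expected answer directly:

```python
# m = F / a from Newton's second law; the prompt asks for the mass of
# an object accelerating at 3 m/s^2 under a 0.6 N force.
force_n = 0.6
accel_ms2 = 3.0
mass_kg = force_n / accel_ms2
print(round(mass_kg, 3))  # 0.2
```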

  3. Predict Protein Annotations:

Here is an example using a FASTA sequence:

model.generate("[START_AMINO]GHMQSITAGQKVISKHKNGRFYQCEVVRLTTETFYEVNFDDGSFSDNLYPEDIVSQDCLQFGPPAEGEVVQVRWTDGQVYGAKFVASHPIQMYQVEFEDGSQLVVKRDDVYTLDEELP[END_AMINO]")
# Output:
# ## Keywords
#
# Cytoplasm, Methyltransferase, rRNA processing, S-adenosyl-L-methionine, Transferase
#
# ## References
#
# Question: What are some articles for Ribosomal RNA small subunit methyltransferase H?
#
# Answer: [START_REF] Comparative Genomics of 28 Salmonella enterica Isolates: Evidence for CRISPR-Mediated Adaptive Sublineage Evolution, Fricke[END_REF]
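A small helper for building such prompts from a raw amino-acid sequence (hypothetical; the token names come from the example above):

```python
# Hypothetical prompt builder for protein annotation: wraps a raw
# amino-acid sequence in the [START_AMINO]/[END_AMINO] tokens.
def protein_prompt(sequence: str) -> str:
    return f"[START_AMINO]{sequence}[END_AMINO]"

print(protein_prompt("GHMQSITAG"))  # [START_AMINO]GHMQSITAG[END_AMINO]
```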

  4. Generate Molecules:

An example using SMILES notation of a molecule:

model.generate("[START_I_SMILES]", max_length=200)
# [START_I_SMILES]CCC1=CC=C(C=C1)C(=O)NC2=CC=CC(=C2)C(=O)NC3=CC=C(C=C3)S(=O)(=O)N[END_I_SMILES]
#
# ### Molecular Formula
#
# C22H21N3O4S
#
# ## Chemical and Physical Properties
#
# The following are chemical properties for 3-[[3-(4-ethylphenyl)-3-oxo-propanoyl]amino]-N-(4-sulfamoylphenyl)benzamide.
#
# ### Computed Properties
#
# | Property Name | Property Value |
# | --- | ----------- |
# | Molecular Weight | 423.5 |
# | XLogP3-AA Log P | 3.2 |
# | Hydrogen Bond Donor Count | 3 |
# | Hydrogen Bond Acceptor Count
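To describe a specific molecule rather than sample a new one, the SMILES string can be placed between the tokens. A sketch, using caffeine's SMILES as an assumed example:

```python
# Hypothetical prompt builder: condition the model on a given molecule
# by enclosing its SMILES string in [START_I_SMILES]/[END_I_SMILES].
CAFFEINE = "CN1C=NC2=C1C(=O)N(C(=O)N2C)C"  # caffeine, for illustration

def smiles_prompt(smiles: str) -> str:
    return f"[START_I_SMILES]{smiles}[END_I_SMILES]"

print(smiles_prompt(CAFFEINE))
```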

  5. Summarization

You can add “TLDR:” for TLDR summaries:

TEXT = """Information overload is a major obstacle to scientific progress. The explosive growth in scientific literature and data has made it ever harder to discover useful insights in a large mass of information. Today scientific knowledge is accessed through search engines, but they are unable to organize scientific knowledge alone. In this paper we introduce Galactica: a large language model that can store, combine and reason about scientific knowledge. We train on a large scientific corpus of papers, reference material, knowledge bases and many other sources. We outperform existing models on a range of scientific tasks. On technical knowledge probes such as LaTeX equations, Galactica outperforms the latest GPT-3 by 68.2% versus 49.0%. Galactica also performs well on reasoning, outperforming Chinchilla on mathematical MMLU by 41.3% to 35.7%, and PaLM 540B on MATH with a score of 20.4% versus 8.8%. It also sets a new state-of-the-art on downstream tasks such as PubMedQA and MedMCQA dev of 77.6% and 52.9%. And despite not being trained on a general corpus, Galactica outperforms BLOOM and OPT-175B on BIG-bench. We believe these results demonstrate the potential for language models as a new interface for science. We open source the model for the benefit of the scientific community."""

model.generate(TEXT + "\n\nTLDR:", max_length=400)
# ...TLDR: We introduce Galactica, a large language model that can store, combine and reason about scientific knowledge.</s>
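The same pattern works for any abstract; a small wrapper (hypothetical, not part of galai) that appends the TLDR cue:

```python
# Hypothetical wrapper: append the "TLDR:" cue on its own paragraph so
# the model continues with a one-sentence summary.
def tldr_prompt(text: str) -> str:
    return text + "\n\nTLDR:"

prompt = tldr_prompt("Information overload is a major obstacle to scientific progress.")
print(prompt.endswith("TLDR:"))  # True
```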

  6. Citation
@inproceedings{GALACTICA,
    title={GALACTICA: A Large Language Model for Science},
    author={Ross Taylor and Marcin Kardas and Guillem Cucurull and Thomas Scialom and Anthony Hartshorn and Elvis Saravia and Andrew Poulton and Viktor Kerkez and Robert Stojnic},
    year={2022}
}

