NLPie

NLPie Research is pushing the frontier of neural language modeling. Explore our compact, powerful models designed for speed, accuracy, and creativity.

Try Our Models

Our Numbers

Research Papers

Models Developed

Total Downloads

Our Vision

We are dedicated to advancing the frontier of Natural Language Processing. Our vision is to democratize cutting-edge NLP technology by creating and sharing powerful, open-source models. We believe in the strength of community and strive to build tools that are accessible, reliable, and serve as a foundation for the next generation of language-based AI.

Our Models

MiniALBERT Release

This collection contains distilled MiniALBERT variants for general-purpose (Wikipedia) and domain-specific NLP tasks; a short usage sketch follows the model cards below.

nlpie/miniALBERT-128

Fill-Mask • Updated March 26, 2024

A 128-dimensional recursive distilled ALBERT model trained on English Wikipedia for masked language modeling, using 6 recursions (~11M parameters).

View Paper →

nlpie/bio-miniALBERT-128

Fill-Mask • Updated March 26, 2024

The same miniALBERT-128 architecture, distilled on PubMed abstracts with BioBERT-v1.1 as the teacher (~11M parameters).

View Paper →

nlpie/clinical-miniALBERT-312

Fill-Mask • Updated March 26, 2024

A 312-dimensional miniALBERT distilled on MIMIC-III clinical notes with BioClinicalBERT as the teacher (~18M parameters).

View Paper →
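
All of the checkpoints in this collection are published on the Hugging Face Hub as fill-mask models. Below is a minimal sketch of querying the general-purpose checkpoint with the transformers fill-mask pipeline; the model ID comes from the card above, while the example sentence and the trust_remote_code option are assumptions rather than documented requirements.

```python
from transformers import pipeline

# nlpie/miniALBERT-128 is the general-purpose Wikipedia checkpoint listed above.
# trust_remote_code is an assumption: it matters only if the repository ships
# custom modeling code for the recursive architecture, and is harmless otherwise.
unmasker = pipeline(
    "fill-mask",
    model="nlpie/miniALBERT-128",
    trust_remote_code=True,
)

# The mask token ([MASK] here) depends on the tokenizer shipped with the model.
for prediction in unmasker("The capital of France is [MASK]."):
    print(f"{prediction['token_str']}\t{prediction['score']:.3f}")
```

The same pattern applies to the bio- and clinical- variants by swapping in their model IDs.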

Compact Biomedical Models

Streamlined BioBERT-derived models built for efficient biomedical NLP applications; a loading sketch follows the model cards below.

nlpie/compact-biobert

Fill-Mask • Updated March 26, 2024

CompactBioBERT, distilled from BioBERT using a combination of MLM, layer, and output distillation (~65M parameters).

View Paper →

nlpie/tiny-biobert

Fill-Mask • Updated March 26, 2024

TinyBioBERT, obtained via layer-wise transformer distillation on PubMed (~15M parameters).

View Paper →

nlpie/distil-biobert

Fill-Mask • Updated March 26, 2024

DistilBioBERT, distilled from BioBERT on PubMed using standard knowledge distillation (~65M parameters).

View Paper →

nlpie/bio-distilbert-cased

Fill-Mask • Updated March 26, 2024

A cased variant of DistilBioBERT (~65M parameters), trained on PubMed via continual learning.

View Paper →

nlpie/bio-tinybert

Fill-Mask • Updated March 26, 2024

BioTinyBERT, a TinyBERT model trained via continual learning on PubMed abstracts (~15M parameters).

View Paper →

nlpie/bio-mobilebert

Fill-Mask • Updated March 26, 2024

BioMobileBERT, a lightweight distilled model optimized for efficient biomedical NLP.

View Paper →
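
Because these compact biomedical checkpoints follow standard BERT/DistilBERT-style architectures, they can also be loaded directly with the transformers auto classes. The sketch below extracts a mean-pooled sentence embedding from nlpie/distil-biobert; the example sentence and the pooling choice are illustrative assumptions, not part of the model documentation.

```python
import torch
from transformers import AutoModel, AutoTokenizer

# nlpie/distil-biobert follows the standard DistilBERT architecture, so the
# generic auto classes are assumed to load it without extra configuration.
model_id = "nlpie/distil-biobert"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModel.from_pretrained(model_id)
model.eval()

sentence = "Aspirin inhibits platelet aggregation."  # illustrative input
inputs = tokenizer(sentence, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

# Mean-pool the token embeddings into a single sentence-level vector.
embedding = outputs.last_hidden_state.mean(dim=1)
print(embedding.shape)  # torch.Size([1, hidden_size])
```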

Get in Touch

Have questions, suggestions, or collaboration ideas? We'd love to hear from you. Whether you're interested in using our models, contributing to our research, or just want to say hello, drop us a message and our team will get back to you within 1–2 business days. Your feedback helps NLPie keep innovating.