We are happy to announce that the first releases of hfhub and tok are now on CRAN. hfhub is an R interface to the Hugging Face Hub, allowing users to download and cache files
from the Hugging Face Hub, while tok implements R bindings for the Hugging Face tokenizers
library.
Hugging Face has rapidly become the platform for building, sharing, and collaborating on
deep learning applications, and we hope these integrations will help R users
get started with Hugging Face tools as well as build novel applications.
We have also previously announced the safetensors
package, which allows reading and writing files in the safetensors format.
hfhub
hfhub is an R interface to the Hugging Face Hub. hfhub currently implements a single
piece of functionality: downloading files from Hub repositories. Model Hub repositories are
mainly used to store pre-trained model weights together with any other metadata
necessary to load the model, such as the hyperparameter configuration and the
tokenizer vocabulary.
Downloaded files are cached using the same layout as the Python library, so cached
files can be shared between the R and Python implementations, making it easier and quicker
to switch between languages.
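Concretely, the shared cache keeps one directory per repository, with snapshots keyed by revision (as the path in the example further below shows). A minimal sketch of inspecting it from R, assuming the default cache location used on Linux and macOS:
# List the repositories cached by either the R or the Python library.
# Entries are named like "models--gpt2", one per Hub repository.
list.files("~/.cache/huggingface/hub")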
We already use hfhub in the minhub package and
in the ‘GPT-2 from scratch with torch’ blog post to
download pre-trained weights from the Hugging Face Hub.
You can use hub_download()
to download any file from a Hugging Face Hub repository
by specifying the repository id and the path to the file that you want to download.
If the file is already in the cache, the function returns the file path immediately;
otherwise the file is downloaded, cached, and then the access path is returned.
path <- hfhub::hub_download("gpt2", "model.safetensors")
path
#> /Users/dfalbel/.cache/huggingface/hub/models--gpt2/snapshots/11c5a3d5811f50298f278a704980280950aedb10/model.safetensors
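Since the file is now cached, a repeated call with the same arguments is essentially free. A quick sketch, using config.json, the model configuration file stored alongside the weights in the GPT-2 repository:
# The first call downloads and caches the file; the second just looks up
# the cache and returns the same local path immediately.
config_path <- hfhub::hub_download("gpt2", "config.json")
identical(hfhub::hub_download("gpt2", "config.json"), config_path)
#> [1] TRUE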
tok
Tokenizers are responsible for converting raw text into the sequence of integers that
often serves as the input for NLP models, making them a critical component of
NLP pipelines. If you want a higher-level overview of NLP pipelines, you might want to read
our previous blog post ‘What are Large Language Models? What are they not?’.
When using a pre-trained model (whether for inference or for fine-tuning) it’s very
important that you use the exact same tokenization process that was used during
training, and the Hugging Face team has done an amazing job making sure that its algorithms
match the tokenization strategies used by most LLMs.
tok provides R bindings to the 🤗 tokenizers library. The tokenizers library itself is
implemented in Rust for performance, and our bindings use the extendr project
to help interface with R. Using tok we can tokenize text the exact same way most
NLP models do, making it easier to load pre-trained models in R as well as to share
our models with the broader NLP community.
tok can be installed from CRAN, and currently its usage is limited to loading
tokenizer vocabularies from files. For example, you can load the tokenizer for the GPT-2
model with:
tokenizer <- tok::tokenizer$from_pretrained("gpt2")
ids <- tokenizer$encode("Hello world! You can use tokenizers from R")$ids
ids
#> [1] 15496   995     0   921   460   779 11241 11341   422   371
tokenizer$decode(ids)
#> [1] "Hello world! You can use tokenizers from R"
Spaces
Remember that you can already host
Shiny apps (for R and Python) on Hugging Face Spaces. As an example, we have built a Shiny
app that uses:
- torch to implement GPT-NeoX (the neural network architecture of StableLM – the model used for chatting)
- hfhub to download and cache pre-trained weights from the StableLM repository
- tok to tokenize and pre-process text as input for the torch model (a minimal sketch of this step follows the list). tok also uses hfhub to download the tokenizer’s vocabulary.
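Here is that pre-processing sketch, using the GPT-2 tokenizer as a stand-in for StableLM’s and a made-up prompt; the model call itself is omitted:
library(torch)
# Tokenize the user prompt and shape it as a [1, seq_len] batch for the model.
tokenizer <- tok::tokenizer$from_pretrained("gpt2")
ids <- tokenizer$encode("Hello from Shiny!")$ids
input <- torch_tensor(ids, dtype = torch_long())$unsqueeze(1)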
The app is hosted in this Space.
It currently runs on CPU, but you can easily switch the Docker image if you want
to run it on a GPU for faster inference.
The app source code is also open source and can be found in the Space’s file tab.
Looking ahead
These are very early days for hfhub and tok, and there’s still a lot of work to do
and functionality to implement. We hope to get community help to prioritize our work,
so if there’s a feature that you are missing, please open an issue in the
GitHub repositories.